All right, welcome to a deep dive: .NET and the CF pipeline. So, a little bit about us. I'm Larry Smithmire. I've been with Magenic for seven years. Over 20 years in software development. I'm based in New York City. And my co-host. I am Nathan Audenweigner. I've also been with Magenic for about six years. 20 years in the industry here. And I am an accidental resident of Cleveland, Ohio. Okay. So, who is this talk for? What is this talk about? This talk is for a couple of different audiences. And now I see that there are five of you. Hopefully, you'll either be someone that's been doing .NET for a while, or someone that's been doing Cloud Foundry for a while. So, how many people do .NET programming? Okay. About half. And, you know, how many people are doing Java? Java, mainly? Okay. Anyone new to Cloud Foundry? New to Cloud Foundry? Okay. And are you using Git? Everybody's using Git. Is it the standard Gitflow? How do you do branching? Do you do branching with Gitflow, or just branch however? Okay. We'll talk. And then for your pipeline: is anyone using Azure DevOps already? Or do you use Jenkins to do your builds? Jenkins? Okay. So, this is kind of a 100-level talk. We're going to keep it light and fluffy at the beginning, and at the end we're going to show a demo. We'll show Azure DevOps all the way through. So, moving on. We're going to start out with the process. We're going to move to decomposition. And then patterns for .NET teams that are working on Cloud Foundry. We've been doing .NET for a while, and we're coming into Cloud Foundry with fresh eyes, saying this is how you can transform a .NET practice into a practice that can work within the Cloud Foundry realm. And, finally, DevOps and how to actually get your code published. Okay? Processes. Thank you. So, we just want to talk a little bit on process. There's a lot of different flavors of Agile.
Who here is using like a Scrum-type model? Kind of sorta, yeah. Anybody using Kanban? Kind of sorta. We've got one. Okay. These are the two general flavors that we see out there. We just wanted to highlight the pros and cons on either side of these; they do have some differences. So, with Kanban you're trying to maximize flow and minimize the number of things you have in progress at a time. Versus a Scrum model, where you've got a nice iteration boundary and you plan the work for the two weeks. The point that we're getting at here, though, is Kanban seems to work well with teams that have a fair bit of maturity. There are a few less guardrails in place, so you can move a little faster. But the fewer guardrails kind of means that it works better when you've got a mature team. The thing that I think Scrum helps with is if you've got a lot of chaos above the team with the requirements, Scrum has some ceremonies and processes that protect the team from too much churn. You commit to two weeks of work, you know what you're doing, you finish it up and move on while the business is figuring out what's next. So these are the two flavors. Use what makes the most sense based on context. Decomposition. So this is a mess. This is where you maybe are right now. If you're doing software in the traditional way and you do a survey of the software that you have in place, you're wondering, how do I break this down into microservices? It's like boiling the ocean, right? Eating the elephant. You've got to start out and do it one bite at a time. And the way you do that is by decomposing your problem. How many people have done or heard of event storming? Is event storming in your vocabulary? No. Okay. Event storming is a workshop method that helps you define the actual business problems. And it's a method that maps very well to microservices. So you bring all of the people into the room that are involved in the business.
And you're not solving a technical problem, you're solving a business problem at this level. Okay. So you take all of the technology out of the room. With event storming, you're using sticky notes and butcher's paper, right? The big long sheets. And what you do is you sit down with all of the people that are involved in the business and you start defining what the business actually does. And you work from the beginning to the end. And you use sticky notes: orange notes for events, blue for commands, yellow for aggregates, and purple for trouble spots. Okay. And look up event storming. There are a lot of great talks about how to actually do it. Alberto Brandolini has a talk called 50,000 Orange Stickies Later. He's the person that came up with this and has put some structure around it. He's got a book that he's not finished writing yet, but eventually he will be, and it's great. So let's look quickly at decomposing with event storming. We start out, we put down these commands, these things that are happening, like pay a bill. Then we layer in these events that are happening, like a shipment is delivered. You know, we want to receive a shipment, so the shipment's delivered. These are actions. Then we come back and we add in problem spots, things where we need to do checks. Okay. So we're just breaking a problem down into smaller pieces, decomposing it. Once we've done that, we break them down into kind of flows. We put these lines in place. Now we're going to pick one. We're going to say that we're having the most problems here, or our business can be best served by us automating this shipment receiving. Okay. So the idea with event storming is that you're going to break out all of the problems and you're going to focus on the one that gives you the most business value with the least risk. Okay. So that big mess of problems up there, you don't want to do all of them.
You can't do all of them at once. Pick the one that's going to give you the most bang for the buck. Okay. So for us, we're Microsoft-focused, right? So once we've decomposed it, we're going to take our cards and put them in as stories in Azure DevOps. The keys for decomposing: don't decompose things too small, don't leave them too big, and really rely on the event storming to outline the functionality that you need. Okay. You're not solving technical problems with the event storming. You're identifying business needs. Okay. Next, source code. All right. We touched on this a little bit. It sounds like most everybody in the room here is using Git. That's kind of the message here; it seems to be the industry trend. Beyond that, we want to talk a little bit about the branching model, because that impacts your pipeline quite a bit. The pattern that we see being preferred here is a trunk-based development model, right? So typically you have a develop branch and everybody's bringing their code back into develop regularly. A couple of days for a branch, maybe. But short lifetimes. So you want everything in a single branch. And then you want a stable master. Stable master says you've got a branch that represents what's in prod, and at any point you could build whatever's in that branch and that's your production, right? It's all good builds. Gitflow is a model. It's not the only one, but it's a model that's out there. From what I've seen, it really does a great job of meeting these three goals here. It's well proven, well practiced. There's tooling out there, but you don't have to use it. And I will suggest the Atlassian documentation on this. They've got a really great walkthrough that goes through how Gitflow operates. Really great diagrams. Lots and lots of information there. But Gitflow looks like a mess if you look at some of the diagrams, right?
It's branches and everything everywhere. This kind of takes it piecemeal and says, okay, what's the point of why we're doing these different branches? What types of branches do we have? Feature branches, release branches, bug fix branches, right? And there are patterns that they recommend for how you do this, okay? So good resources there, but the main message: Git and Gitflow. Okay, I'm back. Enablement. So enablement talks about how those of us that are here, the leaders, enable our teams to work well together. You're not an army of one. You're not going to do this by yourself. You're not going to be able to transform a business as a singleton. The whole is greater than the sum of the parts, right? We need our teams to work together, and the way we do that is we enable them; we give them the tools that they need. And one of the ways that we do that is, and I'm going to say extreme programming, and it makes me cringe when I say it, okay? When you get two people on a computer together, though, there's a synergy that happens. It's not just one brain working anymore. It's one brain that's typing and actually implementing the code; the other brain can be used to solve the problem. And that cognitive overhead, right, the amount that you have to keep in your brain when you've got a partner sitting beside you, goes down. And it frees you to work on the piece of the problem that you're working on and let them worry about the other part. Now, Nathan and I are working from different cities. How do we handle that? Well, Visual Studio Code and Visual Studio itself have a shared programming model so that we can do live sharing and we can both be working within the same environment at the same time. It's really cool. It's been out since 2017. Nobody's really using it. Okay? They just released a new version of it yesterday. Yesterday was the Visual Studio launch.
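Circling back to the branching model for a second: the Gitflow feature cycle described above boils down to a handful of commands. Here's a minimal sketch using plain git, without the Gitflow tooling (branch names just follow the usual convention; the helper functions are illustrative, not part of any tool):

```shell
# Start a short-lived feature branch off develop (Gitflow convention).
start_feature() {
  git checkout develop
  git checkout -b "feature/$1"
}

# Merge the finished feature back into develop with an explicit merge
# commit, keeping branch lifetimes short as recommended above.
finish_feature() {
  git checkout develop
  git merge --no-ff "feature/$1"
  git branch -d "feature/$1"
}
```

Typical use: `start_feature price-filter`, work and commit for a day or two, then `finish_feature price-filter` so everything flows back into develop quickly.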
They've doubled down on it and now they're making it even easier to have multiple people working within the same environment. So we're sharing code. We're not on the same desktop, but we're within the same code environment. So, Visual Studio Live Share. We also talk about building ramps for people that need to be onboarded. And we do that through things like test-driven development. Okay? And I'm going to bring testing in as an on-ramp, because if you're new to a project, what's the first thing that you do? Well, the first thing that I do is I look for tests, because tests should show me how the program works. I should be able to follow through the tests and see what this program was intended to do. I can also run them and make sure that the program works on my machine and works in my environment. We also plan feedback loops. And this, again, is a way that we... So I develop code, and even if Nathan and I are not paired, when I develop code, I send it to Nathan for a review. And he reviews my code. While he's reviewing my code, I'm reviewing his code. So we start out our morning, and I start with the controllers. And I'm going to spend probably until 2 o'clock in the afternoon working on code. Then around 2, I'll ship my code over to Nathan, he ships his code over to me, and we spend the last bit of the day coming up to speed with what the other person's done. So this is a code review, looking not for flaws, but looking for ways that we can enhance and ways that we can extend the model. We also build enablement cookbooks. So write down the steps that you need to do so that you can hand it to someone like a new developer that doesn't have a background in it, so they can do it without having to bother you. It's self-preservation. I don't want someone to have to keep coming to me every time they need to know how to do something. So I put it out in documentation. Personally, I like to do videos. I like to just do a screen capture.
Someone asks me a question. I say, great. Let me answer that for you on Skype. And I'm going to record it and put it on a share. So the next person that needs that question answered can find it without even bothering me. If they do ask, then I can send them the link. Answer questions one time. None of this is .NET related. We're not to .NET yet. But it's the process that we've been using in .NET that we're bringing to Cloud Foundry. Hopefully some of this sticks and helps. Cookbooks, tools. There are a lot of tools around building out cookbooks. Hugo makes it really easy to build a static site. So just some things that you can use. Conclusions. There have been studies where a senior developer spending five minutes with a junior developer in the morning can improve their productivity by 50%. If you're the senior leader, you just need to talk to your people in the morning. For us, we use Scrum and Kanban. We have standups. Spend the time, spend the five minutes to talk and find out what they're doing and point them in the right direction. Make sure that they're laser-focused on what their tasks are for that day. Spend some time pairing with them. If they're having problems, don't just... you know, spend the time to teach them how to fish, if you will. Don't feed them. Teach them how to fish. Teach them how to learn what they need to do. Cookbooks. Capture your key things. Cookbooks don't have to be text. They can be videos. And formalize and clarify. That's it. DevOps. Yes, we are. Take your time. All right. So this is hopefully, or maybe, what you guys were coming to see. We're going to talk a little bit more about the actual pipeline. We're going to be using Azure DevOps, formerly known as Visual Studio Team Services. I'll ask again, who has used it or is currently using it? Okay, a couple of people. The patterns we're talking about here could be done anywhere, for those using Jenkins or whatever.
A lot of these hopefully can be used in different tooling. We're just using Azure DevOps because it's Microsoft land, right? Familiar to many people. Before we start showing you Azure DevOps, we just want to talk a little bit about a couple of aspects here. So, .NET Core. Who else is using .NET Core so far? Anybody on .NET Core? We got one. Full framework. Anybody still on full framework? Okay. The message here with .NET Core is: it is a brand new world with .NET Core. It's been rewritten. There are a lot of performance enhancements. I'm on a project right now where we took the same bits, right? We're targeting .NET Standard. We took the same bits and ran them in the full framework runtime. Then we ran them again in the .NET Core runtime. The performance was like an order of magnitude better under .NET Core because of all the work. They focused a lot on allocations. They focused a lot on memory footprint. Tons of performance gains there. The threading model is different. Many, many improvements. Take a look at it. The other thing that .NET Core brings, obviously, is that now you can run your .NET apps on Linux. That opens up a whole new world of options. Containers are one of the primary things .NET Core was targeting when they built it. A great option here. It's also a great option in Cloud Foundry. The buildpack for .NET Core is targeted specifically for that and works really, really well. Testing. We've talked about this a little bit already. I'm going to assume everybody here is doing testing. Unit tests are table stakes. What I do want to recommend, though, is make sure that you're getting enough integration tests. Again, on the current project I'm on, they have a bar that says we've got to have 90% code coverage. We get that bar met, but I've still seen too many times where we've got good coverage, the tests all pass, and then we run the app and it doesn't work. The individual units need to be proven to work together.
What we're looking for in these pipelines is both unit tests and integration tests, both automated. For environments, again, I'm going to assume everybody has multiple environments. You've got local, dev, QA, prod. We want to separate these out. The goal that we're looking for here is that your environments should be temporary. Treat them as infrastructure as code, where you build them up using code. That means you can tear them down, repave them, make sure that they're clean. That's one of the things I want to show you in our demo here today. Cloud Foundry makes this super nice, really easy. Then what we're looking at getting to is two main things that we're going to build. We're going to build a CI pipeline that's going to build our code and run our tests, and then we're going to do a continuous delivery release pipeline to get our bits onto servers. In the continuous delivery pipeline, as I mentioned, we want to do infrastructure as code, so we're going to build our space. Or you could build it out for an org or whatever else, right? But you build it as part of the pipeline. I don't have it modeled here, but you need to factor in things like your database migrations. There are a couple of different options. I'd be happy to talk to people about some of the things I've seen, but you want even your database migrations to be scripted. One last point that one of the guys at Magenic has really tried to pound into our minds is this idea of: you build once and deploy that many times. I've been on a number of projects that don't follow this pattern, right? They have a separate compilation and build and deployment for each different environment. The message here is you build the bits one time, and you take those bits and you put them in each of the different environments. Again, Cloud Foundry and 12-Factor make this really nice and really easy. We leverage the environment variables to configure the app for the environment we're in. And lastly, local environment.
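The build-once, deploy-many idea can be sketched as a small promotion script: the same artifact directory gets pushed to every space, and only the environment variables change. All of the names here (org, app, the ASPNETCORE_ENVIRONMENT convention) are hypothetical placeholders, not from the actual demo:

```shell
# Promote the SAME build output to a given space. Per-environment
# behavior comes from environment variables, never from a rebuild.
promote() {
  local space="$1" artifact_dir="$2"
  cf target -o my-org -s "$space"                          # org name is an example
  cf push bikeshop-api -p "$artifact_dir" --no-start
  cf set-env bikeshop-api ASPNETCORE_ENVIRONMENT "$space"  # 12-Factor config via env
  cf start bikeshop-api
}
```

So `promote dev ./publish` and, after approval, `promote qa ./publish` both take the identical bits; only the target space and its variables differ.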
I'm going to show a little bit of this here, too. I think local environments are becoming increasingly tricky. It used to be you'd have a couple apps, right? A couple services, big monoliths. But you pull the whole thing down, you run it. Most of it's there. You might have some setup. One of the projects I'm on right now, we're going to be running like 30 different services. It's really hard to do 30 different individual setups. So what we're leveraging in that project is we're using Docker images for running the bits. We're using Docker compose locally so that we can script basically launching and loading all those things and connecting them. And then what I'm going to show today is, for example, using Docker images to run the spring containers that Cloud Foundry might offer, specifically a config server, right? So think about your local developer experience as you think about the pipelines that you've got that are going into the Cloud Foundry instance as well. So let's jump into Azure DevOps. I have a bunch of slides here. I'm going to go through them quickly. And then we're going to go and see this for real. But the slides help me make sure I don't miss something in stepping through how we set this up. The point here is we've got Git repo. We're using a Git flow structure, master develop and feature branches for the code. Then we've set up, and I used the Azure DevOps classic template here. They have a newer template or a newer approach where they want you to use YAML to build all the stuff. But this is much easier. So this does the standard steps, right? .NET restore, build, test, and publish. I get some bits out the other side. And the last step here is to publish the artifacts. So within Azure DevOps, that means I'm going to take these DLLs and all the output files, and DevOps is going to keep that in a little bucket that then I can reference later on when I do a release. 
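The classic-template steps just described map to four dotnet CLI commands plus stashing the publish output where the release can find it. A rough script equivalent (the output path and configuration flags are assumptions, not the exact template settings):

```shell
# Standard .NET Core CI steps: restore, build, test, publish.
ci_build() {
  local out_dir="${1:-./artifacts}"
  dotnet restore
  dotnet build --configuration Release --no-restore
  dotnet test --configuration Release --no-build
  # Publish unzipped output (mirroring the choice not to zip); this
  # directory becomes the artifact the release pipeline consumes.
  dotnet publish --configuration Release --output "$out_dir"
}
```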
The one note here is that by default, it's going to say, hey, do you want to zip the output? We opted not to zip the output. That makes it easier to kind of deal with the individual files that we need. The other thing we did on this guy is just check the box, right? Yes, we want continuous integration. We are doing that at this point on the develop branch. We don't want to build anything and everything. So this one is targeted just at the develop branch. But you can customize this to your needs, obviously. Our goal as we talked about is we want to have multiple environments. The model that I'm showing you here is one space in a given org for each of those. Dev QA test. And we're going to set up a release pipeline. So what I showed you before was the CI build pipeline. This is now the next step. We're going to set up the release bits. What this kind of shows as an overview is on the left, you've got the build output. And in this case, I manually triggered it, but you can set it up for CI triggering. And then this is going to go from the bits out to the develop space. If that's happy, we can manually approve it to go to the QA space. And if we're happy there, then we can manually approve it to go to the production. This is admittedly simple. And in a very real pipeline, you probably would have some additional stages in here where you do some integration testing, have your QA builds run, etc. Make sure that everything's happy. This is a simplified version so we can see the points. This is what an approval looks like. You can see that it's happy in QA. Do we want to go to prod? You just say, yep. All right. This shows what the release pipeline looks like after it's run. And you can see here that it's gone to Dev. It's gone to QA. Let me see if I can see here. So these are the different stages that have run. And you can see that it's sitting here waiting for prod. One of the things that I want to highlight here, though, is I find Azure DevOps naming a little weird. 
They have... to get your code out, you create releases, and they have releases and release pipelines and stages and all these things. It's kind of confusing to me. The big thing we're looking at is the pipeline. So we create a release pipeline. Pipelines have stages. Stages have a bunch of tasks or steps in them. But then, once you've got that built, it's kind of like the template that you start with. And this is the part that's confusing to me: then you create a release. And a release is kind of like, take these specific bits and move them to these specific environments. And you can customize that. I just find that naming a little hard to deal with. All right, so here's the stages and tasks. This is specifically the dev stage with a sequence of tasks. Hopefully this is looking somewhat familiar, right? The cf push is kind of ubiquitous here, and that's in the push to Cloud Foundry. But we initialize, we download our bits, we get the CF client, and I'll show you that in a minute. Infrastructure as code: we create the space, we push the bits, then we programmatically create the services, bind the services, and start the app. We mentioned that we push the same bits into each environment. So what we have here is just showing you that we've got different groups of variables, and you can tie these up so that you have different values for each environment. And then we bind these groups over to a stage, right? So I can set up new environments really easily. I just set up a new set of values here, map them, put the stage in, good to go. This is what it looks like to edit a release pipeline. You can click on a bunch of these things, right? The little person icon is how you do a manual approval. The trigger, you can go in and change things. So what we show here then is, for dev, we're looking at the trigger.
And this is saying basically automatically kick off, like create a new release and kick it off anytime there's a release available from the develop branch. But for QA, we don't want quite the same criteria. So the trigger here is I want this to run after the dev stage has completed. And we've set this one up with a manual approval too. So it's not just going to automatically run, okay? So here we're going to take a quick step through how we set up these individual steps or tasks. These are mostly bash scripts, which means it's standard command line stuff. Hopefully things you're familiar with doing. This first step, like I mentioned, is we have to get the CF client. I'm using just a plain vanilla Azure hosted Ubuntu image. It does not yet have Cloud Foundry on it. What we're doing here is just a curl command, pull it off GitHub, extract it, and put it in the right directory. Once we've got that down, now we can go ahead and connect in to create our space. A couple of notables here. I built this thing originally with the CF login command. And there were some times that it basically just like hung the whole flow, didn't progress. And what I found is that the CF login, it's sort of an interactive thing. It'll prompt you waiting for your password. That doesn't work so well here, right? So what I found, they have an alternative set of commands. So CF API says, here's the API. CF auth says, here's my username and password. And those are not interactive. They pass or fail. So that works better in this scripted environment. Next we go and we create the space. I know this is hard to see, but we're pulling in some environment variables. So this script is the same for each of the environments. It's just the variables that change it. Once we've got the space, then we target. CF target. Here you can see on the left is the name of the variable from our variable group. 
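Stepping back to that login issue for a second: the non-interactive sequence described above might look roughly like this as a script. The download URL, org name, and credential variable names are placeholders to adapt; the key point is that cf api plus cf auth pass or fail outright, instead of hanging on a prompt the way cf login can:

```shell
# Fetch the CF CLI onto a vanilla build agent and put it on the PATH
# (the package URL shape is an assumption -- check the CF CLI releases).
install_cf_cli() {
  curl -sL "https://packages.cloudfoundry.org/stable?release=linux64-binary" | tar -zx
  export PATH="$PWD:$PATH"
}

# Authenticate non-interactively, then create and target the space.
login_and_target() {
  local api_url="$1" space="$2"
  cf api "$api_url"
  cf auth "$CF_USER" "$CF_PASSWORD"   # credentials come from a variable group
  cf create-space "$space" -o my-org  # succeeds even if the space already exists
  cf target -o my-org -s "$space"
}
```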
And then you can see that we're pulling that variable into the bash script and making it available as a bash environment variable. Next, what we did is we used an extension that's in the DevOps marketplace. This is a Push to Cloud Foundry extension. It makes it really nice to push code, because it's got a bunch of different boxes here, you can see, where you can use environment variables really easily. There are some features in here that we didn't leverage. I'll explain that in a minute. But basically we created a Cloud Foundry connection. That's that second box down. That's just the URL, username, password. Then we're connected. And for the rest of them, we're using mostly the variables from the variable group. Name of our app, the memory, the domain, right? So these are pulling from variable groups. The one note is, by default, it's going to want to start your app after the push, but we actually don't want it to start yet. If we have a clean space, I don't have my services there yet. And if I start it without services, it's not happy. The point here on this one is just the working directories. I had to fiddle with these a little bit to make sure that it knew where the CF command line was and where the bits were from my release. So after we push, then we create services. Again, standard CF command line stuff. We're creating a MySQL DB and a config server here. The other note is that create-service is an asynchronous command. So you say, create them, and the command line comes back and says, good, but they're not there yet. So there are a couple of strategies. This one takes a really simple strategy and just sleeps for 10 seconds. But there are some scripts out there where you can query Cloud Foundry and say, hey, is my service there? And that'll give you a little bit of a pause in your pipeline before it goes to the next step. So the next step is bind. This is the part that'll blow up if your service isn't ready yet. So that's why you want that pause.
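Instead of a fixed sleep, a small polling loop can ask Cloud Foundry whether the asynchronous create has finished before the bind step runs. A sketch (the status strings match the usual cf service output, but that format can vary by CLI version, so treat them as assumptions):

```shell
# Poll `cf service` until the asynchronous create finishes, so the
# subsequent bind-service call doesn't blow up on a half-created service.
wait_for_service() {
  local name="$1" tries="${2:-30}"
  local i status
  for i in $(seq 1 "$tries"); do
    status=$(cf service "$name")
    case "$status" in
      *"create succeeded"*) return 0 ;;
      *"failed"*)           return 1 ;;   # creation errored out; stop the stage
    esac
    sleep 5
  done
  return 1   # timed out waiting
}
```

The stage would then run `cf create-service ...`, `wait_for_service my-db`, and only then `cf bind-service`.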
But we're binding these two over to the app. And lastly, then, we call start. And this is the one that can take a little bit of time, because it's staging all this stuff and getting things running. But we've automated it end to end at this point and got our service deployed and out there. I found these two variables interesting. The CF command line has these two variables that can control your timeout for the staging and the startup process. So if you want to put some controls around that, these are helpful in that space. And now we get to go to a demo. As he's setting up the demo, one thing that I do want to mention is we kind of broke the rules and we have a manual step, right? We have a manual step to go to QA. That's actually something that we've learned over time. Our QA really doesn't like it when they don't know what they're QA-ing. So what we did was we said it's manual. You decide when you want to get new bits into QA. It's all yours. We're not going to push anything to QA. We're going to put it up there. It's always going to be staged. We're always putting code in dev that's QA-able. So push it. Take it off when you want it. Same thing with prod. We have our business users that actually get a kick out of being able to push the button to move code, because we're letting them decide, once UAT and everything has been done, when they're happy with the code. Okay? Not us. Demo time. Demo time. We'll pray things go well here. What we've got is a sample that Magenic put together. It's basically a simple API that imagines a bike shop. The source code for all this is linked in the slides. The slides are up on sched.com. So you guys are free to pull them down and free to go take a look at the code. I don't want to dive deep into the code, because that's not really the point here. But the sample does use Steeltoe. It's got some actuators in there, some health endpoints, et cetera. So it's a decent and nice sample to look at for a .NET Core application.
What we want to do here, though, is take you through a demo: we've got the code, it's already running, and we want to exercise this pipeline. So Larry and I are going to pretend that we're going to pair on a change, make that change, push a PR, merge the PR, watch the pipelines run, and cross our fingers. Right? So here's the API. This is our current bicycle endpoint. Get us all the bicycles. The API has a set of built-in get, put, post, delete, right? Standard stuff. What we don't yet have is... Well, we'll get to that in a minute. Larry's going to tell us what feature we should do. So, are we ready? So I see a lot of bicycles here, but my budget is not the same as yours, I guess. I don't need to see a $4,000 bicycle. Is there a way that we can maybe limit and give me a way to query on price? So I'm going to put a user story in called Browse Inventory, and on that user story we're going to say we want to be able to browse it based on price. We'll put acceptance criteria in there; we'll do all of that within the regular .NET tool. Nathan has taken it, he's activated it. Sounds good. Sounds good. So let's jump to our code base here, and we want to implement a price filter. So through the magic of git branching, we have just implemented a new feature. We can now filter by min/max price. So what I want to show you here is just a little bit of the code changes. Again, this is not really the point, but on the query string now we can pass in min/max. When we do that, this will optionally go down and use a search query. So that's in our bike repository here. So search bicycles now supports a query with min/max. The other thing that we've done, following our own advice here, is we've made sure that we have unit testing. So here we're just mocking out some things and we've got our test. So before we push, let's go over here, and we'll make sure all of our tests run. I know that's really small, sorry. So that window won't grow.
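Once deployed, the new filter can be exercised straight from the command line. A sketch of the request (the route and the minPrice/maxPrice parameter names are illustrative guesses at the demo's query string, not confirmed from the sample code):

```shell
# Hit the bicycles endpoint with optional price bounds on the query string.
query_bicycles() {
  local base_url="$1" min="$2" max="$3"
  curl -s "${base_url}/api/bicycles?minPrice=${min}&maxPrice=${max}"
}
```

For example, `query_bicycles https://bikeshop.example.com 100 500` would return only the bikes a more modest budget can cover.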
Likewise, the command line, for those who prefer that. Let's see where we're at. So we're on our new branch. So, the command line: .NET Core loves the command line. You can do so much from the command line. You do not need Visual Studio. I love Visual Studio for .NET Core, but you don't strictly need it. So we can do dotnet build, then dotnet test: two passed, no failures. Good to go. What's our coverage number on that? That test count sounds awful. I think we need to... I think we have some work to do. We've got some work to do on that. One thing that we don't have in our pipeline: there are filters you can actually put in place so that code will not advance, will not be allowed to be checked in, unless you meet a certain level of code coverage. We don't actually like to set levels. What we do instead is we set a band, and if your code coverage drops more than 5%, then we know that you've added a lot of code without adding any tests to back it up. And that's when we stop the branch. Within the project I'm in, we don't set a level. We say test what you need to test, but always test the new code. So I just pushed this branch. I want to jump back, because we moved a little past a couple of things I want to show. I've already got this deployed up to PWS in one of the environments. So let's just take a quick look at what we've got over there. The URL that I was showing you here, this is the cloud-deployed instance. So this is our API over there. If we look at our environment... so again, we're using PWS here, so I've got a nice App Manager experience. You can do the same with the command line here. But in my dev environment, I've got the app running. You can see the endpoint. If we go explore this guy, we've got it started. I've got two services. Here are my services. We use a config server, and we're also using MySQL databases. If we look at a couple of the Steeltoe nice things here, we've got the built-in integration to see the mappings.
So this will pop up all the nice routes that are in my app. It's a nice feature, and we can click right into the app like we've already done. So let's continue on here. We pushed the branch; let's go take a look at our code. Okay. So it says, hey, I noticed you pushed this branch, do you want to create a pull request? Sure, let's do that. Add the work item? Yep, there it is, Story 8. So we'll create our PR, and Larry's going to scrutinize this code intently. Do you have a touch screen? Yes. Okay. There we go. Approve. Thanks. Excellent. I don't have a touch screen. I do. I got a bad laptop. So we will complete this guy.

All right, let's jump quickly over to our build pipeline. What we can see here is we've got a build already kicked off, so let's go ahead and look at this guy. One of the nice things about Azure DevOps is that it gives you pretty nice build output and logging, and you can go back and look at your prior builds too. I find this pretty nice. You can see we're just doing standard .NET build-type operations: restore the packages, `dotnet build`, `dotnet test`, `dotnet publish`. And we're using a standard agent; it's just hosted Ubuntu. If you're not on .NET Core yet and you need a special environment to do your build, you can set up a virtual machine, have your own build agent, and do your .NET Framework stuff on it. Hopefully you'll get to the point where you can just use the Ubuntu agent, but crawl, walk, run. Take your steps.

So this can take just a minute. I'm going to jump over to... we'll take a look at a prior build. Let's look at this one. So here's the build output from something that ran; the output that we'll see should look like this. If we jump into any of these, you can see the output, all the restore. And we can see, hopefully this is big enough, here we're doing our build, test, and then the publish. Publish just takes the bits and puts them in a directory from which we can publish the artifacts.
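Those standard build operations translate pretty directly into an Azure Pipelines YAML definition. A minimal sketch of what such a pipeline might look like; the trigger branch, agent image, and artifact name here are assumptions, not the demo's actual file:

```yaml
# azure-pipelines.yml (sketch): restore, build, test, publish, then store the artifact.
trigger:
  - develop

pool:
  vmImage: 'ubuntu-latest'   # hosted Ubuntu agent, as in the demo

steps:
  - task: DotNetCoreCLI@2
    displayName: Restore
    inputs:
      command: restore

  - task: DotNetCoreCLI@2
    displayName: Build
    inputs:
      command: build

  - task: DotNetCoreCLI@2
    displayName: Test
    inputs:
      command: test

  - task: DotNetCoreCLI@2
    displayName: Publish
    inputs:
      command: publish
      publishWebProjects: true
      arguments: '--output $(Build.ArtifactStagingDirectory)'

  - task: PublishBuildArtifacts@1
    displayName: Store artifact
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'bikeshop-api'
```

The published artifact at the end is what the release pipeline triggers on.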
So we just tell the build where to pull those from and how to store them. So, 199 files. Okay, and then we're done. I can see this guy finished running tests. What we expect to happen next is that this will create a new published artifact. The release is triggered on those, and this is coming from the develop branch, so the release is going to say, great, I'm going to kick off a new release automatically. So let's go look over there; that's right here. And we can see that release 21 was just created, at 14:47. So we'll take a look at this guy, and we can see a couple of things. Here's our continuous deployment trigger; you can see which artifacts it triggered on. And now it's running this stage. Let's go ahead and look at the log output of this guy. So: initialize job, download the client, create the space. A lot of stuff is going pretty quick because the space already exists. Push the bits. Create the services. Note here that this is saying, hey, that already exists, right? Which is kind of nice; the command doesn't blow up just because it's already there. Bind and start. So this is your standard output from a standard `cf push`.

So while that's running, we'll just kick over here. `cf target` says we're pointing at the dev space. Let's look at our apps. We can see that we've got this bike shop API. Right now it claims it's started, but it ought to say crashed or restarting. So it's down, and we're waiting for this guy to come back up. While we're waiting for that, I want to show real quick, I mentioned that we have this running locally as well. So if I do `docker ps`, I've got documentation on this container and what we did here, but this is a Spring Cloud config server container out there. This is one of the Spring services that I needed, and I found this container to be the best fit. This is running in Docker. If I take my application here, now that I've got all these changes, let's just do a local F5 and launch my app.
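The release stage described here is really just the cf CLI scripted into a pipeline step, in the same order as the log output: create the space, create the services, push, bind, start. A sketch of what that script might look like, with placeholder org/space/app/service names throughout (all assumptions, not the demo's actual values):

```yaml
# Release stage script (sketch). Everything is rerunnable: cf create-space and
# cf create-service simply report "already exists" on subsequent runs.
steps:
  - bash: |
      cf login -a "$CF_API" -u "$CF_USER" -p "$CF_PASSWORD" -o bikeshop-org
      cf create-space dev
      cf target -s dev

      # Services: config server and MySQL (service/plan names are placeholders)
      cf create-service p-config-server standard config-server
      cf create-service p-mysql 100mb bikes-db

      # Push without starting, bind the services, then start
      cf push bikeshop-api -p "$(Pipeline.Workspace)/bikeshop-api" --no-start
      cf bind-service bikeshop-api config-server
      cf bind-service bikeshop-api bikes-db
      cf start bikeshop-api
    displayName: Deploy to dev space
```

Because every command tolerates the resource already existing, the same stage can run against a fresh space or an existing one without any conditional logic.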
So this is going to spin up in a self-hosted Kestrel serving process (not a container), and now I've got my app running locally. Additionally, people familiar with Swagger, the OpenAPI spec for APIs? I see a couple of hands. So Swagger is a standard out there that lets you document the REST interface on your API, and on top of that spec you can build this nice UI. That's what we're seeing here. The Swagger UI has some nice built-in stuff, so I can try out these endpoints. And notice now that we have these query-string parameters, so I can do min/max price. Let's do this without any and see what we get. So locally, without any filters, I get all three; I know there are three in the database. But if we say that Larry doesn't want to spend more than a thousand, then we should see a few less come back. There we go: a Schwinn and a Nishiki.

So a couple of points here. One is we're running local. We're using a Docker container to mimic some of the services that we need. We added the feature; it shows up here; we can test it locally. So let's go ahead and kill this. Ctrl-C, that goes away, and let's go back to this guy. Nope, wrong one. All right. So now our app's up and running. We can see that over here as well. Let's take a peek here. Yep, app's up and running, just started. Let's kick over here. So this is the same endpoint, but we just refreshed it. Hit F5. Yep, still works. Let's do a min price of 500, and now we get two. So the API is working. I will tempt fate and see if the Swagger endpoint is working out here. It is not. There's some sort of bug that we're running into; I think it's an SSL/TLS thing. Sometimes the Swagger UI works in CF and sometimes it doesn't, but we did not figure out how to get it working in time for the demo. Let's go back to our release here, and we can see this is our guy, right? 21. Yes. Cool. So: start service. This is what we were waiting for. It finally got through, and now we're running. Okay.
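For anyone who hasn't wired Swagger into an ASP.NET Core app, it's a few lines of startup configuration. This is a sketch using the common Swashbuckle.AspNetCore NuGet package with its default routes; the title and method names here are illustrative, and we don't know exactly how the demo app has it configured:

```csharp
// Startup wiring for Swagger/OpenAPI via Swashbuckle.AspNetCore (NuGet package).
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public static class SwaggerSetup
{
    public static void AddSwagger(IServiceCollection services)
    {
        // Generates the OpenAPI document from the app's controllers and actions.
        services.AddSwaggerGen(c =>
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "BikeShop API", Version = "v1" }));
    }

    public static void UseSwagger(IApplicationBuilder app)
    {
        app.UseSwagger();   // serves the spec at /swagger/v1/swagger.json
        app.UseSwaggerUI(c =>
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "BikeShop API v1"));
    }
}
```

The UI that results is what's being used in the demo to try out the min/max price parameters interactively.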
If we go back up here, we can see that now we're sitting at succeeded, and we're just waiting for QA. Okay. Cool. So that, I think, covers it. That's the demo. Yep. We've got another slide or two to reiterate, but are there any questions? Yes. So the question is: is everything that we're showing here running under Azure DevOps? And yes, it is. Everything here is native Azure. Azure Pipelines is actually split out now; you don't have to go through all of Azure DevOps. You can use Azure Pipelines by itself in your own environment. They did that recently. Just look up Azure Pipelines, and they'll talk you through that.

Yes. So, good question. The question is: because we have service creation scripted, what happens when we rerun it? Nathan, can you show the task on the dev stage? It just says it's already there, no problem. Yep, and we can show it in the logs. We sort of saw that a little bit in the logs here for this guy. If we look at the create space step, it's basically just going to say, hey, thanks, this already exists. I will note that I had to toggle a couple of things. There's a setting on the batch script commands that asks: do you want to fail the step if there's anything written to standard error? And some of the CF commands will dump something to standard error, so I had to turn it off for those. Others don't, but generally speaking, the CF commands can be rerun and they don't complain. Cool.

Any other questions? Yes, sir. The question is: is this just for CF, or could it work with Kubernetes or Docker or any other container system? It can work with anything. The work that Nathan did to put these in was really just embedding the command line, the batch script, in the pipeline, and anything that you can embed in a pipeline, you can embed in this. Sure.
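The setting being described is, we believe, the `failOnStderr` input on Azure Pipelines script tasks. A fragment showing it turned off for a step that runs rerunnable cf commands (the space and service names are placeholders):

```yaml
# Some cf commands write informational messages to stderr even on success,
# so the step-level "fail on standard error" check has to be relaxed for them.
steps:
  - bash: |
      cf create-space dev            # prints "already exists" on reruns -- that's fine
      cf create-service p-mysql 100mb bikes-db
    displayName: Create space and services
    failOnStderr: false   # don't fail the step just because cf wrote to stderr
```

With that flag off, a rerun of the whole stage is a no-op for anything that already exists, which is what makes the scripted infrastructure safe to run on every release.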
Are you asking if the demo that we're showing here would work for Kubernetes, or just Azure DevOps in general? Okay, yeah, the demo is just Cloud Foundry. Yep. Well, but the code wouldn't have to change. Okay, anyway, sorry. Yeah. Yep. Any other questions? All right, let's go back to our slides here.

Okay. Summary, key takeaways. Watch out for lack of consensus. Make sure that you have cookbooks, but that your cookbooks are loose. You know, don't have your cookbooks telling you how to fillet a specific fish; you want to talk about fish in general. You can get started now; you don't have to have Cloud Foundry in place to do any of this. Rely on event storming for decomposition. If you take one thing away, please look up event storming and look at the way it helps you break your problem down, because what it does is it gets everyone in the room. You bring your business in as well, and you're actually solving the business problems, not the technical problems. And if you bring everyone in, you start having: well, this is what I do. You have the people that are actually doing the work, and they put their steps down and say, this is what I do. And their boss says, well, I thought you were doing this and this. And, you know, you get all of that laid out. Once everything is laid out, then you come back and you look at the hot spots, you look at where you can actually make an impact. So: event storming. Automate your build and release pipelines; that's a no-brainer. And script your infrastructure, so that it builds your spaces, so that you have cattle, not pets. Do away with anything that is specific to an environment.

Okay. Resources. The slides are up on Sched.com. Nathan and I are available; if you've got questions later, hit us up. That's it. Thanks.