Welcome to the launch of Azure DevOps. My name is Jamie Cool. I'm the Director of Program Management for the Azure DevOps team. We've got a ton of exciting things to talk about today. Here with me I have Mr. DevOps himself, Donovan Brown.

Thanks for having me, Jamie. I'm really excited to get started, because some of the demos we're going to show today are really going to highlight the power that we have with Azure DevOps. What I want to make sure people realize is that you can also be a part of this conversation. It's not just the two of us: if you're out there on social media, make sure that you use the hashtag #AzureDevOps. And if you want to learn more about what we think about DevOps here at Microsoft, make sure that you go to azure.com/devops.

Right. Now, if we're going to be talking about DevOps for this next hour, I think it'd be important for us to level-set on what we actually mean when we say DevOps. I've been led to believe that you might have a few opinions on this particular topic.

A few. It's really important to level-set, because if you ask several different people what DevOps means, you're going to get several different answers. So let's level-set on what we believe DevOps is here at Microsoft. We believe that it's the union of people, process, and products to enable continuous delivery of value to our end users. The most important word in that definition is value. Too many people focus on just moving files or on automation, but it's the value that you're trying to deliver to your end users that matters. What I've found is that it takes products that support the process your people have chosen to empower those people to deliver that continuous value. And what I think is even more important is that when you start that digital transformation, when you're on this journey, the gains that you get are staggering and sometimes almost unbelievable.
As I go and visit our customers all over the world, those that have started this transformation are deploying sometimes 47 times more frequently than they were before. And historically the mantra used to be "fail fast," but they're not failing fast. They're succeeding fast, because they have a seven times lower failure rate.

Yeah, you know, when I hear some of those numbers, on the surface they just seem kind of staggering and crazy. But when I step back and think about it, I realize, you know what, I've experienced that firsthand. It wasn't that long ago that at Microsoft the average ship cycle was two years, sometimes longer. Now I look at just last week: last week we shipped hundreds of changes just to Azure DevOps itself. You do the math between a couple of years and hundreds of changes a week, and you get these big numbers. And it's not like the folks today are just that much better than the folks ten years ago. It's that we've gotten smarter, we've learned, we have better processes, we have better products. And that's a lot of what we're gonna talk about today and what I'm excited about, because we're now taking the products that we use, that we've invested in, and bringing them to you so that you can take advantage of them.

And that's what Azure DevOps is. Azure DevOps is a set of services that span the entire DevOps lifecycle. You can use them all together for a full solution, or if you just need one for a particular problem, you can use just that one, or you can put it together with other tools that you're already using. Regardless of how you use it, it's gonna provide a lot of value for you. And I know that because inside of Microsoft we get that value; we see it regularly. Just in the last month, over 80,000 folks here at Microsoft have used Azure DevOps to deliver our products to you, from the smallest products all the way up to some of the largest.
So let's start by drilling in on what we actually mean by Azure DevOps. And let's start with Azure Pipelines. Azure Pipelines is really the heart of the DevOps process. It's a CI/CD system, a continuous integration and continuous deployment system. You use it to keep the quality of your application up, to make sure every change that you make is taking you forward instead of backward. And that's really the key thing if you wanna be able to ship whenever you want: keeping your code quality high. You can also use it as a launch pad to get your code up into the cloud, whether it's our cloud, Google, AWS, or any other, because Azure Pipelines is a system that works for any language, any platform, and any cloud. We have hosted pools of Linux, Windows, and Mac machines that we manage for you so that you don't have to, because everything we're trying to do with this is to make your life easier as a developer. And Azure Pipelines doesn't stop with what we've shipped. It's highly extensible. We have an ecosystem of over 500 extensions that have been contributed both from the community and from our partners, from Slack to SonarCloud. You know, one of the things that always excites me the most is when I see a new extension show up and I get to see what someone's been able to do to extend our product. Now, you can use it for any type of application and any type of deployment mechanism, but containers are increasingly becoming the unit of application deployment. So Azure Pipelines works great with containers. You can use it to build your containers, to test and validate your containers, to publish them to whatever registry you want, and to deploy them to whatever service you want, including Kubernetes. Now, it's a lot more interesting to actually look at it than to talk about it. So, Donovan, can you give us a walk-through of Azure Pipelines?

So, Azure Pipelines is the CI/CD system that we have that can build any language targeting any platform. And that's what excites me about it.
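As a rough sketch of what "any language, any platform" looks like in practice, a minimal pipeline definition for a Node.js app on the hosted Linux pool could be as small as this; the build commands here are illustrative, not taken from the demo:

```yaml
# Minimal azure-pipelines.yml sketch; commands are illustrative.
trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'   # hosted Linux pool, nothing to install

steps:
- script: |
    npm install
    npm test
  displayName: Install dependencies and run tests
```

The same file shape works for Java, .NET, Python, or anything else: swap the pool image and the script steps, and the service handles the machines.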
As Jamie said earlier, we give you access to Macs, to Windows machines, and to Linux machines. There's nothing to install. You just give us code and we'll build it for you. Here I am on my dashboard, and what I'm gonna do is click on this icon for this particular Spring MVC application. This is a Java app. Over here, I happen to have a Node.js application. Again: any language, any platform. Clicking on this icon is gonna take me to the results of some of the builds that I've been running previously. If we look over here, I can see exactly what branch I was building, and I can see whether I was working on a pull request or not. Clicking on one of these builds takes me to a summary page and a log that I can quickly review. As we can see here, I had an error on this particular build. If I drill in here, I can see the log file, and I can see this great map on the right-hand side. And if I scroll down through this huge POM file build, you'll see that we actually had some errors down here. So it's very easy for me to diagnose. But this isn't the only way that I can see my test results, and I'll show you some other cool ways of doing that in just a moment. Another thing that we can do is use this histogram at the top to see other builds that have already run. And luckily we've had some that have succeeded. As you can see here, I have the same log, but better than that, I also have this great summary. The summary allows me to see my test results and any associated work items. I can come down here and also see any deployments that may be running. Matter of fact, we've already successfully deployed this build into our dev environment, and it's pending my approval to go all the way into our QA environment. You get to see the real power of our pipeline system when you go back in and start to look at editing one of these.
It's a really nice graphical user interface that allows you to simply drag and drop tasks. Again, I want to highlight over here: if you look at this drop-down, you can see all the different hosted pools that we give you. And what I mean by hosted is that there's nothing for you to install. All these resources are provided for you, and they come on a variety of different platforms. You need to build your iOS application? We have a Mac sitting in the cloud ready to build that for you. You want to build your containers on Linux? We have multiple Linux images running for you so that you can go and build those container images. And once they're built, you can deploy them to Docker Hub or to ACR, wherever your images need to be stored. In addition to that, where can you get your code? You can get your code from the most popular source control systems. You're doing open source work and you want to put your code in GitHub? No problem. You want private repositories because you want to secure your code and you don't want anyone else to see it? You can use our Git support as well. You already have your code in Subversion or some other form of source control? Don't worry about it. You can still use Azure Pipelines to get your code wherever it exists today and start running CI and CD against it. Again, adding new tasks is very simple. You simply click on this plus here, and you can drag and drop from the hundreds of tasks that we have available for you right out of the box. What I really like about the out-of-the-box tasks is that they're all open source. You can go and see exactly how we wrote all of these hundreds of tasks and use that as a way to learn how to write your own. If you already know Node.js or PowerShell, you know how to write these tasks. But before you go off and write your own, I'd encourage you to go and look at our marketplace.
As Jamie said earlier, hundreds of our partners have gone off and written extensions that add a lot of value. All you have to do is simply click on one, get it for free, and add it to your pipeline, and now you have new value, not only in your build and your release, but new hubs and widgets and all the other cool places where you can extend our system. Now, I love our graphical user interface, but I know a lot of people prefer to use YAML. They want everything in source control. What's really nice is that you can simply click on this link here and we will export the YAML for you, so you don't even have to write it. What I really like about this is that I actually keep both of these up, because I want the best of both worlds. I love the visual representation, and I love the ease of editing. I can come in, for example, make a quick change to a particular task, and then export just the YAML for that task. I can copy it to my clipboard, run over here to GitHub where my code is currently sitting, find that YAML file that I created earlier, and edit it right here inside of GitHub. So let's just go ahead and make a quick edit. I'm not even gonna paste the code I copied; I just want you to see that we can have a really cool pull request. Let's make some changes. I'm gonna come down here and save this into a different branch, because I wanna show you some cool stuff here. So I'm creating a patch branch for the change to my YAML file. And when I do this and create this pull request, I've wired up Azure DevOps to my GitHub repository such that every pull request that is submitted has to run that build and succeed before I'm even notified. As we can see down here, we now have a build that is queued. It's currently in progress. Now, if I click on the details, I'll be able to jump right back inside of Azure DevOps and see the pipeline that it started running for me.
And in a moment here, once it connects to the agent, I'll be able to see a full live log of everything running against my particular build. That way I get quick verification of whether the pull request is good: did it pass my tests, and does it need the attention of the maintainers and contributors to go back in and review that particular pull request. Another thing that I wanted to talk about was the testing that I mentioned earlier. So if we go back to that build definition, let's pick this one here for some fun, and I'm gonna click on Analytics. What Analytics does for me is watch the test results over the history of this particular build and give me a report letting me know how successful our testing has been over a period of time. If I click on this, I get to drill down into these analytics and actually identify the tests that we need to go back in and verify. There are different ways that I can slice and dice this data: find out which of my tests are taking too long so that we can focus on getting our build times down, or figure out which tests are flaky. For example, the "about" test was the one that was broken, so we can spend some time going in and investigating to make sure that it is stable. Once I know I have a high-quality output from my build, the next thing we have to do is release that code. And that's where our release product, or what we call Pipelines, as you'll see in the navigation here, comes in: we take the output of the build and run it through a pipeline, deploying it into multiple environments, and even allowing you to do approvals between those environments to make sure that your code has safely landed in the target environment. If I go back in, for example, and look at one of these releases, we'll be able to see that I'm actually deploying this application into Kubernetes. If you wanted to use Helm, you could.
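A kubectl-based deployment stage along those lines, skipping Helm, might be expressed roughly like this. The manifest path and image names are hypothetical, and the Kubernetes service connection inputs are omitted for brevity:

```yaml
# Hypothetical release steps: apply the manifest, then roll the image forward.
steps:
- task: Kubernetes@1
  displayName: kubectl apply
  inputs:
    command: apply
    arguments: -f manifests/deployment.yaml
- task: Kubernetes@1
  displayName: kubectl set image
  inputs:
    command: set
    arguments: image deployment/myapp myapp=myregistry.azurecr.io/myapp:$(Build.BuildId)
```

Tagging the image with `$(Build.BuildId)` ties every deployment back to the exact build that produced it.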
I'm a newbie when it comes to using containers and Kubernetes, so I just went ahead and used some kubectl commands to get my code into the cluster. As you can see here, I did a kubectl apply and another kubectl set, and then I was able to deploy my code. If I go back in here really quick and edit it, you'll be able to see that if I look at my task, I can see exactly what it was doing. Before that, I was playing with infrastructure as code, so I was actually able to take an ARM template and deploy my entire Kubernetes cluster into Azure before it even existed, which is really nice as well. I could come in here and make some really quick edits. The task is so well written that even your pull secret is automatically handled for you. Again, this allows you to take your code from the fingertips of your developers and put it into the hands of your users using Azure Pipelines. So that just shows you some of the power that we have inside of Azure Pipelines, for any language and any platform. All right, Jamie, show us some more stuff.

Thanks, Donovan. So for the last six years or so, we've been on a journey here at Microsoft that we often talk about internally as "the new Microsoft." It really starts at the top, but at this point it's percolated throughout the entire company. And as someone who's been here for 20 years, this has been a really exciting time to be part of Microsoft. A big part of what it means to be the new Microsoft has been the embrace of open source. If you look over the last six years, the amount of things that we've been doing in this space continues to go up year after year after year. It starts with simply making sure that we're embracing the projects that the community has chosen to embrace. A great example of this would be Kubernetes on Azure. Another example, one that I was directly involved with seven years ago, was Git, right?
It seems obvious now, but back then it wasn't such an obvious question whether we should embrace Git or whether we should try to compete with it. We chose to embrace it, and now, seven years later, almost all of the development that happens at Microsoft happens in Git, including Windows. Just step back and think about that for a minute: the Windows team uses the source control system that Linus built to build Windows. It really is a very new Microsoft. Another example is open sourcing more and more of the products that we deliver. VS Code and TypeScript are great examples, and Azure Pipelines itself has core parts of its infrastructure open sourced. So I'm excited today, because we're gonna be able to add another item to this list, and that is free CI/CD with Azure Pipelines for any open source project that wants it. This means any open source project can use Azure Pipelines: you get unlimited build minutes, up to 10 concurrent jobs running at the same time, and access to our Linux, Mac, and Windows pools. We use the exact same infrastructure for open source that we use internally for our own builds and that we use for our customers. This means open source gets the same quality of service that we give to everyone. We also want it to be really easy for projects to get started, and that includes open source and really all of them. Since most open source projects live on GitHub, Azure Pipelines is now part of the GitHub Marketplace. This means that you can discover, configure, and even pay for Azure Pipelines through the GitHub Marketplace. So if you already have a billing relationship with GitHub, you don't have to set up a new one with us. And again, the key theme of all of this is making the lives of developers easier, and if you've got one less payment vehicle to manage, your life just got a little bit easier. I'd like to show you that in action. So let's switch over here and I'm gonna show you Azure Pipelines as part of the GitHub Marketplace.
So I'm looking at Azure Pipelines. I can configure my plans, or I can set up a new plan. If we scroll down, we can see that you can configure a free plan, and like I said, it's just free for open source. We also have a free plan for private projects: you get up to 1,800 free minutes if you wanna use it for private projects. You can add more parallel jobs so you can run more of them at the same time; you can do that for $40 each. And again, you can configure this and pay for it right through GitHub itself. Let's go ahead and choose the free offer and set this up. We're gonna configure this in the Raleelabs GitHub organization. Now what's gonna happen is we're gonna consent to grant Azure Pipelines access to the repositories. It could be all the repositories, or it could be individual repositories. In this case, we're gonna grant access to just some repositories. Then we'll go ahead and create a new Azure DevOps organization, and what this is gonna do is set me up with everything I need to use Azure Pipelines and Azure DevOps. So it's gonna create our organization, then set up our first project, and land me in an experience where I can configure my first pipeline for whatever repo I want in that GitHub organization. So let's use the node container repository. Once I select that, we'll go and analyze that repository to see what's inside of it. We see that there's a Node app in it, so we're suggesting a whole variety of different Node templates that we have out of the box. But we also saw that there was a Dockerfile, so our default recommendation is to use our Docker template. Now, Donovan mentioned config as code; we're gonna use that in this situation. And this is the definition of my build process. It's simply saying it's gonna use our Ubuntu pool, and then it's gonna go build our Docker image.
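The generated Docker template being described boils down to a very small file; something along these lines, where the image name is a placeholder rather than the one from the demo:

```yaml
# Sketch of the suggested Docker template: hosted Ubuntu pool, one build step.
pool:
  vmImage: 'ubuntu-16.04'

steps:
- script: docker build -t node-container:$(Build.BuildId) .
  displayName: Build the Docker image
```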
I could go and modify this process, but I'm just gonna use the default, save it, and we're gonna check it into a new branch. What this is gonna do is create a pull request on the GitHub repository to add this YAML file to the repository, which will configure Pipelines and ensure that all changes that go into that repository from now on are validated. Then it's gonna kick off the first build to validate the actual change that adds Pipelines to this repository. As that kicks off, we can jump back over to the GitHub repo, and we can see that there's now a pull request. Going into that pull request, you can see that the file we're adding is the actual pipeline YAML file. Going back to the conversation, we can see that Pipelines is in the process of validating this change itself. We also support the GitHub Checks API, which is a rich experience for showing the status of all of the checks that are hooked up to this repository, and here you can see Azure Pipelines publishing its status into it. So this just gives you a taste of how easy it is to get started and the type of integration that we have with GitHub as part of Azure Pipelines. Now, as part of the run-up to the launch of Azure Pipelines, we've been working with a number of open source partners to do some early onboarding, and we've been really enthused by both the feedback and the reception that we've gotten. So I thought it'd be interesting if we invited some of them to come join us. We have folks from a number of them; the first is GitHub Desktop. So we have Phil Haack from GitHub, and he's here with Donovan to share his learnings. Donovan?

Thanks, Jamie. So again, we have Phil here from GitHub. Tell me what you do at GitHub.

Hi, I'm Phil Haack, Director of Engineering at GitHub. I am in charge of the client applications team.
So we're the team that builds all of the software you use outside of GitHub.com, such as GitHub Desktop; Atom, which is a text editor; and Electron, which is a framework for building cross-platform apps using web tech. And then we also have a team that builds extensions for third-party editors, such as Visual Studio, Visual Studio Code, and Unity.

Awesome. So I would imagine, managing that many products, something as crucial as CI has to be important. What does CI really mean to you, and what does it bring to your development?

I think one of the best ways to understand CI, especially when it comes to open source projects, is to imagine a world without it and how people would collaborate on open source. So imagine someone named John comes along, approaches a GitHub repo, and thinks, "I want to contribute to this." He fixes a bug and pushes the code up. Now Egret, who's one of the project maintainers, comes along and notices that this person submitted a fix. "Let me try it out." So she pulls the code down, hits build, and it doesn't build. Oh, darn it. So she writes a comment: "You know nothing. Can you fix the build?" The next day, John's back, sees that, fixes the build, and pushes it. Egret takes it down the next day, runs the tests, and realizes the tests fail, because maybe John only ran them in the dev configuration, not the prod configuration. What continuous integration brings is that it really tightens that feedback loop, so that rather than John having to wait for some human to look at it, he pushes up the code, all the tests run in all the proper configurations, and he gets immediate feedback. It can run your tests, your static analysis, your linters; that way you take the grunt work out, really tighten that feedback loop, and enforce your project standards and all of that. And that's one of the beauties of CI for an open source project.
Yeah, I remember the days when I used to come into work, do a "get latest," and the build was broken because someone had broken it and left, right? And then that person has to bring donuts the next day. When we had CI, it was really cool, because it would point the finger at the person who owed us donuts. But you also knew not to go do a "get latest," because you had that signal saying the build is broken, which protected us that way as well.

Yeah, a lot of people would try to do the whole traffic light thing in the office. Right, and now they're doing Raspberry Pis with LEDs to show you whether it's okay or not.

So there are a lot of CI systems out there. We've been doing CI for a while. What is exciting you about Azure Pipelines in particular?

The thing that most excited me about Azure Pipelines when I first heard about it is the cross-platform nature of it. With Electron apps, like I mentioned before, you're targeting Windows, Mac, and Linux, and that means that you often have three different CI providers, each of them with a different YAML file. That becomes a bit of a maintenance headache. With Azure Pipelines, we could have one provider with one YAML file and have it build on all three targets.

Absolutely. I always get on stage and say "any language, any platform," and people think I'm bluffing. I'm like, no, look at our queues, right? Any platform you need is in there: Mac, Linux, if you're doing mobile, if you're doing containers. That's amazing. And it was interesting: I was going around GitHub the other day, and every time I'd go into a repository, I'd see three and four YAML files. I was like, I don't understand, why are people doing this? Don't they know that they can get one YAML to rule them all and get all the platforms?

Well, I mean, that wasn't an option until not too long ago, right?

That's true, that's true. And it's particularly nice.
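The one-YAML-file, three-platform setup Phil describes is typically expressed as a job matrix over the hosted pools. A sketch, with illustrative build commands rather than GitHub Desktop's actual ones:

```yaml
# One definition fanned out across Linux, macOS, and Windows hosted agents.
strategy:
  matrix:
    linux:
      imageName: 'ubuntu-16.04'
    mac:
      imageName: 'macos-10.13'
    windows:
      imageName: 'vs2017-win2016'

pool:
  vmImage: $(imageName)

steps:
- script: |
    yarn install
    yarn test
  displayName: Build and test
```

Each matrix entry becomes its own job on its own hosted agent, so one checked-in file replaces three separate CI configurations.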
The other thing about Azure Pipelines is that, through your generosity, you're offering it to open source projects for free, and we have millions of open source projects on GitHub. I think it's a great option for them, especially for Electron projects that really want to target all three platforms.

Cool. So you said you'd show me what you did?

Oh yeah. So here we have GitHub Desktop, and this is our Git and GitHub GUI client. It's an open source project; we develop it in the open. If I scroll down here, you can see that we have badges for our builds, and this is the Azure Pipelines build. Then let's take a look at pull requests. These are submissions by the core team as well as people outside. So let's say I'm looking through here and I see, oh, here's a submission from an external contributor, and the build failed. Let's investigate. So I click on that. Hey, thanks, Damon, for contributing. I'm gonna scroll all the way down to the Checks API, and we can see that there are four failing checks. When I expand that, we can see each of the individual checks, and I see that the Azure Pipelines build is failing. So I can click Details, and it takes me right to the logs for this build. You can see that it's building on Windows, Linux, and Mac, and it failed for all three platforms. If I scroll down here on the right, I can see each of the steps of the build and how long those steps take. And here I can see, oh, the linting had an error. If I click on the error, I get the full log output, so it helps me see that I need to delete some strange characters from somewhere. And now I know how to fix my build. If I wanna go right back to the pull request, I click there, and I'm back where we started.

Awesome. And this helps because, as I said, I maintain open source projects too.
And this just saves me a ton of time, as you mentioned earlier, by allowing contributors to get feedback from the system without me having to stop what I'm doing, clone their repo, try to build their code, and find out it's broken. Like, oh my God, that was five minutes of my life I'm never getting back. Now, if the build fails, I don't even look at it, right? I'm like, great, you got the same notification that I did; go fix that, and then when you're ready, I'll go ahead and take a look at it. So it's saving me a ton of time as well.

That's exactly right.

Awesome. Phil, thank you so much for coming and showing us how you at GitHub are actually using Azure Pipelines. All right, Jamie, go ahead and take it back.

You know, what really excited me and resonated with me from that conversation was the notion of being able, with one product, with one pipeline, with one YAML file, to validate across all three operating systems, and not have to deal with managing any of those machines yourself. That's a pain point I hear again and again and again. So the next project we're gonna look at is in the Python space. Steve Dower is here with Donovan. Donovan?

So I wanna go ahead and address the elephant in the room. I just heard Microsoft and Python. What in the world do you do here at Microsoft that has to do with Python?

So I've been here at Microsoft for about six, seven years now, and the entire time I've been working on Python stuff. A whole lot of the Visual Studio integration, the Visual Studio Code support for Python, and a whole lot of the Azure services support for Python has come through my team or out of my team. So I've been working on that for years, and I'm also a core contributor to Python itself. I'm one of the team of volunteers distributed around the world who work on CPython, the reference implementation, and design and build the language.
So how did you start to use Azure Pipelines when it comes to Python?

So I've been using Azure Pipelines internally for years. All of our products are built on it, as we've already heard; Visual Studio has been building on it for a long time now. So I had a lot of experience getting pipelines up and going with that. Then I saw the cross-platform support was coming, because early on that wasn't there, and now it's here. I started looking at that, and I'm like, oh, I could be doing the Python builds on this across all the platforms with a single setup, right? And then the open source offer comes along, and it's like, oh, unlimited minutes and parallel builds. Yeah, let's get Python running on this. So I just kind of went out and started doing it.

Cool.

And just to put a bit of context on that, I've got the Python homepage up here. We can see we've got three badges going for the Linux, macOS, and Windows builds, but Python runs on so many more platforms than that. If I run over to this, this is our existing Buildbot site; it runs tests against all of the configurations that Python supports on every single commit, and you'll see this is a really, really long list.

Sure.

And so the value in being able to get off all of these manually managed configurations and onto cross-platform continuous integration in a single service is really appealing. So I did that for the main platforms, and we have all of these pull request builds and commit builds running now for the main platforms. And that was really just: I went to the dev guide that we have, where we have all the instructions for all the platforms and how to build, and all the instructions for the existing continuous integration systems, and I just pulled them over into that visual designer that you showed earlier.

Sure.
And I just got them running in there, viewed the YAML, exported it out, checked it in, and now it's in the repository, and we've got all these builds running out of the Python repository as config as code.

Now, what I've noticed is that we offer a lot of hosted agents on a lot of different platforms, but we don't offer all the platforms that you just showed there. So how did you tackle the fact that you needed to run on even more platforms than that?

So, so far this is not the 100 different configurations. We're not at that yet, but the potential is there, because Azure Pipelines supports private agents. I can set up any machine, anywhere I want: a virtual machine in the cloud, a physical machine on my desk, an old Raspberry Pi sitting under the chair, wherever it happens to be. I put the Pipelines agent on it, and the agent is .NET Core, so anywhere .NET Core is gonna run, I can run that thing. If it's connected to the internet, I can start running builds on that machine from the Azure Pipelines service, and so it's all still going through the one place. One place where I'm already doing that is actually the Windows release build. One of my jobs for CPython is doing the official releases, so the python.org downloads for Windows: built by me, code signed, published. All of that is one of my jobs as a volunteer, and it's been a manual process. I log into a VM and type all these commands and sit there and wait for it to finish, then do the publish. I wanted to automate that, so I put all of it into a pipeline build, which you can see here is basically my commands. This is not running on one of the hosted agents, because it's got some special requirements. We do profile-guided optimization on every release build, so we want a much more powerful machine with faster CPUs to get through the training that much quicker.

Gotcha. Just to save us some time.
There's also code signing. So every single binary in the Python package is signed with an Authenticode certificate under the Python Software Foundation name; it's valid on every Windows machine in the world, basically. That's a high-value certificate, right? I don't want that bouncing over the internet every time I do a build. That's locked down on a private virtual machine that I have running on Azure. Here's my queue, or my pool. You can see it's offline at the moment. I don't even turn the machine on if I'm not building anything. It's encrypted at rest, it's encrypted while it's running, it's BitLockered the whole way through the machine. And so I just log into the Azure portal, start this machine up, queue up my builds, let them run, shut the machine down, and we have a custom secured machine that no one else has access to. No one's making one of those pull requests where they just download all your certificates and send them off to their own site. Absolutely. Because we never even run pull requests against this machine. So having that private agent there as an option to be able to expand beyond not only the platforms that we provide, but also very unique secure scenarios like the one that you just described, where you're able to know there's no way anyone's getting to that file, because it only exists in one place and that one place is encrypted to the nth degree, but I can still use that machine to run my builds, which is incredible. Yeah, and so as far as getting CPython building, there are more integration steps to do, like there's a whole lot of cool features that we're not using on Pipelines yet, but the flexibility is there, the potential is there, and so I'm really excited to keep building on that, and I'm going to be building on more of this this week. So if people log in at the end of the week, then they're probably going to see changes to this already.
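As a rough sketch of how a private agent plugs into a pipeline definition: in an azure-pipelines.yml you point a job at a named agent pool instead of a Microsoft-hosted image. The pool name and the scripts below are illustrative, not the actual CPython configuration:

```yaml
# Hypothetical sketch: one job on a hosted agent, one on a private pool.
# 'WindowsReleasePool' and the script names are made up for illustration.
jobs:
- job: LinuxCI
  pool:
    vmImage: 'ubuntu-16.04'      # Microsoft-hosted agent
  steps:
  - script: ./configure --with-pydebug && make -j2
    displayName: Build
  - script: make test
    displayName: Test

- job: WindowsRelease
  pool:
    name: 'WindowsReleasePool'   # self-hosted pool; the signing VM registers here
  steps:
  - script: Tools\msi\buildrelease.bat
    displayName: Build and sign the release installer
```

Because the agent initiates the connection out to the Azure Pipelines service, the private VM can stay locked down with no inbound access, which is what makes the code-signing scenario described above workable.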
Yeah, but this is, for folks who want to understand, the engine; most of the code that actually drives Python is actually in C. But what if you actually want to build a Python app? I mean, are you able to do that just as easily? Yeah, one of the newer things that's showing up in Pipelines with this big release right now is a whole lot more Python support, and we actually reached out and got a couple of projects on board. They were very excited to do it: tox, pip. Python developers will be familiar with these, because they're very well-known projects. But I want to have a look at tox actually, because it's one of the more thorough and complete integrations that they've done, and there's some really cool aspects to it. So tox is actually a tool that is used in CI systems. It lets you specify a configuration file with all the build steps and run that across your full matrix of target platforms. So with a Python project, normally you care about Windows, Mac, Linux, you care about Python 2.7, 3.3, 3.4, 3.5, 3.6; there's a big matrix of things to test against. tox helps you with that. It lets you write one set of instructions. If I pull up their file, you can see there's a lot of environments that they care about, and they have all of these steps, and this was already there. This was already there in tox. People have this file. So when they came on board to Pipelines and started using it, they set up a build where they wanted to reuse that file. They didn't want to go, oh, we have to rewrite everything in a completely new form. So if I pop open their most recent build, which I've got here, we can see that... in fact, let me jump to their YAML file, because they made a YAML file. I think that's, you know, they want everything in code. Yep. And it's not even as long as the other one. There's a few steps in here, but essentially all it does, if I look at one of these examples, is they're picking a version of Python. We have them there.
We let you choose the version you want at the start of each job. Okay. Then they install themselves. They install tox. And then they use tox to test tox. But what they have done is they've actually inverted it. So normally you'd say, run tox, run everything, and it does all of the platforms, all the configurations in one go. They've inverted this. So they're actually using our matrix support. Okay. So they have the matrix here of all the versions. And then they use tox to run each one and let us do it in parallel. Gotcha. So if I jump to the build and have a look at the logs, you can see all of these jobs down the side, which is every configuration they care about, all the operating systems, different queues for each one of these. So this one ran on Mac. This one ran on Windows. Different versions. Each of these runs in parallel, because we'll run them in parallel. Each one is running tox, just a single environment inside it. So they're actually getting the same result as they would have if they were running tox on a single machine, but it's all run in parallel. Some of the cool stuff they've done here: these guys are really good at using kind of the standard tools that exist for Python. So they use the standard code coverage tool that most Python projects use. This generates a Cobertura file, which we can upload and give you the summary right here. And so that integrates nicely with Pipelines, because we understand that file format. Tests, they run in pytest, which outputs JUnit XML. Perfect. They then push that up and you get the summary here. If I jump over to the test section, we get all the results. They have good results right now, so let me clear that filter. And all of the test results for their process... sorry about the size here. That's fine. Here we go. We can see all of the tests that have been run, and if any of these had failed, then we'd get the information from that.
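Putting the pieces just described together, a reduced sketch of that inverted tox setup might look like this. The matrix entries, file paths, and tox environment names are illustrative rather than copied from the real tox pipeline:

```yaml
# Hypothetical sketch: let Azure Pipelines fan out the matrix and run
# tox with exactly one environment per parallel job.
pool:
  vmImage: 'ubuntu-16.04'

strategy:
  matrix:
    py27:
      python.version: '2.7'
      tox.env: 'py27'
    py36:
      python.version: '3.6'
      tox.env: 'py36'

steps:
- task: UsePythonVersion@0            # pick the interpreter for this matrix leg
  inputs:
    versionSpec: '$(python.version)'
- script: python -m pip install tox
  displayName: Install tox
- script: tox -e $(tox.env)           # run just this one environment
  displayName: Run tox
- task: PublishTestResults@2          # JUnit XML emitted by pytest
  inputs:
    testResultsFiles: '**/junit-*.xml'
  condition: succeededOrFailed()
- task: PublishCodeCoverageResults@1  # Cobertura file from the coverage tool
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '**/coverage.xml'
```

Each matrix entry becomes its own job, so the publish steps feed the per-configuration test and coverage summaries shown in the demo.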
So all they've really done is taken their existing build and test tools and switched them over. They've inverted the matrix a little bit to run on Pipelines, and they get all this really nice integration. They haven't had to rewrite their entire system. They get to keep using the standard Python tools that are already out there, and they're really just bringing over what shell command we should run at each step. And it's really nice to not have to learn something new, yet you get this really rich, first-class experience, like you showed me the code coverage and the test results, and it's integrated into your summary. It's not as if, yeah, we know how to run your stuff but we don't know how to display the results to you; we do, which is really amazing. You can now use it, again, any language, any platform, but you don't sacrifice anything when you do, right? We don't care. Whatever you wanna bring us, we're gonna build it for you and help you deploy it. So I really enjoyed this, and I'm really glad that you came and shared with us how Python is actually able to leverage Azure Pipelines as well. So thanks again for coming, and Jamie, back to you. Thanks, Donovan. So the last project we're gonna look at is Visual Studio Code. I have Amanda Silver here with me from the Visual Studio Code team. And Amanda, everywhere I go, I see people using Visual Studio Code. Developers really of all types have just embraced it in an amazing way. You must be thrilled. Yeah, I mean, I joined the Visual Studio Code team about two years ago. And since its release in 2016, it's actually become one of the most beloved code editors that's out there on the planet. And part of the reason for that is because we have a really rapid cadence of updates and releases that have new features. And the way that we can support that is we actually have a lot of community contributors. We have over four million monthly users of VS Code, and we actually have one of the...
It's one of the most popular open source projects out there on GitHub in terms of people contributing to it. We've had about 15,000 non-Microsoft contributors to the VS Code project. Wow, that must take a lot of work to take that many contributions. Yeah, it is. It's actually a lot to manage. And our development team definitely says that one of the most challenging things to deal with is actually code reviews. And in fact, even when we interview other developers who are users of our tools, they also say that code reviews can be really, really painful. Senior developers are just bombarded with requests for reviews. And when they actually go in to do a review, it's really hard to figure out kind of where to focus. And even further, you're oftentimes using a web editor or something that's kind of not your usual tool set. So we've been working with the GitHub team over the last couple of months to see if we could address some of these problems. And actually, we've just released a new version of VS Code that has some new APIs in it. And the GitHub team just released a new extension for PRs that allows us to kind of have a PR experience directly in VS Code. So that should make it a lot easier. Awesome, let's check it out. Sure, okay. So what you can see here is I have my VS Code editor right here. And you can see that I have all the colors that I expect, all the theming that I'd like to see. I have the GitLens extension here, which is a super popular community-contributed extension in the VS Code community. But I might have a pull request to do. And so if you go to the source control view right here, in addition to kind of seeing the usual changes viewlet, I now have a new viewlet in here called GitHub Pull Requests. And I can further just go ahead and expand that and see that there are some new pull requests that are waiting for my review.
So what I can do here is expand that as well and even look at the description, now right in the context of VS Code. Now what I probably wanna do is then actually go look at the code changes. So I can come in here and I can see a diff view right here in VS Code, which is pretty awesome. And that allows me to do that kind of first-glance review that I might wanna do, looking at the two diffs. But I get the colors and theming that I'm used to for the TypeScript code that I'm looking at in this case. If I wanna do a deeper review though, what I can do is go back to the pull request here and go ahead and check that out. And what that's going to do is it's actually going to switch VS Code into a code review mode and bring all of those changes locally. So now I can go ahead and look at the changes in that pull request, go to the files, look at that same file, and I'm going to just expand this so that in the full kind of view, I actually get all of those same extensions completely running in this code editor right here. And so what you can see here is that I actually get a squiggle. And the reason that I get that squiggle is because all of the extensions are running on my local copy of the changes, so the TypeScript compiler is actually statically analyzing the code here. So if I mouse over this, we can see that this variable is declared but its value is never read. So what I can do is then just go ahead into the gutter here, just hit the plus, and make a comment: looks like this isn't used. And go ahead and add that comment. And then the other thing that looks really weird to me is these hard-coded values. So again, I'm going to just add another comment here and say hard-coded values, and go ahead and add that comment. And then I can go back to the description page for this pull request and just go ahead and add a top-level comment: left a few notes.
So add that comment, and now what I can do is actually go ahead and look at this change directly in GitHub, and what you can see is that the comments that I added are still there, just right here: hard-coded values, looks like this isn't used. Immediately, what I just showed is now in GitHub. Now even further, if I go back to VS Code, what I can then do is go ahead and just exit review mode, because I'm done with this review, and just go back to my normal coding, go back to my normal view. But now what I want to do is go ahead and look at that pull request one more time just to make sure that the build passed, and that's really where Azure Pipelines comes in. So if I look down here, what you can see is I can go directly to the VS Code build, and this will bring me directly into the Azure Pipelines experience. And what you can see here is that, for the VS Code project, we really love Azure Pipelines, and the reason is because it allows us to build for Windows, Linux, and macOS all simultaneously, using all the same kind of infrastructure, all the same scripts. And so what we can see is that we have all of those builds, and they all unfortunately failed. So let's just go ahead and look at one of these, compile sources. And what you can see down here is, yeah, we also have: the zoom level default is declared but its value is never read. So, you know, while that error would have been picked up in the CI, which Azure Pipelines really would help with as well, now, because I could run that as part of my review, it might not even get to that level. I might actually be able to say, hey, you know, before you actually merge this, you probably should fix this error. This looks really cool. I can see how a lot of folks are gonna get a lot of benefit out of doing this, and it's gonna make their review process, and just a lot of what they do every day, a lot simpler.
And I'm happy to hear that Azure Pipelines is able to make the actual development of VS Code simpler in and of itself. Yeah, we really appreciate it. And definitely our development team, I think, is going to, you know, feel super powered by having Azure Pipelines behind them. If other folks wanna try out the GitHub pull request review extension, it's actually available with the latest VS Code and in our extension gallery. So they just need to search for GitHub pull request. Awesome. Thanks a lot, Amanda. Cool. Thanks, Jamie. So we've talked a lot about open source projects, and, you know, the thing about Azure DevOps is that it works great for open source, but really for all types of applications, from the smallest to the largest. We have organizations of all shapes and sizes using it today. Shell is an example of a large organization. They have over 2,800 developers using Azure DevOps all over the world. Hawaiian Airlines, when they moved to Azure DevOps, was able to improve their build times by over 400%. You know, because Azure DevOps is a cloud-hosted service, getting up and running is really simple. So Accenture is able to spin up new projects in no time with Azure DevOps. Now, really the largest of all organizations that use Azure DevOps is Microsoft itself. We have over 80,000 people that use it every month to ship our software. And the numbers are just staggering in terms of the scale. We do over 4 million builds each month. We have 500 million tests executed every single day. Every single day, we have over half a million work items that get updated. And the number that I find the most satisfying is that 78,000 deployments are done with Azure Pipelines every single day, which means there's a much smaller window from when we write code to when it gets to you, which is better for you. And frankly, it makes our developers happy too, because they have a much shorter wait before what they've created gets in your hands.
So I thought it'd be interesting to actually show you how we use Azure DevOps on a regular basis. So of course we use Azure DevOps to build Azure DevOps. So Donovan is gonna give us a walkthrough of our actual engineering system, the actual system that our engineers are using right this minute to build Azure DevOps. So Donovan, let's have a look. Thanks, Jamie. I love that inception kind of thing that we have, where you actually use the product to build the product. And we're not the only ones inside of Microsoft that use it. We obviously have the Windows team, some of the Xbox teams, and everyone's moving to what we call our One Engineering System. So what I thought it would be cool to do is take you through the day in the life of an engineer on the Azure DevOps team and what they experience to get their work done, using the product that they build to build the product that they use. So here we have a dashboard. Imagine you're coming into work and you see this dashboard on your plasma screen or on your Surface, and you can see exactly what your team is working on, what sprint you're currently in, how many days are left. You can go and see what work is currently assigned to you, how many bugs you have out there. You can see the health of your builds, your team members, what features you're working on, and make sure that you're focusing on the most important things first. This dashboard is completely customizable. Yours can look completely different. We have a great library of widgets that you can use to build it, and what they're able to do here very quickly is see exactly what they're supposed to be focusing on so that they can deliver the highest level of value. Not only do we have great dashboards, but we also have product backlogs. Backlogs are a prioritized list of all the things that you want your software to do.
You can simply drag and drop items from here to assign them to a particular sprint, so that you can start doing some sprint planning as well. I won't drag anything now because, as Jamie pointed out, this is literally where we're building the product, and I don't want to assign work to the wrong sprint. I love the fact that I can expand this and get a really nice view of exactly all the work necessary to turn one of these ideas into a working piece of software. We also have the ability to have Kanban boards. This allows you to visualize the movement of an idea from creation all the way to being done and actually being run in production. And done for us means it's actually being monitored in production, so that we can learn from the telemetry and decide if our priorities are in the right order and go back and use that data. We call it monitor and learn. Again, this is a very rich user interface where we can simply drag and drop. These tiles are already assigned to people. And one of my favorite things that we can do here is actually create a branch right from the board for the work that we want to go off and create. So now I'm using Git. We're using feature branches. I have a feature I want to go implement. I don't have to go off and create the branch separately. I can create the branch right here from this board. And what ends up happening is that work item now becomes associated with that particular branch. And that traceability lives throughout the entire course of this work. So not only do I have the branch tied to the work item, every commit is also associated, every CI build that is triggered, every release that is deployed is all traced back to this particular work that I was doing here. I like being able to see every line of code that I changed so I can do test impact analysis. I don't have to go running around to find it. The system actually gives it to me, simply by my creating a branch.
Once that branch is created, I go off and do my work, and here's our Git repository. And you can see on the left-hand side how all these branches start to come back together. This allows me to visualize the branches that were created, the pull requests that merged back into master, to make sure that we're delivering on our goals. We don't just go merging back into master willy-nilly. That could be a recipe for chaos. So what we do instead is we use a process called a pull request. A pull request is an opportunity for our peers to review our code before it gets merged back into master, because master is the law. This is our golden master, and we wanna protect that, and we protect it through a process called a pull request. If I were to come over here to the pull request tab, it shows me all the pull requests that are currently running. A pull request is a chance for the engineers to review each other's code. It also has the ability to run tests and run builds against the code as well. If I were to drill in on one of these, for example, I'll just pick a random one here, we'll be able to see everyone who's involved in the commit. We'll be able to see everyone who has made changes, what lines they're on. I can actually leave comments on each individual line, discussing with the engineer what's good and what's bad about these particular changes. If I wanted to, I could see if there are any conflicts, the commits, and we also have policies applied to different branches. And the policy that we have applied to our master branch is you have to survive a build. And in that build, we run a lot of tests. I believe, if I remember correctly, we're running somewhere near 83,000 unit tests every single time you do a pull request. And if a single one of these fails, your entire pull request gets stopped. So we wanna make sure there's a high level of quality. Because these are unit tests, we can actually run 83,000 of them in less than 20 minutes.
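In YAML form, the build that such a branch policy requires could be sketched something like this. The script names are hypothetical, and the policy itself (requiring the build to pass before a pull request completes into master) is configured on the branch in the web UI rather than in this file:

```yaml
# Hypothetical sketch of a pull request validation build.
trigger:
- master                  # CI on commits that land on master

pool:
  vmImage: 'ubuntu-16.04'

steps:
- script: ./build.sh      # made-up build script for illustration
  displayName: Build
- script: ./run-tests.sh  # made-up; stands in for the ~83,000 unit tests
  displayName: Run unit tests
```

With the build validation policy attached to master, the same definition runs against the merged result of every pull request, and a single failing test blocks the merge as described above.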
We're getting that signal back to our developers very, very quickly. And because they are supposed to pass... we've all had flaky tests in the past, and people start to ignore those. We worked really hard to make sure that these are solid, they're fast, and they're reliable, such that if a single one of them fails, no one ignores that. We go immediately and figure out what failed, and we fix that code, and we start this process over again to ensure we only ship the highest quality code to our customers. And to ourselves, because we are our first and biggest customer here at Microsoft. Running all these unit tests allows us to ship the highest quality code to our end users. And it doesn't stop there. We continuously run tests against build after build, release after release, to make sure that we only ship high quality code. And it behooves us to do this because we're the first people to get the code that we ship. We don't put it on to our customers unless it has survived time with us here at Microsoft. When you use the tool that you build to build the tool that you use, you cannot risk having bad quality software, because it could stop everything that you're doing. As a matter of fact, here is our deployment plan. And what's really cool is we actually use safe deployment here at Microsoft, which means we deploy it to a production environment and we let the code sit there for 24 to 48 hours. And it has to survive our telemetry and our monitoring and our daily use before we then allow it to deploy to another environment. This first environment, which is identified here as ring zero, is actually where the Azure DevOps team actually works. So when we're pushing out new features, we're the first to feel those features. If we make a mistake, we're the only ones to feel the pain of that mistake. We go back in and we correct it. After we let it sit here and we monitor the telemetry, we check for any new work items that have been logged for particular bugs.
We make sure that it's healthy and can sustain our traffic. We then promote it to the next ring of our deployment. And we rinse and repeat this until it gets all the way out into production for all of our current users. So this is how we use Azure DevOps to produce Azure DevOps for our end users. Thanks, Donovan. As you saw, Azure DevOps is really a full end-to-end solution across the entire DevOps chain, with great traceability from the start all the way through to the finish. It's highly scalable, from the smallest team all the way up to the largest team. It's enterprise-ready. You can run in whatever geo you want, so you can keep the data close to home. You can run it in our cloud, where we'll take care of everything for you. Or you can run it on-premises with Azure DevOps Server, which lets you install and manage the Azure DevOps services however you want. You really have a choice between a public and a private cloud. Now, Azure DevOps is the evolution of Visual Studio Team Services, which is why we're able to capture so many of the learnings and innovations that we've had over the years. And this evolution is really based on the feedback that we've gotten from customers. And that feedback is about choice. You know, lots of folks want to be able to use the full end-to-end solution the way Donovan just showed you. But others want to have the choice of just choosing a particular part of it. Just using Pipelines, or just using Artifacts, or just using Boards, and putting them together however you want. And you're able to do that. So you can choose which parts of Azure DevOps you want to use and assemble them with other solutions. So for example, if you wanted to use Azure Boards to do your planning and GitHub to store your source control and do your pull requests, but use Azure Pipelines for your CI/CD, and take the artifacts that were produced and put them in Artifactory, and then use Ansible to deploy them to AWS or Google, great, you can do that.
You want to mix and match, you know, however you want. If you want to use Azure Artifacts instead of Artifactory, and Jenkins instead of Azure Pipelines, it's up to you. You can assemble these solutions in whatever form makes sense to you. Now, this is really an evolution and a broadening of the Azure ecosystem. So Azure DevOps now provides Azure with a new set of services to help developers make their lives better. We're really making Azure a developer-first cloud. Azure already has very broad support. You know, there are hundreds of tools and technologies, many of them open source, that are part of the Azure ecosystem, from Terraform to Jenkins to Chef to Puppet, you name it. And Azure itself has a whole set of first-class services that are really important to the DevOps lifecycle. One category of those is really focused on telemetry, analytics, and insights. You know, when Donovan talked about what DevOps means to him and the definition of it, it's really clear that DevOps does not stop with deployment. You have to get telemetry on your application. You have to know whether that application is performing well, how folks are using it. You know, in this day and age, data is becoming a core part of how we build software. And Azure has a whole set of services that help you do that, from Azure Monitor to Application Insights and Log Analytics. They provide predefined defaults so that you know what thresholds you should be expecting from a high-performing application. They help you visualize all of this data that's coming in, in customizable dashboards. And they have infrastructure to help you separate the signal from the noise. Because as we move into this more data-driven world, there's so much data coming in, it can often be hard to really tell what the important signals are from all of the different noise. And of course, all of these services are highly extensible and work with existing processes and tools like ServiceNow.
And this includes, of course, Azure DevOps. So one of the really powerful things that you can do is take all of this telemetry and these insights and tie them into the actual workflow processes, in terms of how you work. And Donovan is here to give us a walkthrough of how you can do that. Donovan? Thanks, Jamie. Like you said, going back to the definition of DevOps, we want to continuously deliver value. You can't just randomly copy files to a server and assume that you delivered value. If no one uses them, you didn't. The only way that you know is that you monitor the application. And here, inside of Azure, we have an amazing offering that allows you to monitor not only the application itself but the infrastructure upon which it is actually running. I can make sure that my infrastructure is secure, see if there are any patches that need to be run. I can see my application health. I can configure all of that using Application Insights. In addition to that, Application Insights ties in really well with Azure DevOps and Azure Pipelines. Here, I can go to a dashboard that I've created. This is the actual Application Insights container where all the data from a Node.js application that I've written is actually being pumped in. I'm able to see if I've had any failed requests. Luckily, I have not. I'm able to see what my server response time is. If that starts to go up or down, I can go back in and make adjustments to my code, ship out a new version, and come back in and check these numbers to make sure that they're looking good. I also have availability tests running. It's been rock solid since I deployed this. I've had a 100% success rate. If for any reason one of my applications is inaccessible, I would actually get a notification saying, I can no longer reach your particular application, and I can actually test from several different locations.
So it'll test from all the different regions that we have available for you from Azure, to make sure that your app is accessible from everywhere. And if not, I'm able to go back in and make a fix for that. What I really like about this is that I can actually incorporate this information back into Azure Pipelines. So what I mean by that is, earlier I talked about how we use safe deployment, how we deploy our application into a production environment and we monitor it there. Well, that monitoring historically has been manual. People would go and run queries against our work items to see if any new bugs had been logged. Someone would go and look at a dashboard like the one I just showed you, to see if there are any spikes in our traffic or increases in the number of failures that we have. But we want to automate everything that we possibly can. And thanks to release management, I can do just that. I can come here and enable something called a release gate. A release gate is an automated way for me to take those tasks that I used to do manually and verify them as I move through my pipeline. Now, to add a new gate here, you'll see there are several different types. I can run an arbitrary function. So this is an Azure function that can go off and do whatever it needs to do, and then send back either a positive or negative response, letting me know that things are good or not. I can call a REST API. Maybe I'm deploying an API myself and I want to be able to call a few of those APIs, beyond just normal testing, but specific APIs that let me know the health of my particular system. And this one I like a lot: I can actually query Application Insights, which I just showed you, directly from Azure Pipelines, and make sure that we have a really healthy deployment and that everything is going as it's supposed to go. And if, and only if, it is, move on to the next environment. I'll show you how to configure one of those in just a moment. And we also have querying the work items.
This is the ability for us to go back in and run queries and see, have there been any new bugs logged or new issues logged against our software while it's being deployed? And if so, we probably want to stop our release and go look at it. If I were to go ahead and click over here on Azure Monitor, I just have to configure a few things. It's going to ask me for my Azure subscription, because this is where my Application Insights lives. This is where the telemetry is being collected. And it'll allow me to then find the exact application which I want to monitor. So I'll pick my Azure account here. Inside of my Azure account, I obviously have resource groups. The resource group I'm looking at has my Kubernetes cluster in it and also my Application Insights. Now I want to look at Application Insights information. One more time, I'm going to tell it the exact resource that I'm looking at, because I can actually have a different Application Insights container for every environment. I don't want all my telemetry going into the same place. I want to be able to know how my production environment is doing, which might be different than my QA environment and/or my development environment. As I go between one gate and another, I can go and review the previous environment's gates right here using Application Insights. Then I'll choose a particular alert. And if I have any failure anomalies during the deployment of my code, release management will be able to protect me and stop my release, and give me a signal letting me know that I need to go back in and investigate that. This is a potential bug that would have been able to escape into the next environment, had I not had an automated way for me to be able to monitor that information. So we're taking the monitoring, and we're automating the review of that monitoring for you. The final thing is having that single pane of glass.
It's great to have those really rich dashboards in Azure, but I want to be able to bring the data that's being collected by Application Insights directly inside of Azure DevOps. And I'm doing that here with a dashboard that I've been able to create. As you can see, I have tiles here for my dev environment, my QA environment and my production environment. I can see how many events have been triggered, I can see what my response time is, and down below I can see my team members, the health of my build and the health of my release. This, again, is taking all the power and the data that we need to make a good decision and have a really high-functioning piece of software, and we're surfacing it right here in Azure DevOps on a single pane of glass that we can use. So again, I hope you realize: any language, any platform, and everything that you need to turn an idea into a working piece of software. Thanks, Jamie.

Thanks, Donovan. So to pull all of this back together, Azure now has five new services that are gonna help you as developers to be more successful, to collaborate, to ship faster with higher quality. They work for any platform, any cloud, any operating system. You can use all of them together or you can use them individually; it's all up to you. They embrace open source: Azure Pipelines is free for any open source project that wants it. They work for projects of all shapes and sizes, from the smallest up to the largest, and it's really easy to get started. If you wanna get started, you can go to azure.com slash DevOps. And Donovan, I think that's about it. Did I miss anything?

Actually, I have a few questions, because I'm gonna go on tour here soon and I'm gonna be bombarded with questions. Now, you said it's free for small teams, but what if I'm not a small team? How do I pay for this?

Yeah, so for Azure Pipelines, if you're an individual doing private builds, you get 1,800 free minutes. If you'd like to grow beyond that, you buy units of parallelization.
You can run more jobs at a time; each one costs $40. So if you buy 10 jobs, you can run 10 jobs at the same time, at $40 each. Most of the other services you buy per user for about $6; you can go to the pricing page on our website and get more details.

Great. So obviously we've been using VSTS for a long time, and it looks like the URLs are changing. What is my experience gonna be as a current VSTS user moving over to Azure DevOps?

Yeah, this is gonna be a good thing for existing users. The transition is gonna be seamless and automatic, and existing customers will get to decide when they make that transition, so they can decide when the right time is for them to go do it. Okay. If you're using all of the capabilities of VSTS, all these Azure DevOps services, like you showed, work great together; they're not gonna lose anything, they're just gonna get more choice. So if they wanna set up a project and only use it for planning, and only have Azure Boards on it, for example, they'll have a much cleaner and better experience for doing that.

Sure, yeah, 'cause I've done that. I maintain a couple of open source projects, and all I needed were pipelines. It was really nice to be able to streamline my experience, right? And as the team gets bigger and as we grow, I think I'm gonna start turning back on some of those other things, but for now it's really neat to be able to just streamline it and focus on what it is that we wanna work on, which is great. But we talked about VSTS, and I always say that there's a twin to VSTS called TFS. How does that transition work?

Yeah, so TFS is now Azure DevOps Server. It's still the vehicle that we use to deliver all the value we provide in the cloud, on-premises. The way that we ship it and update it, and the way you use it, will continue to be the same. We'll continue to update it on the same cadence that we've been doing forever.
So it'll be, again, a nice seamless transition for TFS users as well.

Right, so that next quarterly update I get, it's just gonna magically change its name from one to the other? The next update of TFS will be Azure DevOps Server. Gotcha, awesome. Okay, so the last thing I wanna talk about is how do we get started if I'm not an existing customer, right? I'm brand new, I've seen this, I'm excited, I see the value of cross-platform, any language, any platform, and one tool chain. How would I go and get started with this?

There are a couple of ways. One, if you wanna use Azure Pipelines and you're a GitHub user, you can go to the GitHub Marketplace like we looked at earlier. Otherwise, I suggest you go to azure.com slash DevOps, pick the service that you wanna get started with and go from there. Awesome.

Well, I thank everyone for joining us, but be sure to join us also on September 17th. We're gonna have a live stream and a Q&A where you can ask some more questions and interact with those of us who helped develop this amazing product, who can answer your questions and make you productive on your DevOps transformation. Thank you so much for joining us. Thank you.