Hello everyone, are you tired of spending your valuable time setting up operational tasks? Well, if you are, this is the video for you. We are joined by Dean Wells, Principal PM Manager for Automanage, and we'll get right into it. Hey, how are you?

Hey, I'm good. How are you?

Very good, thank you. So Azure Automanage, we announced this at Ignite 2020.

We did indeed, yes.

Can you tell us about it? What is it, for those of us who don't know? And especially, why would we come up with that?

Okay, yeah. So we announced it at the tail end of 2020, at the last Ignite, in public preview. We're still in public preview for the Automanage VM Best Practices offering for Windows, but we also announced the preview of Automanage VM Best Practices for Linux. So now that same service, which is currently the main functionality of Automanage, is available regardless of whether you're running Windows or Linux, and the user experience is identical. So put a pin in that, and then to answer your original question: where did this come from? It took some time to get to this point. About three years ago, by pure chance at Ignite, my current manager, who back then was on a different team to me (we knew each other very well, we've been friends for a long time), suggested that he had an idea and he would find it useful for me to go and interview some people while I was at Ignite doing shielded virtual machine and virtualization security stuff. So I said, great, what's your idea? And he said, I wanna see if we can build some clever technology in Azure that eliminates the need for people to perform these mundane, repetitive, day-to-day, week-to-week, month-to-month tasks, where because it's so mundane and repetitive, it often introduces human error. I'm like, okay, give me a few examples. So he said, well, pretty much any management tool in Azure, I want to be able to automate it if the machine is healthy.
So if it's in a steady state, well, because it's Windows, we know that extremely well and we can even measure whether it's in a steady state. If it is, and given that we built Windows in the first place and know it so well, why can't we automate those daily operations completely? If it's in an unknown state where it's broken, let's not try and fix that, because we all know that the number of broken permutations is very, very large.

So the basics of the service: if there's a management service in Azure, typically a customer needs to discover it, onboard to it, configure it, monitor it, and then remediate it (oh, there goes my camera, there it is), all five of those things. They have to do that manually, which requires a browser, or awesome scripts that they've written in PowerShell or Azure CLI, or even leveraging our own APIs around templates. But nonetheless, it's still human beings doing the job. What we wanted to do was see if we could build technology where, with point, click, done simplicity, all of those things are taken care of for you, including the discovery piece, which means now you don't have to know what services there are in Azure. Then the onboarding: now you don't need to know how to onboard to that service. Then the configuration, making sure it's aligned with that service's best practices: now you don't need to know how to configure it per best practices. And then I think the biggest piece, because those first three, you only do those once, right? You discover, you onboard, you configure, and then hopefully it's doing everything it should be. But the biggest piece, where it becomes repetitive and human-labor intensive, is monitoring that the service is still working in the way that you expect it to on that virtual machine, and then if it isn't, remediating it. We do all five. By the time we get to general availability, we will be onboarding, configuring, monitoring, and remediating.
And obviously the discovery piece is kind of a requirement. It's 15 services for Windows and 14 for Linux. Ultimately it will become 15 for Linux as well, but one of the services right now doesn't support the Linux extension that we need. So the basic goal of the service is to take what today is a very manual process: point, click, know about that service. How do I back up in Azure? Let me search for backup in the Azure portal. Oh, there's something called Azure Backup, let's go and see what that does. Spend an hour reading up on how it works, what fees it incurs, all those things. Then configure it so that it meets your grandfather-father-son backup strategy, or whatever it is you do. And then from that point forward, you've got to keep monitoring it to make sure it did actually take a backup. And if it didn't take a backup, now you're the one on the hook to go and remediate it. We take over all of that work for the entire lifecycle of the VM. In terms of raw technical functionality, it probably supports Windows Server 2008 R2, I would expect it to work, but that's really a very old operating system, so we start with Server 2012. And then we work with most of the commonplace distros, such as RHEL, SUSE's SLES, and all of the others. And I've got all of that information for those of you that are interested in knowing about it.

How would we get... oh, I know: you could simply browse to aka.ms/automanage, and that would give you all of the details in terms of the Linux distros we support.

Yeah, and all of the links that we are talking about today, and all of the references, will be somewhere at the bottom of the screen as we talk about them, and in the associated blog post, as next steps and resources for you.

Yeah.

I have a quick question, because you mentioned that Log Analytics and onboarding for performance monitoring are all part of Automanage.
Is this using the current set of agents, or are we leveraging the Azure Monitor agent with the data collection rules that is currently also in preview?

So we're not planning on rebuilding anything. Right now we're built on top of the existing Azure agent and the extensions therein. Once the new agent is available, we're gonna switch over to that as gracefully as we can. And obviously we've got some technology planned to make that transition hopefully transparent, but obviously we've not built that yet, hence I'm gonna say hopefully and cross my fingers. So yeah, we don't actually do anything unique on the virtual machine. We don't need to install another agent, we don't even need to install another extension. All we're doing is pulling Pinocchio's strings in the cloud, based on the existing agent and extensions that are there. And if there's a new extension needed for, say, Azure Backup or Security Center or any of the other services that we onboard you to, that gets taken care of, so you don't have to worry about it again.

Okay, so that's perfect. And when you mentioned onboarding: currently, during the preview, it's only through the portal, correct?

Close. The portal is the primary experience, because we wanted people to learn the extent of the product's capabilities. Because if you think about it, do people prefer to go to a user experience, walk their way through it and learn, or do they wanna go and read a three-page white paper? Typically it's the former. I know I'm certainly in the former category: I would far rather go to a portal and figure out everything it does there, and then maybe read the white paper if this is actually of interest; I will typically do that second. So we built the portal experience, and we threw a great deal of effort into making sure it was point, click, set, forget. That was an important goal for us. That's done, that is now available, but you can also onboard using ARM templates.
So you can just do native onboarding there, and you can use Azure Policy. We have a preview of a built-in Azure policy that allows you to scope a policy to some number of thousands of VMs or whatever; obviously that scoping criteria is extremely rich. And then in that Azure policy, you simply choose one of the two config profiles that we offer. One is called Production, you can probably guess what that's for, and cleverly, the other one's called DevTest, which is for... not production. So that's why we have two, and the two different profiles basically offer different services. For example, Azure Backup: good for production, but it can incur a cost because you're storing backups. DevTest machines probably don't need to be backed up. So that's the reason for the existence of two config profiles. Basically it allows us to ask one question as opposed to 17, from which we infer the rest. So the portal experience is there, Azure Policy is there, and ARM templates are there. By the time we get to GA, hopefully in the next few months, which would be toward the third calendar quarter of this year, there will be Python, Go, PowerShell, and Azure CLI support in addition to those. And of course, again, native ARM template ingestion and all of those good things.

Okay, that's wonderful. You did mention the Production and DevTest profiles. I've had some questions from the community. One is: can those profiles be modified? So for example, in a DevTest environment, yes, those machines are not gonna live for very long, so I don't need to have them backed up, but I may need to have VM insights, Log Analytics, and performance counters collected, so that I can see that the application we're building is not creating a bottleneck on that VM. Can I modify those DevTest profiles?

So we get this question, in various forms, quite a lot. Right now these profiles, and here I'm going to use engineering terminology, are immutable and read-only.
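To make the two built-in profiles concrete, here is a small sketch that models them as plain data. The service names, settings, and structure are purely illustrative assumptions for this video's discussion, not the real Automanage profile schema:

```python
# Hypothetical sketch of the two built-in Automanage config profiles,
# modeled as plain data. Names and structure are illustrative only.
PROFILES = {
    "Production": {
        "Azure Backup": {"enabled": True, "frequency_hours": 24},
        "Azure Monitor": {"enabled": True},
        "Update Management": {"enabled": True},
        "Antimalware": {"enabled": True},
    },
    "DevTest": {
        # DevTest machines are short-lived, so backup stays off to avoid cost.
        "Azure Backup": {"enabled": False},
        "Azure Monitor": {"enabled": True},
        "Update Management": {"enabled": True},
        "Antimalware": {"enabled": True},
    },
}

def services_enabled(profile_name):
    """Return the set of services a profile onboards a VM to."""
    return {svc for svc, cfg in PROFILES[profile_name].items() if cfg["enabled"]}
```

The one-question-instead-of-17 idea is exactly this: picking a profile name selects a whole bundle of per-service settings at once.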
So from the customer standpoint, they are not something you can edit. Does that mean that's all you can do with the VM? No, you can augment it. So whether it's DevTest or Production, if we don't onboard you to a service that you want, you can go and discover, onboard, and configure it manually; we won't consider that a deviation from the config profile. If, however, you try to off-board from a service that is in our config profile, now you're contradicting what is measured as conformance to that profile. We will consider that to be drift, i.e. you've drifted from the configuration, at which point, kicking and screaming or politely, we don't care which, you're gonna be dragged back into conformance with the profile, whether you like it or not.

The feedback we got repeatedly was: I'm down with 12 of the 15 services that you're gonna release, but I cannot do these others. I'll give you a few candidates that come up. Azure Backup, because we've been using Symantec or some other backup tool forever and a day, and we've got it so well set up, I don't want to change that. The other one that we got a lot was Azure IaaS Antimalware. People love the service, they love what it does, but again, they've been using Trend Micro or Norton or whatever it is they've been using, again, forever and a day. So what we did is we started adding things that we call preferences, config profile preferences, that allow you to turn off some of those services, or to tweak their configuration one way or the other, still conforming to best practices. For example, for Azure Backup you might actually want to back up more frequently than we configured it. That is not drift; that's still conforming to the best practices, so we will allow you to do that. If you tried to off-board from Azure Backup and it was a Production profile: not allowed. We're gonna drag you straight back in, and backups are gonna continue. If you onboarded to Azure Backup for a machine with the DevTest profile, that's fine.
We don't consider that as drift, because you manually did something in addition. You didn't change what you told us the machine was supposed to look like, because we only focus on the pieces we configure. So you can add to them, and in some services you can switch them off or tweak their configuration slightly, but you cannot challenge what the service list looks like, or what the definition of best practices is at the lower and upper bounds of some setting for that service.

Okay, so basically your profile becomes your minimum bar.

Yep. And if it drops below the minimum bar, we fix it. If you add stuff on top of it, we don't care about it.

You've got it precisely.

Okay. And one of the things that we've also been asked is: can I create my own configuration profiles?

That was gonna be my next question.

I figured you might ask that next, and it really is a sad thing to say in these videos, "good question," but that is a genuinely good question, and I get it all the time. We are looking into that, so I won't commit to it. For those of you that have seen these videos with Pierre and myself before, we don't like to commit to things that we haven't yet built. Well, we haven't yet built it, but we are looking into refactoring our API. And Pierre, is there any way that folks that watch this can give us feedback? Is there a feedback channel?

Absolutely, there is.

I'll ask for some feedback here, and Pierre will tell you how to get us that feedback in a second.

It'll be listed below on the screen and in the description of this video.

Brilliant, okay. So if any of you would like to cast an opinion: we have Production and DevTest profiles. I already said they are immutable, meaning they're read-only, you can't touch them. You can add to what they do; as Pierre coined it, they basically define the minimum-bar configuration for a virtual machine.
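The minimum-bar semantics just described can be sketched in a few lines: services the profile requires but the VM no longer has count as drift and get remediated, while anything the customer adds on top is simply ignored. The function and service names here are hypothetical illustrations, not the Automanage API:

```python
# Illustrative sketch of "profile as minimum bar": missing required services
# are drift; extra services added by the customer are never reported.
def check_drift(required, observed):
    """Return the set of required services the VM has drifted away from.

    required: services the config profile mandates (the minimum bar)
    observed: services actually onboarded on the VM right now
    """
    return required - observed

required = {"Azure Backup", "Antimalware", "Update Management"}
observed = {"Antimalware", "Update Management", "Third-party monitoring"}

# Azure Backup was off-boarded, so it is drift and would be remediated;
# the extra third-party tool sits above the minimum bar and is ignored.
drift = check_drift(required, observed)
```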
You can add to that by onboarding to another service that we don't touch, but you can't argue with us, because we consider any argument as drift and we fix it. One exception to that is, obviously, if you're staying within the lower or upper bounds, or you increase backup frequency, to use that example again; that's permissible. But the thing we're hearing frequently is: can I do my own? So we're actually refactoring our API, now that we've got a couple of thousand Automanaged machines already up and running and we've got a huge amount of data about the way people are using them. The entire API is being refactored, and we're refactoring it in a way where, for want of a better phrase, you could call it desired state configuration for Azure. Which, if you think about what we just discussed, Pierre, the config profiles, they're kind of like DSC for Azure. You've got actual DSC inside Windows and Linux guest operating systems, where you define a profile, you define an end state, and if the machine isn't in that state, DSC makes it so. Config profiles kind of do that already, so they are almost already DSC for Azure. I think that's probably a crap name, in my opinion, but it is extremely self-explanatory. So we're refactoring the API to really give us the same level of flexibility that DSC has in Windows and Linux, across the gamut of services that Azure offers. Given that we're building that, it does seem very logical to conclude that we would allow you to define a config profile of your own, save it under a given name, and then push it out via an Azure policy to 10,000 VMs. That seems very likely. I can't commit to doing that, because we don't have it on our backlog yet; I don't have a dev team assigned to do it yet. But as you can tell from the length and the amount of thought that's gone into my answer, we've given this a great deal of thought, and we've surveyed lots of customers, and they all love the idea.
So if any of you have got feedback to give, there will be a link at the bottom down there. Look at me doing the YouTube stuff. It's there... that's my foot. No, it's not, it's a link to the actual page. You have no idea what Pierre is gonna do there, but yes, get me some feedback, and even if you wanted to email me directly, I'm open to that as well. My email address, are you ready for this, is dean@microsoft.com, because I'm that important.

Just Dean? That's great.

Just Dean, you know, like Madonna or... I added myself as a member, so that's really all it is, but it's kind of cool when you can say dean@microsoft.com. You should do it, Pierre.

Yeah, pierre@microsoft.com. It's like Madonna or Prince, you're at the top of your game.

Yeah, TAFKAD, the artist formerly known as Dean, there you go. [laughter]

Actually, I had another question here: is this gonna replace DSC?

So, funnily enough, one of the services we onboard you to is, or actually under the covers is, DSC: guest configuration. We already do that, and we push down the baseline profile. What that basically means is, when you onboard, say, Windows Server 2016, every time we release an OS, there's a team under the Windows banner and a team under the Azure banner; they get together and produce what we think is the optimal DSC profile for that operating system. We automatically implement that. So we're already doing DSC in the guest. It's not replacing it, it's using it. And then the discussion we just had, Pierre: does DSC for Azure exist? It kind of does, in the form of ARM templates, and then there's this preview of something called Bicep that's out there right now. What we're trying to say is: we're gonna make that even easier, and potentially build something on top of that API I mentioned earlier on that truly does do exactly what DSC for guest operating systems does. It's so very popular.
I'm very, very optimistic that we will build it. I'm pretty optimistic that we won't call it DSC for Azure, but it is a good explanatory term. But yeah, I definitely wouldn't wanna have that as its brand.

Just wait till the marketing people get their hands on it, and it will be DSC for Azure, Version One, Enterprise.

The branding I try not to get too involved in, because people have got far stronger opinions than I often do. Automanage was a brand that my colleague, who actually came up with the idea, the person that I did the surveys for originally at Ignite back in 2018, I think it was, he came up with the brand Automanage, and I told him I thought that was horrible. Literally the worst brand I'd ever heard. So then he's like, well, come up with something better then, when I joined the team to take it over and drive the project for him. Two and a half years later, I've come up with nothing better, so Automanage it is.

Okay, moving on to the profiles you mentioned. So we've got the Production, we've got the DevTest. Can they be split or scheduled? So if you've got an HA environment, like a failover environment or a distributed environment, can you apply those changes... when you do your remediation, can it be staged across your environment?

Right, I understand the question. So you're thinking, or the person who asked it is thinking, along the lines of maintenance windows like those that would be used for patching. We currently don't intersect with any maintenance window. So if you drift, we'll detect it within six hours, which is our poll cycle, when we go back and ask: are you still conformant? Yes, you're all good, we'll ignore you. Go to the next one, you're all good, we'll ignore you. Oh, you're not all good, you're no longer doing backup on your production machine: fix you. That will happen within six hours, whether you want it to or not.
In terms of onboarding, if you wanted to onboard in stages just to make sure things didn't go awry, you could obviously cruft up an Azure policy to scope out a set of machines that makes sense to you, onboard those, test them, and if they appear to be working 24 or 48 hours later, then go and do other batches using the Azure Policy scoping criteria.

It's not so much about the onboarding, it's the ongoing. So once all those machines are onboarded, six months from now a patch comes down and Automanage deploys... actually, it doesn't deploy the patch currently.

Right, we don't deploy patches. So again, if you think about the services that already exist in Azure, Update Management already exists. We're not gonna replace the Update Management stack, which, funnily enough, doesn't actually replace the Windows Update stack; it orchestrates it, pulls Pinocchio's strings, again, the same analogy I used earlier on. We make that easier by pulling Azure Update Management's strings for you. So now we're pulling its strings, it's pulling the Windows Update and Linux update mechanisms, it's pulling those operating system strings, and ultimately you get patched. However, with Update Management, it's difficult for us to know what the configuration should look like. So for the moment, we onboard you to it, and we begin auditing your conformance with the current set of patches specific to, and obviously available and released for, the operating system that we're looking at. And we'll tell you when it is not patch-conformant, but we will not schedule the patches yet. That's one of the very few areas where there is one additional management task: set up that maintenance schedule, because whatever we pick, it's gonna be wrong for the majority. I would guess that's the reason why we do it this way. So that is one of the few, if not the only, things that would require a follow-up action with regard to: I've just onboarded to the config profile Production, is there anything else I need to do? Well, there is.
If you wanna do Azure Update Management patching, you need to go and add a schedule. Beyond that, I can't think of anything else off the top of my head.

So you've just answered, I think, three of the other questions that I've had from the community. I mean, no, it's not going to replace patch management, because really Automanage is mostly an orchestrator to onboard existing services and ensure that they're present and running properly per your targets.

The simplification is the VM Best Practices focus, but keep in mind, and I think this is a good place to tee it up: Automanage is gonna become an umbrella brand, meaning we Automanage this and that and this and that, all different permutations. We've got a lot of things already in there. VMs were our primary candidate because they're so pervasive, they're everywhere. Even with people marching to the cloud as fast as many of them can, they've still got gobs, millions of VMs running in Azure. That's why we built it first, because it's one of the areas where we can make the biggest impact and do so in a timely manner. There are other services coming down the pike. For example, we plan on adding premium services. I'll give you a hypothetical. I've spoken to the team that would do this, but this is still hypothetical because there's no code to back it, not even a prototype. That team would be Azure Backup. So I asked a lot of customers: when you take a backup, whether you use an on-prem tool or Azure Backup, what's your schedule, what's your backup strategy? And they all say, we have a grandfather-father-son backup schedule where we back up once a day, and then it gets archived off, persisted for six months, and then eventually gets deleted. Something along those lines. One team, a managed service provider, coincidentally the Toboke team inside of Microsoft, said: we don't have a backup strategy. I'm like, what do you mean you don't have a backup strategy, you're a managed service provider?
Your customers must think you're useless. He says, no, no, no, you misunderstand. We have a restore strategy. Backup is a prerequisite. I'm like, ooh, that's good, I'm gonna steal that phrase as though I made it up. A restore strategy means that when they take a backup, they don't consider that as success. They clone the VM somewhere in a little isolated sandbox and they restore the backup. If it works, they go: right, that backup's good. If it doesn't work, they delete it and try and figure out why. That's the way people should be working with backup technologies.

So now imagine we bring Automanage into the picture. We want to start adding icing-on-the-cake features to services that already exist in Azure, features you get when you're Automanaged. And these will obviously not be free. So I guess that's a good point to say right now: VM Best Practices is free in preview, and it will be free when it's generally available. We have no plans to bill for it. So there's no fee for all of this discovery, onboarding, configuration, monitoring, remediation. However, some of the services we onboard you to may incur fees, like Azure Backup, for example, so keep that one in mind. These premium features that we're discussing now, we will probably bill incremental fees for those, specific to a collection of icing-on-the-cake features that we will ultimately build. But again, I do want to emphasize this is purely hypothetical right now. I have a spec for it, I've spoken to the team that owns it, they love the idea, customers that I've spoken to love the idea, but do not treat what I've just said as a commitment or a timeframe, because we haven't even written line one yet. So no, we're nowhere near there, but those are some of the loftier goals that we've got under the Automanage for VMs banner. And the other thing that we're doing is imminent, we do have a prototype for it, and it's funny because we were just talking about this: patch orchestration.
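The restore strategy described a moment ago has a simple shape: a backup only counts as good once it has been restored into an isolated sandbox and validated. Here is a minimal sketch of that flow; every function name here is a hypothetical placeholder, not a real Azure Backup API:

```python
# Sketch of a "restore strategy": backup is only a prerequisite,
# success means the backup actually restores. All callables are
# hypothetical placeholders supplied by the caller.
def backup_with_restore_validation(vm, take_backup, restore_to_sandbox, validate):
    backup = take_backup(vm)                 # 1. take the backup as usual
    sandbox_vm = restore_to_sandbox(backup)  # 2. restore it into an isolated clone
    if validate(sandbox_vm):                 # 3. only now call the backup good
        return backup
    return None                              # 4. unrestorable backup: discard, investigate

# Example with stub implementations:
good = backup_with_restore_validation(
    "vm-1",
    take_backup=lambda vm: f"backup-of-{vm}",
    restore_to_sandbox=lambda b: f"sandbox-{b}",
    validate=lambda sandbox: True,
)
```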
So Azure Update Management orchestrates the Windows Update patching stack, the software in the operating system. We orchestrate Azure Update Management to do the right thing, but without a schedule. Ultimately, what's missing in the market today, and again, this is not a feature we've built yet, this is one that's highly specced and has been prototyped, is this: the way patching works today, VMs get treated as individual VMs. There are things like scale sets that you can put in place in Azure, but even a scale set doesn't fully define, say, an n-tier application where there are four tiers with 10 VMs apiece. There's a front-end web tier, load balanced; there's a compute and database tier; there's some business logic, maybe, in the third layer; and maybe identity, domain controllers, Active Directory, in the back tier. There's nothing really to allow you to define that exact application and what the minimum-bar requirements are to keep the whole thing up and running whilst you're patching it. Nobody's built that yet. That's what we plan to build. So additional layers of orchestration; that would be another icing-on-the-cake feature, just like restore validation would be for Azure Backup. But again, I do need to re-emphasize this, sorry for saying it so frequently: this is not a commitment and there is no timeframe, but I'm pretty darned confident we're going to build patch orchestration soon. And soon, as it's almost April 1st... well, this is not an April Fools' joke. We do plan on doing that imminently, but for the other stuff, and even this one, the timeline is still up in the air. We don't know when yet; we're not quite done with the VM best practices. This is when I press the pause button next to your head and insert the disclaimer. Where is it? There it is. Yeah, it's right there, the pause button there.

That's right, folks. It does look like a pause button, doesn't it?
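The n-tier patch orchestration idea above boils down to a constraint: patch tier by tier, never taking down more instances than each tier's minimum-availability bar allows. This sketch is purely illustrative of that constraint, under assumed data structures; it is not a real Automanage feature or API:

```python
# Hedged sketch of n-tier patch orchestration: yield batches of VMs to
# patch concurrently while keeping each tier's minimum number of
# instances online. Tier definitions are hypothetical.
def patch_plan(tiers):
    """Yield batches of VM names to patch concurrently.

    tiers: list of (vm_names, min_available) tuples, one per application tier.
    """
    for vms, min_available in tiers:
        # How many VMs in this tier may be offline at the same time.
        batch_size = max(1, len(vms) - min_available)
        for i in range(0, len(vms), batch_size):
            yield vms[i:i + batch_size]

tiers = [
    (["web-1", "web-2", "web-3", "web-4"], 3),  # web tier: keep 3 of 4 up
    (["sql-1", "sql-2"], 1),                    # database tier: keep 1 of 2 up
]
batches = list(patch_plan(tiers))
```

With these minimum bars, every batch contains exactly one VM, so the application never loses more capacity than its definition allows.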
Yeah, and then: the opinions expressed in this podcast may not reflect those of Microsoft Corporation. Resume. Okay.

Okay. Actually, that kind of ties into what we talked about earlier about HA and having multiple or scheduled remediations, so that's good. One more thing in terms of what we can support: what about hybrid? Is there a hybrid play? Like, what about an Azure Arc-managed server, or even VMs that are running on HCI locally?

Okay. So with regard to Azure Arc and Automanage, this is definitely something that we're asked very frequently. It's probably the second most frequently asked question we get, second only to: can I edit your config profiles or create my own? Intersecting with Azure Arc is sometimes ahead of that and other times just beneath it, so a very, very popular question. So much so that a colleague of mine is actually on the same team; Azure Arc and Automanage are in the same team. Ryan, he owns that. He and I spoke about it extensively, and then we went and prototyped it, and lo and behold, it worked. Not quite sure when that's gonna go out the door, but we're prototyping it both for Windows and for Linux. So once we've got that done, again, timeframes are unclear at this particular point, but the prototype was extremely successful. Only one service failed, and I won't mention which that is right now, because obviously that's just down to us to fix. That is going to be an extremely popular product, in my opinion; plus, as we've sort of alluded to, it's kind of a captive audience for us. People that are using Azure Arc for servers want Azure management tools to manage stuff that's not in Azure. The goal of Automanage VM Best Practices is to eliminate the need for you to know about the management tool, onboard to it, configure it, monitor it, and remediate it. Now, how did I make six out of five things?
I cannot count. The five things that we do there. But yes, that is an extremely popular, very commonplace question, and absolutely on what I would consider to be an imminent roadmap.

Okay, all right. So that makes absolute sense, and I'm glad to see that it's on the roadmap, which was the next question on my list from the community: what else is on the roadmap?

Okay, so that's a lofty one. So I mentioned earlier on that Automanage will become an umbrella brand. Right now the focus, and it's a laser focus, is on virtual machines. Later down the road we might, for example, take on other services, and I'll answer this aspect of it first: other high-value services that customers love, like they love using VMs in Azure. Azure Kubernetes Service: customers are using that so much now, it took off like nobody's business. They tell us, though, it's quite hard to administer and configure. So, okay, let's Automanage that then. That's one of the services on a different path under the Automanage umbrella that we may well do at some point in the future, but the VM space itself is so deep, I'm gonna need to triple my dev team to be able to take on the likes of AKS whilst still working on the VM pieces. So getting back to the more imminent roadmap stuff: we mentioned some of the icing-on-the-cake features, like backup restore validation, that would be one; another one would be patch orchestration, which I'm hoping to build imminently, and I mean really imminently, that's what we're looking at. Other features that we're looking at are AIOps-based, and some of the technology that you're already seeing in Automanage is actually backed by AI running in Azure.
One of the things that we're gonna be leveraging is resource exhaustion forecasting: machine learning algorithms that... and we've already prototyped this, so this actually exists, properly prototyped and partially Azure-ified, in the sense that it runs as part of the Azure fabric. It just doesn't have an experience yet, so you can't turn it on, you can't turn it off, you can't see what it predicted or anything, but this is gonna come down the pike at some point in the future. Resource exhaustion forecasting will basically monitor the core compute resources, CPU, memory, disk, and network, and it will look for patterns over time. If it sees a pattern of CPU usage growth, when that hits some prediction point, it will say: I've got three months of data, I can make a prediction about a third of the data sample size out, so basically one month. It's gonna make a prediction based on that three-month trend and say: oh crap, you're literally gonna be at 99% CPU, pegged.

Yeah. Therefore it will make a recommendation to upsize the VM.

Isn't there something already in Windows Admin Center that does that predictive analysis?

Yeah, so the machine learning algorithm is actually based on System Insights, which is part of Windows Server 2019, and obviously Windows Server 2022, I think it is, going forward. I think that's right; I'm not sure if that's the correct name, but somewhere around that. Yes, that is literally the product. So we asked them for their code base, and then we morphed it, or refactored it, to work in Azure, which it now does. Now it's monitoring all of the resources, but you get a collective view for all of the virtual machines and which ones have got resource exhaustion forecasted events. That was prototyped extremely successfully, and that led us to a natural conclusion, which System Insights can't reach because it might be running on bare metal.
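The forecasting idea just described can be illustrated with a straight-line trend: fit roughly three months of daily CPU samples, extrapolate about a third of the sample size ahead, and flag the VM if usage would peg. The real service uses machine learning derived from System Insights; this least-squares sketch only illustrates the concept:

```python
# Minimal sketch of resource exhaustion forecasting: fit a linear trend
# to historical CPU samples and extrapolate `horizon` steps ahead.
# Illustrative only; the real service's model is more sophisticated.
def forecast_exhaustion(samples, horizon, threshold=99.0):
    """Return True if the linear trend crosses `threshold` within `horizon` steps."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Least-squares slope and intercept over the history.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    predicted = slope * (n - 1 + horizon) + intercept
    return predicted >= threshold

# ~90 days of CPU% climbing steadily from 40% toward the ceiling.
history = [40 + 0.5 * day for day in range(90)]
```

On this synthetic history, the trend crosses 99% within a 30-day horizon but not within a 10-day one, which is exactly the kind of early warning described above.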
When you're running in Azure, if a resource exhaustion forecasting event predicts that you're gonna run out of disk space, that's a pretty easy one for us to predict. CPU is harder to predict. Memory, that's a good one for us to predict because it has a significant impact. We all know the machine's got this much memory, and today we've got gobs of it, but when we use too much memory we start paging stuff out onto the disk, and now you're looking at something like a 10,000-fold performance degradation, memory versus disk I/O, or whatever that factor is; it's gonna be a very, very high number in terms of how much faster memory is than disk. It would be good for resource exhaustion forecasting to be able to predict when you're gonna end up deep in the page file, which will then have an impact on the throughput of whatever service it is you're monitoring. So one of the things that we are then gonna do with the resource exhaustion forecasting data is feed it into a right-sizing engine, where customers, and again, this is hypothetical, prototyped, not yet ready as a service, no idea when, but we are very much planning on building this, a right-sizing engine that will say that VM is under-provisioned: you're gonna end up 90% in your page file in six weeks, at which point people would be pulling their hair out. Which apparently you must have already had one of those events here, because you've clearly pulled yours out. I had several; this is my COVID haircut. So that will feed into a policy-driven right-sizing engine and allow customers to say whether they don't care, want to be notified, or whether we should just automatically fix it, in which case we would go ahead and upsize the VM and then power it off and power it back on again within the maintenance window that's been configured for that VM, assuming one exists.
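The rough factor Dean gestures at ("10,000-fold, or whatever that factor is") can be sanity-checked with order-of-magnitude latencies. These numbers are illustrative ballparks, not measurements from any specific hardware:

```python
# Ballpark access latencies, in nanoseconds (order of magnitude only):
DRAM_NS = 100          # ~100 ns for a DRAM access
SSD_NS = 100_000       # ~100 us for an SSD read
HDD_NS = 10_000_000    # ~10 ms for a spinning-disk seek

ssd_factor = SSD_NS / DRAM_NS    # paging to SSD: ~1,000x slower than memory
hdd_factor = HDD_NS / DRAM_NS    # paging to HDD: ~100,000x slower than memory
```

So depending on the backing disk, "deep in the page file" sits somewhere between a thousand-fold and a hundred-thousand-fold latency penalty per access, which is why memory exhaustion is the high-impact case worth forecasting.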
And we would also do the same for cost optimization: if it said that the VM was over-provisioned and the CPU has sat there at 14% for the past six months straight, it's like, you over-provisioned this; get a cheaper one. And then they'd be able to define a policy that says, do we just automatically downsize it and get a cheaper VM? So it's not just for performance optimization; it's also for cost optimization. Again, those two features, we call them resource exhaustion forecasting and automated VM right-sizing, are very much part of the imminent roadmap. Today is the day before April Fools' Day 2021; I don't know what imminent means on those time frames because we're biasing toward patch orchestration. If somebody doubles my dev team, literally doubles it in size, then I can do both at the same time, but yeah, time frames for those are not yet known. The other stuff is more icing-on-the-cake features. Automatic ASR, for example, ASR being Azure Site Recovery. Site Recovery, yep. We may automatically figure out, based on the patch orchestration application definition, so, using the data that I was mentioning to you earlier on that will allow you to model the entire N-tier app, we might use that same model for ASR purposes and then automatically create you a DR plan, and then automatically test the DR plan in a sandbox to see that the stuff works. Those sorts of things are the way that we're thinking in the VM vertical, and then obviously, more laterally, we've got stuff like: should we auto-manage AKS and other sub-services that are high value but deemed to be complex? So what about some PaaS services such as Azure SQL? Making sure that the backups are there, that the encryption is set, and all of those services. Just so you know, folks, I didn't actually tee Pierre up with that question, but it's a perfect one.
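The policy-driven right-sizing engine described above, upsize on a forecast exhaustion event, downsize on sustained low utilization, with the customer choosing between ignore, notify, or auto-fix, can be sketched as a simple decision function. Everything here is hypothetical: the size ladder, thresholds, and function names are illustrative and do not reflect real Azure SKU families or the engine's actual logic:

```python
from enum import Enum

class Policy(Enum):
    IGNORE = "ignore"
    NOTIFY = "notify"
    AUTO_FIX = "auto_fix"

# Hypothetical size ladder, smallest to largest.
SIZES = ["Standard_D2s", "Standard_D4s", "Standard_D8s", "Standard_D16s"]

def right_size(current, forecast_pct, avg_pct, policy):
    """Decide an action from a forecast utilization peak and a trailing
    average. Upsize if forecast utilization will exceed 90%; downsize if
    the long-run average sits below 20%. Returns (action, target_size)."""
    i = SIZES.index(current)
    if forecast_pct >= 90 and i + 1 < len(SIZES):
        target = SIZES[i + 1]          # performance optimization
    elif avg_pct < 20 and i > 0:
        target = SIZES[i - 1]          # cost optimization
    else:
        return "none", current
    if policy is Policy.AUTO_FIX:
        return "resize", target        # resize within the maintenance window
    if policy is Policy.NOTIFY:
        return "notify", target
    return "none", current

action, size = right_size("Standard_D4s", forecast_pct=99.5,
                          avg_pct=55, policy=Policy.AUTO_FIX)
```

The same function covers both directions Dean mentions: a forecast peak drives an upsize, while a six-months-at-14% average drives a downsize, and the policy decides whether that becomes a notification or an automatic resize.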
We are currently prototyping, under the Automanage VM best practices banner, something called workload optimizations. We know that there's an OS there and we know that there's a VM, because you're auto-managing it. So far, VM best practices really just goes up to the OS layer in the stack. But then we know that there are workloads atop that, like SQL, and then applications that are using those workloads on the operating system on the VM. So there's your overall stack. We want to push ourselves one layer further up the stack, into those workloads. The one that we've prototyped thus far is, believe it or not, SQL. Exactly. And I promise you, I didn't even tee that up with Pierre; that was just pure out of the blue and a good question to ask. So they actually already have an Azure SQL offering for virtual machines. They literally call it SQL VM. It's not particularly descriptive, in that it doesn't distinguish itself from the phrase "SQL VM" when I'm just talking about a VM that happens to be running SQL, but that is actually the brand, because they have two other areas where they have similar branding. We've prototyped that with them. We would consider that to be a workload optimization. And if you said this machine is going to be SQL, or we detected that it was SQL, we can then automatically start putting in place the best practices specific not only to the OS but to the database, where instead of backing up a SQL database as part of a disk that you're backing up, it actually knows it's a SQL database and backs it up using SQL semantics as opposed to OS file system semantics. So yes, we are already looking at doing that as well. That's wonderful. Well, Dean, it's been 42 minutes. I'm very happy to have had these conversations with you, and I'm super happy that you were candid and told us what's coming up, because everything you've said, I know, will resonate very heavily with our audience, which is IT pros, operations and architects. Great.
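The workload-optimization idea, pick a backup strategy based on what's detected on the VM rather than treating every disk as opaque blocks, amounts to a dispatch on the detected workload. A minimal sketch, assuming a hypothetical VM record and function name (nothing here is the real Automanage interface):

```python
def plan_backup(vm):
    """Choose a backup strategy for a VM record like
    {"name": ..., "workloads": [...]}. SQL workloads get a
    database-level, transactionally consistent backup; everything
    else falls back to file-system-level disk snapshots."""
    if "sql" in vm.get("workloads", []):
        # SQL semantics: back the database up as a database,
        # e.g. via the SQL agent, not as bytes on a disk.
        return f"BACKUP DATABASE via SQL agent on {vm['name']}"
    # OS semantics: snapshot the disks as opaque block devices.
    return f"disk snapshot of {vm['name']}"
```

The point of the dispatch is exactly the distinction Dean draws: once the service knows a SQL database is present, it can apply database-aware best practices instead of generic OS-level ones.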
So I'd like to thank you very much for the time that you've spent with us. I will put at the bottom here the links that we've mentioned, and also a link to the original video that you've got on YouTube from the original announcements, which includes a demo of the onboarding. I kind of didn't go there because, as the discussion was going, it was so good that I didn't think we should interrupt it for a demo of how to onboard. Yeah, there's lots of that already out there, Pierre. If you just literally go onto YouTube and search on me, I think you'll probably find my channel, with an M8 or something like that, but there'll be lots of content there as well around Automanage, and you should be able to find the demo stuff from Ignite as well. All right. Well, thank you very much, Dean, for spending the time with us, and you at home watching this, thank you very much. And as Dean mentioned, in the description below there will be some links as to where you can get more information and where you can provide some feedback. I will include Dean's email address because he volunteered it in this video, not because I want him to get spammed. He's at Microsoft.com, baby. He's the man. All right. So thank you very much and have a great day. Bye, guys.