Hey, everyone. Welcome to another edition of ITOps Talk. Today, I'm joined by Shayoni Seth, PM in the Azure Monitor area, and we're going to address something that some of you may be dealing with right now: the number of agents that you have to deploy on machines that you want to monitor in Azure. So, with no further ado, let's get to it. Hi, Shayoni. How are you doing?

Hey, Pierre. I'm good. How are you?

I'm good. So, today you're going to tell us all about this one-agent initiative that we have. Where did that come from, and what's the idea behind it?

Yeah, absolutely. So, thanks for having me here today. We had been looking at our customer feedback for, I think, three, four years, probably more than that, and we understood there's a lot of frustration, a lot of feedback around what you just started with, i.e., the number of agents that exist today. There are probably six or seven different agents for Azure Monitor. So, even for uploading data to Azure Monitor, there are too many options, and the reason that happened was just that products were developed organically and they all came together under one charter. But for a customer, what that means is: if I want to collect logs, I have to use one agent. If I want to collect metrics, there is a different agent. If I want a certain feature, there's a third agent. But most commonly, a customer wants to do all of these. So I shouldn't have to use six or seven different agents; I should just use one agent. That's primarily where the idea comes from, although it's not as simple as it sounds. We could have just clumped these into one code base, but that wouldn't be good, because there were other challenges with these agents. So, what we instead decided to do is build a brand new agent which combines the feature sets, but does it better. The same capabilities, but more performant, better networking, better scale, and a lot of other improvements that we'll talk through.

Okay, so does that mean we're gonna have one agent, and somewhere we're gonna say: okay, for today you're collecting logs, and you're collecting perf monitor data, and, I don't know, update inventory? And it's all configurable?

That's a great question. So, usually when people hear the words "one agent", they think there's now just one agent for everything. That's not really the case, if I'm to be completely honest. It's still one agent that will help you upload data to Azure Monitor, but then you may want added capabilities, and mind you, there are probably 50 or 60 different Log Analytics solutions that exist today, including update management; others could be change tracking, Azure Security Center, Sentinel, the various insights solutions, and that's a lot. Now, as a customer, you probably want some of these capabilities, not all of them. And so, the way to do that is you install the Azure Monitor agent, which is the new agent; that's like a prerequisite, because that uploads the data to Azure Monitor. And then, additionally, we've changed how these solutions work. They are also now VM, or virtual machine, extensions. So, I install the agent, and if I want security capabilities, I also install the ASC extension. Now, I know immediately people will be like, oh, there's two things again. But what's really happening is the security extension is doing things very specific to security data.
It's processing the data, adding some information that makes sense from a security perspective, using the Azure Monitor agent, and then pumping the data. So, the actual data pumping is still happening with just one agent, for your logs and for your metrics. All of these different VM extensions just do some additional processing, very fine-tuned for their scenario, and then use AMA to send it to Azure Monitor. And for configurability, since they are all VM extensions, a user just needs to know one single way to do all of this. One way to install AMA, and the same way applies to install the additional VM extensions for things to start working. We believe this whole extension management model is gonna make configuration significantly simpler.

Okay. You mentioned a few things that I know our audience right now are going: what? What are you talking about? Is this only for VMs, or can I deploy this on physical servers, on-prem or on other clouds, to start reporting into our one Azure Monitor single pane of glass?

Yeah, absolutely. So, yes, I said VM, which is the typical example, but for on-prem servers, or servers that are managed by other clouds, it's the same extension that will work, the same agent; there's no different agent. But in order to have that running on these other kinds of machines, you need Arc, Azure Arc for servers, which is yet another prerequisite, yes, but bear with me here. What that agent will do is simply act as a bridge between your non-Azure machines and Azure, and provide them a resource ID in Azure. That's all it does. Of course, it can do more, but if you don't want it to, it's not gonna. So, it's just gonna sit there and act like an installation bridge for the Azure Monitor agent, and it also helps with authentication, and it's free of cost. So, no cost unless you want added capabilities, and no resource consumption. So, for you, it's just another prerequisite, but the good thing is that agent management is now the same for your Azure VMs and your on-prem machines. You just think of them the same way once Arc is acting as the bridge.

Okay, and the deployment of that one agent on a new machine, I assume, is fairly straightforward, where you just download an MSI or EXE out of the portal and then install it?

Actually, it's even simpler than that, since it's a VM extension. Firstly, if you look at the way agent installation works today, I'd like to do a comparison just to show how much better it's become, and I've experienced this myself. So, to install the existing agent, you have to have a big settings file. You have to put your Log Analytics keys into that agent so it knows who to talk to, and a bunch of other things. And it's all, firstly, additional work, and error prone; you can mess anything up and the agent's not gonna work. Instead, what happens now with the new agent is a zero-config installation. There is absolutely no configuration or settings needed to install it. It's just a VM extension, so the installation works the same as any other VM extension. You have a VM, you go to the list of extensions, and you can install it via the portal. There's the command line interface with a single command, PowerShell, REST APIs; it's all documented on our Azure documentation website. So, all of those options, but these are all single-step installations. Now, the key to this is that it's just gonna install the agent. It'll not start doing anything; it's not gonna be operational.
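(For reference, here is a rough sketch of what that single-command install can look like with the Azure CLI. The resource group and VM names are placeholders, and since the agent was in preview at recording time, exact extension names and flags may differ; treat this as illustrative, not definitive.)

```
# Install the Azure Monitor agent as a VM extension; no settings file or
# workspace keys needed. "myRG" and "myVM" are placeholder names.
az vm extension set \
  --resource-group myRG \
  --vm-name myVM \
  --name AzureMonitorWindowsAgent \
  --publisher Microsoft.Azure.Monitor

# On a Linux VM, only the extension name changes:
az vm extension set \
  --resource-group myRG \
  --vm-name myVM \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor
```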
And that's where I think we can talk about the other new capability that was introduced with the Azure Monitor agent, which is data collection rules.

Okay. So, tell me more.

Sure, yeah. So, a lot of the new capabilities in the new agent actually come from data collection rules. As the name kind of hints, it's a new way of configuring data collection for Azure Monitor. Today, it's for the agent, but our long-term vision for this capability is for it to act as the data collection configuration for all of Azure Monitor. And when I say that: today, if you look at the ways you can pump data into Azure Monitor, there are probably 25 or 26 different ways for different kinds of resources. But again, a customer can have all of these resources. They can have VMs, they can have applications, they can have platform logs they're interested in, and so on and so forth. Just imagine a world where there is one single way to configure data collection for all of these, or to monitor them. It's gonna make life so much simpler for a new user. It means you learn one way and that works for everything. That's, of course, our long-term vision, but coming back to where we are today: data collection rules. I think we'll be showing a brief demo. It's a way that we've abstracted everything related to data collection into this one box, if you can imagine it. You have your source on one side; your source could be your VM. Your destination is on the other side, which could be a Log Analytics workspace or a metrics database. And in the data collection rule, what you specify is what to collect, where to collect it from, and where to send it to. As simple as that. And then the way the agent works is, once it's installed, it makes a call to fetch the DCRs for your machine. There's a new service as part of data collection rules, the Azure Monitor Control Service, AMCS. Really, think of it as the brain of the agent. All the settings, the reason it's become a no-config installation, is because everything has now been moved to AMCS. And I'm emphasizing that it sits in the cloud, within Azure. So the agent calls into AMCS, gets the DCRs that are associated with that VM, and we'll talk about how that association works. And then it just figures out: okay, this is what I need to collect, and this is where I need to send it. So that's really the power. Do you have any questions on that?

Yeah, I was gonna say, is this sort of like a streamlined, more efficient way of doing what we used to do in Log Analytics, where you would go to the agent configuration and say, okay, I wanna start collecting all of these performance metrics, because they weren't there by default, or you could add them after the fact? And then the agent would just figure out, oh, now I need to start collecting that.

Absolutely, it's exactly like that. But the difference is, earlier you were going to a Log Analytics workspace to do it. What that means is all VMs connected to that workspace get the same configuration. And that actually posed a challenge we heard about from customers: hey, I may have 100 VMs all connected to the same workspace, but these VMs actually belong to different sub-teams in my organization, and they don't want to collect the same data. So I can't have a subset kind of model where certain VMs collect one thing and other VMs collect a different kind of data. Now that's possible, because the configuration, even though it's similar to what you were doing before, is abstracted out from the workspace.
It sits on its own; it's not tied to a destination or a source, but you can do pretty much the same things. You can say: I want to collect perf counter data from just five VMs out of those 100 and send it to one workspace. For the other 95 VMs, you can say: I want to collect something totally different and send it to that workspace, and also metrics, and also a third workspace. So indirectly, I think I just answered another question: multi-homing, which is a very requested feature, is now possible simply through how data collection works. You can use one DCR to specify up to, I think, 10 workspaces; that's the limit today. So data is collected just once and sent to up to 10 workspaces at the same time, for Windows and for Linux.

That's wonderful. So it makes it a lot more flexible as well. What about companies or customers that are building custom applications? Because I know a lot of IT pros will have scripts and different methods of configuring things, and they basically generate their own logs, because as part of their scripting, they write their own log files. Is there a way for us to teach the agent how to ingest logs that are not the standard logs?

Yep. So I think it's popularly called the custom logs capability: ingesting files that are generated in custom ways, as you mentioned, and uploading them to Azure Monitor. That exists on our existing agents, and that same capability, at a minimum, will be introduced on the Azure Monitor agent. It's not available today, but I think closer to GA, like soon after the agent GAs, that feature will be added. And we are actually also looking to enhance that feature. Today, the way custom logs work, there are a lot of limitations, certain ways you need to do things. We're trying to make it more seamless, with more capabilities and more flexibility, but at a minimum, what you described will be available.

Okay, that's perfect. So we've already established that it's a lot more straightforward and simplified in terms of deployment, in terms of configuration, and in terms of how and where it sends its data. Are there any other major benefits for our audience? For companies that are currently looking at hundreds, if not thousands, of servers with the existing agents on them, is it really worth the migration path to jump to the new agent? Or should they say, well, we're gonna stay with the agents we have now in deployment and, going forward, use the new agent? What's the approach here?

Yeah, so that's a question with probably a longer answer, I think, but at a high level there are two main reasons, or maybe three, that should drive that decision. The first one would be availability of features on the new agent. If you already have an agent deployed on your machines, at a minimum, I'm guessing you want those capabilities to be retained, right? You can't just say, oh, something doesn't work and that's okay. So the first question would be: look at the new agent, look at what's available, and see if there's parity for what you need. Now, there are customers who may just need perf counters, event logs, and syslog collection, and that's all they've been doing. For them, the new agent is ready. But if there are customers who need those solutions I was talking about, those are gonna be added probably every month after GA as we start rolling solutions out. For example, today we have some of the security solutions already in private preview.
We have virtual machine insights coming out in private preview pretty soon. But the model we took is: first the agent hits GA, and then all these solutions will start showing up. So that would be a very important analysis for a customer to do: hey, when are my solutions gonna be available? Because typically, what we're hearing is people don't want to run two agents on the same machine, because, firstly, that's not gonna save you anything. It'll probably be more management for you, because you still have to manage them both, in two different ways now. It might use more resources too, there might be data duplication, and a bunch of other things. So we are hearing from customers that they want to wait it out. The other part would be the fact that the existing agent will be deprecated three years after the new agent GAs. Keeping that in the back of your head: if you are looking at a really high-scale change or deployment for your organization, that's a really good point to look at the new agent, even if some things are not available yet, because you're already planning to do a lot of work and you don't want to do throwaway work, right? You spend all that time, and typically these things take a couple of months, and if you have to redo it a year or two down the line, that's not really good. It's not good for your customers either. So these are the two items we usually advise looking at. And the other thing is, even when you do take the decision to migrate over to the new agent, you should look at the data collection capability very differently, because today, in the world we live in with the existing agents, everything is built with a lot of limitations, and people have designed their usage around those limitations. But now that you have this freedom, you should use it. We really encourage your organization and our customers to look at this: there are so many things possible now. So take a step back, figure out what you want to monitor and what your data collection story should look like, and start from there. That's also the reason we are not providing a blind upgrade option: we really want customers to make use of the new capabilities. And of course, it's not like we're saying it's all up to you, figure it out. We will provide capabilities to automate the migration and make it simple for you. But a little bit of thinking while doing that is what we are encouraging users to do.

And I assume the documentation takes our customers and our audience through those decisions and gives them the information they need to make the proper call?

Yes, actually, there's a very well-laid-out section in the documentation on it. I think the question "should I stick with the existing agents or migrate?" is one we've addressed quite well, and we keep adding more information to it as more questions come in.

Okay, that's perfect. And for those of you listening right now: in the description of the video below, you will find all of the links to the information we're talking about today. Okay, next, can you show me something? Can you actually walk me through some of this configuration?

Yeah, absolutely. So let me move over to my screen here, which is showing the usual Azure portal homepage. We'll just use this for the demo today. But for the command line or the REST API, all the programmatic ways are well documented, and you should be able to use those methods too.
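(As a rough illustration of those programmatic ways, creating a data collection rule from the command line might look something like the sketch below. All names and resource IDs are placeholders; the DCR commands shipped as a separate CLI extension during this preview, and the exact flags and file shape may change, so check the documentation for the authoritative syntax. The JSON shows the shape described in the demo: data sources, destinations, and data flows connecting them, with two workspaces to demonstrate multi-homing.)

```
# Sketch: define a DCR that collects one perf counter and sends it to two
# Log Analytics workspaces (multi-homing). Names and IDs are placeholders.
cat > dcr.json <<'EOF'
{
  "location": "eastus",
  "properties": {
    "dataSources": {
      "performanceCounters": [
        {
          "name": "cpuCounter",
          "streams": ["Microsoft-Perf"],
          "samplingFrequencyInSeconds": 60,
          "counterSpecifiers": ["\\Processor(_Total)\\% Processor Time"]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        { "name": "workspace1", "workspaceResourceId": "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/ws1" },
        { "name": "workspace2", "workspaceResourceId": "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/ws2" }
      ]
    },
    "dataFlows": [
      { "streams": ["Microsoft-Perf"], "destinations": ["workspace1", "workspace2"] }
    ]
  }
}
EOF

# Create the rule from that file (requires the monitor-control-service
# CLI extension at the time of this preview).
az monitor data-collection rule create \
  --resource-group myRG \
  --name test-rule-1 \
  --rule-file dcr.json
```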
So, for fans of the UI and pointy-clicky ways, you start with the left nav and click on Monitor. In there, you'll find data collection rules under the Settings tab today. It's in public preview right now, so it should be visible and accessible to everybody; there's no enablement required here. That will bring you to a screen like this one, where you can create data collection rules. As I described previously, the story for the new agent really begins with the data collection rule. So if I click on create, that quickly brings you to the creation experience. Each DCR that you create is an Azure resource. What that means is you're gonna have to specify a name for your resource; let's say "test rule one". I'm okay keeping it in this subscription, and I pick a resource group it should be in. Today, and I'm not sure if this is a hard limitation, the DCR should be in the same subscription and resource group that your machines are in. The region: whatever is available, or whatever makes sense to you. And then the last one is the platform type. You can pick Windows or Linux, depending on whether you want to create this rule for your Windows or Linux machines. But if there are cases where you want the same rule to apply to both, you can always pick the third option, which is custom. I'm just gonna pick Windows, and then I'll go on to the next step, which is resources.

So, you've given it a name and done the basic steps. The first obvious step after that is: okay, what machines will this rule apply to? As I click on add resource, it brings up this resource picker blade on the right side that you see here, and it shows all of the resources available under the subscription the rule is being created under. So I'm just gonna select this resource group, and I have two virtual machines here. Let's say I select both virtual machines. What that means, after I click apply, is that this rule will get associated with, or applied to, both of these VMs. I only have two VMs here in my test subscription, but you can select any number of VMs that you'd like. Another key thing I'd like to call out here: in this blade, if I select a resource group instead of individual VMs, it will only include the VMs that are currently in the resource group. If you want behavior where new VMs in the resource group are automatically added, you should instead be using Azure Policy. We'll also introduce a redirect here just to point you to a policy. What an Azure policy will do is, dynamically in the future, if more VMs get added to the same resource group, it'll automatically apply this rule to them. So you don't even have to worry about that kind of scale-up model. A couple of things to note here: it's only rolled out in these regions so far, but all hero regions should be available, and we are also rolling out to more regions, like Fairfax and Mooncake, pretty soon. This is also important, and it's listed in our documentation: with the new agent, we've changed our authentication model, and what that means is we are using managed identity. That's good news for customers, because it means you don't have to worry about certificates or keys or any of the manual effort you were doing to keep those updated, because this is managed, as the name suggests. What it does is create an identity for your resource in AAD, and that is just gonna be updated on time whenever needed; you don't have to worry about that.
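(If you're working outside the portal, enabling a system-assigned managed identity on a VM yourself is a one-liner; a minimal sketch with placeholder names, and note the portal flow shown in the demo does this for you:)

```
# Enable a system-assigned managed identity on the VM ("myRG" and "myVM"
# are placeholders); the DCR portal experience handles this automatically.
az vm identity assign --resource-group myRG --name myVM
```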
So today, the-

So you don't have to worry about the Log Analytics key that you would normally have to put in the agent, where, if for some reason that key gets compromised, you have to generate a new one and go and update it everywhere?

Yeah, you don't have to worry about any of that.

Thank you.

Sure. And on managed identity, for those of you who are a little more familiar: there are two ways it's usually enabled, system-assigned or user-assigned. As of today, we have a limitation: an existing user-assigned identity will not be impacted, but if you enable a system-assigned managed identity, it will become the default. So make sure you understand whether that's acceptable; if not, we've also listed out how to work around it. And very soon we'll be supporting user-assigned too, so that should actually be available by the time we GA.

Yeah, so as a reminder for our audience: this is not currently GA, it's still in preview, and some changes may occur between this video going live and the product actually reaching general availability.

Yep, thank you for that, Pierre. So, moving on with our collection story: we've already picked the VMs that we want to monitor. The next step is, what data would we like to collect from these machines? Clicking on add data source brings you to this blade on the right. Since, if you remember, I selected Windows for this rule, the only data types you'll see are performance counters and Windows event logs; for Linux, you'd also see a third one, syslog. I'm gonna pick Windows event logs, because I want to show you another new capability that's now available. For any data source type that you pick, you can always use the basic mode to just check the types of events you want. But what if you want to filter the data? For example, I want to collect security logs, so let's check that here, but I only want to collect certain event IDs, or I don't want to collect certain event IDs that I'm not using. Anything that you collect, by the way, if you're sending it to Log Analytics, is money, a cost that you pay for, right? So if you're not using certain events, you obviously don't want to collect them. And that's now possible in the agent, where you can use XPath queries. So if I say something like Security, and Event ID equals 4236 (I'm just making this up), and I add this and delete the first entry, what it means now is it's only going to collect this event ID, nothing else. And you can also do the opposite: if you want to filter out this event and collect everything else, that's totally possible. And you can add as many XPath queries as you'd like. XPath is the common way Windows event filtering works, and that is now available within a data collection rule, applied to an agent. And just for clarity, this filtering happens at the source. Once you add this rule and the rule gets applied, what the agent actually receives is the result of the filters already resolved. So the agent is not collecting everything and then filtering it on the box; it only ever collects the events that the rule finally tells it to collect. As an example, if I have five queries here and the result of that is just three kinds of events getting collected, that's the information that gets sent to the agent, and the agent collects only those three. So that's a lot of money saved for you, a lot of resources, a lot of storage space that you can save on now.
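(To make that concrete, here is a rough sketch of what such a filtered Windows event log data source can look like in the DCR's JSON. The 4236 event ID is the made-up example from above, the "channel!query" form follows standard Windows event XPath filtering, and the field names reflect the preview-era DCR schema, so treat them as illustrative.)

```
# Sketch: a DCR data source fragment that collects only Security events
# with Event ID 4236, filtered at the source via an XPath query.
cat > eventlog-datasource.json <<'EOF'
{
  "windowsEventLogs": [
    {
      "name": "securityEventsFiltered",
      "streams": ["Microsoft-Event"],
      "xPathQueries": ["Security!*[System[(EventID=4236)]]"]
    }
  ]
}
EOF
```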
But if I decide that, like, next week we've realized we were vulnerable in a specific area and we need to start looking for another event ID or event log, well, can I just go back into that DCR, edit it, and add the event that I need?

Yep, as simple as that.

Perfect, that makes sense. So limit it to what you absolutely need to start with, and then it grows with you as your needs grow.

That's a great way to put it. Perfect. And then, once you've selected the data type and have your filters here, you go on to the destination to specify where this data needs to go. So, just to demonstrate the multi-homing: let me show you, I want this data to go to this workspace, and then I can add another workspace right here in the same rule and select workspace number two. So there's multi-homing. I know I picked Windows here, but it works the same way for Linux.

You have to be aware that when you're sending to two Log Analytics workspaces, you're actually paying for data ingestion in two different workspaces.

That is true, yes. This is, of course, only if you need it, but it's gonna be additional cost for sure. So usually, if you pick... well, let me show that as well. Let's change it to performance counters, and even here you have the same capability: you can pick all performance counters, or you can individually specify the counters you want to collect, or unselect any of them. So, as much customization as you need. And for performance counters, you also have the metrics destination. Now, another way to save on cost: if you don't need as much detail as logs provide, but you just need to monitor your machines and get alerted if there's an issue, like high CPU usage, then metrics is a great way to do that. It's more near-real-time, and it goes to a metrics database that you are not charged for, at least for the platform metrics. But you can do both, too. So, just to show that capability: you can select metrics and logs in the same rule, and the data is collected once and sent to both destinations. Again, something new that's possible with the Azure Monitor agent; today you'd probably have to use two agents to do this. So once you've done that... was there a question?

No, I was just following along. Sorry, sorry.

Okay. So after you've done that and your data sources are added, that's pretty much all you have to do in the data collection rule. You finally go to the last step, where you review everything you specified, make sure the name looks good, the resources, the data sources, everything makes sense, and just click on create. Before I click: what will happen in the next step is, first, it will install the Azure Monitor agent on the machines you've selected. This is something we take care of for you if you use the UI option; if you're using the APIs or cmdlets, you'll have to deploy the agent using templates or API calls, whatever works best for you. It's also gonna enable managed identity on your behalf. The third thing it'll do, which is new with DCR, is create a data collection rule association. That's an object we call a DCRA. Just think of it as a link between your machine and a rule. For every machine, there will be an association created, but there could be just one rule that applies to all the machines. In fact, let me go over and show this for a bit. This is how you can picture how the data collection rule connections happen. So you have these rules on the right.
These are three different rules, and you have your VMs, these yellow boxes, on the left. If rule one gets applied to all four VMs, then you see association one gets created for all the VMs here. Similarly, for rule two, if I only wanted to apply it to two VMs, a DCRA gets created only for those two VMs. So that's how the model works, just to make it a bit more visual.

Okay. Is there a way, programmatically let's say, to have a set of rules and base the association on location, Azure tags, or any other identifier of the VM?

If those are identifiable via Azure Resource Manager, then yes, you can use any of those ways to associate, or apply, data collection rules to the VMs.

But as you mentioned earlier, that's not happening dynamically unless you have your rules tied into Azure Policy in your environment?

Correct. Today, the only way to do that dynamically, our at-scale way to do it, is using Azure policies. There are actually some built-in policies available; they should be showing up in the portal probably in a couple of weeks' time from today. What those policies do, at least the standard policies that are available to begin with: they install the agent, enable managed identity, and also create the association for the rule you select to be applied along with the policy. And so, instead of using a VM ID, you can use things like a tag or any other ARM-identifiable way to pick machines.

Okay. And thinking back to what you said earlier, when we were talking about machines that are on-prem: this is why you would want to deploy the Arc agent first, so that these machines are actually identifiable in Azure, and then you can just apply that collection rule to them once they're connected.

Yep, that's absolutely correct.

Okay, oh, that makes sense.

Makes great sense, yep. So, let's click on create and see what happens here. If you look at the top right, where it shows the status, it says it installed the extension on both these machines and enabled identity. And the last part would be creating the data collection rule associations. Yep, you can see it successfully created two DCRAs, one for each of the machines. So it was just a matter of a few minutes, as we were watching here, and the deployment is done. So let's go and see what our data collection rule looks like. This is the rule that you created; you can see an overview here. You can also view the JSON, which is actually pretty interesting, to see how the model operates. In here, you'll see everything I specified in the UI show up in a simple JSON format, and this is probably what you'd be interacting with more if you're using the programmatic ways. Here's the perf counter data source that got created, the workspace that I selected, and so on and so forth. And coming back to the resources, you'll also see the resources that you had selected show up. You can export this as a template and use it to create more rules, if you want to reuse the same thing or just make a few changes and deploy that instead, using ARM templates. So that's also available.

And that was gonna be my next question: what if I've got machines that I'm deploying as part of a CI/CD, continuous integration, continuous deployment, pipeline, and I have an ARM template that basically builds the server that's gonna host the workload? Can I have that server automatically assign itself that rule as part of the ARM template?
Yep, that is absolutely possible. Because the DCR and the new agent use fundamental ARM concepts for deployment, management, installation, everything, you can definitely plug those into your CI/CD pipelines and make use of the same thing. There's nothing new we'll have to enable; all of the existing ARM ways of management should be available out of the box with the new agent and data collection rules. So, coming back here: this is one of the virtual machines that we had picked. If you open it up and go under extensions, you should now be able to see the Azure Monitor Windows agent installed on it. There are also some other agents showing up, because I had clicked on VM insights enablement, but if you haven't selected those features, you will only see a single Azure Monitor Windows agent, or Azure Monitor Linux agent, whichever you selected, and the corresponding version.

And deploying those other extensions that we mentioned earlier, like the extension for, let's say, updates, or for inventory, or desired state config and things like that. To deploy those extensions, where do I go, and how complicated is it to just check them off?

So, currently we don't have this in the UI, but one of the things we're gonna introduce soon: if you remember the data collection UI that I showed, where you go to pick your resources and data types, we'll have something over there, a UI that shows all of the different Log Analytics solutions available on the Azure Monitor agent. And from within that same DCR UI, you can click enable or disable. That may redirect you if a certain extension, for example the Azure security extension, needs some additional information from you, so it may take you to an onboarding experience. But the way to find it, the way to see which additional extensions and capabilities are enabled for my VM, is to come to the DCR and see what's configured, because that is, like I said, the singular way to configure anything for the new agent, including the additional extensions that get installed. So it acts like a single source of truth.

Yeah, I was wondering, because I remembered you saying there was a single way of configuring it, but I couldn't remember walking through those extensions when you created the data collection rule.

Yeah, that's because they aren't available in the UI today. They're in private preview, with a little bit of manual instruction, but pretty soon they'll be visible and configurable through the UI as well.

Perfect. So that was great, Shayoni. Now, I know we've mentioned many times that this is still in preview, so I'm not going to put you on the spot and ask when this is going to be available, because I've worked with program managers, or product managers, too often to know that this is a very closely guarded secret, and the last good answer I got was "it'll be released when it's ready".

Yeah, but we can get a little more particular than that. I think we are targeting April to June as a time frame this year. So probably in a couple of months, yeah.

Perfect. And what are the next steps for our audience who are looking at this, who are watching us right now and getting as excited as I am about having more flexibility in a simpler way? What's their next step?

Yeah, absolutely. So we are encouraging all of you, everybody watching this, to start testing, start exploring the new capabilities that we just described.
It's not GA yet, which also means you have some time to play around and test it out on your non-production infrastructure, if available. We also recommend that, if you're already using some of the existing agents, you run both agents side by side, preferably pointing to two different workspaces just so you don't have data duplication, and see if the data looks the same in both, and see if the performance has improved; it should, we believe, but see that for yourself. And most importantly, if there are gaps, if there are things that don't work for you, if there are feature requirements that you have, get them to me, get them to our team. We are actively looking for feedback. The way we are thinking about this new agent, the roadmap, is completely feedback driven. So we need to hear from you what you need, and that's what we are going to prioritize. We have no other way of doing this, really. What customers need first is what we are going to ship first. That's how we are taking this.

We mentioned that this is in preview. We've mentioned that this is in private preview right now. So people want-

It's in public preview.

It's in public preview. Ah, okay, sorry, I misunderstood; I thought you said it was in private preview. Public preview, perfect. So I will put the info in the description below on how you can access the public preview and start deploying the agent.

Yep. And that also means, I think, that by the time the agent is GA, if you've done all your testing in the meantime, then as soon as it GAs, you can just extend the rollout from your dev environment to production without really doing any additional work. So that keeps you ready too, provided you don't need any of the solutions. We'll also be posting timelines for the solutions that are migrating over, so if you're interested in any of those, watch out for our announcements and our documentation to learn more.

This was wonderful. Thank you so much for taking the time to walk us through this godsend, because I have to admit, I set up a lot of demo environments sometimes, and then I have to go over and onboard all of my VMs into Log Analytics, and then onboard all of my VMs onto Azure Monitor, and then all of them onto DCR. If you're doing it programmatically with a proper ARM template or scripting, that's not too bad, but if you have to do it in the portal, one at a time, it is hellish.

Thank you, Pierre. Thank you, everybody, for having us, and let us know if you have more questions or feedback.

Thank you very much, everyone, for spending this time with us. I hope this was valuable. And again, let us know what you want us to investigate for you. We are here to answer your questions. Thank you very much. Have a nice day.

Thank you.