Good morning everyone. How are you doing? Good morning Pierre. Good morning everyone. How's it going? I'm good, I'm good. Welcome to your, what do they call it, the first? The inaugural. AZ Update for you. You just joined our team. That was what, late December? No, it was actually late January. Late January. Oh, that's right. We had the interview after the holidays. That's right. Well, welcome to the team and welcome to AZ Update. Why don't you tell us a little bit about yourself? Thanks. Well, first of all, great to be here. I've been following the show for years now, season four, right? So yeah, I've been involved in IT since I can remember, and one of my passions has always been to share knowledge and work with the community. I've been an evangelist in the past, so it's awesome to be an advocate today and, more importantly, to work with this group as well as continue to work with the community, sharing and learning from them. So great to be on the show today. Well, we're very, very happy to have you on the team and on the show. What's your focus going to be going forward for this audience? Yeah. One of the things I've been working on in the past few years is containers and Kubernetes. I was actually part of the Windows container team inside Microsoft, working on a few things around the container platform itself and tooling for containers. Like I said, I love working with the community, and one of the things I tried to do while I was on the product team was create tools that were relevant for the operations audience, right? When you think about containers and Kubernetes, it's all about developers — made by developers, for developers. And one of the things I was trying to do from the product team side was create more content and tools for the operations audience.
So as I come to this team now, one of my goals is to continue to create content and prepare the operations audience for this evolution of containers and Kubernetes that is hitting right now. As developers create new apps, operations teams need to be prepared to manage those applications and do all the day-to-day operations they are used to, but on this new containers and Kubernetes platform. So you can expect a lot of content around that topic. Very, very good. It's funny, because you say "for developers, by developers, through developers, praise the developers." It's funny how it's always about the developers until you actually have to put it in production, and then all of a sudden it's all about the ops. What do you mean I have to back this up? What do you mean I have to put some kind of management and instrumentation around it, and what do you mean I have to keep track of the resources it's using? Yeah, compliance, security, all those things that operations teams have been doing for a while. That doesn't change just because it moved to the Kubernetes platform. Now granted, some companies have tools that give developers the infrastructure components they need to build their applications, but when it comes to day-two operations, the developers are not going to be monitoring the whole infrastructure where the applications are running. That continues to be an operations task. Yeah, so again, really excited to be on the team, really excited to be on the show, and really excited to be working even closer with the community. All right, well, speaking of the show, let's get to the news. And you are up first with Azure Backup support for Trusted Launch. Yes. What is that about? So for traditional IT ops, one of the technologies that has been out there for a while is Shielded VMs for Hyper-V, right?
When it comes to Azure, we have a similar technology called Trusted Launch that leverages a bunch of the components of Generation 2 VMs — like vTPMs, secure boot, and so on — to give you an additional layer of security. When you combine all those things together, they protect against multiple types of attacks, like rootkits, kernel-level malware, and that kind of stuff. So when it comes to protecting those VMs from accidental deletion and similar scenarios, Azure Backup now comes in, enabling you to back up those Trusted Launch VMs with the new enhanced policy in Azure Backup. The enhanced policy allows you to have multiple backups per day, retention in the operational (snapshot) tier, the vault tier with zone redundancy and resiliency, and of course support for Trusted Launch VMs. So this is great news for customers that are very sensitive about the security of their VMs executing in the cloud, outside of their own environment. You could already use Trusted Launch VMs for that, but now you can go further and back up those VMs with this new enhanced policy in Azure Backup. Yeah, because Trusted Launch VMs are not the VMs you would want to use for everything in your environment. They're for very specific, highly sensitive workloads that you want to protect all the way down to the kernel, to make sure that rootkits and those kinds of attack vectors aren't possible against them. Up to now, backing these up was problematic, so it's great that we are opening that up: you get that benefit for your enterprise, but at the same time your operations team doesn't have to adapt around the technology, because you can still do your backups and everything else. That's fantastic. That's correct.
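As a rough sketch of what this workflow might look like with the Azure CLI — all resource names, the image, and the policy name here are placeholders for illustration, and the exact flags assume the `az vm` and `az backup` commands available at the time:

```shell
# Create a Generation 2 VM with Trusted Launch enabled (secure boot + vTPM).
az vm create \
  --resource-group rg-demo \
  --name vm-sensitive01 \
  --image Win2022Datacenter \
  --admin-username azureuser \
  --security-type TrustedLaunch \
  --enable-secure-boot true \
  --enable-vtpm true

# Enable backup for that VM against a Recovery Services vault,
# using an enhanced backup policy defined in the vault.
az backup protection enable-for-vm \
  --resource-group rg-demo \
  --vault-name rsv-demo \
  --vm vm-sensitive01 \
  --policy-name EnhancedPolicy
```

The point of the enhanced policy is that it is the policy type that supports Trusted Launch VMs (along with multiple backups per day), so the vault-side policy choice is what unlocks this scenario.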
Yeah, I mean, it's protection from the moment you boot the VM. Unlike Shielded VMs — if you're familiar with that technology, you have a whole set of supporting services, the Host Guardian Service and guarded hosts, to think about — with Trusted Launch these are single VMs; you don't have to take care of an entire supporting environment. You get protection for the VM from the moment it boots, and now we're adding backup support on top of that, which you probably want for VMs running in Azure. All right. I guess the second item is up to me. We're going to take a look at scheduling automated emails of your saved cost views, and you may think, geez, that's a very boring subject, and you might be right. However, it matters when you lose track of your budget in Azure, or in the cloud in general, because of pay-as-you-go. We were talking about developers doing their own thing a minute ago: if developers have rights into your environment because, for some reason, policies and governance haven't been set up properly, what if they start a bunch of VMs for a test — and I'm saying VMs, but it could be any service — and they don't tell you about it? I've done this myself: one year I started ten G5 machines as a scale set for a demo and forgot to turn them off afterwards. Now, in your subscription's billing analysis section, you can build a view exactly the way you want it — all the resource groups you're looking at, all the subscriptions, laid out just so. Well, you can now have that view included in an email and sent to you daily, weekly, or monthly. And once you have that, you can share it with somebody else in your environment, but you can also share it with anybody outside your environment, because it doesn't require credentials to get into Azure.
Just having that daily or weekly email to say, ooh, why did $100,000 worth of unbudgeted compute time suddenly show up? You don't have to wait until the end of the month. I think we can all relate to that. Especially in the early days of using the cloud, we all wanted to try a monster VM that we didn't have the hardware for, just to see what it looked like. I've seen so many customers have to call Microsoft support about a series of VMs they just forgot they weren't using — and then the excuse is "I was not using the VM," but we were running your VM for you. That's a debate that has been happening since day one of the cloud, and we can all relate to it. It's great to see something that gives you information up front and can prevent something terrible from happening. Yeah, and our folks in the chat — Andrew, Jared, and Amy — right now are like, "more mail, more mail." Yes, I understand: you don't need any more mail in your mailbox. What you need is more of the right kind of mail in your mailbox. Outlook rules. Or Power Automate, Power Toys, Power Something, Power Apps — something that lets you do some manipulation: if it's from this sender, file it here; if there's an unsubscribe link, delete it. I really have to look into that. There's probably a way to automate how the information inside that email is presented to you, yeah. Yeah, so that's some new stuff for getting a handle on your billing. And the next news item is a bit of a — well, I was a little confused when I read it. So it'd be really interesting if you could take me through it and what it really means for me running these environments. Yeah, yeah. So, taking the confusion out of the way: it probably means nothing to most customers, right?
So it's probably the most irrelevant piece of news we could bring, but it is an important thing for people working with containers who are watching how the Kubernetes community is moving forward. The news is that you can now run Windows containers on Azure Kubernetes Service, AKS, using the containerd runtime. And the reason this has been confusing is because the broader news is that containerd is replacing Docker as the container runtime in Kubernetes. When that happened, people freaked out, saying, "but I use Docker to build my containers; my developers are using Docker, and that's not going to go away." So let me make an analogy for people who don't use containers and don't understand how the container runtime might affect them. Think about driving a car. Let's say somebody does some service on the engine. The way you drive the car is still the same: you use the wheel, you put your foot on the pedal, you change gears if you have them. But the engine might change a little, right? So Docker is how you create new container images, how your developers work with container images — how they put the application inside a container image. The container image then goes to a repository, where servers can pull the image, download it, and run it to run the application. The engine that runs the containers on the server side, behind the Kubernetes environment, could be Docker, or it could be containerd. Now, containerd also runs Docker containers: the images built using Docker are still run by the containerd runtime. And the Kubernetes community decided that Kubernetes is going to bet on the containerd runtime, not the Docker container runtime.
However, again, the way your developers develop the applications and put them in a container, how the container images are built, the way they are stored — that's not changing. So what is the impact? Well, if for some reason you need to open up a Kubernetes node on AKS — which you shouldn't, but if you need to do something like that — you will now see the containerd runtime rather than the Docker runtime. The way you interact with the container runtime, if you choose to do so, is different. But 99% of customers will be interacting with containerd through Kubernetes: you run kubectl, for example, and that's not changing either. So that's all to say AKS is following the Kubernetes community, because it is a Kubernetes service, and the container runtime used by default, starting with Kubernetes 1.23, is containerd. If you run an older version of Kubernetes, the default runtime is Docker, but you can already change from Docker to containerd — I think starting with Kubernetes 1.19 for Windows Server 2019 — and from 1.23 and above it will be containerd by default. You can even upgrade: if you have an existing cluster running the Docker runtime and you want to change to containerd, you can do that by just upgrading your Windows node pool to the new container runtime. And again, on 1.23 and above it will be containerd. Furthermore, for Windows Server 2022 it will be containerd only; you won't have the option to run Docker on Windows Server 2022 when it lands on AKS. So I hope that clarifies things, even for people who are not using containers or Kubernetes, so they can see what's happening. When you decide to learn containers, you now understand that Docker is one thing.
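If you want to see which runtime your own cluster is on, a quick way — assuming you have kubectl pointed at the cluster — is to inspect the node info, where the runtime shows up as `containerd://` or `docker://`:

```shell
# The wide output includes a CONTAINER-RUNTIME column per node.
kubectl get nodes -o wide

# Or pull just the node name and runtime version with jsonpath:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```

This is a read-only check, so it is a safe way to confirm what a node pool is running before and after an upgrade.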
You build your containers using Docker. The runtime is a different thing — it's a server-side thing, behind your Kubernetes deployment, so you don't interact with the runtime; you interact with Kubernetes itself. It doesn't affect most customers, but it's an important thing to know. Okay. To me it's almost like the Kleenex analogy, where it's a tissue but we call it a Kleenex. It's a container engine, and for some reason we call it Docker, even though Docker is just the company that created all the tooling and basically — well, not started it, because containers have been around for decades — but really brought it back to the forefront of development and operations. And now we're just changing the engine. It's still going to run the same images, you can still interact the same way with it, but now it's a Walgreens tissue instead of a Kleenex tissue. Yeah, that's a good analogy as well. Okay. Well, I'm just happy that in terms of operations there's really not much that's going to change, but knowing what's underneath the hood — that the engine has been tweaked — is important for our audience, I think. Yeah. And some people in the audience who are really hardcore with containers might be thinking, "but I do want to see my node and interact directly with the Windows instance." For those folks, there are tools out there that let you do that. For example, if you run Docker on a Windows server today, you're working directly against the Windows instance rather than through Kubernetes; you have the Docker tools like the Docker CLI — docker run, docker build, and those kinds of things. There are tools that replace those, like crictl: it's a command-line option for you to interact directly with containerd. You can install it on a Windows node and interact directly with containerd. So you have that option.
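For those hardcore folks, the crictl commands map closely onto the Docker CLI ones. A small sketch, run on the node itself (the image tag here is just one example of a Windows base image):

```shell
# crictl speaks the Kubernetes Container Runtime Interface (CRI), so it
# works against containerd where the Docker CLI does not.
crictl ps        # list running containers (analogous to: docker ps)
crictl images    # list pulled images      (analogous to: docker images)

# Pull an image directly through the runtime:
crictl pull mcr.microsoft.com/windows/nanoserver:ltsc2022
```

Note that crictl is a debugging tool: containers you start with it aren't tracked by the kubelet, which is another reason to let Kubernetes do the orchestrating in normal operation.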
It's just that, because you are running in a Kubernetes environment, most likely you are going to interact with your containers via the Kubernetes APIs — kubectl and all those kinds of things. Well, if you're going to use an orchestrator, use the orchestrator. Exactly. That's exactly the point. You wouldn't do both at the same time, because you'd probably end up screwing something up somewhere. Yeah. That would be my case. Too many cooks, too many tools. The thing I've been asked before is: what is the linkage between us — Microsoft, AKS, our usage of Kubernetes — and the open source project, what the purists would call the Kubernetes project out there? What's the relationship? Like vanilla Kubernetes, right? Pardon me? Like vanilla Kubernetes. Exactly. Yeah. So, two things. First, Microsoft works directly with the upstream Kubernetes community; Microsoft is constantly collaborating with the community to make Kubernetes better for Linux and Windows. And second, Microsoft uses Kubernetes as the engine to run Azure Kubernetes Service, but that's a managed service. What that means is you don't have to install your Kubernetes cluster and you don't have to manage the control plane — we manage that for you. You just tell us how many nodes you want in your node pools in order to scale for your large application. So most likely you'll be managing the pools of nodes running your applications, and those can be either Linux hosts for Linux containers or Windows hosts for Windows containers. So that's why, when the community — vanilla Kubernetes — changes, those changes translate into our environment as well. That is correct. And that's exactly what happened with the runtime.
The community decided to use containerd as the default container runtime, and we're following that. Okay. So if you're out there and you're not happy about this change, take it up with them. That's a good way to put it. Yep. All right, let's move on with the news. The next item — actually, I should say the next two items, because they're related. Azure Monitor Log Analytics data export has been around for a while. Data export is there so you can take the data — the logs, performance metrics, and all of that — out of a Log Analytics workspace and store it in a storage account. That way you can keep historical data and do historical analysis over a long period without having to spend the money, because it's more expensive to store it in a Log Analytics workspace. The problem up to now is that not every table in the Log Analytics workspace is exportable, so you have to select what you want to export, and in some cases a table might not be available for export right now. What's changing with this announcement is that, through the custom logs API for Azure Monitor Logs, you can now select the data up front — and I'm happy about this, though personally I think it doesn't go quite far enough. You select all the tables you want, and as the API and the engine are built out in the background, the data you've selected starts being exported as it becomes compatible with the export process. So you don't have to go back every month and say, oh, now this table is available. You select all the data you want, you configure your data export, and as tables become available they start being exported to your storage account. Not the greatest solution, but it is a stopgap.
Eventually we'll get to a point where we can export the majority of the data, but for now this is what we have. Basically, the data comes in, and in your ingestion pipeline you configure what data you want sent where; it then goes both to your Log Analytics workspace and to a storage account — which can be cold, archive-style storage, so it's very cheap. That way you don't have to spend what you would on a Log Analytics workspace, which, if you extend the retention period for that data, can run fairly expensive. First of all, what always amazes me is how much data is collected in those cloud instances. It's so much telemetry that it's hard, as a human being, to actually look through it, so it's always great to see options that let you export data, filter it, and drill down to the specific information you want to check. The other thing that comes to mind when I see these solutions is the ISV and third-party software that is going to be built on top, because now we have all these new capabilities for filtering data and sending it to other destinations. I'm curious to see what other tools get built on the fact that you can now collect data and send it somewhere else. Anyway, it's just something to think about. I agree. It used to be that we would send our data to basically just a file share somewhere on the network, or some kind of network appliance, for storage, and then we would have to parse all of that and figure out how to pull out the data. Then we went to the Log Analytics workspace, which is a structured approach to collecting, managing, and analyzing that data. And now we're kind of coming back to "we're just going to send it to storage."
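For reference, configuring a continuous export rule like this can be done from the Azure CLI. A minimal sketch, where the resource names, table list, and subscription ID are placeholders, and the command assumes the `az monitor log-analytics workspace data-export` group available at the time:

```shell
# Stream selected tables from a Log Analytics workspace into a
# storage account for cheap long-term retention.
az monitor log-analytics workspace data-export create \
  --resource-group rg-demo \
  --workspace-name law-demo \
  --name export-to-cold-storage \
  --tables Heartbeat Perf SecurityEvent \
  --destination "/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Storage/storageAccounts/stcoldlogs"
```

The destination is the full resource ID of the storage account (an Event Hub can also be a target), and the rule only applies to data ingested after it is created, so it complements rather than backfills workspace retention.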
I believe it's very important to figure out what you want to do. You really have to have a good use case, or a reason why you would want to spend the time and the cycles to achieve this. If you only want to keep the data because you may need historical data a year from now, but you really have no plans for it at this point, I'd say don't do it. It's the same thought pattern as multiple workspaces. At the beginning I was a big fan of multiple Log Analytics workspaces, because at first, access to a workspace meant access to all the data in it. So if you had a hierarchy — corporate IT, then divisions, then IT within those divisions — and those teams wanted their own logs and their own analysis of performance counters and everything else, you'd give them access. But you didn't want to give group A access to the telemetry of group B, so you ended up with multiple Log Analytics workspaces. Actually, since last February, when we did the IT Ops Talk hybrid event, we had a really good chat with Mir Mendolyevich, who runs that section. Now, with RBAC, we can send all of it to one Log Analytics workspace, and if your role-based access control is set up properly, group A is not going to see the data coming from group B, and vice versa, because RBAC will prevent it. I'm now converted: rather than manage and pay for multiple Log Analytics workspaces, I'd rather have one and have that one configured properly. Now, when we start talking about Log Analytics data export, it's great — especially if you have really stringent compliance requirements where data needs to be audited yearly or quarterly by somebody. You can export that data to a storage account where the audit can occur, because you may not want to keep that data in the Log Analytics workspace for that long.
If you've got use cases like that, then Log Analytics data export is definitely the way to go. But if you're doing it just to keep historical data, I'm not sure that makes sense — it may. I'd be glad to have that conversation with somebody in the audience, in the community, to convince me; I'm open-minded, I will change my mind. I have done that many times. But yeah, I think it's really cool. I agree with you: using cold storage in that case will probably be the better alternative. So, we have come almost to the end of our show today. The next section is our Learn module of the week, and this week we decided to go with a learning path, not just a learning module: Control Azure spending and manage bills with Azure Cost Management and Billing. We figured, since we were talking about this — we talked about managing costs, right? Yeah. Plus I saw in the chat that Jared was talking about running some of their environment on a $130,000 budget, and they found out that they get their bill a month and a half later — so by the time they get the bill, it's too late, they can't do anything about it. So going through controlling your spending with Azure Cost Management is very important. And I understand it might not be part of everybody's job to manage it, but I think it's something we should all be conscious of. Yeah. And some things are even related to operations you might be involved with, like compliance and security. One of the things that might affect your bill at the end of the month might be the fact that someone had access to something they shouldn't have. They shouldn't have access to build specific types of resources that are very expensive in terms of compute, and that might generate a large cost for you at the end of the month. You can prevent that by simply denying the access they shouldn't have had in the first place. There are so many tools to prevent a larger bill.
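One concrete guardrail against the "bill arrives too late" problem is a budget with alert thresholds. As a rough sketch with the Azure CLI — the budget name, amount, and dates are illustrative, and the command assumes the `az consumption` extension available at the time:

```shell
# Create a monthly cost budget on the current subscription so that
# overspend surfaces well before the invoice does.
az consumption budget create \
  --budget-name monthly-cap \
  --amount 130000 \
  --category cost \
  --time-grain monthly \
  --start-date 2022-04-01 \
  --end-date 2023-04-01
```

Budgets don't stop spending by themselves; paired with alert notifications (or action groups that trigger automation), they give you the early-warning signal the scheduled cost-view emails are also aiming at.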
There are so many tools to actually reduce your cost, so I think looking at the learning path and understanding what those options are is just going to help you along the way, whatever work you do in the cloud. Yeah. And I really like your take on this: by reviewing your bill, you figure out where your management holes or blind spots have been. Yeah. I hadn't thought about it in that context before, but that makes a lot of sense. At the end of the day it's all connected, it's all stuff we'll have to take care of, and it's better to know up front than at the end of the month. Like I said, we can all relate, and we saw in the comments that everybody has been there. We've all shared that terrible experience. Yeah. And Amy says there are so many tools, so there's no excuse — but she says, blame me, I'm very responsible: when everything goes wrong, I'm responsible. With that being said, thank you, Vinicius, for your support this week. Next week, I don't know who is going to be leading it, because I'm going to be toasting my toes in sunny Mexico. That's awesome. That's great. After being in the snow for a while. Yeah, we're expecting another crapload of the white stuff over the weekend, but I'll be gone. That's great. I enjoy traveling south this time of the year. Well, you're going to Brazil in a month or so. I'm going further south in a couple of weeks. Good. All right, well, thank you, Vinicius, and thank you to all of our viewers and the people in the chat — Jared, Amy, Andrew, Richard, Andrew (did I say Andrew again?), Pixel Robot Rob — and I'm missing a bunch. Thank you all for joining in, and see you next week on AZ Update. Thanks for having me. Thanks, folks.