I just liked Dan's comment about Delve — I'll put it up again if you didn't see that one. Well, then there's the other one, the common question: how do we turn it off?

Yeah. So is the question going to go from "how do I get Copilot?" to "how do I shut it down?" We need a betting pool for how fast it goes from one to the other. It's already coming up.

I mean, the whole question — this is a different topic, and we don't have a question around it, but it's about governance around Copilot. You have a lot of people out there, a lot of companies, coming up with guidance. And here's the dirty little secret, everybody: it's the exact same stuff you should have been doing to clean up your data 20 years ago. It's all still relevant. Yes, there are some new assets and new products — there are sensitivity labels, there are all the new features that are out there — but it's about metadata management, permissions, and information architecture. As Daniel said, securing and labeling data, and even having different data repositories that Copilot can't pull from at all — that sits somewhere else, maybe even in another tenant — so that highly confidential data doesn't get pulled in no matter what. Separate your data if you need to.

Can we clarify something? It bothers me a little, because this came up in internal conversations at the company. Copilot is just one piece of AI, right? Microsoft is heavily invested in OpenAI — if nobody knew that, guess what, you know it now, though I'd be surprised if you didn't. But it's only one model — well, really a couple of models — and it's only a piece of the picture. There are a hundred different AI providers out there building their own models. Berkeley has their own, Stanford has their own, Google has their own. So it's not just about Copilot. Users could be using any one of these, and it matters where that data is coming from, because some models are restricted and some are not.

What I mean by that is that Copilot can be as tame as you want it to be. But if users introduce something into the organization that is not as tame as Copilot, then they are running risks. Now they could be getting bad data, now they could be putting out bad data — they could be at risk. This is something that has to be really well thought out, and people aren't doing that. Companies are just allowing people to use ChatGPT, Midjourney, whatever, just because it's really cool and they want to do it. They don't vet and verify. Everybody treats it like a web search: "I searched for something on the internet, so it must be true." And now you're bringing that into your business world too, and people don't do the due diligence to make sure the content they're bringing in is actually valid and current. They just use it as a crutch. That's my biggest fear — that kind of laziness people have developed. And companies don't have their own governance around it either; they're just letting it come in. There are no guardrails, no rules.

Yeah, yeah — and Mike, sure.

I want to add to that. So, as you know, I'm Microsoft, right?
So, speaking from the Microsoft side: I'm in the middle of a project right now which is very heavily AI-focused. As you can imagine, for a lot of our new features and capabilities we're trying to drive AI as the primary enabler. I can't begin to tell you the level of governance we go through, even for one simple feature. We already announced this at Ignite last year, so I'm not overstepping: simple things like metadata extraction from existing documents and contracts. We already announced that, but the level of governance we put around it, and the guardrails around responsible AI — it's such a huge piece of work that every product team has to go through if you want to implement any kind of AI into a Microsoft product. It's to the point where the PMs and the engineers get frustrated — we just want to get the job done — but we're really, really focusing on responsibility. It's all about responsible AI and doing things the right way.

That's something that's been great to see. There was an online conversation a couple of months back — in fact, there's a blog post by an MVP, I'll have to find it, it's a fantastic blog post — where he basically says Microsoft has a history, a cycle, of pumping something out there that's not quite ready, that hasn't been thought through from a governance and administrative standpoint, and that this presents an opportunity to the partner community; then it gets refined. So Microsoft is innovating, and it's getting faster and faster. But with Copilot, Microsoft was very upfront about helping lead the discussion around the ethics of AI and the need for governance, and the adoption side has moved much quicker because that content was leading the discussion right up front, on day one of the announcements around these pieces. So I feel like Microsoft has learned that lesson.

We often talk about when SharePoint was released — about the fact that the SharePoint community is as robust and as connected as it is because there was no documentation, there was no help. I used to joke, "No, not everybody knows everything about SharePoint — but there's a list of people." It got to the point where, when Microsoft product team members on the SharePoint team had questions about their own product, they would call these external people. We know their names, we know who they are; most of them are MVPs. It really is great to see Microsoft being more thoughtful about lessons learned around the solution. That doesn't mean all the functionality and all the features are there to manage all of those things up front, but they're aware of those gaps. That's my perception, at least — I don't know if somebody sees it differently.

Oh no, it's the same thing. I agree with you because — and again, perception is what I'm going off of here — Microsoft is not the only company that does this: they release a beta and call it production. I'll give you an example: Copilot for Windows 11, the new Outlook. When Copilot for Windows 11 came out, it was beta. Okay?
It would fail half the time; whenever you asked it something it wouldn't know the answer. But they're using the consumer as their testers. That's why you have the Windows 11 Insiders edition, the Windows Server Insiders, the Azure insiders, the Office Insiders — the people who want to be testers, the people Microsoft allows to test this stuff for them, to provide feedback and also to fix what is broken.

So it's the same thing with AI: they released it, but in my opinion they released it too early. And I think that goes for any AI, because now every mom-and-pop corner store is going to start having their own AI, and it won't necessarily be factual, and it won't always be legitimate. So that's my take on it.