macOS has Siri and Android has Google Now. So what about the Linux desktop, and what about the Linux operating system? I'd like to introduce to you Mycroft AI. Mycroft AI is one of the very first open source digital assistants available on the Linux desktop. It's extensible, it's customizable, and it's completely open source, so anyone can run it on any device: you can run it on a Raspberry Pi, on a desktop, or even a smartwatch.

Going ahead with Mycroft AI, I'd like to introduce to you the plasmoid that is the front end for Mycroft AI on the Plasma desktop. You can see the plasmoid consists of an animation bar on the top that gives you Mycroft's status. There is a start and stop button, so you can run Mycroft whenever you feel like it and it's not always running; you can even mute Mycroft when you don't want it to listen to you, and you can pin Mycroft to the desktop. The navigation bar is on the left and has four tabs: the home tab, the skill tips tab, which tells you what commands you can run, the settings tab, and the install skills tab.

When I talk about skills, skills are something developed by the community people at Mycroft, and there can be different skills integrated with different back end services. Basically, if you have a music player, you could write a skill for Mycroft to integrate with that music player, or if you have, for example, a smart bulb, say a Philips Hue bulb, you can connect Mycroft to its API and run a speech recognition command from there.

So I'm going to give you a short demo of how Mycroft works. This is the plasmoid, and now I'm going to run a few commands and see what we can do on the Plasma desktop. "Hey Mycroft, open Firefox." So you can open applications. Let's try something more we can do with the desktop. "Hey Mycroft, search this computer for Mycroft."
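The skill idea described above, a voice command routed to a handler that talks to some back end service, can be sketched in plain Python. Real Mycroft skills subclass `MycroftSkill` from the `mycroft` package and declare intents in vocabulary files; this standalone sketch only illustrates the shape, and the `LightSkill` class, its trigger phrases, and the `bulb_api` callback are all hypothetical.

```python
# Minimal sketch of a voice-assistant "skill": an utterance is matched to an
# intent handler, which calls out to a back end (here, a fake smart-bulb API).
# This is NOT the real Mycroft skill API, just an illustration of the idea.

class LightSkill:
    """Hypothetical skill that would control a smart bulb."""

    def __init__(self, bulb_api):
        # bulb_api is any callable that accepts a command string,
        # standing in for a real service client (e.g. a Philips Hue API).
        self.bulb_api = bulb_api
        # Map trigger phrases to handler methods. Real skills declare
        # intents in vocabulary files instead of hard-coded strings.
        self.intents = {
            "turn on the light": self.handle_on,
            "turn off the light": self.handle_off,
        }

    def handle_on(self):
        self.bulb_api("on")
        return "Turning the light on."

    def handle_off(self):
        self.bulb_api("off")
        return "Turning the light off."

    def handle_utterance(self, utterance):
        # Dispatch the recognized utterance to the matching handler.
        handler = self.intents.get(utterance.lower().strip())
        if handler is None:
            return "I'm afraid I couldn't understand that."
        return handler()


if __name__ == "__main__":
    calls = []
    skill = LightSkill(bulb_api=calls.append)
    print(skill.handle_utterance("Turn on the light"))
    print(calls)
```

The return string is what the assistant would speak back, which is why even an unmatched command produces a reply rather than an error.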
"I am searching locally for Mycroft." So you can even search. Okay, let's run some other stuff from Mycroft. "I'm afraid I couldn't understand that." So let's see some other stuff. "Hey Mycroft, create an activity test." "Your activity has been created." So you can create activities. Let's try something else. "Hey Mycroft, change wallpaper type abstract." So you can even change wallpapers.

Now on the plasmoid side, some new stuff: apart from text messages, we can even do visual messages; we can receive visual details from Mycroft. "Can you please repeat that?" "Hey Mycroft, what is the current weather?" "With a high of 29 and a low of 28, the weather is currently 28 degrees," for whatever location you're at. And another thing you can do is interactive visual messages. "Hey Mycroft, what is the stock price of Apple?" "Apple Inc., with ticker symbol AAPL, is currently trading at $150.27 per share." So there's visual feedback, and you can even open this up in an external browser from the plasmoid.

So this is currently the state of the plasmoid, and you can of course do text stuff: you can even talk to Mycroft through text. There's a suggestion bar you can select a word from. "Chuck Norris doesn't need to catch exceptions. Exceptions are too afraid to raise." So that's about it for the demo for now, and this is how far the Mycroft plasmoid has gotten.

So what's next for the Mycroft plasmoid? More skills for the plasmoid: it should be able to interact with more applications, with more integration into the desktop. So maybe next time we can say "open notifications" or "clear my notifications", or play some music based on whatever my music keyword is. And easier installation: Mycroft is clearly not packaged very well, and we're looking for distributions to package Mycroft so everyone can give it a try and help improve the user experience.

Getting involved: Mycroft has a Slack channel.
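The split between what Mycroft speaks and what the plasmoid renders as a visual card can be sketched as a structured message that a GUI front end picks apart. The field names (`type`, `data`, `display`, `card`) are illustrative placeholders, not Mycroft's actual message-bus schema.

```python
import json

# Hypothetical reply payload carrying both a spoken utterance (for TTS) and
# structured data for a visual card (for the plasmoid). Field names are
# made up for illustration; they are not Mycroft's real message schema.
reply = json.dumps({
    "type": "speak",
    "data": {"utterance": "Apple is currently trading at $150.27 per share."},
    "display": {
        "card": "stock",
        "symbol": "AAPL",
        "price": 150.27,
    },
})


def render_card(raw):
    """Separate what a front end would show from what TTS would speak."""
    msg = json.loads(raw)
    spoken = msg["data"]["utterance"]
    card = msg.get("display")
    if card is not None:
        shown = f"{card['symbol']}: ${card['price']:.2f}"
        return spoken, shown
    # Plain text-only replies have no visual component.
    return spoken, None
```

A text-only front end can ignore the `display` part entirely, which is how the same skill reply can serve both a headless device and a desktop with visual feedback.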
That's where all the developers and community folks hang out and contribute. And you can always try compiling it from Git; that's the Git address. So, questions?

"We've seen in the demo that there was always quite a significant delay between the command and the reaction. What is this delay caused by, and what can be done about it?"

I think my internet connection is really bad right now, and the wallpaper command took a lot of time to download the wallpaper; on a fast connection it happens much faster. If I'm doing a text command, a text command would probably be much faster than speech, because speech to text still goes through Google STT: it has to go to the Google server, come all the way back, and get processed.

"It's about the same question, but I think even the first command, to open Firefox, had some delay. It has to go to Google STT and then come back? It always goes to the..."

Yeah, because that's what we're using as the speech to text engine currently. Mycroft is trying to work on something like OpenSTT, an open speech to text model, and they're trying to incorporate other things like PocketSphinx, so we can skip Google STT altogether, but it's still a long shot.

Okay, any more questions? I think we can take one more. Okay, no questions. Cool. That was an awesome talk. Thank you, Aditya. Thank you so much.
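The latency answer above comes down to a pipeline difference: a spoken command pays for a network round trip to the remote STT service before intent handling even starts, while a typed command skips that stage. A toy cost model makes the comparison explicit; the stage costs are arbitrary placeholder units, not measurements of Mycroft.

```python
# Toy cost model of the two command paths described in the Q&A.
# Costs are arbitrary units chosen for illustration, not real timings.

REMOTE_STT_COST = 5   # audio upload + remote transcription + download
INTENT_COST = 1       # local intent matching
TTS_COST = 1          # synthesizing the spoken reply


def voice_command_cost():
    # record -> remote STT round trip -> intent -> TTS
    return REMOTE_STT_COST + INTENT_COST + TTS_COST


def text_command_cost():
    # typed text goes straight to intent matching
    return INTENT_COST + TTS_COST


if __name__ == "__main__":
    print("voice command cost:", voice_command_cost())
    print("text command cost:", text_command_cost())
```

The gap between the two paths is exactly the remote STT round trip, which is why a local engine such as PocketSphinx would shrink voice latency even if it transcribed no faster than the server does.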