Okay, so there is something very interesting going on right now. Big companies like Google and Microsoft are rushing to put AI tools inside their workspace or office products. We've got Google doing that with text generation and image generation, and we've got Microsoft doing the same thing, but better, because you can actually ask the AI to interact with the document; as an example, you can tell Excel, please make me a table with blah blah blah. But there are a couple of issues with this approach. Firstly, they are scared, and they are not releasing these products to the public at all, only giving them to a few selected companies. So all of us mortals who might be interested in AI just can't access these products. Also, they don't quite care about whether the models are open and available to everybody; actually, they are probably happier if they are kept private. I would say it's a bit of an issue that OpenAI decided that no details whatsoever should be given about GPT-4 and that it should be kept as proprietary as possible. Yes, they are claiming that it's risky for it to be open source because of AGI risk, but are you really going to say that we should all just trust you, and that just one company should have access to this technology? Is that going to end well? Nextcloud is trying to address that in, in my opinion, a very nice way. Firstly, Nextcloud is kind of the same as Google Workspace and the Microsoft Office 365 suite: it's trying to give a complete suite for companies to do company stuff. We've got shared documents, a shared drive, and a lot more applications, like the calendar. Very interesting things. And now, of course, they are releasing Hub 4, which includes AI tools for everybody, so nobody is going to be left out, thankfully. Which AI tools, we're going to dive into shortly, but of course the question is: is that going to be a problem?
Like, are they using something that's very closed source and not ethical at all? In fact, they have created a sort of ethical rating that scores these AI tools depending on the availability of the model, of the training data, and of the original code. The user can then choose whether or not to use a tool, depending also on this ethical rating that Nextcloud gives. And I do think that this kind of thing really pushes toward something that is more open; I do think we will get more open source, or at least available-to-everyone, AI tools in the future. So what are they actually including? First of all, we've got Whisper, which is made by OpenAI and was released just last September. In theory, it does speech to text almost as well as a human transcribing something, which is extremely useful. In any document or any text field within a Nextcloud app, you just type slash, you select speech to text, and then you start talking, and everything you say is going to be transcribed into that document. That is genuinely something very useful. You can even choose to translate automatically (to English only, right now), and you can choose which model of Whisper to use, because they do release those, which is a very interesting feature. Next up, of course, is GPT, because who's not going to include GPT these days? And again, you're able to choose the model to use. You can give a prompt, see the result, and eventually insert it into the document, and in the advanced options you can even select to show more than one result. All of this is extremely useful, in my opinion, if you have to do an early draft to fill in or elaborate on later, or if you need some brainstorming or a quick template; for me it's quite useful. Fun fact: I promoted a t-shirt some days ago, and the design was actually completely invented by ChatGPT.
I just asked what would be a nice little fun slogan to support Linux, and it came up with it. Then we've got image generation through Stable Diffusion rather than DALL-E, so no OpenAI for that one. They managed to market this on the announcement page in the worst way possible, because it says something like "your cloud is full of documents, here's a tool to quickly find them", which is not what this feature does. This feature creates new images; it does not find anything. Seriously, who thought of that? So you just give a prompt, get an image, and insert that image into the document. I think this is useful for early drafts, or to give some sort of reference to an artist who will do the final work. So how are all of these features actually exposed to the user? Because it's very important that you're able to use all of these, especially the speech to text, in all text fields. What they are currently doing is inserting what they call the smart picker, which is a very interesting feature, basically the same one that I think Google Docs has, but also Notion and probably lots of other apps: if you press slash, you get a lot of things to choose from to insert into the document, such as speech to text, image generation, or text generation. But also, very interestingly, we've got maps. You can actually see an OpenStreetMap dialog, you can select a place, and that place is going to be added to the document; and if it's a text document, you can even embed the whole map inside of it, which is super useful. You've also got GIFs, obviously, but also videos through PeerTube. And my videos are on PeerTube, so you can actually search for me and my videos should pop up, maybe not, but who knows, and more that I forgot. Oh yes, there is one feature which I don't see the use case for, but I guess it's nice to have: you're able to search The Movie Database for actors, movies, and series. Okay.
It's, I guess, very nice to have it handy; I don't see the use cases for it, but it's good to have. Along these smart picker features, there's also a new application called Tables. Now, I did a bit of research, and it seems like Tables has already existed for about a year, so I guess the news is that it's now part of Nextcloud; maybe they upstreamed the work or something. But it's very interesting. They market it as an alternative to Microsoft SharePoint, which it isn't, to be clear, but it does allow you to create tables. You can create a lot of columns and give a different type of data to each column, so you might have one column that's text, one that's numbers, one that's checkboxes, one that's a slider, one that's a date, this kind of thing, and then you just insert rows. And, and this is very important, you can now insert a table within a document using the smart picker, so everything actually clicks together in such a nice way. I know I don't sound very excited here, but when I first read about all these features, I was like, you know what, I should try Nextcloud; this actually sounds pretty useful. What else? They do have a Nextcloud Talk application, which allows for video conferencing as well, and they've now added the possibility to record calls, which is extremely important. If you give a lecture, you might want to record it for students who weren't able to attend, or in general, if somebody misses a meeting, you might want to record it for them. But also, even aside from that, my memory is terrible, and I don't remember half, if not 90%, of the things we talked about. So please record all the meetings and send them to me, because I'm going to forget everything. Yes, I do take notes; I'm just not good at it. Another nice feature is breakout rooms.
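To make the typed-column idea concrete, here is a minimal sketch of a table that validates each row against its column types, the way the Tables UI described above does. This is purely illustrative; the class, the type names, and the validation rules are my own assumptions, not Nextcloud's actual code or API.

```python
from datetime import date

# Hypothetical typed-column table, loosely modeled on the column types the
# Tables app exposes (text, number, checkbox, date). All names are invented.
VALIDATORS = {
    "text": lambda v: isinstance(v, str),
    "number": lambda v: isinstance(v, (int, float)) and not isinstance(v, bool),
    "checkbox": lambda v: isinstance(v, bool),
    "date": lambda v: isinstance(v, date),
}

class Table:
    def __init__(self, columns):
        # columns: list of (name, type) pairs, e.g. [("Task", "text")]
        self.columns = columns
        self.rows = []

    def insert_row(self, *values):
        """Reject rows whose values don't match the declared column types."""
        if len(values) != len(self.columns):
            raise ValueError("row length does not match column count")
        for (name, ctype), value in zip(self.columns, values):
            if not VALIDATORS[ctype](value):
                raise TypeError(f"column {name!r} expects a {ctype}")
        self.rows.append(values)

todo = Table([("Task", "text"), ("Hours", "number"),
              ("Done", "checkbox"), ("Due", "date")])
todo.insert_row("Write draft", 2.5, False, date(2023, 5, 1))
```

The point of typing columns is exactly this kind of up-front validation: the table can refuse a checkbox value in a date column instead of silently storing junk.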
So basically, you've got a group of people, and you can ask Nextcloud, either automatically, manually, or by letting each person choose, to divide these people into smaller groups; you can decide how many, obviously. Each group then gets its own video call, and you're able to move people between groups and also send messages to certain groups, as an example, saying, okay, are you done? Every other group is done. Are you done? These kinds of things. These are actually pretty common, and I guess useful, in e-learning platforms, so it's very nice to have as well. And then there are improvements to Nextcloud's file versioning. First of all, you can give a name to a certain version of a file, which is extremely useful. Otherwise, if I mess up, I have to go through the list of all the times I edited a certain file and try to find the exact revision I want to get back to. Now I can just write something like "final, final, final, actually final", then go back to the revisions and search for that word, and it's going to pop up, which is much, much better. Especially if you have to do that for a document that somebody else has made, because who knows what's on their mind; if they actually give names to revisions, which they probably won't, then that's going to make it much easier to undo the mess they have made. Another nice thing is that revisions are now saved one per minute for the last hour, one per hour for the last day, one per day for the last month, and one per month for the last year. Actually, the announcement just says "and so on", but I guess it continues like one per year for the last decade, one per decade, and so forth. This is very similar to what Back In Time does with backups, as an example, and it allows you to see all the changes that were made recently without needing terabytes of storage to save everything that was done each minute over the last few years, or decades, or centuries. I guess that's pretty useful.
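The retention schedule described above can be sketched as a simple thinning algorithm: put each version into a time bucket whose size grows with age, and keep only the newest version per bucket. This is an illustrative sketch, not Nextcloud's actual implementation, and it approximates a month as 30 days and a year as 365.

```python
from datetime import datetime, timedelta

def retained_versions(timestamps, now):
    """Keep one version per minute (last hour), per hour (last day),
    per day (last month), per month (last year), per year beyond that."""
    def bucket(ts):
        age = now - ts
        if age <= timedelta(hours=1):
            return ("minute", ts.replace(second=0, microsecond=0))
        if age <= timedelta(days=1):
            return ("hour", ts.replace(minute=0, second=0, microsecond=0))
        if age <= timedelta(days=30):
            return ("day", ts.date())
        if age <= timedelta(days=365):
            return ("month", (ts.year, ts.month))
        return ("year", ts.year)

    kept = {}
    # Newest first, so the newest version in each bucket is the one kept.
    for ts in sorted(timestamps, reverse=True):
        kept.setdefault(bucket(ts), ts)
    return sorted(kept.values())
```

The storage saving comes from the bucket sizes: a document edited every minute for years collapses to about sixty minute-versions, twenty-three hour-versions, twenty-nine day-versions, and so on, instead of millions of copies.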
Now, all this AI stuff, yes, is very nice to have, but it does come with some compromises in order to deliver it so quickly, which I guess is why Microsoft and Google are taking more time with it. Firstly, these AI tools don't have any context regarding what you are doing. You cannot just open an email exchange and say, AI, please summarize this email exchange, unless you copy the entire thing and paste it inside the GPT tool, I guess, which, okay. Also, you are not able to interact with the document as Microsoft is claiming to do, so you cannot just say, you know, automatically sort my files into subfolders. I think that would be an awesome use case to implement, and it shouldn't be impossible with the technology we have right now. In fact, I do want to digress slightly to talk about what Nextcloud, in my opinion, could do. There is currently a proof of concept for Unity, the game engine, that allows you to tell GPT something and have GPT implement that something in Unity, which is extremely interesting. The way I guess this works is that Unity has some kind of public API, and they tell GPT to use that API to produce code that does the thing the user asked for. And that's a good idea, I think. I even tried to figure out how it could be done for KDE Plasma, because just imagine having a little application where you go and say, hey, little application, please change my theme to dark, move my panel to the top, and swap my monitors, and everything just magically happens. That would be awesome. And I don't think it's impossible. I thought we could take the API of KDE Plasma and then allow GPT to search through the API on request. You can tell GPT to follow some sort of strict syntax, which I think is what Bing does as well.
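The "make GPT follow a strict syntax" idea above can be sketched like this: you instruct the model to answer only in a constrained format (here JSON), then parse its reply and dispatch it onto a small whitelist of functions. Everything below is hypothetical; the command names and the stub "desktop" functions are invented for illustration and are not real KDE Plasma or Nextcloud calls, and the live model call is replaced by a hard-coded example reply.

```python
import json

# Stub "desktop" actions standing in for real API calls.
def set_theme(name): return f"theme set to {name}"
def move_panel(edge): return f"panel moved to {edge}"

COMMANDS = {"set_theme": set_theme, "move_panel": move_panel}

# This prompt would be sent to the model along with the user's request,
# pinning it to a machine-parseable output format.
SYSTEM_PROMPT = (
    "You control a desktop. Reply ONLY with a JSON list of commands, e.g. "
    '[{"command": "set_theme", "args": ["dark"]}]. '
    "Allowed commands: " + ", ".join(COMMANDS)
)

def dispatch(model_reply):
    """Parse the model's JSON reply and run each whitelisted command."""
    results = []
    for step in json.loads(model_reply):
        fn = COMMANDS.get(step["command"])
        if fn is None:
            raise ValueError(f"unknown command: {step['command']}")
        results.append(fn(*step["args"]))
    return results

# The kind of reply we would expect for
# "change my theme to dark and move my panel to the top":
reply = ('[{"command": "set_theme", "args": ["dark"]},'
         ' {"command": "move_panel", "args": ["top"]}]')
print(dispatch(reply))
```

The whitelist is the important design choice: the model never executes arbitrary code, it can only name actions the application has deliberately exposed.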
That way, you could get GPT to end up emitting some code that uses the KDE Plasma API to change your desktop. The issue I ran into when trying to figure out how I would actually implement this is that KDE does not have one good, unified API that lets you do everything, and I would have to spend months just making that documentation accessible to GPT, instead of having documentation that is a single page and easy to use. So I don't think it's feasible right now, but it's such an interesting approach. And to be honest, I don't see why Nextcloud couldn't try to do the same thing: if they have some internal API, they could expose its documentation to GPT, as, again, I think others are doing as well. By doing that, they would be able to have GPT make requests to that API and actually interact with your documents as you're using them. That, in my opinion, is something I'm really hoping Nextcloud will investigate. Or, at the very least, they could optionally give some document to GPT, where you just say "this document" in the prompt and that automatically gives it context about which document you're talking about; that would also be extremely useful, and again, it's what Microsoft is claiming to be doing. So this is the way forward that I would hope Nextcloud goes. I'm already very happy with what they're doing right now, but if they do that, then I'm going to switch to Nextcloud the same day. Nonetheless, this was a pretty exciting feature launch. So, whilst wishing Nextcloud the best of luck, I do want to say a big thank you to all the patrons who are currently chipping in something to make this channel exist, because it's not free to maintain; I'm actually paying for a lot of things, like the lights, LEDs, camera, and editor. So if you're able to chip in something, I'm trying my best to improve the quality of the videos a lot. Thanks for following.
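The "this document" idea above is essentially context injection: resolve the reference to the file the user currently has open and prepend its text to the prompt before it goes to the model. A minimal sketch, assuming nothing about Nextcloud's actual design; the prompt shape and truncation limit are arbitrary choices for illustration.

```python
def build_prompt(user_request, current_document_text, max_context_chars=4000):
    """Inline the (truncated) current document so the model has context
    for references like 'this document' in the user's request."""
    context = current_document_text[:max_context_chars]
    return (
        "You are assisting inside an office suite.\n"
        "--- DOCUMENT START ---\n"
        f"{context}\n"
        "--- DOCUMENT END ---\n"
        f"User request: {user_request}"
    )

prompt = build_prompt(
    "Summarize this document in one sentence.",
    "Meeting notes: we agreed to ship Hub 4 in spring.",
)
```

The truncation matters because models have a fixed context window; a real implementation would have to chunk or summarize long documents rather than naively cutting them off.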
And if you're able to chip in, I've got Liberapay, PayPal, Patreon, Ko-fi; I've got tiers, benefits, all that boring stuff that every YouTuber does. Come on, you know the drill. Bye.