So before we close the series, I just wanted to share my predictions about what we can expect to see in the future, based on all the new technology I've seen and tested in the last six months. After using ChatGPT and Blender Assistant, I think I have a pretty good idea of what is going to change for both 2D and 3D software. So here are the three biggest changes I believe everyone can expect to see in the software we use in day to day life. The first is going to be a built-in AI assistant for all software. I really think Blender Assistant is a preview of the first major change that will be implemented into all of your favorite software. Like I said before, we are already seeing glimpses of this as people download, run, and train their AI locally on their favorite stuff. People who love horror will train their AI on all their favorite horror art. People who love anime will train their personal AI on all their favorite anime art. People who love books will train their personal AI on all their favorite books. There will be an AI trained specifically on Blender. There will be another AI trained specifically for Photoshop. Same thing for Unity, same thing for Unreal, same thing for Microsoft PowerPoint and Excel; basically every major piece of software in the future will have its own version of Blender Assistant ready to go as soon as you download or open the program for the first time. And this will solve the biggest problem that new users have when they get lost and don't know how to use the software. I am fairly confident that by the end of the year, tutorial channels like mine will no longer be necessary, because new users will be able to ask the AI assistant how to do anything they don't know. So basically, all programs will have their own version of ChatGPT built in by default.
The second biggest change, though, will be an AI-assisted workflow. At this point we have learned how to train AI on text data, image data, sound data, and 3D animation data, and I believe the next step will be to train AI on sequence data, which means learning to understand the connection between text and action. For example, if we're using Microsoft Excel and we want the AI to understand what it means to take all the numbers on a list and turn them into a spreadsheet, we can train the AI by giving it a written task that it interprets as text, like ChatGPT does, then also feed it information such as the user's visible workstation screen, and have the AI observe thousands of different people perform the same task. The AI will then study and learn the correlation between what it sees on screen and what humans tend to do given the task at hand. It will notice which keys you need to press, where the mouse needs to go, and where you need to click, and it will learn to replicate all of that and call upon this information when you ask. Eventually it will understand that when you say the words "turn this into a spreadsheet," what you really want it to do is take the info outside of Excel and move it into an organized sheet inside of Excel. If we translate this example to something more artistic like Photoshop, after watching a thousand different humans tackle the same task, the AI will eventually understand that when it hears a phrase like "separate the subject from the background," the proper command sequence looks something like: click this button, go here, select the foreground, and move it to a second layer. In 3D terms, in the future, when you tell Blender Assistant "instantiate a torus with 16 major segments and 8 minor segments," the machine will understand that means: press Shift+A, choose Mesh, then Torus, go down to the operator panel, and change Major Segments to 16 and Minor Segments to 8.
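To make the torus example concrete, here is a minimal toy sketch of that text-to-action mapping. This is purely my own illustration, not how Blender Assistant actually works: a real assistant would learn the mapping from thousands of observed sessions rather than hard-coded rules, and the parser and action names here are hypothetical. The one real piece is the Blender Python operator `bpy.ops.mesh.primitive_torus_add`, which does take `major_segments` and `minor_segments` parameters.

```python
import re


def parse_torus_command(command):
    """Toy parser: map a natural-language torus request to an action sequence.

    Hypothetical illustration only -- a trained assistant would learn this
    text-to-action mapping from data instead of a regex.
    """
    match = re.search(
        r"(\d+)\s+major\s+segments?.*?(\d+)\s+minor\s+segments?",
        command,
        re.IGNORECASE,
    )
    if not match:
        return None
    major, minor = int(match.group(1)), int(match.group(2))
    return {
        # The UI steps a human user would perform by hand...
        "ui_steps": [
            "press Shift+A",
            "choose Mesh > Torus",
            f"set Major Segments to {major}",
            f"set Minor Segments to {minor}",
        ],
        # ...and the single Blender Python call that does the same thing.
        "script": (
            f"bpy.ops.mesh.primitive_torus_add("
            f"major_segments={major}, minor_segments={minor})"
        ),
    }


action = parse_torus_command(
    "instantiate a torus with 16 major segments and 8 minor segments")
print(action["script"])
# bpy.ops.mesh.primitive_torus_add(major_segments=16, minor_segments=8)
```

The point of the sketch is the shape of the output, not the parsing: the assistant's job is to turn one sentence into either a replayable list of UI steps or an equivalent script it can run for you.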
And it will do that for you automatically as you sit there and watch the magic happen on screen. It will do this because it learned from watching thousands of users before you perform these exact same steps when they wanted to create a torus. Which means the future workflow is going to look something like this: you'll open Photoshop to make an image. The future version of Photoshop will no doubt have an AI built in, so a new user will have a blank canvas and be able to type in "beautiful red cloudy sunset sky in the style of Claude Monet," and it will appear. Then the user will type something like "make it darker," and the AI will automatically go to the adjustment settings and do that for you. And if you still feel like some of the details are out of place, that is when you go in manually and fix it by hand the way you want. Now, my final prediction is that your AI will be personalized to you. Your Photoshop assistant will not be the same as my Photoshop assistant, because my assistant will be good at doing what I want the way I want it. If I tell my assistant, "give me a good base face for a fantasy princess," it'll probably show me something like this, because that is the style I like, and that's the style I usually work in every day. But when you give your AI the same command, it might show you something like this, because that might be the style that you usually work in. So everyone's AI is going to be personalized, trained on their own interests and their own work for their own needs. So even if we all start out with the same AI in the same software, it will very likely diverge as you train it over time. It's very likely that kids born today will have something very similar to Cortana on their phone and in every single program they use. So those are my predictions for the future. We'll see how close they are as the year progresses. But yeah, this concludes the ChatGPT series.
And if you join me next video, I'll talk about how this experience with the evolving tech has changed my mind on a lot of things, and my approach for adapting to what we're about to encounter. But anyway, I hope that helps. And as always, I hope you have a fantastic day, and I'll see you around.