What's up guys, it's Chris Pike. My friends call me Big C. I'm back in action today with another video I wanna go over. I'm using the heartbeat tool H.K.I, so log in and follow along with me. The video I wanna show you today is by ColdFusion, because here's the thing: in a previous video I showed you Sora and some of the great, amazing text-to-video stuff that's coming out. Well, in this one, ColdFusion not only covers that, but he covers how this is going to impact jobs, how this could impact you, me, people that create videos on YouTube, designers, VFX people, what's going to happen in society in the future, and then how to combat a lot of the issues that have come up. So without any further ado, let's get in. I found a whole bunch of great moments in this video. Let's go. I'm gonna click on it right here, H.K.I if you wanna follow along. And now let's kick it off. He starts quickly, right at the three-second mark. Here we go. This is a Reddit thread from three years ago discussing AI imagery. The top user says, imagine in a few years when we can make photorealistic videos from just a few sentences. AI is crazy. He gets downvoted, and the reply comment laughs at him, saying that it's not going to happen in our lifetime, that our great-grandkids might have such technology. Well. Right, so there we go. Right out of the box, holy smokes. This was three years ago. It was virtually unimaginable, and if it was imaginable, most people were thinking, yeah, okay, if you have kids and they have kids, maybe they'll see it. It's here. It's basically been demonstrated as of a couple days ago by OpenAI. This reminds me of what happened 100 years ago. Not that I was alive, but I've read plenty of the comments from writers back when the Wright brothers and others were researching flight. They were saying it would never happen, that it's impossible, that it can't be done. It got done. It was painful for them, but they got it done.
So just, you know, never underestimate technology. That's kind of what I'm getting at here. So let's keep going. Let's go ahead and talk about combining separate videos into one scene. This is now going to be a look at one of the most incredible features, and it's not very talked about. You gotta see it. It can do more than just create videos from scratch. It can combine separate videos into one scene, animate still images, modify non-AI videos seamlessly depending on the user prompt, and much more, which we'll get into later. So there you go. Not only can it do text-to-video, which is just amazing stuff that I showed you in a previous video, so I didn't want to cover all that again and duplicate myself, but it can do more than that. There's image-to-video, there's animation: you can basically put in a still, maybe a vector graphic, and it will animate it. Crazy stuff going on here. And again, this is something that wasn't really covered in the original release of the mind-blowing Sora demo. So let's skip forward another minute here and talk about a little bit of the context. To understand the context here, as Marques Brownlee pointed out: this is a viral clip of where AI text-to-video was a year ago, but even the state of the art now is nowhere near close. I tested the same prompts on Runway ML and here are the results. Watch this. Right, so Runway ML is kind of a competitor, so to speak. Although Runway ML has shipped something while Sora still isn't usable by anybody, the gap between the two products, even though one has shipped, like I said, is just unbelievable. So if you are working for Midjourney or Runway ML or Pixverse or whatever all these other ones are that are doing this, yeah, you saw what Sora can do. That should light a fire. So anyways, let's get forward now.
Let's talk about what's called an apples-to-apples comparison. So let's go a little further into Runway ML versus Sora versus other competitors, et cetera. And that's the prompt right there, just so you can see it; rewind it and read it if you want. This way you know it's the same prompt for both. Amazing. Do look, though: when the doggy starts walking, even in Sora it isn't 100% realistic, but it's pretty close. I'm assuming that's because they don't quite have the physics down. But check this out. The thing about Sora is, it's coherent. Previous video AI systems have a characteristic morphing quality as the video progresses. With Sora, that's vastly reduced or gone altogether; objects remain stable, even when obscured by things in the foreground. Like this: this video of these puppy dogs playing in the snow is just absurd. You have to look really, really hard to tell that this is AI. I mean, you could maybe see it in some of the blurring in the fur here, and the snow looks 99% real, though it does have a little bit of an artificial look if you look closely. Crazy stuff, let's keep going. It's a much more robust system, but not only this: Sora can animate images, such as cartoons or this Shiba Inu dog. We've seen stuff similar to this in research since 2019, but what is new is the ability to combine two videos together in one scene. Let's take a look at that. Look at this. Hey! You see that drone just turn into a butterfly? It didn't even morph, it just instantly happened. Super impressive stuff. Sora, I can't believe this kind of stuff, but there's more to it. This was also not really covered much in the original press release or in the demo videos, but look at this: different camera angles in the same scene. It can also simultaneously make up different camera angles of a single scene with just one prompt. Just another incredible thing that Sora can do that defies belief. It's unbelievable.
Now, let's go forward a little bit and discuss how Sora was trained. Where did they get all this video, all these images, all these things that it's using to create this incredible content? Again, nobody knows for sure, but there are some clues. So here we go, I'm gonna skip forward to 5:20. So how was Sora trained? There's no public info on the training data, but OpenAI did partner with Shutterstock last year, so there's a wealth of copyright-free data for the AI to chew on, and that might be a clue. So this is cool and all, but before we get ahead of ourselves, what are some limitations? While these videos look good, aside from the cherry-picked examples and a handful of selected public users, we can't get a full grasp of how robust the system is. That's right, almost nobody has proper access to this tool other than people in the loop: people that work there, red teamers that are testing it for safety and putting on guardrails, et cetera, and the occasional VFX person, apparently. So do keep that in mind. You know what? Sora's impressive, but now comes the more interesting part of this video, in my opinion, probably because I've seen so many demos: what happens in the future? What does this mean for the average bear? Let's get into that. What happens when this technology becomes democratized beyond the boundaries of just OpenAI? You've all probably thought of some implications of this. One is the reduced need for stock footage, but of course, an obvious thing people love to gravitate towards is misinformation and fake news. People... Yes, exactly. All those news stations are already doing a lot of fake news, depending on who you listen to. Yeah, it's gonna get worse. CNN, you're going to have a new competitor. Oh boy, sorry, I had to drop that one. Yeah, CNN, try keeping up. Using AI to create events that never happened.
We've already seen this with AI images when they were brand new, but now that it's video, will there be issues with law enforcement? Forensic video experts may face challenges in distinguishing between genuine and fabricated or modified video evidence. Yes, and if you really think about this, if you can deepfake at this sort of level, can you imagine the kind of things that could happen? You could literally deepfake a video that could start a war. So seriously, really think about this. Criminals may also deny video evidence; they could claim that the implicating footage was AI-generated. These issues require the development of new standards for verifying video authenticity. Yes, a new standard. We're gonna get into that, but let's go ahead and skip forward a little bit here. I've got a new moment here for you: it's about how it's easier for creators to tell stories. And if you think about it, that's exactly what most people are going to use this for. But on the positive side, tools like Sora make it easier for creatives to tell stories. Videographers might be sweating, because it gives a similar capacity to those who have never picked up a camera. That being said, it's not as cut and dried as videographers disappearing overnight. There'll always be a need for them in certain situations, like if you're filming a particular event or particular people, but the future could turn out something like this: the higher tier of videographers that do custom work will remain, but the lowest rung, who take out their cameras just to film something for stock-footage purposes or things of that nature, will start to see their work be impacted. Again, it's not happening now, but we can see the trajectory over the next couple of years. So there you go. If you're going into videography, maybe thinking about going to cinema school or film school, then unless you intend to be one of the higher-end types, you may want to rethink it based on this. So, hey, there you go.
There's my public service announcement. Let's get forward a little bit here and talk about AI fatigue, another thing that is probably going to start coming up from all of this darn AI content. Tools like Sora could have a strange effect on human psychology. It's an effect that we came up with on the ColdFusion podcast last year. Other people have probably noticed it too, but we called it AI fatigue. Guys, I have this. I've seen so much AI footage that it's now hard to baffle me. It's hard for me to go, oh my God, that's amazing, or oh my God, that's definitely not made with AI. Let's watch a little further. It's the concept of AI being able to produce stunning imagery in such volume that it lowers the specialty or visual value of true creative work. For example, on social media, you could see a crazy video that would have made our jaws drop just a few years ago, but now you just think, meh. Reason being, you've been overexposed to it. Any visual media, anything you can imagine, can be done easily with AI now. There it is in a nutshell. If AI can do it, and do it amazingly, and you see this all day, it's gonna take something pretty crazy to shock you or make you go, oh my God, I can't believe that. That is what AI is doing. Let's get forward a little bit more, I've got a couple more moments I wanna cover with you, and let's go into the perception of what we see and not believing it. Beyond this, collectively, human perception of what we think is real will be altered. People aren't going to believe anything they see. Key point: people are not gonna believe anything anymore. Crazy world we're going into, crazy times. Imagine doing all of that hard work for people just to think, once again, that you just sat at a computer and typed in some prompts. If you're a creative specializing in unique visual content, how do you feel about such a future world? I'd like to hear from you in the comments. A lot of...
By the way, I do create a lot of animations and things like that, and honestly, I don't feel good about this. It's a little scary for me: somebody that has access to a program I don't have access to could, for example, type something in and do a month's worth of work. It's crazy. I don't even know what's gonna happen with the VFX industry, whether they're gonna elevate their game to something unbelievable, or whether this is gonna result in a lot of people not going down this path and a lot of unemployment. It's really hard to say. But scary as it is, let's get forward to here, pardon me. We're gonna go to the erosion of trust, about two to five years from now. This is, yeah, a little bit dark, but hey, let's go there and see what he's got to say. Let's take a look two to five years in the future, when this tech will become trivial. Along with AI fatigue comes the further erosion of trust, for example in journalism and media production. While a tool like Sora can enable faster and cheaper video creation, it may also challenge traditional notions of authenticity and trust in media. Yeah, and trust in media is already at an all-time low, people. I'm looking at you, CNN and MSNBC and even Fox. Yeah, I'm watching you guys. But yeah, it's gonna get worse. Like, can you imagine? It's gonna be just a hellscape. But anyways, let's keep going. A simple website or app will be able to generate photorealistic videos for you. When it's democratized, there's going to be a lot of people that use it for nefarious reasons. Just like deepfakes before it, there's a potential for chaos. Yeah, it's gonna be crazy. Scammers, yeah, scammers, scam products. Here we go, check this out. And don't even get me started on scammers. They're gonna have a field day with this. They could create adverts for products that don't exist, investment opportunities for things that don't exist. YouTube could be filled with AI-generated trash.
So what can be done about this future scenario? That's a good point. He's gonna talk about what we can do, and it basically comes down to a digital marker. So I'll end it off here with these last few moments. There are things in the works to try and combat this. Let's get forward to 10:39. A watermark isn't going to be enough, as that can just be cropped out. Wouldn't it be great if we could render AI videos that contain some kind of digital marker that tells us it's AI-generated? The good news is that this exists, well, kind of. So what is this digital marker? Well, in February of 2021, the BBC, Microsoft, Adobe and a few other companies got together and realized, hey, we might have a little problem with generative AI and misinformation on the horizon. Just a little. Their solution was the C2PA standard. C2PA, interesting. A technical marker that embeds metadata into media and is used for verifying its origin. The C2PA standard is also being adopted by camera manufacturers, news stations, and of course OpenAI and Sora. The metadata also can't be edited without anyone else knowing. However, there's a problem. Yeah, watch this. Something as simple as taking a screenshot and resaving the image can destroy that metadata. There it is. So there is a standard in place, or it's coming at least, pardon me, C2PA, but there are loopholes, and they are glaring loopholes, and various industry players are trying to plug the gap and come up with solutions. At this time, it's a partial solution at best. So that, in a nutshell, is Sora: what it's great at, what it's not so great at, and what its risks are. Just so many interesting things in this one video covering what the future looks like. And yeah, if you liked this, check out ColdFusion's entire video. It's really, really good. Links to all of this are in the description below, including the heartbeat tool. Thanks for watching.
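Since C2PA came up in that last segment, here's a toy Python sketch of the core idea: provenance metadata cryptographically bound to the media bytes, so edits to either one are detectable, and why a screenshot defeats it (the re-rendered pixels simply carry no manifest at all). This is a simplified illustration only, not the actual standard: real C2PA manifests use X.509 certificate chains and are embedded inside the file itself, and the `SIGNING_KEY` below is a made-up stand-in.

```python
# Toy sketch of the C2PA idea: a signed "manifest" binds provenance
# claims to a hash of the media bytes. Tampering with either the
# claims or the pixels breaks verification. A screenshot, however,
# produces fresh bytes with no manifest, so there's nothing to check.
# NOTE: the HMAC key is a hypothetical stand-in for a real signing cert.
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-secret"  # hypothetical issuer key, for illustration

def attach_provenance(pixels: bytes, claims: dict) -> dict:
    """Bundle media bytes with signed provenance metadata."""
    manifest = {
        "claims": claims,
        "content_hash": hashlib.sha256(pixels).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"pixels": pixels, "manifest": manifest}

def verify(asset: dict) -> bool:
    """Check that the metadata is intact and matches the media bytes."""
    manifest = asset.get("manifest")
    if manifest is None:
        return False  # no metadata at all, e.g. a screenshot
    sig = manifest["signature"]
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # metadata was edited after signing
    return manifest["content_hash"] == hashlib.sha256(asset["pixels"]).hexdigest()

original = attach_provenance(b"\x89PNG...pixel-data", {"generator": "AI model"})
print(verify(original))                      # True: intact provenance

screenshot = {"pixels": original["pixels"]}  # re-rendered copy, no manifest
print(verify(screenshot))                    # False: metadata destroyed
```

The takeaway matches the video: a scheme like this can prove where a file came from as long as the metadata survives, but it can't say anything about a copy that never carried metadata in the first place, which is exactly the screenshot loophole.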