I, for one, welcome our AI overlords. I hate to admit it, but if I were a prominent journalist at some publication who had been laid off after standing up for a progressive cause, being harassed on Twitter by a bunch of five-year-olds telling me to learn to code, I would take it really personally, because I am continuously upset about the fact that I can't code. It's not like I haven't tried; I have, at no fewer than three separate points in my life, but much like any language that isn't the one I'm speaking right now, it never sticks. Despite that, I love tech and the world of tech, and I spend at least as much time reading about it as I do doing anything else that isn't maybe watching TikTok. In particular, I click on virtually any headline I see that mentions artificial intelligence and neural networks and machine learning and all of that good(?) stuff.

Now, maybe you'd assume that's because of the at-least-weekly emails I receive in my work inbox telling me how AI can help me improve productivity or even make staff straight-up obsolete, but what I'm really interested in is the ways that AI tools can and will be used to fuel or replace creativity. Because it seems inevitable, right? AI is getting ever more prominent in our day-to-day lives, especially as hardware gets AI-specific processing built in. The most noticeable impact for us laypeople has been in the absolutely ridiculous quality of modern smartphone cameras. Let's be real: my Pixel 4a does not take great pictures because it has great hardware. Far from it. Rather, it's because of Google's algorithms, which can use data from that tiny sensor the company has been using since the Pixel 2 to generate images that are genuinely mind-blowing in their detail. Anyone who has spent more than like five hours with me will definitely have heard my rant about how the next revolution will come if/when those software capabilities come to proper cameras. And in general, imaging is a place where AI really excels right now, but obviously there are so many more uses for it.

A key thing to understand about AI is that it is always specific. Any AI is only really able to do one thing, because it's only made to do one thing. The AlphaGo bot that shocked the world in 2016 when it beat 18-time world champ Lee Sedol in a game of Go could not have turned around and beaten him in, say, a spelling bee, or even another type of game. That AI has since been generalized a little into AlphaZero, which has also proven itself good at chess and shogi, but it still couldn't write a screenplay. Then again, neither really can Benjamin, but at least it could try.

Benjamin is the name that a screenwriting tool gave itself after filmmaker Oscar Sharp got together with artist and technologist Ross Goodwin, having seen AlphaGo's success and wondered what AI could do for them. They fed dozens of classic sci-fi scripts, plus a few dramas and such for good measure, into an algorithm, gave it a couple of basic prompts to get the story started, and out popped Benjamin's bullshit. And it was bullshit, obviously: a whole heap of nonsense dialogue and some nonsense song lyrics to go along with it. They filmed it anyway, though, resulting in a curious little piece that probably made a lot of would-be screenwriters sigh with relief. When Benjamin was brought back the next year, the team went with a more hybrid approach, injecting machine dialogue into mostly human writing.
It still produced nonsense, but the idea of mixing AI and human dialogue certainly showed more potential than pure AI had the year before. Differently interesting was the third and final project, in which Benjamin was expanded into a more generalized film-creation tool, with minor audiovisual capabilities beyond the natural language processing aspect of it. With those mixed results in mind, the team fed the new Benjamin public domain films and some new footage of a few actors on a green screen, at which point it directed its own movie, choosing sequences and generating dialogue and music on top of them, and trying admirably, but quite poorly, to deepfake new performances onto old ones. It's pretty bad, and the project was rejected from the 48-hour short contest it was produced for because it simply didn't have enough new stuff going on. (All three Benjamin projects were made for 48-hour short contests.) But that doesn't mean it's not worth watching, while keeping in mind that, as fun as it is to point and laugh at the dumb robot, it is hardly the best that a machine can do.

You may have heard of OpenAI's NLP tool GPT-3, which is the best one, largely as a result of being based on the largest dataset: the internet. Not like the whole internet, but close enough. There are 175 billion parameters in there, built from 45 terabytes of text data pulled from a filtered subset of the web-crawl archive Common Crawl, the text of every web page linked to on Reddit with a karma score of at least 3, two different online repositories of books, and much of English-language Wikipedia. I feel like the Reddit part is why GPT-3 doesn't seem to like Muslims all that much. Sadly, this is not the part where I reveal that everything I've said so far was written by GPT-3, but I have seen enough text produced by the system, used in breathless articles about it, to know that under the right circumstances it is virtually indistinguishable from human writing.

So it makes sense that someone would task it with writing a far more cohesive story than Benjamin ever could. Using a generally available, albeit rather expensive, online tool whose marketing pitch we'll get to later, we got Solicitors, the first bit of which was human-written; the rest came from the machine. It's a night-and-day difference between the projects. The new AI created a consistent, if convoluted, story about a missionary who went from being a violent drug dealer to a messenger of God, with a twist that really makes me wish we could ask it what it was thinking, because it could mean two very different things in a way that's actually kind of fun. But of course it's not thinking. That's not how this works. The obscene amounts of data provided a framework for twists, and when asked to finish a story, GPT-3 followed it to an almost hilarious degree.

And that is cool, right? It almost seems like something you could use to help with a real project, and of course in post-production, VFX houses are beginning to use AI to automate aspects of their work, even beyond the whole deepfake thing. But what happens when a system is more deeply embedded into the production itself, offering input into shot choices or blocking? When that silly third Benjamin experiment becomes something much more serious, and when the AI has been trained specifically on the works of one of cinema's most beloved craftsmen? Hello, by the way, and welcome to the Weekly Review.
You can call me excited to finally have a reason to talk about all this, and today I am talking about Fellini Forward, the documentary, as well as the untitled short film that it centers upon. Well, you can now see Fellini Forward on Amazon Prime Video or whatever. I attended the North American premiere, which was part of the 59th New York Film Festival. I don't know why I was invited, but it was a far less grand affair than the world premiere in Venice, where patrons sat in fucking gondolas on the canal watching a floating screen or some wild shit. I saw a normal screening at the normal theater where I saw the best video game movie of all time. But the reason for that premiere pageantry is that this whole production was funded by booze company Campari, which gets a lengthy bit of wank at the start of the documentary and a cameo of sorts in the short. As someone who doesn't drink, I know nothing about Campari, and I did not partake in the bright red beverages they offered before and after the screening, but I did bring a friend who was happy to imbibe on the company's dime. He noted feeling like something of a class traitor being at the fancy-ish after-party amongst a bunch of folks wearing nice suits and dresses, but like, I was wearing my own channel merch under a hoodie I bought at T.J. Maxx, so neither of us made a serious effort to blend in. Point is, this is all technically commercial placement or whatever, but I'm not going to mention that again, because someone's gotta pay for it and who cares.

What matters far more than the source of the funding is the intent behind the project, because this whole thing is ultimately a gimmick. But there are right and wrong ways to do that. Wrong is easy: look at the bullshit that was the gross "let's cast James Dean in our bad movie because it'll get us some press" thing. Sure, they got permission from the estate, because the people who own his legacy want money, but fucking yikes. Fellini Forward, on the other hand, put in actual work, involving Fellini's only surviving relative, his niece Francesca, as well as several of his former collaborators, to help make sure that the team wasn't getting too far off track. All of them seemed to genuinely believe that Fellini, who loved experimenting with the art form, would have approved, and that it therefore felt right to do this with his work. And I'm glad, because Federico Fellini is one of the most beloved filmmakers of all time, and if his legacy isn't going to be respected, the rest of us are even more fucked than we definitely already are.

His movies, which include classics like La Dolce Vita and 8½, have a very specific dreamlike quality. They're worlds of imagination in which it seems like anything could happen at any time, and honestly, that makes his work feel almost appropriate for your typical nonsense-generating AI machines. Not to run unchecked, of course, but when guided by a human, the two together can make something more Fellini-esque than either might alone. And the level of human guidance is a key differentiator between this project and the others we have discussed here. It wasn't merely a matter of giving the AI a prompt and letting it run wild, or even close to that. A custom script-writing tool was created that, at each new line, would ask the human if they wanted to write their own idea or see what the AI would suggest.
But even if they went with the latter, it was actually a series of five options that the tool would propose, and the human could use any of those, as opposed to "here is your new line, eat a dick, you lesser life form." And much of the time they did go with the AI suggestions, which included lines of dialogue as well as new characters and actions for them to perform, but definitely not always. We get a few specific examples throughout the documentary and briefly see the wide variety of options it would give at any time, most of which seemed strange in a bad way, but every so often one would be strange in a good way.

Now, any AI is only as good as the data it's trained on, and we established a bit ago that getting really consistent suggestions requires hilarious amounts of data. Obviously that couldn't happen here, because to be Fellini you can only work from Fellini. However, Fellini did not really work from scripts. Sure, they existed in some form, but they served as blueprints, with him making potentially radical changes on the day once he saw everything coming together. And because he never recorded sound on location, doing everything in post, the actual dialogue was subject to change at any point in the process. So even the scripts that they were able to get a hold of weren't really much help. As a result, they had to painstakingly transcribe every bit of dialogue and each action as best they could. Obviously, this reflects a fundamental issue with the project, given just how antithetical working from a fixed script was to Fellini's process, but it's also an issue because the dataset wasn't in Fellini's own words: it required third parties to interpret the events on screen into text for the system to be trained on.

And I think that actually matters as we consider how to train so-called creative AIs. Just this past weekend there was a performance of an AI's attempt to complete Beethoven's 10th symphony, based on training that used his final works and the sketches that became them, in order to replicate his creative process from one to the other. Once trained, the machine was fed the sketches the late composer left for his 10th, and on the machine went. Now, the fact that the German press who attended weren't all that impressed, saying it was more like listening to a student of Beethoven than the man himself, speaks volumes about the general endeavor, but the difference in how these projects were trained feels more than academic. That said, while Fellini Forward claims to be pulling from the creativity of the filmmaker, it does not pretend to be by said filmmaker, unlike this other project, which wants people to believe it really is Beethoven's work.

Of course, if all Fellini Forward had to offer was some AI prompting, it wouldn't really be all that interesting and I wouldn't have made this video. What I think gives a glimpse into where filmmaking might go from here comes from its further use of AI to generate the film's previs. Previs, or previsualization, is an umbrella term for ways that a film's visuals are created before they're shot. Storyboards are a type of previs, as are test shoots of stunt performers running their fight scenes in a gym. Nowadays, it often refers to basic CG animations of scenes that demonstrate the camera movement and blocking and so on in a much more detailed way than something like a static storyboard really could, and that is what the Fellini Forward team had their machine do.
Once again, the team fed Fellini's works into this second machine, uploading all of his features and shorts, and then painstakingly tagged each shot with its length, type, movement, and even the emotion being conveyed, as expressed by the actors' faces. One of the scientific experts brought in to help on the project pointed out that this is flawed, because body language is so important in expressing emotion, but the team had decided not to account for it because it was one extra step that would have made the already time-consuming task impossible on their schedule. That said, a future iteration of this tool could totally integrate with other machine learning tools built to recognize body language and emotion, which a human could then verify, versus needing to do the initial tagging themselves.

And when they gave the finished script to this machine, it output a full-on animated CG render of the film. There is a sequence in the documentary where all of the Fellini-adjacent folks stand watching this play out and afterwards note how shocked they are by how accurately it captured the vibe of his movies; it's "like a combination of 8½ and The Clowns," one says. They flag only one specific problem: an odd extreme close-up of the lower half of a woman's face where nothing is really in focus. Fellini would never have used that shot, nor would anyone else, for that matter. The tool allowed for manual changes, though, so that shot went away, and some other things were tweaked from the AI's initial conception. Nonetheless, this final machine-generated visualization was brought to set, and it informed the direction: shot choices, camera moves, blocking, and so on were all there on the system's four displays, and it was ultimately the creative team's job to translate that gray digital world into something vibrant and human.

And they do fine, I guess. I dunno, man, the short's okay. It tells the story of a young Federico Fellini dreaming as he follows a woman in red and talks to a statue and interrupts a parade and so on. It's definitely interesting, and you can feel the influence, but at the same time, so what? On its own, the short is nothing special and never could have been. It is an attempt to replicate someone else's work, and the absolute best-case scenario is that it feels like a long-lost Fellini short, but we'd know it wasn't. It's an imitation, and without the context of its creation, no one would care.

And once we understand that, it feels like this whole thing is being presented in exactly the wrong way. I wasn't that interested in seeing the final picture, but I was very curious about that previs. As a result, the documentary is at its best in the handful of cases where you see the two side by side. That's where you get to see what a human touch adds to a machine, in a way that is too abstract with the final film on its own, and I think that if the documentary were nothing but the two presented together, it would have been far more illuminating than what we actually got. Now, this isn't meant to diss the documentarians or the creative team behind the short. I believe that they're doing the best they can within the limits of their production, but I wish that it was either much longer and more in-depth or shorter and slighter. It's in a sort of limbo where it's not super satisfying in either direction. In a Q&A after the screening, we were told that the tools the filmmakers used were adapted from off-the-shelf software. I wanted to know more about that.
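We never got details, but just to make that tagging work concrete, here's a hypothetical sketch of what hand-labeled shot data like that might look like. To be clear, this is my guess at a format, not anything the team actually showed us; all we know is that every shot got a length, a type, a movement, and a facial-emotion label.

```python
# Hypothetical sketch of hand-tagged shot data -- my guess at a schema,
# NOT the Fellini Forward team's actual format.
from dataclasses import dataclass

@dataclass
class ShotTag:
    film: str              # source film (titles below are illustrative)
    shot_number: int       # position within the film
    length_seconds: float  # how long the shot holds
    shot_type: str         # "extreme close-up", "close-up", "medium", "wide", ...
    movement: str          # "static", "pan", "dolly", "crane", ...
    emotion: str           # read from the actor's face only; body language was skipped

corpus = [
    ShotTag("La Dolce Vita", 1, 12.5, "wide", "dolly", "wonder"),
    ShotTag("La Dolce Vita", 2, 3.0, "close-up", "static", "longing"),
    # ...thousands more entries, tagged by hand across every feature and short...
]
```

A model trained on sequences like that could then propose a shot list for a new script, which also hints at where that nonsense lower-half-of-a-face close-up came from: extrapolating past the data.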
Also, there was a point where they tried to get the system to totally rewrite a sequence because of expected inclement weather that would have ruined the shot, and they filmed that whole process, but it was apparently kind of boring, so it's not there. And I believe them, but that's also the sort of thing I think is really valuable as we look to the future of AI production. It might have made for a less enjoyable film, but it would have been a more meaningful one.

Despite that, I still came out the other side of this fully believing that these types of tools will change the way movies are made. Going back to something I mentioned a bit ago, Solicitors was not made by someone with direct access to GPT-3, but by someone using a tool based upon it, called Shortly, that you or I could access right hecking now if we were so inclined. However, it costs $79 a month or $780 a year, so it's available to all of us, but not necessarily accessible. The pitch right there on the first page is compelling: "Make writer's block a thing of the past." What creative type hasn't run into a point where they just don't know what to do? Fucking liars, that's who. And an AI that can generate whatever left-field idea may be exactly what the script doctor ordered.

That said, something like the tool used in the creation of Fellini Forward may actually be more useful than Shortly here, because, well, when I put everything that I had said so far into Shortly, this is what the gold standard of AI natural language processing suggested: "A writer can't really call the studio and say, these guys, they have cool tools, because they never use them. But a writer can email someone like Fellini Forward and say, hey, I need to this for my short. And then the studio says, okay, what exactly would that look like? And then that person explains using their tool. And then the studio says, okay, we need to see something tangible." It's kind of fascinating to see how a machine attempted to match my whole vibe, but it didn't really do a great job. When I first thought to run it through one of my five free trial uses of Shortly, I wanted to just slip the result in there and see if anyone noticed, but obviously it was nowhere near what it would have needed to be for that to work. And more to the point, it did not help me figure out what to say next. Sure, I could have told it to try again and again and again, but a system where a handful of new ideas are presented simultaneously, that I can choose from or even just be inspired by, would be rad as hell. And, to me, preferable.

Likewise, as someone with minimal artistic skills, I love the idea that I could put a script into a machine that generated a basic previs I could work from, whether that's storyboards or a full-on CG render. It would be a game changer, as long as all of us go in with the understanding that this is a starting point and not the final work. The machine just becomes a collaborator like any other. And I can understand why that would scare people a little bit, because it could make certain aspects of certain jobs redundant. But at the same time, I came out of this whole thing as convinced as ever that creativity isn't a problem that can be solved, no matter how much data a machine is given. There is no reason to believe that purely AI-generated films or even screenplays are the future. But there's also no reason to push back against the possibilities that this sort of digital collaboration can provide, especially to younger filmmakers with smaller support systems.
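To make that "handful of ideas presented simultaneously" thing concrete, here's a rough sketch of the kind of pick-from-a-few loop I mean, in the spirit of the Fellini Forward tool rather than Shortly's one-completion-at-a-time approach. The generate_suggestions function is a stand-in I made up; a real version would wrap whatever language model you can actually get access to.

```python
# A rough sketch of a "write your own line or pick from five machine
# suggestions" loop. generate_suggestions is a made-up placeholder, not
# any real tool's API.
def generate_suggestions(script_so_far: str, n: int = 5) -> list[str]:
    # Placeholder: a real version would prompt a language model with the
    # script so far and sample n different continuations.
    return [f"[machine suggestion {i + 1}]" for i in range(n)]

def write_scene() -> str:
    script: list[str] = []
    while True:
        choice = input("(w)rite a line, see (s)uggestions, or (d)one? ").strip().lower()
        if choice == "d":
            break
        if choice == "w":
            script.append(input("Your line: "))
        elif choice == "s":
            options = generate_suggestions("\n".join(script))
            for i, line in enumerate(options, 1):
                print(f"{i}. {line}")
            picked = input("Pick a number, or press Enter to skip: ").strip()
            if picked.isdigit() and 1 <= int(picked) <= len(options):
                script.append(options[int(picked) - 1])
    return "\n".join(script)

if __name__ == "__main__":
    print(write_scene())
```

The point being that the human stays in the driver's seat: the machine only ever offers, it never decides.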
"And I don't know if it is quite right to say that we are at the start of a revolution. But the world of ideas is going to have a lot more people put in their input and a lot less people with access to the time machine. I mean, I don't think it takes a tech fascist to recognize that current creative models are broken. I recognize that much." Sure, buddy. That's definitely what I wanted to say. 6.0 out of 10.

Thank you so much for watching, and thank you particularly to my patrons: my mom, Hammering Marko, Kat Saracota, Benjamin Schiff, Anthony Cole, Magnolia Denton, Elliot Fowler, Greg Lucina, Kojo, Phil Bates, Liam Knipe, Willow, I Am The Sword, Riley Zimmerman, and Jacob Alexander, and the folks who'd rather be read than said. If you like this video, great. If not, oh well. If you'd like to see more, please subscribe. Hope to see you in the next one.