Our speaker is a camera developer and also a film director, and he'll describe how this project to develop a cinematic camera got started. It was self-made, and the goal was to achieve a cinematic effect — for example, the characteristic look of film. The project started in 2006, he joined in 2008, and he can explain more.

We're in the central film archive of Austria. There are over 500,000 films here, many millions of meters of film, so it's a great place to hold this talk. Online, of course, it's different this year, but we're doing the best we can with what we have. It's nice to sit here, and this location brings something to the talk; it's one way to make something of the crisis. After the talk there will be a live question-and-answer period, held online.

Let's go back to the old days, back to 2001. At that time it was clear that everything shown in a cinema was shot on analog film. A lot of post-processing was already happening, and more and more films were moving into digital production, but in the meantime the mainstream was still shot on analog. It was changing only very slowly.

What actually is analog film? The film itself carries the light-sensitive layer. On contact with light, the emulsion undergoes a chemical reaction; the image is then developed and fixed on the base. Afterwards you can project through it, or cut it and splice it back together. In the end, a copy of the finished film roll is sent to each cinema and projected there. This was the standard technical process for over 100 years, across many generations.

Then a new revolution happened in the 2000s. It was a big change, a disruptive technology. Suddenly there was pressure that film should go all digital. Why? Well, you save a lot of money. Analog film was very expensive: the stock was expensive, development was expensive, everything was expensive.
In the digital world, storing everything is very cheap, and editing on a computer is cheaper; in terms of cost it's a different dimension entirely. And you get new possibilities in acquisition. Around this period, Toy Story had been the first fully computer-animated feature film, and even that is already quite old for us. At this point came Once Upon a Time in Mexico by Robert Rodriguez, and George Lucas shot Star Wars Episode II, the first big digitally shot production in Hollywood, a big project. These first completely digital films were made with television cameras, because that was the only way to shoot in high definition at 24 frames per second. Compared to television cameras, the digital cinema camera was still in kindergarten at this point; it was only just beginning and wasn't good enough yet. Digital cinema cameras had to develop over a ten-year period.

You had to consider: how do you bring about the cinematic aesthetic? How do you produce this picture quality? There's a typical flicker in the film process that comes from the 24 frames per second. Television, by comparison, runs faster and looks more fluid; cinema is slower. As the film is pulled through the projector, the light is pushed through it frame by frame, and a pulse results from this. In the end, a different aesthetic comes out of it.

Then there's high frame rate, HFR. A big project here was The Hobbit in 2012, which used a very high frame rate for all the fast action. For some time there was a similar trend in TV technology: smart TVs were computing intelligent intermediate pictures to upscale 24 images per second to 48 and more, marketed as "clear motion", motion estimation and so on.
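The "intermediate pictures" those smart TVs synthesize can be illustrated in a minimal form. Real TVs use motion-vector estimation, which is far more elaborate; this toy sketch (my illustration, not anything from the talk) only shows the basic idea of inserting a blended frame between every pair of originals to double 24 fps to 48 fps:

```python
# Toy sketch of frame-rate upconversion (assumption: plain linear blending,
# not the motion-estimated interpolation real smart TVs perform).

def double_frame_rate(frames):
    """Turn 24 fps into 48 fps by inserting a 50/50 blend between neighbours.
    Each frame is a flat list of pixel brightness values."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(pa + pb) / 2 for pa, pb in zip(a, b)])
    out.append(frames[-1])
    return out

clip = [[0, 0, 0], [1, 1, 1]]   # two frames: black, then white
print(double_frame_rate(clip))  # -> [[0, 0, 0], [0.5, 0.5, 0.5], [1, 1, 1]]
```

The synthetic in-between frame is what gives interpolated footage its overly smooth, video-like motion.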
And the result was that Hollywood pictures that had been produced very expensively suddenly looked like a cheap soap opera. It completely flopped; HFR is no longer something people produce with, so it has more or less disappeared.

Film grain results from the random distribution of grains on the negative, as opposed to a digital image sensor, whose pixels always sit at the same fixed distances. Every film image has its own random distribution, so in motion the grain dissolves, and the picture looks more defined, as if it had a higher resolution. One cinema camera vendor even built an image sensor that was vibrated, moving the sensor by less than one pixel for every image. It worked, but it was far too expensive and too difficult to do.

And now the shallow depth of field, which is probably what most people associate with the cinema aesthetic. That depends on the size of the sensor or the negative. The size of an APS-C sensor corresponds roughly to a Super 35 negative. The difference from a photo camera is that there the 35mm film runs horizontally, from left to right, which makes the photo frame a bit larger than on cinema cameras, where the film runs vertically and can only use a slightly smaller opening. Television cameras typically used sensors of about two-thirds of an inch, far smaller than 35mm film, which means the TV image has much deeper depth of field, with a lot more of the picture in sharp focus.

And then there is dynamic range, the ratio between the darkest and brightest spots in the image. On TV you often see clipped highlights: white areas that are oversaturated, clipped. We know that from audio recordings; with too much gain you hear distortion. That is comparable to digital images that carry no more information because the brightness was too high. It's the result of a dynamic range that is too low.
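The sensor-size effect on depth of field can be put into rough numbers with the standard thin-lens approximation. This is a back-of-the-envelope sketch: the sensor widths and the circle-of-confusion rule of thumb (sensor width divided by 1500) are common illustrative values of mine, not figures from the talk.

```python
def depth_of_field_m(focal_mm, f_number, dist_m, coc_mm):
    """Total depth of field in metres via the thin-lens approximation."""
    hyperfocal_m = focal_mm**2 / (f_number * coc_mm) / 1000.0
    near = hyperfocal_m * dist_m / (hyperfocal_m + dist_m)
    far = (hyperfocal_m * dist_m / (hyperfocal_m - dist_m)
           if dist_m < hyperfocal_m else float("inf"))
    return far - near

# Same framing (focal length scaled to sensor width), same f-stop, subject at 3 m:
for name, width_mm in [("Super 35", 24.9), ("2/3-inch TV", 9.6)]:
    focal = width_mm              # equal horizontal angle of view on both formats
    coc = width_mm / 1500         # circle-of-confusion rule of thumb
    print(f"{name}: DoF ~ {depth_of_field_m(focal, 2.8, 3.0, coc):.2f} m")
```

At the same framing and f-stop, the small TV sensor yields several times more depth of field than Super 35, which is exactly the "everything is sharp" video look described above.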
Clipping is one of the very typical things that happen with TV recordings. A Kodak film negative has about 13 f-stops of dynamic range. TV cameras have long been below that; current cinema cameras are in the same range, but on TV, and you can see it with live TV, there is still often clipping.

So back to the year 2001. The change from analog film to camera technology coming out of TV can't be stopped anymore. As I said, it's a lot cheaper, so this is where technology is going. Storage is cheap, and camera technology is being caught up by computer technology. In 2005 there is still no real digital camera comparable to an analog negative, and it's obvious that when this technology arrives, it will first be available to Hollywood studios, because they have the money. But the potential has been recognized, so people are using tricks to approach the cinema aesthetic.

One of these tricks was an optical 35mm adapter, essentially a semi-transparent matte screen. You project the image onto this matte screen and then record the screen with a cheap camcorder, so you get the cinema aesthetic for cheap. The problem with the matte screen, of course, is its surface structure: if the screen isn't moving, you start to see this structure after some time. So people started to think about how to move the matte screen so it blurs away. There were rotating adapters and vibrating adapters where the structure disappeared due to the motion, but that meant a motor with a power supply, and it caused noise. So this approach faded as soon as larger sensors became available.

At this time, a mixture of film and television technology was emerging in online forums. There was a Dutch filmmaker, and people were exchanging information about how to adapt this technology into a film camera. And there was an American camera manufacturer.
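The f-stop figures and the clipping described above are simple to state precisely: each stop is a doubling of light, so dynamic range in stops is a base-2 logarithm, and clipping is a hard ceiling at full scale. A tiny sketch with illustrative numbers:

```python
import math

def dynamic_range_stops(brightest, darkest):
    # One f-stop = one doubling of light, so range in stops is log2 of the ratio.
    return math.log2(brightest / darkest)

# The ~13 stops quoted for a Kodak negative correspond to a ratio of 2**13 = 8192:
print(dynamic_range_stops(8192, 1))  # -> 13.0

def expose(scene_value, full_scale):
    # Clipping: anything brighter than the sensor's full scale collapses to the
    # same value, like audio distorting at too much gain. The detail is lost.
    return min(scene_value, full_scale)

print(expose(10_000, 8192))          # -> 8192
```

That collapse is why clipped highlights cannot be recovered in post: every scene value above full scale is recorded as the identical number.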
They produced something, but this was the first camera that was fully open source, software and hardware together, and it was good for film. The first one cost 600 to 800 dollars, which was relatively affordable, and it also had relatively high resolution: standard HD, which was still quite rare at the time. This moment in 2006 was the birthplace and beginning of the project. There were lively online forums where people from all over the world discussed these problems; there was development, and there was a community open to everyone. That was the first core of the project: open source cinema, and the development of this open source film camera. There were many projects that adapted this to 35mm cinema cameras; the vision was that not just the camera but the entire film process would be open source. But the focus here is the camera. Okay — sequence, take one.

The first big pushes came from Canon, for example: first compact cameras with video, then DSLR cameras that got this function almost as a side feature. The Canon 5D Mark II, for example, was a staple in this community, but it had limitations, so a new open source project was needed. In 2009 the Canon firmware was reverse engineered, and this became the Magic Lantern project. It runs on other Canon DSLRs too and adds many features, and Canon didn't want others to be able to do this. Many of these DSLRs didn't officially have full video capabilities, while many manufacturers now had various cameras for cinema production on the market.

In 2012, Eastman Kodak, one of the biggest camera negative producers in the world, was considering how much film production was left, and in fact this market had collapsed. It was becoming a niche; only a handful of 35mm cameras were still being used for projects.
A lot of the time these cameras were rented instead of owned, and all of the technology was becoming digital. This had an effect on the market, and a paradoxical situation arose: how do you actually afford all of this? The new cameras weren't accessible either. There was consolidation across every part of production and of the market. On the one hand, manufacturers were blocking outside development while pushing their own product lines: a black box of proprietary software and hardware that you can't open and can't repair. Only licensed partners or the company itself were allowed to do maintenance. So the practical possibilities were decreasing, even though the technical possibilities were increasing. The lifetime of a product was also shrinking dramatically, because new products came out all the time, which meant long-term support was no longer interesting.

Magic Lantern became a crystallization point of this new movement. People wanted to create a new camera that would be changeable, modular, and compatible with all the standards that already existed — which is the birth of the Axiom. All software is developed under the GPL, and the hardware is released under an open hardware license. All further developments are shared with the public and the community. The first prototype, a proof of concept, was presented in the MetaLab, the Vienna hackerspace. The camera runs Linux, and all the image processing happens in an FPGA in real time. There are no proprietary IP cores; everything is open source. The team is very diverse: software developers, hardware developers, mechanical engineers, filmmakers, artists, all working towards the same goal of creating tools that you can change, understand and extend. And this doesn't only concern repairability; there are many other facets. Filmmakers often talk about the look of a movie.
The look means all the elements of image composition: the film stock with its specific characteristics, the colors, the lighting, the visual character — a kind of handwriting of a director, something they can use to make their movies easily recognizable. In the past, people chose a specific film stock; now they choose a specific camera, and that choice is the result of proprietary image manipulation: what happens inside these cameras is a secret. The Axiom, with all its ways to influence this processing, offers many more possibilities to shape the image, both technically and creatively.

The Axiom Beta is the second generation of the hardware. It's a lot smaller than the prototype, a lot more modular, and is produced for developers in small series. That used to be a manual process; since 2021 it has been produced industrially. There is a second-generation metal housing prototype, and, very fresh, it's now possible to write uncompressed footage to solid-state media. So the technology has evolved: it's more powerful, but it's less accessible, and that isn't improving. So the mission of the Axiom project is as relevant and exciting as on the first day. Thank you very much.

The internet has some questions. The first question is about the current state of the project: what's the next step from the development kit towards a production camera, with a housing and everything people expect from a camera? That's the development phase we're in right now.

We were talking about proprietary cameras before; the question is whether that normally also refers to the data, so the data formats are proprietary as well — isn't that a problem for archiving over 20 to 100 years? Yes: the tools come from the different manufacturers, just like the acquisition, and the data structures and metadata are all proprietary, so of course that's difficult.

Thank you. There's a question: is everything open source?
Does that include the schematics, the Verilog sources, the source code, everything? Yes, actually everything is open source. It's open hardware, with the schematics and the bill of materials, and of course the firmware and software are open source too.

Very cool. Next question: what kind of optics can you use with the camera? That's a big topic right now, and we don't want to take it where the proprietary manufacturers have taken it. There's an E-mount, with an 18-millimeter flange distance between the lens mount and the sensor surface, and then you use mechanical adapters. On a DSLR this distance is usually much longer, so with the short distance you can mount practically any lens on top via an adapter.

Thank you. There was a question regarding HFR: why can't you add motion blur in post production as a digital effect? That would remove one of the largest arguments against HFR, so if it worked, it would actually strengthen the case for HFR. Normally, when you film at the usual frame rates, the frames have natural motion blur, and that is perhaps what feels most comfortable. I don't currently know of a digitally generated motion blur that feels natural; I've never heard of one, at least. I would like to try it if there is one, to see whether digital motion blur actually works and whether you could really shoot that way.

And you're asked to explain what a typical post-production flow on an open source basis might look like. Fundamentally, the first part is acquisition: the footage is acquired and then backed up. Then you make dailies from this footage — something quickly put together — to see if it's good enough, and then you edit further. That's the classical version of post. Or, if you need something, you can go back to the original and see if you can find something else. There are also purely computer-based techniques, a digital copy of the set, but that's not a typical workflow. You look at what the data management of the project actually needs at that point.
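The mount answer comes down to flange focal distances: a purely mechanical adapter only has to make up the difference between the target mount's distance and the short native E-mount distance. A small sketch; the E-mount figure is from the talk, the other flange distances are widely published values I'm adding for illustration:

```python
# Flange focal distance: lens mount to sensor plane, in millimetres.
E_MOUNT_MM = 18.0  # the short native distance mentioned in the talk

# Published flange distances for a few common mounts (illustrative selection).
MOUNTS_MM = {"Canon EF": 44.0, "Nikon F": 46.5, "ARRI PL": 52.0}

def adapter_thickness_mm(target_flange_mm, native_flange_mm=E_MOUNT_MM):
    # A passive adapter is possible only when this is positive, which is why
    # a short native flange distance lets you adapt almost any lens.
    return target_flange_mm - native_flange_mm

for mount, flange in MOUNTS_MM.items():
    print(f"{mount}: {adapter_thickness_mm(flange):.1f} mm adapter")
```

Every listed mount sits well above 18 mm, so a simple metal tube of the right length is enough; the reverse (adapting an 18 mm-flange lens onto a 44 mm-flange body) is not mechanically possible without optics.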
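On the digital motion blur question, the obvious experiment is integration: shoot at a high frame rate and average groups of frames down to 24 fps, emulating how a real shutter accumulates light over its open time. This is my toy sketch of that idea, not something the project ships or the speaker endorsed:

```python
# Toy sketch (assumption): synthesize motion blur by averaging every `factor`
# consecutive high-frame-rate frames into one lower-rate output frame.

def blur_downsample(frames, factor):
    """Average groups of frames, e.g. 120 fps in -> 24 fps out with factor=5.
    `frames` is a list of equal-length pixel lists."""
    out = []
    for i in range(0, len(frames) - factor + 1, factor):
        group = frames[i:i + factor]
        out.append([sum(px) / factor for px in zip(*group)])
    return out

# A bright dot moving one pixel per frame smears across the averaged frame:
frames = [[1.0 if x == t else 0.0 for x in range(5)] for t in range(5)]
print(blur_downsample(frames, 5))  # -> [[0.2, 0.2, 0.2, 0.2, 0.2]]
```

The moving dot turns into an even streak, which is exactly what physical motion blur looks like; whether such synthesized blur ever feels natural is the open question raised in the answer above.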
People are interested in whether on-the-fly encryption of the video data is implemented. Fundamentally, I would think it's not a problem. It might become one with these very large amounts of data, and maybe it doesn't make sense to encrypt everything, but in principle you could. The recorder runs Linux, so you could have it on board; it's one of the tools that would be possible to implement.

Thank you. The next question is about image stabilization: is there stabilization in the camera? Some vendors do this in the body, some in the lenses. No, there's no active image stabilization here.

And the last question: is mass production planned? That's an interesting question — what is mass production for us? For us it has a different definition; the question is what quantities are actually being produced. The dream is producing it in a factory, but right now that's a question of finances and of what volume is possible. If 100 of these were produced, for us that would be mass production, and then it would be profitable to produce.

There are some more questions, so can we continue? Are the image sequences stored in a format like EXR? It's a Bayer raw format, though maybe "format" is the wrong word: it's the image sensor's data, 12 bits per pixel, not really a structured container. The per-pixel values are simply saved to a file; that's the raw data. Whether you then convert it into a different file format is a separate question, and that would be the next step.

Thank you. Can the Axiom Beta Compact be used without an infrared filter, for infrared recordings? Yes. On both the developer kit and the Compact, the filter sits between the lens and the sensor, whether it's an infrared filter or a UV filter, so you can take it out. It's like putting Legos together.

Could you remind us of the resolution of the camera? It's 12 megapixels, with 8- to 12-bit depth; at 12 bits and full resolution it's a little slower.
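Since the answer describes the raw data as bare 12-bit-per-pixel values written straight to a file, here is a sketch of how such data is commonly packed and unpacked: two 12-bit pixels per three bytes. The bit order is my assumption for illustration; the camera's actual layout may differ.

```python
# Hedged sketch: unpack 12-bit-per-pixel raw data, two pixels per three bytes.
# (Assumed big-endian-style nibble order, not a documented camera format.)

def unpack_12bit(data: bytes):
    pixels = []
    for i in range(0, len(data) - 2, 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        pixels.append((b0 << 4) | (b1 >> 4))    # first 12-bit value
        pixels.append(((b1 & 0x0F) << 8) | b2)  # second 12-bit value
    return pixels

print(unpack_12bit(bytes([0xAB, 0xCD, 0xEF])))  # -> [2748, 3567] (0xABC, 0xDEF)
```

After unpacking, each value is one Bayer-mosaic sample; turning that into RGB (debayering) is exactly the "different file format" conversion step mentioned above.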
And people are asking whether it's possible to combine two cameras for a stereoscopic picture. We haven't done that yet, but fundamentally all the hardware is there. It would be interesting to try to synchronize the data coming from the two different sensors, and to see what accuracy we would need. But fundamentally, if you built the communication between them, you could combine them. It's an interesting question whether you would need to alter anything at all. So those were most of the questions. Thank you for your great talk. You're very welcome.
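One way to approach the synchronization question raised here is pairing frames from the two cameras by timestamp, within half a frame period. This is a hypothetical sketch of mine; the data layout and the tolerance are assumptions, not anything the project has implemented:

```python
# Hypothetical sketch: pair frames from two unsynchronized cameras by timestamp.

FRAME_PERIOD_S = 1 / 24       # seconds per frame at 24 fps
TOLERANCE_S = FRAME_PERIOD_S / 2

def pair_frames(left, right):
    """left/right: time-sorted lists of (timestamp_s, frame_id). Greedy pairing
    of each left frame with the first right frame within the tolerance."""
    pairs, j = [], 0
    for t_l, f_l in left:
        while j < len(right) and right[j][0] < t_l - TOLERANCE_S:
            j += 1  # skip right frames that are already too old to match
        if j < len(right) and abs(right[j][0] - t_l) <= TOLERANCE_S:
            pairs.append((f_l, right[j][1]))
            j += 1
    return pairs

left = [(0.000, "L0"), (0.0417, "L1"), (0.0833, "L2")]
right = [(0.002, "R0"), (0.045, "R1"), (0.090, "R2")]
print(pair_frames(left, right))  # -> [('L0', 'R0'), ('L1', 'R1'), ('L2', 'R2')]
```

For stereoscopy you would ultimately want hardware genlock rather than after-the-fact matching, but timestamp pairing is a cheap first experiment to measure how far apart the two sensors actually drift.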