A typical high-end film production generates several terabytes of data per day, either as footage from multiple cameras or as background information about the set. The EU project IMPART has been researching solutions that improve the integration and quality assessment of these multiple data sources, both to support creative decisions on or near the set and to enhance post-production. To test the integration of the technology developed within the project, the IMPART team met for two days on two different sets to record test data.

I'm now setting the parameters for the LiDAR scanner. The LiDAR scan gives us very accurate 3D geometry of the capture site. The scanner rotates and traces laser beams to measure accurate distances from the sensor, then takes some photos to get colours, and finally we construct 3D geometry of the set.

We're looking at a web rendering of the LiDAR scan which was taken about half an hour ago. Hansung from the University of Surrey just processed and registered it, and now we've come to the point cloud; you can see it's downloading more fine, high-resolution data. I think it's about two and a half million points.

Okay, this is the Spheron camera. It's a kind of spherical camera. It has a fisheye lens on top of the tripod and it rotates 360 degrees to scan the whole scene, so it covers 180 degrees vertically and 360 degrees horizontally. The raw data we've used are the spherical images from the Spheron cameras, which capture the scene around the camera in 360 degrees. We have five of those stereo scans and we feed them to an automatic algorithm that merges them together from their different positions. In the end we have point clouds from those spherical cameras, and as you can see I can add another scan from a different spherical camera position.

So I took reference still photos of the capture site.
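Merging scans taken from different positions, as described above, amounts to rigid point-cloud registration. As an illustrative sketch only (not the project's actual algorithm), here is a minimal least-squares rigid alignment, the Kabsch method, which assumes point correspondences between the two clouds are already known:

```python
import numpy as np

def kabsch_register(src, dst):
    """Estimate rotation R and translation t aligning src to dst (N x 3 each)."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centred clouds.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection instead of a rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: rotate and translate a synthetic cloud, recover the transform.
rng = np.random.default_rng(0)
cloud = rng.random((100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = cloud @ R_true.T + t_true
R, t = kabsch_register(cloud, moved)
aligned = cloud @ R.T + t
print(np.allclose(aligned, moved))  # True
```

In practice correspondences are unknown, so systems typically iterate this step inside ICP or use feature matching first; this sketch only shows the core alignment solve.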
We did some more scanning in the morning, and then I took around 20-30 photos here, and using this photo ReCap tool I computed a full 3D reconstruction of the scene. So it is reconstructed purely from photos. The point cloud is colour-coded: orange means high precision and violet means low precision. I can see that the capture area is captured quite well, while the buildings and the bushes in the background are not captured so well.

All these 3D reconstructions can be registered together in one 3D scene and shared via the web. Once the video from the cameras has been ingested, it can be added to the unified 3D scene and visualized, and annotations and metadata can be added to the scene. Once the video is copied onto the computers it is analyzed by automatic algorithms.

We enacted some actions that typically occur during a real video shoot, choosing activities based on the capture parameters. So to simulate a real scene we have multiple actions. The witness cameras shoot all the time, and we don't have scripts, so the idea is that we will run algorithms on site or after the shooting. Our algorithms can perform temporal video segmentation based on visible human activities. Each video segment can be annotated by the algorithms and tools we developed, which perform human activity recognition, face clustering and shot characterization.

We are also doing some shot characterization which works at a low level and produces metadata for faster searching of the data, for example whether a shot is slower or faster, so you can find and cut it more easily using this kind of metadata.

Here we captured a shot which is slightly out of focus. It's not clearly apparent in the shot itself, so we developed a filter which visualizes the focus, and here we can see a sort of colour-coded viewfinder.
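The focus filter described above can be illustrated with a common generic sharpness measure, the variance of the Laplacian response (a sharper image has stronger local intensity changes, so a higher variance). This is a sketch under that assumption, not the project's actual filter:

```python
import numpy as np

def laplacian_var(img):
    """Focus score: variance of the 4-neighbour Laplacian (higher = sharper)."""
    # Discrete Laplacian via shifted views, interior pixels only.
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

# Synthetic check: a checkerboard scores higher than its box-blurred version.
sharp = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
k = np.ones((3, 3)) / 9.0  # crude 3x3 box blur, wrap-around edges
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1) * k[dy + 1, dx + 1]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
print(laplacian_var(sharp) > laplacian_var(blurred))  # True
```

A colour-coded viewfinder overlay could then be produced by computing this score per image tile and mapping low scores to a warning colour.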
We output an XML file with the metadata extracted from the shot, and here we can see the sharpness of the shot plotted over time. With this technology it's much easier for reviewers to go through the shots after the day and sort out the ones which are nicely focused and contain good data.

Most of these new tools produce results on set, which saves a lot of time and money in the production and post-production stages. They have made such an impact that they are already being used by the industrial partners Double Negative Visual Effects and FilmLight Limited.

All this huge amount of media and data generated is then passed on to the production stage. FilmLight Limited introduced the Double Negative team to Flux, a post-production server and management tool which provides a revolutionary new way to store, classify and browse assets. Double Negative Visual Effects is a company specialised in digital post-production, mainly of major feature films; the company fits perfectly in the scope of IMPART's solutions. FilmLight Limited is an established, market-leading innovator in the digital film, broadcast and commercials post-production industry. FilmLight's role in IMPART is the development of browsing tools and system architectures for data management solutions. A normal film production generates thousands of files; it is impossible to annotate them manually and navigate through all the assets without having sorted them. Flux Manager provides many solutions for data management.

Hi, so today we've had FilmLight in. Wolfgang and Ant have been over using Flux Manager on our production data here at Double Negative, pointing it at two shows, a Mission: Impossible and a Terminator film. This has been a good test because the data we've been using within the IMPART project to date amounts to a couple of tens of terabytes of information, but the production data sizes are much more significant, into the hundreds of terabytes. So, Wolfgang, how useful has the test been for you today?
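The per-shot XML metadata mentioned above might be serialized along the following lines. The element and attribute names here are purely illustrative assumptions, not the project's actual schema:

```python
import xml.etree.ElementTree as ET

def sharpness_to_xml(shot_name, scores):
    """Serialize per-frame sharpness scores into a small XML document.

    Hypothetical schema: <shot name="..."> containing one <frame> per score.
    """
    root = ET.Element("shot", name=shot_name)
    for frame, score in enumerate(scores, start=1):
        ET.SubElement(root, "frame", number=str(frame),
                      sharpness=f"{score:.3f}")
    return ET.tostring(root, encoding="unicode")

# Example: four frames where focus drops halfway through the shot.
xml_doc = sharpness_to_xml("slate_042", [0.91, 0.88, 0.42, 0.40])
print(xml_doc)
```

Plotting the `sharpness` attribute over the `number` attribute gives exactly the sharpness-over-time curve a reviewer would inspect.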
It's been very useful. I wouldn't say it was a completely successful day; there's still a lot of work for us to do to make that happen more smoothly, but it is obviously only through tests like this that you find out where the real problems are. And as we've seen, looking at the browser today in Flux Manager, we have a lot of data and a lot of it is quite clearly unsorted when it comes in from set. That creates a lot of challenges that we've been having to deal with: the amount keeps going up show on show, along with the different types of data, and that leads to a lot of manual processing, a lot of frustration, and also duplication of data as we sort it. We need to verify that we don't lose anything, there needs to be a relationship between files once they've been sorted and their unsorted originals, and we would like to move away from that scenario. Simon has been working on some of that with our own systems. How do you see Flux Manager helping out with what you've found so far?

Yeah, it could be quite useful. It's able to read a massive folder with lots of heterogeneous data types. Most of what we get from set is probably images from various sources, but also lots of video streams that come in in very different formats. But it's not just about the size of the data; it's about the number of files and the type of files that we have. In the specific case of Flux, it is very much structured around the idea that you can gain economies of scale by treating sequences as if they are essentially a single file with some extended properties. The good thing is that we can take the metadata that we collect from what I assume is quite a typical modern, heavily 3D-oriented production and tune the system to fit into that environment.
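Treating a sequence as "essentially a single file with some extended properties" usually means collapsing per-frame filenames into one pattern plus a frame range. A minimal sketch of that idea follows; the naming convention and helper are hypothetical, not Flux's actual implementation:

```python
import re
from collections import defaultdict

# Split names like "plate.0101.exr" into (head, frame number, extension).
FRAME_RE = re.compile(r"^(?P<head>.*?)(?P<num>\d+)(?P<tail>\.[^.]+)$")

def group_sequences(filenames):
    """Collapse per-frame files into printf-style sequence entries."""
    groups = defaultdict(list)
    for name in filenames:
        m = FRAME_RE.match(name)
        if m:
            key = (m["head"], len(m["num"]), m["tail"])
            groups[key].append(int(m["num"]))
        else:
            groups[(name, 0, "")] = []  # not part of a numbered sequence
    out = []
    for (head, pad, tail), frames in groups.items():
        if frames:
            out.append(f"{head}%0{pad}d{tail} [{min(frames)}-{max(frames)}]")
        else:
            out.append(head)
    return sorted(out)

files = ["plate.0101.exr", "plate.0102.exr", "plate.0103.exr", "notes.txt"]
print(group_sequences(files))
# ['notes.txt', 'plate.%04d.exr [101-103]']
```

A browser built on this representation lists one entry per sequence instead of thousands of frame files, which is where the economy of scale comes from.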
The whole point of the Flux architecture is that you can add your own metadata into the database, and that can be scripted and automated to a large degree, or it can be plugins developed by FilmLight that actually extract metadata from the images themselves.

Very much so. We found it very useful with our own in-house software, Jigsaw, as well, helping us to extend it as a result. We are in agreement that this is a key problem for the industry as a whole, and any tool that can make data management easier, more flexible and more automated is a good thing.

While there remains work to be done to take full advantage of the Flux platform in a real production environment, the integration of a metadata-based workflow, at least for colour on set and in post, has progressed well. FilmLight has been promoting this new workflow over the last 12 months in a wide range of events, demonstrations, workshops and showcases. The reactions to these events have been consistently positive and the early adopters are full of praise.

In September 2015 the IMPART project organised two special paper sessions at the IEEE International Conference on Image Processing (ICIP) in Quebec City, Canada. This conference is the reference annual conference on image processing, with over 1,000 attendees from all over the world.

Hi, I'm Alun Evans from UPF and here we are at the IEEE International Conference on Image Processing. It's quite a big deal for us to be here because IMPART has organised two special sessions within the conference. It's pretty much the biggest conference on image processing in the world; it happens every year and, as you can see, it's quite a big venue. We've just finished our second special session, and it was quite a big success. It was really good and yes, we're really happy.

Hello, I'd like to welcome you to this special session.

The University of Surrey and Double Negative presented a joint paper on the management of multimodal big data for film production.
Universitat Pompeu Fabra presented a paper on the visualisation of this data. The Aristotle University of Thessaloniki presented a technical paper on improved methods for k-means clustering. The Brno University of Technology presented a paper on quality assurance in large collections of video sequences. The ICIP special sessions were a fantastic showcase for the academic work of the IMPART project, demonstrating the quality of...