Yes, let's go. So hello everyone. My name is Olivier Aubert, and I'm happy to be here. I'm from France, from Nantes, and I'm a research engineer, currently doing freelance consulting for research projects. I'm going to talk to you today about the video software I developed, called Advene, which is a video annotation software, and about how it has been used recently. It's a software with some history already, but it still has current uses and activity. I will present two recent use cases of the application. Since it's a lightning talk, I will be quick on the project itself and then talk about the two examples: the REMIND method application, which is a museology investigation method, and the AdA project, which is a media studies project with partners in Berlin.

Advene is a project which started in 2002, so it's rather old, with Yannick Prié and Pierre-Antoine Champin at the University of Lyon. We wanted to provide tooling to accompany active reading of audiovisual documents. Active reading is the possibility for users to immerse themselves in a document, to take notes and structure them, and to build a scholarly workflow based on these annotations on the document, so as to accompany their reflection on it. The goal is to create and share analyses of audiovisual documents, as things that we call hypervideos: basically a mix of annotations and video.

The project gave birth to a concrete artifact, the Advene application, which is free software (GPL), a cross-platform desktop application using Python, GTK and GStreamer. It has been used in different contexts, but I will talk about just two today.

Quickly, the interface is rather conventional. It's centered around a video. Can I show that? Yes. There's a video player here, which is always present.
Around the video player, you have multiple places holding different views for interacting with the metadata. For instance, below the video I have a timeline view, which is rather common in the audiovisual annotation and manipulation domain. On the right, you have the same data presented in a different way: as a transcription with timecodes, shown as thumbnails over the video. And further right, you have an output of the process. The first two views are the kind of tools you use in your scholarly process of exploring the content and constructing your analysis, while the one on the right can be seen as the output of that process. Advene tries to be a tool you can use throughout the whole process, from structuring and analyzing to producing outputs, and we'll see how it goes.

The important notion to take from this figure is the black rectangle that surrounds the annotations themselves, the annotation structure, which is user defined, and the different views, templates and queries. They are all put in a single package, which is the documentary unit that you can exchange, independently from the video. The video stays on the side; the package is metadata alongside the video.

We tackle different scientific challenges in this: knowledge engineering, document engineering, HCI, and also data visualization and the analysis of activity traces. That was the scientific part we were interested in as researchers. Now I'll go to the two use cases in digital humanities. They are recent: the first one dates from four years ago, and the other one is from 2017.
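To make the package idea concrete, here is a minimal sketch in Python of that documentary unit: annotations grouped into user-defined types, bundled with views, and referencing the video only by URI. The class and field names are hypothetical illustrations, not Advene's actual data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    """A time-aligned note on the video (times in milliseconds)."""
    begin: int
    end: int
    content: str

@dataclass
class AnnotationType:
    """A user-defined category grouping annotations (e.g. 'Shot')."""
    title: str
    annotations: List[Annotation] = field(default_factory=list)

@dataclass
class Package:
    """The exchangeable documentary unit: structure and views, no video data."""
    video_uri: str  # the video itself stays outside the package
    types: List[AnnotationType] = field(default_factory=list)
    views: List[str] = field(default_factory=list)  # e.g. template names

# Build a tiny package: one annotation type, one annotation.
pkg = Package(video_uri="file:///movies/example.mp4")
shots = AnnotationType("Shot")
shots.annotations.append(Annotation(0, 4200, "Opening title"))
pkg.types.append(shots)
```

The point of this shape is that the whole analysis travels as one unit that can be shared and versioned independently of the (often large, often rights-encumbered) video file.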
The REMIND method is a project carried out by Daniel Schmitt, who is now a professor at the University of Valenciennes. The goal was to study the museum visitor's experience during a visit, and the methodology used video-based self-confrontation. The visitors were equipped during an exhibit with camera glasses, to capture a subjective view of their visit, of their experience. After the visit, they were interviewed by a researcher based on the video of the visit, and this interview was captured in turn. This capture of the interview is the primary material that was analyzed by the researchers in Advene.

They transcribed the interview using Advene. Their methodology defines different categories, so they could identify those categories in the discourse, and they used relations to express the courses of experience: basically a group of categories that forms a meaningful unit for the researchers, for the methodology. The underlying structure in Advene provided the support for expressing this kind of information. They could then generate visualizations through templates. They used these visualizations during their exploratory analysis, and they could also put them on a website afterwards as a kind of publication.

This is what it gives, basically. On your left, you have the application with the timeline, the transcription and so on. You see different lines here that correspond to the different categories of analysis, the categories identified in the interview discourse. On the right, you have one of the views produced directly by the tool through its template system, which is published on the project's website. So this was the first example.
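The template step described above can be illustrated with a small sketch: rendering categorized annotations into an HTML fragment for a website. This uses Python's standard `string.Template`; the category names and markup are invented for the example and are not the REMIND project's actual template.

```python
from string import Template

# Toy annotations: (begin_ms, end_ms, category, text).
annotations = [
    (0, 12000, "Perception", "Looks at the painting's frame"),
    (12000, 30000, "Emotion", "Expresses surprise"),
]

# One template per annotation row; the view is just substitution.
row = Template("<li>[$begin-$end ms] <b>$cat</b>: $text</li>")
html = "<ul>\n" + "\n".join(
    row.substitute(begin=b, end=e, cat=c, text=t)
    for b, e, c, t in annotations
) + "\n</ul>"
print(html)
```

Because the template is data in the package rather than code in the tool, researchers can adapt the output format without touching the application.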
I have to be quick, but if you have questions, I'm here today and tomorrow. The other example is the AdA project, carried out in collaboration with the Cinepoetics team at Freie Universität Berlin and the HPI (Hasso Plattner Institute) in Potsdam. Cinepoetics does media studies, so they are the final users. HPI has a strong expertise in video analysis, feature extraction and so on. And we brought our expertise in video annotation, manipulation, interaction and so on.

The goal of the project was to study staging patterns in audiovisual presentations of the financial crisis. They wanted to know if there are patterns that come back again and again when the crisis is presented in documentaries, in feature films or in TV broadcasts. For this, they wanted to apply quantitative methods: systematically annotate movies, and then dig into the metadata they produced, first to see for themselves if there are interesting things to pursue, and then also to build a ground truth for future automation of the system. We wanted to build some feature extraction specific to the task, and we needed data for this.

Advene provided the bridge between these needs. The idea was to optimize the manual annotation process, so that teams of students could be put to work annotating movies, and to provide a bridge between the users' manipulation of the data and its semantic representation. In the end, with the HPI team, we produced an ontology, and the data was stored in a triple store. They wanted semantic data, but the users at the other end didn't want to deal with semantic data; they wanted to do their work with keywords or whatever.
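That keyword-to-semantics bridge can be sketched as a small mapping from a plain keyword annotation to RDF triples in N-Triples syntax, ready for loading into a triple store. The namespace and property names here are hypothetical placeholders, not the AdA project's actual ontology.

```python
# Hypothetical base namespace for the example.
BASE = "http://example.org/ada/"

def to_ntriples(ann_id: str, begin: int, end: int, keyword: str) -> str:
    """Turn one keyword annotation into N-Triples lines.

    The annotator only typed `keyword`; the tool supplies the
    subject URI and the timing properties behind the scenes.
    """
    s = f"<{BASE}annotation/{ann_id}>"
    return "\n".join([
        f'{s} <{BASE}begin> "{begin}" .',
        f'{s} <{BASE}end> "{end}" .',
        f'{s} <{BASE}keyword> "{keyword}" .',
    ])

triples = to_ntriples("a1", 0, 5000, "close-up")
print(triples)
```

The annotators keep working with keywords; the URIs, timing properties and graph structure are generated by the tool, so the semantic layer never gets in their way.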
So the tool here is what bridges the gap between both sides. The application is the same; we did some adaptations and optimizations, but I don't have time to go through the process. I'll just say that Advene was used to produce an ontology: a first bootstrap ontology was built from the annotation structure definition carried out in Advene, and it was then refined over multiple iterations.

The current news for the project is that we are still working together; the AdA project itself is not yet completely over, and we are working on data visualization. We now have tens of thousands of annotations on movies, and this raises the question of how to visualize them for the scholars, for the media studies researchers. So we are working on this.

And one more point: this is free software that has been developed for a long time, and the AdA project was the opportunity to fund its development. The first part of the work for that project was to update the code for new systems: it was an application using Python 2, GTK 2 and so on, and I had to port it to the new versions of these libraries, which I hadn't had the time or opportunity to do before. This project was that opportunity. So do not hesitate to fund projects that may not fit your needs just right now, but could fit the task with some development. Contribute to the free-software ecosystem by funding such projects, so that we can advance free-software development.

And that's it. Through these two examples, I tried to show you that this is a flexible, extensible, usable tool for digital humanities. I'm also available for development or consulting on this. Thank you.

Yes, it's the same. The question is whether it's comparable to NVivo.
So NVivo is one of the proprietary tools used in ethnography, in ethnographic studies. Advene is comparable. It doesn't have exactly the same features, obviously, but it's free software, it's open, and it basically fits the bill for many of the same needs. So yes, it is definitely comparable to NVivo. Thank you.