So, thank you for having me, and thank you for staying. I did my PhD in the late 90s working for the Ministry of Transportation in France, mainly on the analysis of images for safety studies. I then moved to Ireland on an EU project, working as a research fellow on multimedia indexing and retrieval and on video restoration. So I have worked on different topics that mainly involve visual data. The slides I am going to present today mainly reflect some of the things done in my team, which is about 10 people at the moment, but some of the work also comes from past members of the team.

Okay, so here is an example of an application I worked on. You may remember the Xbox coming out circa 2010; associated with it was the Kinect camera, which was very interesting because it was very cheap yet recorded not only color images but also depth information, that is, how far each object was from the camera. It gave us an opportunity to build a 3D scanner for cheap. In other words, we could bridge the gap between data acquisition, 3D reconstruction, and then going forwards towards 3D printing, for instance, or creating 3D content for a game or for the film industry.

Another application I worked on was changing colors in movies. Color is very important for conveying the mood or the feeling of the storyline in a movie, so it is used a great deal, and research in the field is still very active. When you work in the post-processing industry, you need to provide a way for the artist to interact with the media: you cannot let the AI do all the work, because the artist still wants to have some form of control. So here comes the problem with deep learning and artificial intelligence as we know it: how can we control it to produce the effect the artist wants?

Moving away from these smaller examples, here is footage from a drone capture done in 2017 over Trinity College, in collaboration with Intel Movidius. This is an example of data that can be used to reconstruct a 3D model of a full campus; what you see here is a voxel representation in 3D. Why voxels rather than a 3D mesh? Voxels are very easy to manipulate with convolutional neural networks, for instance (a minimal code sketch of this appears below), which helps the AI segment the buildings and attach semantic labels to them. This links to one of my main recent topics, which is creating 3D semantic maps. These are very important in the context of autonomous driving, and perhaps for autonomous drones making deliveries over people's heads; having an up-to-date 3D map with information attached to it is going to matter a great deal.

Flying a drone over an area actually requires you to be careful that there is no one below, in case there is an accident, so the capture was done at 5 or 6 a.m. on a Saturday morning, very early. We were supported by Intel here, but in practice grabbing data with a drone can be quite expensive. The second thing is that it was done in 2017, and you may know that Trinity College has since knocked down a few buildings and built new ones, so the map we have is already out of date.
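To make the voxel remark concrete, here is a minimal sketch, in Python with illustrative names and sizes, of how a reconstructed point cloud can be turned into a regular occupancy grid that a 3D convolutional network can consume. This is a generic illustration of the idea, not the exact pipeline used in the project.

```python
import numpy as np

def voxelize(points, grid_size=64):
    """Turn an (N, 3) point cloud into a binary occupancy grid
    that a 3D convolutional network can take as input."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Scale every point into [0, grid_size) along each axis.
    idx = ((points - mins) / (maxs - mins + 1e-9) * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # mark occupied cells
    return grid

# Random points standing in for a reconstructed campus cloud.
cloud = np.random.rand(100_000, 3)
print(voxelize(cloud).shape)  # (64, 64, 64)
```

A binary occupancy grid is the simplest choice; practical systems often store per-voxel colors or features instead, but the appeal is the same: the regular 3D lattice is a direct analogue of the pixel grid that 2D CNNs operate on.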
So, the main problem for autonomous driving will be to have access to a map that is very informative but also up to date. How can we do that? The sort of research I am trying to do is to see whether there are multiple sources of information we can use to build this up-to-date 3D map with information of value.

Here is an example from an EU project that I coordinated, the GRAISearch project, where we were mainly interested in social media data. You are familiar with Twitter: people tweet, some text is associated with the tweet, sometimes GPS information, sometimes an image, a timestamp, and so on. We wanted to see what sort of information we could get by visualizing the activity there is on Twitter. For this visualization we used other sources of data to construct a 3D model of the city: OpenStreetMap, which is the equivalent of Google Maps but free and filled in by volunteers (GIS information, if you like), combined with Google Street View data. From these we reconstructed 3D models of different cities; here you have examples of Dublin, Pittsburgh and Rome. Each flashlight represents a tweet where an image was taken, and we recognize from the image content where the picture was taken. From the text analysis we also get an idea of the sentiment associated with the tweet, whether it is a sad tweet or a happy tweet, and this is encoded in the color of the flashlight: blue is sad, yellow is a little bit happier.

As you might imagine, what we recovered from Twitter was mainly the touristic landmarks we are familiar with in big cities. But sometimes we discovered new things that the tourist information center would not know. For instance, in Dublin we discovered a little painting that appeared overnight on a wall in Temple Bar, and people were reacting to it. That is the sort of activity we were looking into: can we get value out of information posted on Twitter? Can it help tell you where to go when you land in a city and don't know what is going on there?

So, coming back to those 3D semantic maps: you may know a little about autonomous driving, and we cannot gather training data for cars in some scenarios because they would be far too dangerous to stage in real life. The question is, can we create virtual environments in which to train robots to navigate? I have presented a way of creating a 3D city that is a digital twin of a real city, if you like. We can also use Twitter to observe pedestrian activity: if people tweet several times during the day, that gives us sparse information about their trajectories, and we can reconstruct those trajectories (a short sketch of this grouping step follows below). We also used information from the images themselves, from the people appearing in the photographs. The idea, again, was to grab for cheap some information about the patterns of movement of people in a real environment, to help create an environment that robots can use to learn to navigate among humans.
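As a minimal illustration of that trajectory idea, with made-up users, timestamps and coordinates, grouping geotagged tweets by user and ordering them in time yields a sparse sample of each person's path through the city. This is a sketch of the general principle, not the project's actual code.

```python
from collections import defaultdict

# Each tweet: (user_id, ISO timestamp, latitude, longitude). Illustrative data.
tweets = [
    ("alice", "2017-05-01T09:00", 53.3438, -6.2546),
    ("alice", "2017-05-01T12:30", 53.3498, -6.2603),
    ("bob",   "2017-05-01T10:15", 53.3449, -6.2674),
    ("alice", "2017-05-01T15:45", 53.3396, -6.2486),
]

def sparse_trajectories(tweets):
    """Group geotagged tweets by user and sort them by time, giving a
    sparse sample of each user's trajectory during the day."""
    by_user = defaultdict(list)
    for user, ts, lat, lon in tweets:
        by_user[user].append((ts, lat, lon))
    # ISO timestamps sort correctly as strings.
    return {user: sorted(points) for user, points in by_user.items()}

for user, path in sparse_trajectories(tweets).items():
    print(user, [(lat, lon) for _, lat, lon in path])
```

In practice the hard part is upstream of this grouping: deciding which tweets are reliably geolocated, for example by recognizing the location from the image content, as described above.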
Okay, so here is an example from another project. Again, I am very interested in geolocation, and this one was about the geolocation of telegraph poles. We were contacted by Eir, who were interested in having an inventory done using artificial intelligence and neural networks, with Google Street View as the source of data. This goes a little beyond detecting an object in an image: we actually want the GPS location of that object in the real world.

The pipeline we introduced had three modules. The first detected the object in the image (a deep learning CNN). The second estimated how far the object was from the Google Street View camera (again a CNN). The third was a fusion module that inferred the GPS coordinates of the detected objects, because we do not want duplicates in our inventory: multiple images look at the same pole, since Google Street View images are 360-degree views collected every five or ten metres, so if you do not pay attention you end up with duplicates in your dataset. The fusion module used a standard statistical technique, a Markov random field. So, again, the data scientist uses different tools depending on exactly what is needed; neural networks cannot solve everything, and we had to go back to a statistical technique for the fusion of information. This work was continued through a Marie Curie Fellowship supported by the ADAPT Centre, and it is continuing now as we try to commercialize the technology for doing this type of inventory for companies.

One of the latest pieces of work, a paper published this year by my team, used a combination of aerial imagery and social media data. You know that people react if there is a flood in their street. The question asked, as part of a competition, was: can you assess how passable the road is? Can you drive through the water or not? That again was a fusion of information: every image source is analyzed by modules based on artificial intelligence, and then a fusion module decides whether the road is passable, yes or no. This type of competition is very important because we are expected to be subject more and more to extreme events like flooding, which will have an impact on transportation and other services. Currently we are working with Ordnance Survey Ireland on aerial imagery, deploying some AI as well, to get an idea of what is going on on the ground and where things are. Again, having very accurate and up-to-date information is crucial in that context.

Okay, so this is the latest program I am involved with, an H2020 project called Bonseyes. We are reaching the end; it is supposed to finish in January 2020. The purpose was to develop a marketplace providing small and medium-sized companies with access to AI, but AI on the edge. You know that neural networks are very accessible now, with beautiful libraries optimized for GPUs, but there is not that much for pushing the AI onto the edge, where the processors are a little bit different and do not have the same memory and compute power.
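As a generic illustration of what pushing a trained model onto the edge involves (Bonseyes has its own artifact formats and toolchain; TensorFlow Lite is used here only as a familiar stand-in, and the paths are assumptions), here is a minimal post-training quantization sketch:

```python
import tensorflow as tf

# "model_dir" is an assumed path to a trained SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# On the device (e.g. a Raspberry Pi), the compact model runs in a
# lightweight interpreter instead of the full framework.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
```

Quantizing 32-bit weights down to 8 bits shrinks the model roughly fourfold and moves it towards the integer arithmetic that small processors handle best, which is exactly the kind of memory and compute constraint just described.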
So the purpose of Bonseyes is to develop a marketplace where people can sell and buy different artifacts. This is a combination of sharing data, having access to a particular model, having access to a particular implementation for a particular hardware, as well as having access to the parameters controlling the model. The focus here is really on pushing neural networks to work on the edge and making deployment easier, whether for a mobile phone, a Raspberry Pi, or whatever hardware you are using.

Okay, so I have managed to give slides without a single equation; there were only slides with images, but if you are interested in the equations, there are publications associated with this work.

A few events I am involved with in Ireland: the Irish Machine Vision and Image Processing conference happens every year, and this is where PhD students present their work, so if you want to network it is a good place to go. We are also trying to bring the European Signal Processing Conference to Dublin in 2021. Thank you.

Thanks very much, Rozenn. I'm interested in the Bonseyes project and the AI marketplace. You said it's coming to an end soon. How is the engagement from the community?

The engagement is very good, and we are trying to build follow-up momentum. There are two websites: bonseyes.eu, which is associated with the EU project, and bonseyes.com, which is meant to bring the platform to consumers.

Very good. All right, I'll keep an eye on that one. So thanks, Rozenn. Very good.