…we will have a descriptor for each interest point, and these descriptors live in a high-dimensional space; we then want to cluster them to come up with the words that will form our vocabulary. The English language, if you look at it, developed its vocabulary long ago, but for visual words we don't have a vocabulary, so we have to come up with our own.

Okay, so we're called the Scandinavian Institute of Computational Vandalism. Thanks. One meaning of that: the CV is in part a pun on computer vision, and as you just saw, that was a clip from a series of lectures published by the University of Central Florida, I believe it is, from a computer engineering course about computer vision. But it has another meaning. Yes, the name of the Institute is derived from a project by the painter Asger Jorn that started in the 60s, a sort of photographic survey of the art of the vandals. And the vandals had a very particular characteristic: they didn't make art on a clean and empty surface, but would draw upon already existing images. So, for instance, the earliest graffiti we can find in art history comes from the vandals. And Jorn's idea in doing that was to look for a culture where he could find a way to talk about images using images. The different publications he made from this archive are what you can see here, long collections of images that are put next to each other and that are in dialogue. So here you can see a long series of images containing feet. And here, one of his most well-known books, La langue verte et la cuite, where he gathered images from multiple sources and highlighted them. So it's not only a collection of images, but also a painterly intervention on these images, to highlight the presence of tongues in surprising ways in the different pictures he has collected.
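As a side note on the clip: the bag-of-visual-words vocabulary it describes is usually built by clustering descriptors with k-means, each cluster centre becoming a "visual word". A minimal sketch with synthetic descriptors and plain NumPy (all names and sizes here are our own illustration, not from the lecture):

```python
import numpy as np

# Synthetic stand-ins for SIFT/SURF descriptors: 200 vectors in 64-d space
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(200, 64))

def build_vocabulary(descriptors, k=10, iterations=20):
    """Cluster descriptors with plain k-means; each centroid is one 'visual word'."""
    rng = np.random.default_rng(1)
    centroids = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iterations):
        # Assign each descriptor to its nearest centroid
        dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned descriptors
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = descriptors[labels == j].mean(axis=0)
    return centroids, labels

vocabulary, words = build_vocabulary(descriptors)
```

With a real pipeline the descriptors would come from a detector run over many images, but the clustering step is the same.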
So the presentation we will make is about this attempt to engage in a dialogue with images: trying to find ways not to describe the images from the outside with words, with historiography, with textual description, but to see how we can make a series of descriptions emerge from the image itself. And to do that, what we started a few years ago was to look with curiosity at what computer vision algorithms would offer us. One of our first experiments with that was to look at something called SURF features. Basically, SURF features are elements in an image that remain stable even if the image is rotated, for instance. These are elements, many of you are familiar with them, that are used, for instance, to create a panorama from different images and to stitch those images together. And we were interested not only in knowing about the existence of these elements, but also in looking into what they are, and in using the lens of the algorithm to discover new aspects within the images. So, for instance, what Michael is doing here is to scan through all the SURF features discovered in an image, ordered by size. And of course, the further we go into the bigger details, we see what is important for the algorithm, what somehow identifies this image proper. Can you go to the next slide? OK, press the plus. Yeah, press the plus. Maybe go again, a bit longer. Yeah, so that's how, for instance, this one. So what we started to do then was to see how these little elements, these SURF features, could help to connect different images together. So, for instance, here is a feature taken from one image and a feature taken from another. It's where the algorithm finds a point that is extremely similar between the two images.
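The "extremely similar point" between two images is found by comparing descriptors. A sketch of that matching step alone, with synthetic descriptor vectors standing in for real SURF output (the planted near-duplicate pair is our own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
desc_a = rng.normal(size=(50, 64))   # descriptors from image A
desc_b = rng.normal(size=(80, 64))   # descriptors from image B
desc_b[17] = desc_a[3] + 0.01        # plant one near-identical pair

# Distance between every descriptor in A and every descriptor in B
dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)

# The most similar pair: the point the algorithm finds between the two images
i, j = np.unravel_index(dists.argmin(), dists.shape)
```

Here the best match is the planted pair, feature 3 of image A with feature 17 of image B.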
And of course, it directs our look to completely different parts of the image than where we would look if we wanted to identify the content of this image, or if we looked at it spontaneously. So it really directs our look to a very different vocabulary to talk about the image, a visual vocabulary. Maybe I can show the network. So, actually, as we started to work with this, I started working with SIFT, which is indeed a sort of patent-encumbered, not completely free software algorithm for computing features, like SURF. And I was just trying to get a grip on what these features are. So this is an interface, actually, that I quite like. Oh, I didn't pull that one open. Anyway, this is an interface where you can scroll through a series of images and look at their features. You can place them back in their original position in the image, eventually putting the image back together to see it. But you can also literally take them apart, to start to consider what they are as elements; for instance, features have a predominant direction in addition to a size. So you can, of course, order them by their orientation, or look at the scale, and eventually consider the entire collection of images in this way, which is a kind of collage method; when you start to see it, it also becomes this very strange sort of chewing monster. But, yeah, it was playful, and it was an interesting link, let's say. Continuously, I think, we're interested in these kinds of links. Despite the quote we started from, we're interested in how the history of the visual arts can talk to, let's say, the computer vision of today.
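Keypoints from a detector such as SIFT carry a position, a size (scale) and a predominant angle, so ordering them by orientation or scale, as in the interface, is a simple sort. A sketch with hand-made keypoints (the KeyPoint record here is our own stand-in for what, e.g., cv2.SIFT_create() returns):

```python
from collections import namedtuple

# Minimal stand-in for detector keypoints: position, size and angle in degrees
KeyPoint = namedtuple("KeyPoint", ["x", "y", "size", "angle"])

keypoints = [
    KeyPoint(12, 40, 3.5, 270.0),
    KeyPoint(80, 22, 11.2, 45.0),
    KeyPoint(55, 60, 7.1, 180.0),
]

by_size = sorted(keypoints, key=lambda kp: kp.size)          # small to large
by_orientation = sorted(keypoints, key=lambda kp: kp.angle)  # by predominant direction
```

The interface's orderings are just these sorts applied to the features of a whole collection at once.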
And so, let's say, taking these tools and considering them as means of producing collage is very interesting, not just in simulating the surfaces of the past, but in a sense reinvestigating the materials of the past with contemporary techniques and considering what that could mean. This? Yeah. Okay. Then, so this is an interface, sorry, that we made when we started looking at images in their layers. So in some sense it's inspired by a GIMP-like bitmap-editing program, let's say, imagining that you could take images and look at, say, the red channel, look at the gradients, look at the contours, switch them off; actually, let's see. In some cases you can even apply text. Sorry, that was not the same image, but consider as well what happens when you throw OCR software at an image and look for text within it. So we were just trying to get a grip on this: in fact, when we start with something as simple, let's say, as SIFT, which is not simple at all, we come to a whole layer cake. We talk about the princess and the pea: there's an incredible stack of decisions and representations and algorithms involved in picking out a particular segment of an image and considering it a feature, to do, for instance, with image gradients. And so we enjoyed this interface as a way to look at an archive. So in this case it's a kind of random walk through the archive, where based on each of the different filters you have ways of ordering. Was that five? Five, yeah. Okay, just a second.
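The layers the interface exposes (channels, gradients, a crude contour map) can be sketched in a few lines of NumPy; the synthetic image and the threshold used here are our own assumptions, not the interface's actual parameters:

```python
import numpy as np

# A tiny synthetic RGB image standing in for one archive picture
rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float64)

red_channel = image[:, :, 0]           # one "layer": the red channel
gy, gx = np.gradient(red_channel)      # per-pixel intensity gradients
gradient_magnitude = np.hypot(gx, gy)  # the layer that SIFT-like features build on

# A crude contour layer: pixels where the gradient is above average
edges = gradient_magnitude > gradient_magnitude.mean()
```

The OCR layer would be one more call on top of this, e.g. handing the image to Tesseract, which is mentioned below.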
And so this kind of walks through the archive. What it does is build up all the layers of a single image, then pick, in this case, the SIFT features, and then move through adjacent images based on global properties, like maybe the total number of SIFT features, or the total length of the contours, or the total amount of text detected using OCR. What's the OCR software again? Tesseract, sorry, I wanted to say Tesseract. Yeah, and these are all ways to look at an archive in a different way. For instance, we were also using the same, oh, I took it away, but for instance, simply tracing with the algorithms. And we wanted to mention this because, at the moment, it is on display at the Photographers' Gallery here in London: a tracings installation where we just take the original photographs from Asger Jorn's SICV, the original Scandinavian Institute of Comparative Vandalism, apply OpenCV contour detection, and then eventually you start to see the image traced. So it's just a way to look again at these images and revalue them, reconsider them. And I think the last thing that we're going to show, or have I skipped something, the last thing we did was an installation which was quite playful and I think looks nice to see. We're both working at Constant, in an office in Brussels; these are some test images. And in this case, what you have is an installation that ran for 40 days in the window of Constant, basically doing face detection. Again, one of the very high-level detections: a Haar cascade trained on the default frontal-face set, probably the most popular application of Haar cascades.
And basically just detecting faces, first in live images, then replacing the faces with other faces detected in an archive, in this case from a Norwegian graphic designer we work with as well, Guttorm Guttormsgaard is his name, who collects Asger Jorn, so there's a sort of link in there. And we really liked this; it's all facing outward, so people can actually look at the process of what's happening as it detects faces, finds faces in the archive, shows the original archived pictures with their faces, replaces them and produces these kinds of loops. Which I think is a sort of playful way to make people look at these kinds of algorithms, which of course people are increasingly familiar with, face detection, from their cameras. And yeah, I think we're good to end in a minute. And actually there's a particular moment that I quite like, and I thought it made a nice ending. Because, of course, part of this is, I think, a way to ask questions: increasingly we're in a situation where these techniques are often used for surveillance. I think it would make a great supercut to put together all the computer vision lectures that contextualize their work in terms of surveillance applications. And I think it's a really interesting question how you can empower people to engage with these technologies, rather than just be scared of them and feel out of control of them; to really take them on and consider what they are as materials. And so, in that sense, the last image I'll show is just, oh, which one was it? Yes, so this was captured one night, and I really liked that it was somebody giving the finger to this face detector, but doing it proudly.
This is not somebody hiding away from it, but really performing for the algorithm, which, yeah, I thought had quite some strength. Thanks. Thank you. Thank you. Yep, for sure. So we have time for three or four questions, if they're short. Does anyone have a question for the Vandalists? They're staying, yes. I thought it was really interesting, particularly that you're exploring computer vision, and I wondered, in the techniques that you use, how closely they represent what you understand of how human vision works, of visual processing; have you looked into those areas? Do you want to say? I'll start, yeah. Well, okay. So, the quick response: for me it was really interesting watching, because I would look in on how this particular installation was working, and, you know, there's a sort of beautiful thing that happens as you get towards twilight: images start to pop up as the light changes and as reflections start to become a bit strange, and so it's hard not to anthropomorphize in some sense what the algorithm is doing, and imagine that it's imagining things at twilight, or that it dreams at night. But at the same time, what's very interesting is looking at the mishits. I think part of what we tried to feature in this project is the fact that face detection often doesn't work.
Its vision is very particular, and this is a recurrent theme with all kinds of machine learning: it is certainly based on ideas about human perception, and sometimes it really feels like it's working, in that sense of, wow, it's seeing something; but it's also a very particular, strange kind of vision, and so we're as interested, I think, in those more exceptional aspects of it too. I think too often, when these systems are presented, they're overrated. People present facial recognition as something that works really well, and to some degree that's true, and to some degree there's something else, I think. Well. I think one of the most moving points is this strange intersection between what we think human vision is and what the algorithm shows, and how, as you say, it looks like it works and then surprisingly doesn't at some point. I think there is another element that was very important for us: the dimension of scanning. What we learn from working with these algorithms is that they scan images, and scanning is a very strange word, because it means at the same time going very quickly over something, and it also means going very minutely, with a lot of precision. There's this discrepancy between precision and moving across: going through a large quantity, then focusing on a SIFT point that becomes the nail around which you can articulate a series of connections.
This is not so far from something you can experience: when I look at the scene here, I will probably brush over most of it and then focus on something. So it's not so far, but it's extended to another magnitude of elements, and I think that's very important because we are obviously working with archives. It's very important, I think, because it really connects memory and vision. For instance, "to scan", scander in French, is also to scan verse, and verse is not just a way to embellish discourse but also a way to be able to remember something, precisely because it has verses. And what is revealed by all these features, these affinities, are ways to anchor memory within the visual. I think that's also a very important aspect of it. So we have time for maybe one quick one. I noticed that the facial recognition thing seemed to have a bunch of things in the catalogue that weren't actually faces. Did you set the threshold low, or did it actually pick those things up as faces? Yeah, part of what we were interested in, in that case, was in fact that we were using different archives, and we switched back to this one, which was actually Guttormsgaard's, the Norwegian graphic designer's, from a book which, by the way, he calls Archive. And we particularly liked that book because it revealed something about the kinds of objects he has: he has these beautiful wire instruments, or tapestries with particular graphic patterns that he himself plays with in his layouts, and the detector reads them as faces. Or any kind of symmetry: there's a beautiful book, the Book of Job from the Bible as laid out by Baskerville, where it will detect faces in the symmetrical typography.
So there was again a sort of overlap between this algorithm, the mishits of face recognition, and something that even Guttorm recognized. He himself is very sceptical of all kinds of software, he's in his 70s or 80s, but he has to laugh as well when he sees the results; he recognizes something in it, and he's immediately suspicious of exactly how it's working. At one point we tried to describe to him what contours were, and at some point he said: ah, I understand, it's cheating. Yes, I think so. So, a round of applause for the Vandalists.