So this goes back a long way, before the dinosaurs, all the way back to 1990. This is the Vasari scanner at the National Gallery here in London, which I helped work on. This is a thing for scanning old master paintings to look for evidence of long-term colour change. It's a little hard to make out, but there's a high-resolution monochrome camera here. Here's the painting on an easel at the back. This whole thing is a stage; it moves over about a two-metre square. There's a tungsten-halogen projector here, a filter box here with seven broadband interference filters, and a fibre optic guide to carry the coloured light in front of the camera.
So this thing would scan; it saw a postcard-sized piece of the painting at once. It would drive to each part of the painting, take seven monochrome photos through the seven filters, and then assemble them all to make a high-resolution spectral image of the painting. Now, back in 1990, computers were not as they are now. We spent £40,000 on a Sun 4/330. It was a very, very nice computer. It had 32 megabytes of RAM and an amazing 25-megahertz processor. So that made assembling these large datasets, a gigabyte at least for an image, very challenging. And it forces you into a structure rather like this for the software. You have source data on disk, you stream it through memory through a set of processing operations, and you write it to disk again at the end. And to get reasonable performance, you have to scan only once and do as much as you can on the data as it passes through the system. And this is still more or less the structure Vips has today. It has a couple of interesting features from a technical point of view. It does horizontal threading: each core on your computer gets a whole copy of the image pipeline, and this reduces the amount of locking you need. So Vips is able to run almost without locks. There's a single mutex on the input and a single one on the output, but the rest of the system is lockless, so it scales very well with large numbers of cores. It's tileless. Most image processing systems divide images into a regular grid of tiles. Vips doesn't have that. Instead it has sets of overlapping regions, plus a set of rules to try to keep recomputation down. And again, this removes locking. And it does various things like runtime code generation as well.
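The streaming structure described above can be sketched roughly like this. This is a toy model in Python, not the real libvips API: the sink pulls one strip of scanlines at a time through the pipeline, so only a small window of the image is ever in memory, however large the source is.

```python
# Toy sketch of demand-driven image processing: the sink pulls strips,
# each operation pulls from its upstream, and the source generates only
# the scanlines actually requested.

class Source:
    """Pretend image on disk: pixel (x, y) has value x + y."""
    def __init__(self, width, height):
        self.width, self.height = width, height

    def region(self, y0, y1):
        # generate just the requested strip of scanlines
        return [[x + y for x in range(self.width)] for y in range(y0, y1)]

class MapOp:
    """A pointwise operation; pulls from upstream on demand."""
    def __init__(self, upstream, fn):
        self.upstream, self.fn = upstream, fn
        self.width, self.height = upstream.width, upstream.height

    def region(self, y0, y1):
        strip = self.upstream.region(y0, y1)
        return [[self.fn(p) for p in row] for row in strip]

def run_to_sink(pipe, strip_height=16):
    """Stream the whole pipeline through memory, strip by strip."""
    total = 0
    for y0 in range(0, pipe.height, strip_height):
        y1 = min(y0 + strip_height, pipe.height)
        for row in pipe.region(y0, y1):
            total += sum(row)   # stand-in for writing to disk
    return total

# double every pixel, then add one, in a single pass over the data
pipe = MapOp(MapOp(Source(64, 64), lambda p: p * 2), lambda p: p + 1)
print(run_to_sink(pipe))
```

Because each strip is discarded as soon as it has been consumed, peak memory is set by the strip height, not the image height; the horizontal threading the talk mentions amounts to giving each core its own copy of a pipeline like this.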
So you give it a job to do and it'll write you a small program at runtime which implements exactly that operation on your dataset. So here are some benchmarks. Vips is at the top, obviously, because I wrote the benchmark. Here's the takeaway, which is the GraphicsMagick and ImageMagick rows, which I'm sure most people are familiar with: Vips is typically four times faster and needs one tenth of the memory. So finally, getting onto the applications. I worked on technical imaging in museums for a long time. And this is often based around different imaging modalities, so you'll have ultraviolet, visible, infrared, x-ray, all these different things. Every museum will have a different setup for imaging, and this makes comparing images between institutions very difficult, because differences in images don't necessarily reflect differences in the objects. So this is a little program, or I should say this is nip2, part of the vips system. It's kind of an image processing spreadsheet: halfway between Excel and Photoshop, if you can imagine that kind of horrible combination. So into this thing you stick visible, infrared, ultraviolet. This is ultraviolet-induced visible fluorescence, and this is visible-induced infrared fluorescence. So you stick in all these images. And there are calibration targets in there, too. Obviously you can't see it, but there's a Macbeth chart here, and there's a set of Spectralons, which are reflectance standards with a pretty much flat reflectance value all the way from ultraviolet to far infrared. You put them into this thing; you just drag them in. Each of these images is large. And then there's a set of tabs across the top. On the first one, you mark a few control points to show how the images line up. And then the whole thing calculates and it all ripples through, and you have this as the final tab. And it's calibrated all of the images using the Macbeth chart plus the reflectance standards.
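The control-point step can be illustrated with the simplest possible alignment model. This is a hypothetical sketch, not the registration nip2 actually uses: it estimates a pure translation as the mean offset between matched control points marked on two modalities.

```python
# Toy control-point alignment: given a few matched points between two
# images (e.g. visible and infrared), estimate the average translation
# mapping one onto the other. Real registration would also handle
# rotation and scale; this is only the idea.

def estimate_translation(points_a, points_b):
    """Least-squares translation: the mean offset between matches."""
    n = len(points_a)
    dx = sum(bx - ax for (ax, _), (bx, _) in zip(points_a, points_b)) / n
    dy = sum(by - ay for (_, ay), (_, by) in zip(points_a, points_b)) / n
    return dx, dy

# control points marked on the visible and infrared images (made-up values)
visible = [(10, 12), (200, 40), (35, 180)]
infrared = [(13, 15), (203, 43), (38, 183)]
print(estimate_translation(visible, infrared))  # (3.0, 3.0)
```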
It generates a lot of metrics for how good the calibration is. It does false-colour infrared and visibles. And here, this is the quite interesting bit: it does Kubelka-Munk. Does anyone know what Kubelka-Munk is? It's one of the models for paint mixing. So this is, hang on, UV-induced visible fluorescence, this one here. Now, as the light is produced in the paint layers by the fluorescence, obviously the light is going to light up the paint as well. You don't just see the emitted light; it picks up colour from the paint medium it's being emitted from. So what you can do is use Kubelka-Munk, which is this paint mixing model, to take the contribution of the visible reflectance out of your emission image, so you just see the light that's being emitted from the surface. And this example actually isn't very dramatic, but honestly, it does help a bit. This is the one with most of the visible colour removed. It's a chalk drawing by Perugino, by the way, if you're curious. So here's another example. This is medical imaging. I work at Imperial College doing medical research, and this is a nip2 workspace. Hang on, one thing about this: I'm meant to finish with something about Vips. So this is a big workspace. This enormous thing is a huge graph of image processing operations joined up, around 10,000 operations joined together. It takes about 20 seconds to load the workspace, and at that point it's got over 400 gigabytes of images, because these images have been processed repeatedly. But because these aren't real images (it's all dataflow: demand-driven, lazy), it actually runs in only 500 megabytes of RAM on this quite modest laptop. And if you change something on one of the early tabs, like a spreadsheet, all the calculations just ripple through. It takes maybe 10 seconds to recalculate the whole thing after that.
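The Kubelka-Munk correction described above can be sketched like this. This is an illustrative toy, not the actual nip2 calibration: the K/S relation is the standard Kubelka-Munk formula, but the simple divide-out correction and the clamping floor are my assumptions.

```python
# Kubelka-Munk relates reflectance R to the ratio of absorption (K) to
# scattering (S): K/S = (1 - R)^2 / (2 R). Light emitted by fluorescence
# inside the paint is tinted on its way out, so a crude correction is to
# divide the emission image by the visible reflectance per pixel.

def kubelka_munk(reflectance):
    """K/S ratio for a reflectance value in (0, 1]."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def correct_emission(emission, reflectance, floor=0.05):
    """Naive per-pixel correction: divide out the visible reflectance,
    clamped so very dark pixels don't blow up."""
    return [e / max(r, floor) for e, r in zip(emission, reflectance)]

print(kubelka_munk(0.5))  # 0.25
print(correct_emission([0.2, 0.4], [0.5, 0.8]))
```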
So because Vips has this low memory use, it enables this type of application, which wouldn't really be possible with a more conventional imaging library. Then here's the same thing, an even more complicated workspace; I'll skip through it very quickly. This is doing modelling of tracer uptake in cancer patients and looking for evidence of pulmonary disease. And again, this one has 15,000 nodes. It's a much more complicated workspace, but Vips does work at these large scales. Virtual microscopy: this has become very popular now in the university sector. So instead of having microscope slides which you hand around your department, and which get broken because people tread on them, you put the slide into one of these scanners once, and then everyone can view the slide on their desktop. And these images are enormous. They're typically 200,000 by 200,000 pixels, absolutely huge, so they're very difficult to work with. But Vips is popular in this field; it's used for most of the large slide pathology libraries now, for stuff like that. Image resizing: a lot of websites now do image resizing on the fly. They don't shrink images beforehand on upload; instead, they size images in real time as people view things. And Vips is popular for this. I think Tumblr and Wikipedia and lots of sites are using it internally now for that kind of thing. So this is a really fun thing. This is RTI imaging; has anyone come across this? You have a single fixed camera and a single fixed object, but you get 3D, and it works by moving the light source. So this dome has 80 LEDs inside, which you fire in sequence. You take 80 photos, and then from those 80 photos you can reconstruct a 3D image of the object. And you end up with quite a large dataset, because the camera is an expensive 50-megapixel camera, times 80 photos, so it's a large amount of data to work with. And the main fitter for making these things is based on Vips.
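The on-the-fly resizing mentioned above comes down to shrink-on-request. A toy 2x box-filter reduction shows the idea; the real shrinking in vips is far more sophisticated, with proper filtering and streaming, so this is only the shape of the operation.

```python
# Toy 2x box-filter shrink for a greyscale image stored as a list of
# rows: each output pixel is the average of a 2x2 block of input pixels.

def shrink_2x(image):
    """Halve an image in each dimension by averaging 2x2 blocks."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[y][x] + image[y][x + 1] +
             image[y + 1][x] + image[y + 1][x + 1]) / 4.0
            for x in range(0, w - 1, 2)
        ]
        for y in range(0, h - 1, 2)
    ]

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [4, 4, 4, 4],
       [4, 4, 4, 4]]
print(shrink_2x(img))  # [[0.0, 8.0], [4.0, 4.0]]
```

In a demand-driven system this runs per request: nothing is computed until a viewer asks for a particular size, which is what lets sites skip the upload-time pre-shrinking step.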
And then here's a viewer as well. So she's upside down, I don't know why, but this is quite a nice thing on the website. This is an RTI image sitting on the website; it's all written in WebGL. And you can pick up a light bulb here and move it around over the surface, and it all relights. It works really nicely, at a good speed. And these images can be very large, and you can share them over the web. And again, this is Vips as the back end for making the images. I can demo that if anyone's curious. It's a nice technique; I'd love to see it more widely used. Archaeologists love it because it enables them to republish all their work again. You have a little Babylonian clay tablet, you know the kind of thing, with lots of writing on it. What's it called? Cuneiform, thank you very much. Yeah, with cuneiform script on. And the script was made by scribes with little wooden sticks, and the end of each wooden stick has a very distinctive pattern, because the sticks lasted about six months and you can see the wood grain. But with RTI, you can see the pattern of the end of the stick in the marks it made, and you can match them up across tablets. So you can say that these two tablets were made within six months of each other by the same scribe. And archaeologists have been using this to construct very elaborate networks and timelines of all of these Babylonian cuneiform tablets, and it's using RTI for that. And it's very popular with Egyptologists as well, because you can actually do it in the field. I talked about a dome and lights a minute ago. So yes, I'm coming to a climax; it's not obvious, I know. It's really fun, because you put a snooker ball next to your stone tablet out in the desert somewhere, put the camera on a tripod, and then go bang, bang, bang, bang, bang, maybe 10 shots with a flash. And then from the position of the bright spot on the snooker ball plus its brightness, you can get the distance and the angle of the light source.
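The snooker-ball trick can be sketched geometrically. Assuming an orthographic camera looking straight down the z axis, the highlight position on the sphere gives the surface normal at that point, and the mirror reflection of the view vector about that normal gives the light direction; the coordinates below are made up for illustration.

```python
# Recover the light direction from the specular highlight on a sphere
# (the snooker ball), as in highlight-based RTI calibration.

import math

def light_from_highlight(cx, cy, r, hx, hy):
    """Sphere centre (cx, cy) and radius r in pixels; highlight at
    (hx, hy). Returns a unit light direction (lx, ly, lz)."""
    nx, ny = (hx - cx) / r, (hy - cy) / r
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))  # sphere normal
    # mirror reflection of the view vector v = (0, 0, 1) about the
    # normal n:  L = 2 (n . v) n - v
    return (2 * nz * nx, 2 * nz * ny, 2 * nz * nz - 1)

# a highlight dead centre means the light is directly behind the camera
print(light_from_highlight(100, 100, 50, 100, 100))  # (0.0, 0.0, 1.0)
```

Run once per flash photo, this gives the set of light directions that the RTI fitter needs, which is why a reflective ball in the frame is enough to do the whole capture in the field.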
And then you can reconstruct 3D from that. So you can make 3D models out in the Egyptian desert of scratches on stone surfaces, and then later use RTI to pull 3D out of that and get a really nice model of the surface. And you can exaggerate the surface normals to make these scratches much more readable as well. So that's quite fun, too. I wanted to have a page of collaborators to finish, to show the people whose work I've talked about. OK, thank you very much.