So as you know, AI is everywhere, and especially neural networks. So I'm going to share some of our work on AI for medical imaging. This work is in collaboration with two hospitals, one in Tilburg and one in Den Bosch. And one of our projects is about detecting osteolytic bone lesions from multiple myeloma in full-body CT scans. You can see one example of such an image here, and these are the lesions that we are trying to find.

So hospitals gather a lot of x-rays and CT scans, and there's a considerable amount of imaging available for diagnosis. And AI for diagnosis, as Marie mentioned earlier, can reduce the time you need to check images, it can reduce the misdiagnosis rate, and it can help in clinical decision making. For example, if we take existing data such as x-rays, medical reports, or cognitive tests from patients with two types of diagnosis, we can train a classifier that learns which patients, for specific data, have diagnosis A and which have diagnosis B. So if we get data from a new patient, we can run this classifier, trained on the existing data, to provide a recommendation to a clinician on which diagnosis the patient has and with what confidence. And these predictions can also be visualized and reported.

So there are many important tasks for AI, particularly machine learning, on images, and these tasks can be divided into classification, object detection, and image segmentation. Classification is the machine learning task of determining whether the objects in an image belong to a specific label, a specific class. It could be a yes-no label deciding whether the image contains a certain object or a certain anomaly. For example, here we have an image of a scaphoid, and we want to classify whether the scaphoid is fractured or not, and the answer is yes, it's fractured, right over here. So another, separate task from classification is localization.
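The diagnosis-A versus diagnosis-B setup described above can be sketched in miniature. This is only an illustration, not the actual system from the talk: the "patients" are made-up feature vectors, and plain logistic regression stands in for a neural-network classifier.

```python
import numpy as np

# Toy stand-in for the diagnosis-A / diagnosis-B classifier: each "patient"
# is a small feature vector (e.g. derived from an x-ray or cognitive test);
# labels are 0 = diagnosis A, 1 = diagnosis B. All data here is synthetic.
rng = np.random.default_rng(0)
X_a = rng.normal(loc=-1.0, size=(50, 4))    # patients with diagnosis A
X_b = rng.normal(loc=+1.0, size=(50, 4))    # patients with diagnosis B
X = np.vstack([X_a, X_b])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(4), 0.0
for _ in range(500):                         # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(diagnosis B)
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def predict(x):
    """Return (label, confidence) for a new patient's features."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return ("B" if p > 0.5 else "A", float(max(p, 1 - p)))
```

Running the trained classifier on a new patient's features then yields exactly the kind of recommendation-plus-confidence output described above.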
So this is when you want to determine the position of the classified object in an image, and it combines classification and localization. Here we have an x-ray image of a wrist, and we want to find the scaphoid, which we've highlighted with this bounding box. Another machine learning task for images is called image segmentation, and this involves highlighting foreground elements to make it easier to evaluate them. What image segmentation does is provide a pixel-by-pixel delineation of an object in an image. And then there's also semantic segmentation, which refers to the process of linking each pixel in an image to a specific label; you can think of this as classification at the pixel level. So here we have a full-body CT scan, and we can, for example, do a semantic segmentation to detect the different organs in the body. Here the red pixels are bone and the blue pixels are the lungs.

So now I'm going to talk about some of what Marie mentioned earlier. It's a project to automatically detect fractures in the scaphoid. The scaphoid is a small bone in your wrist, and we're using x-ray images of hands and wrists for the detection. Scaphoid fractures often go undiagnosed, and when that happens, it leads to further complications. So Niels, who's my PhD student, has built an AI system to detect scaphoid fractures. This AI system involves a detection task, where we detect the scaphoids in the hand and wrist x-rays, and then a deep learning classification task, where we classify whether these detected scaphoids contain a fracture or not. This paper was published recently. With this system, we showed that we could classify the fractures just as well as radiologists, and our classifiers also generated sensitivity maps, shown here.
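The two-stage pipeline just described, detect first, then classify the detected region, can be sketched as follows. This is purely illustrative: simple intensity thresholding stands in for the detection network, and a toy dark-line rule stands in for the fracture classifier; none of the names or data come from the actual system.

```python
import numpy as np

# Stage 1 (detection): localize the bright "bone" region with a bounding box.
# Stage 2 (classification): classify the cropped region as fractured or not.
def detect_bounding_box(image, threshold=0.5):
    """Return (row_min, row_max, col_min, col_max) around bright pixels."""
    rows, cols = np.where(image > threshold)
    return rows.min(), rows.max(), cols.min(), cols.max()

def classify_crop(crop):
    """Toy classifier: call the crop 'fractured' if a dark line cuts it."""
    return "fractured" if crop.min() < 0.2 else "intact"

# Synthetic "x-ray": a bright bone region with a dark fracture line inside.
img = np.zeros((64, 64))
img[20:40, 25:45] = 0.9       # the bone
img[30, 25:45] = 0.1          # the fracture line
r0, r1, c0, c1 = detect_bounding_box(img)
label = classify_crop(img[r0:r1 + 1, c0:c1 + 1])
```

The design point is the same as in the talk: the classifier only ever sees the cropped scaphoid region, not the whole wrist x-ray.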
So these are examples of the sensitivity maps: the redder a region, the more influential those pixels were for the system's decision. We found that the sensitivity maps correlated with the fracture lines in the scaphoid, and therefore they can be used to localize potential fractures.

Another of our projects is to develop an AI system to predict cognitive outcomes for meningioma patients after brain surgery, and this is mainly the work of Sander, who is also a PhD student at Tilburg and ETZ. Meningiomas are a type of brain tumor, and in this project we are using MRI data combined with cognitive test performance and clinical information from patients who went through tumor resection. So the AI system that we're trying to build will segment the tumors from the MRI images and combine that information with the patient's data to predict the patient's cognitive outcome after surgery. And the goal, of course, is to support the decision on whether a patient should undergo surgery or not.

The third project I'm going to talk about is the one where we are going to develop and deploy an AI system to detect and segment bone lesions in the clinical workflow. The disease that we are looking at is called multiple myeloma. Multiple myeloma, which is also known as Kahler's disease, is a cancer of the plasma cells within the bone marrow, and as of today it remains a disease with no cure. The most important symptom of multiple myeloma is osteolytic lesions. So at ETZ, the radiologists perform full-body low-dose CT scans on the patients, and the scans produce images which are slices of the body; you can imagine them like slices of a loaf of bread. And with these full-body CT scans, we plan to build an AI system to detect and visualize where the lesions are in the body.
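Sensitivity maps like the ones just mentioned can be produced in several ways; one simple, model-agnostic variant is occlusion sensitivity, sketched below. This is an assumption-laden toy: the real system uses a trained deep network, whereas here a hand-written score function (looking at a fixed image band) stands in for the classifier, purely to show the mechanics.

```python
import numpy as np

# Occlusion-style sensitivity map: slide a patch of constant intensity over
# the image and record how much the classifier's score drops at each
# position. A large drop means that region was influential for the decision.
def score(image):
    return 1.0 - image[14:18, :].mean()   # toy "fracture evidence" score

def sensitivity_map(image, patch=4, fill=1.0):
    base = score(image)
    heat = np.zeros_like(image)
    for r in range(0, image.shape[0] - patch + 1, patch):
        for c in range(0, image.shape[1] - patch + 1, patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = fill
            heat[r:r + patch, c:c + patch] = base - score(occluded)
    return heat

img = np.ones((32, 32))
img[15, :] = 0.0                          # dark "fracture line"
heat = sensitivity_map(img)               # hottest where the line is
```

Regions whose occlusion changes the score most light up in the map, which is why such maps tend to trace the fracture lines themselves.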
So currently, there's no AI system to detect osteolytic lesions. These lesions are around five millimeters or slightly larger. Here are some of the images: here is one lesion, there's another lesion here, another one here, and in this image there are two lesions. OK, so as you can imagine, manual inspection means the radiologists go through each of these CT scans one by one, slice by slice, looking for lesions of around five millimeters and above and measuring them. So these lesions can be missed. And not only that, our collaborators need to monitor the progression of these lesions with follow-up scans, for example to see how the lesions are developing and whether new lesions have formed. This can be quite cumbersome and time consuming, because you have to match the lesions in one scan of a patient to those in the follow-up scan, which is performed later. And the main challenge we face is that we have limited annotated data, especially compared to many other applications, the main reason being that we need experts to annotate this sort of data.

OK, so we've built a preliminary AI system. We use a U-Net model, a neural network that's typically used for biomedical image segmentation. The network looks like this: we put in an image, it performs a series of convolution and max pooling operations, and the output is a segmentation map. We divide our images into patches for efficiency, and here are some of the patches; we make sure that all these patches contain lesions. So here are some of our examples, and feel free to have a go at finding the lesions in these images. To overcome the limited training data, we used data augmentation and transfer learning.
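The patch-extraction step just described, dividing each slice into patches and keeping only the lesion-bearing ones, can be sketched like this. Shapes, names, and data are illustrative assumptions, not the actual pipeline.

```python
import numpy as np

# Divide a 2D slice into fixed-size patches and keep only those whose
# annotation mask contains lesion pixels, so training focuses on
# informative regions instead of mostly-empty background.
def lesion_patches(image, mask, size=16):
    """Yield (patch, patch_mask) pairs whose mask contains lesion pixels."""
    h, w = image.shape
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            m = mask[r:r + size, c:c + size]
            if m.any():                   # keep lesion-bearing patches only
                yield image[r:r + size, c:c + size], m

slice_img = np.random.default_rng(1).random((64, 64))
slice_mask = np.zeros((64, 64), dtype=bool)
slice_mask[5:9, 5:9] = True               # one annotated lesion
slice_mask[40:43, 50:53] = True           # another annotated lesion
patches = list(lesion_patches(slice_img, slice_mask))
```

With few annotations available, discarding the empty patches keeps the training set small but dense in positive examples.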
So data augmentation is a method of increasing the amount of data by creating slightly modified copies of the already existing data. And transfer learning involves taking a neural network trained on a much larger data set as a base, and fine-tuning that trained network on our lesion data set. These are the two methods that we use to deal with the issue of limited data. In addition, we know that bone lesions only occur in bone tissue, so it's not very efficient to look for lesions where there are no bones. We therefore got the best results when we also performed bone segmentation, so that we could restrict the search area to just the bone tissue. And we're also collaborating with a medical center in Germany to extend our data set.

So here, as you can see, these are our patches, and these are the annotations from our radiologists, which are shown in blue. The third row shows the results of our segmentation: green is what we found, red is the overlap between our segmentation and the ground truth (so red is good), and blue is the lesions that we missed.

So in conclusion, I've shown three projects involving AI for diagnostic imaging: fracture detection for scaphoids, prediction of cognitive outcomes with MRI and clinical variables, and lastly, detection of osteolytic bone lesions from full-body CT scans. Our end goal is to implement our tools in the clinical workflow, in order to provide clinical decision support and to augment the role of radiologists. There are many challenges in incorporating AI into the clinical workflow, and the paper which our collaborator has written summarizes some of these challenges. Thank you.
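Two of the ideas above can be sketched in miniature: data augmentation by flips and 90-degree rotations of each (patch, mask) pair, and restricting the search to bone by discarding predictions outside a bone-segmentation mask. This is a sketch under simplifying assumptions; the actual system fine-tunes a pretrained U-Net, and the arrays here are toy data.

```python
import numpy as np

def augment(patch, mask):
    """Return 8 modified copies: 4 rotations x optional horizontal flip.
    The same transform is applied to image and annotation so they stay
    aligned."""
    out = []
    for k in range(4):
        p, m = np.rot90(patch, k), np.rot90(mask, k)
        out.append((p, m))
        out.append((np.fliplr(p), np.fliplr(m)))
    return out

def restrict_to_bone(prediction, bone_mask):
    """Discard lesion predictions that fall outside segmented bone."""
    return prediction & bone_mask

patch = np.arange(16).reshape(4, 4)
mask = patch > 10
augmented = augment(patch, mask)               # 8x the original data

pred = np.array([[1, 1], [0, 1]], dtype=bool)  # toy lesion prediction
bone = np.array([[1, 0], [0, 1]], dtype=bool)  # toy bone segmentation
pred_in_bone = restrict_to_bone(pred, bone)
```

Because lesions only occur in bone, masking the predictions this way removes false positives in soft tissue essentially for free.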