Hi everyone. I have the privilege of doing the introduction today, so we're very happy to welcome Irina Boykulescu, Associate Professor in the Department of Computer Science, University of Oxford, here at AOA. This is a big honour for us, and I'm doubly happy because I had the privilege of being her student for five years there; those were the years that formed me as a scholar, so all my gratitude to her for that. This is only a glimpse into the multitude of different types of research that Irina and her group work on at the University of Oxford. So please ask questions whenever you're interested, let's make this an interesting discussion, and without further ado I'll hand over.

Thank you very much for the introduction. The pleasure is mine, and whatever we have done together is because of your interest, not solely because of mine or my group's. It's always exciting when people get together and there is some chemistry that helps us move science forward. When I was at your stage I went to talks where scientists from elsewhere were talking about their work, and it's not that I followed exactly in their footsteps; I didn't necessarily go and say, yes, this is the area I want to work in. But just seeing how close research actually is to the scientific curriculum of the degree helps you realise that the unsolved problems are within reach, and that you can go and pick your own domain: think about where you can get excited enough about the science to spend a number of years, or a lifetime, and move forward with it. I think that's what we all do. We find something interesting, we spend five minutes on it, then five hours on it, and five years later we're still there. In our case the five years have long gone, but fifteen years from now we may still be working in this area, because we will not necessarily exhaust it.
So I would like to talk about semantic segmentation, because this is something that has preoccupied me and my group for a number of years. We've done some work, and I will show you where we got to with it and what methods we've used. But I also want to emphasise that it is important, but not essential, to be a perfectionist about it. How good is good enough? Have we achieved a segmentation that is going to get us somewhere, or not? The question is open to interpretation, and you could tell me that what we've done is not enough; but for some applications I think we can help both the computer science and the clinical science move forward. As for what we call ourselves (this is due to you, you came up with the name), we called ourselves the Oxford Medical Image Segmentation Group for a while, until we started working on landmark detection and doing some other things, so we are now Oxford Medical Imaging Solutions. For the purposes of today I will just say that we do largely segmentation; I can talk about landmark identification another time. What makes us tick is that we're interested in the maths: collectively we have spent most of our time thinking about mathematics, a little bit of algorithmics and data formats, but largely the science of it. And we find particular excitement in the fact that the surgeons also want to know about this. We don't just take a public data set, measure something about it and apply an algorithm; rather, we talk to clinicians. And we are not going to take over their clinical decisions. That is not something they are interested in, and we don't feel that what we do has the backing to claim that we are going to do automated diagnosis
; that whole ethical discussion is for other people to debate. But by semantically taking things out of images and interpreting their meaning, we help them make their decisions. We can reconstruct anatomy and display it; if you ever need a plastic spine, I know how to make one. We give them a way of turning their information around, looking at the models that we create and seeing their data from a different angle, both physically and metaphorically. We have also approximated volumes, the volumes of the organs, bones and other things that we have reconstructed, and we have taken an interest in anatomical measurements: these can be angles, they can be distances, again not so easily obtained from the segmentation alone. Ultimately, what they do with this information is plan: they can decide whether a patient belongs to the group that needs immediate help, or whether they can just go home, do some exercises and come back later. This is the case for osteoarthritis, for example: if a patient has joint pain, are we going to cut their bones, or are we going to let them suffer in pain? So let me show you a few examples of clinical applications. I talked about volumetric measurements; why do we need to know the volume? Well, we would need to know the volumes of tumours, but that again is a different debate to be had, because the way the clinical profession measured tumours before 3D reconstruction came about was to look at slice-by-slice images and measure a contour; they were not working with a three-dimensional entity. However, when they heard we were doing volumes, they said: can you correlate these volumes with anything else? It turns out that in some conditions they take part of the kidney out, and they want to know how much is there and how it is
functioning, and whether the functionality of what's left correlates with the volume. Sure enough, it does. What we did was to measure the kidney before and after this procedure, and the clinicians have a way of measuring its functionality before and after: essentially the patient drinks some water and then eliminates it, and you measure what goes in and what comes out. That gives you an impression of how much kidney tissue there is, and whether it's in a state to do its job in the body. We do these studies and then the clinicians go away, but we are told that this helps, because instead of having to monitor the patient in the clinic, they now know they can evaluate the volume of the kidney and tell something about its functionality and what they can expect of that patient. Something else we do is look at joints. I have here a mini model of a human hip: there's a femur, a femoral bone, and a socket, the pelvis, and together they form a ball-and-socket joint. What you're seeing here is a volume of the bone, because it's big and you can recognise it: we've done a 3D reconstruction of the bones and of the cartilage. The cartilage is the soft part that covers the bone, and it's present in the joint on both the femur and the pelvis. As we all get older, and as the population of the globe gets older, more and more people will be in pain; we live longer, we sit on chairs for longer, and we have all these conditions associated with ageing. So the clinicians are in quite a hurry to understand both what happens mechanically and how to treat these patients: when to intervene, how long the treatment lasts. All of these are still open questions, and detecting wear and tear of the cartilage in a hip joint is really quite important. So we contribute to that by measuring things that happen in the
cartilage. We can look at physical changes in the cartilage: we can tell that it's worn out more on the inside of the joint and not so much on the outside, or the other way around. That's with what's called a morphological MRI: a water-saturated MRI gives us a shape, and we can then analyse the properties of that shape, such as the thickness. Or we can take a T2 MRI, another flavour of MRI; we turn the knobs on the machine and take a different scan of the same body part of the same patient, and even where no wear and tear can be detected physically, the T2 scan gives us biochemical information about the cartilage, so we can detect where the cartilage is likely to dry out and wear later. These are the sorts of things we get just out of the maths; we're interested in the maths, but our interest is fed by these clinical applications. We also look at ultrasounds. These are ultrasounds of babies who have a displacement in their hip: they were born with the femur not quite in the socket, so the femur can become dislocated. It's a condition that can be treated very easily if it's detected in the first week or two of life. At the moment in the UK not all babies receive this scan: there is a clinical consultation, maybe a midwife examines the baby, but the scan is not offered routinely. If we can bring this procedure down to something automated and cheap, it suddenly becomes available to everybody, and we hope to catch more cases. Why would we want that? Well, the consequence of this condition not being treated in the first couple of weeks of life is that later in life people develop osteoarthritis, and then they come to us and we scan their cartilage and we go through all the other stuff; but on a more serious note, it's important to detect it early. So we do segmentation for all of these things: in order to do reconstruction, we need to take a scan and we need to segment it, and
if you've read a computer vision textbook, you might think of segmentation as this: you have an image, you find some contours, and you split the image into areas of interest. This is indeed what segmentation does, and some of our methods do exactly this. But there is also the other thing called segmentation, where you also have to label the individual parts that you're finding. If you read more modern texts on semantic segmentation, say in the machine learning literature, semantic segmentation is largely related to things like self-driving cars and interpreting complex scenes: you get a city landscape (there is even a database called Cityscapes) and you put a label on every pixel in that landscape, trying to find the meaning of that pixel in that image. This is great, and it's important; it's a task that we need to do more and more. But the accuracy doesn't need to be particularly precise. You don't need to know where the little toe of the pedestrian is before you say: there is a pedestrian in front of me, I am going to stop the car here. You don't wait for every last detail; you just have an impression that there is something like a pedestrian in front of you, and you make a decision on the basis of that very approximate segmentation. By contrast, if you're working with anatomy, you obviously want to know every last pixel if possible. In practice we don't; in practice, how good is good enough? We'll see, and it depends on the application, but we will have a more precise approximation of, say, organs or other features that we segment than we would in another context. So this stuff comes from data, and the data comes from scanners. You're probably used to seeing scans as a slice-by-slice presentation: you put the person in the tube and you make
a big noise, and whether it's a CT or an MRI, what comes out is a collection of these slices. You don't need to know much about anatomy to recognise that this is going sideways through somebody's head: here are the eyes, the person is looking towards my back, and you see the various formations in the brain and so forth. So we perceive these as a collection of slices, but mathematically what we're getting is a volume. We put those slices in a box and treat every pixel, or every voxel as it's called in this area, as a three-coordinate representation of some measurement that comes from the scanner. It can be an intensity; for CT scans you get a signal that is proportional to the density of the tissue; for MRI you get something else, such as how much water there is and whether that water is bound to protein, and so forth. So we're going to apply some algorithms to these volumes, label the voxels, and get our plastic spine. How realistic is my plastic spine? Well, we have to measure something, and in order to measure how realistic our segmentation is, we need to work out how the machine compares to a human. One of the most difficult and time-consuming tasks in this field is that a human really needs to sit down and annotate the data in the first place, so we need some form of human annotation. If we have one human, we don't know how accurate they were: maybe they're a radiology student, maybe somebody who's just playing, maybe a specialist. If we have more than one person, that's even better, because then we can look at the differences between them, work out where they disagreed, and maybe allow the machine to be less sure around those areas. So here we have two raters, and one of them has selected the kidney and everything inside it. What you're seeing here is a slice through somebody's abdomen
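As an aside, the two quantities this kind of comparison rests on, volume from voxel counts and overlap between two raters' masks, can be sketched in a few lines. This is a toy example with made-up masks and voxel spacing, not our actual pipeline:

```python
import numpy as np

# Two raters annotate the same tiny 4x4x4 scan; True = kidney.
rater_a = np.zeros((4, 4, 4), dtype=bool)
rater_b = np.zeros((4, 4, 4), dtype=bool)
rater_a[1:3, 1:3, 1:3] = True   # 8 voxels
rater_b[1:3, 1:3, 1:4] = True   # 12 voxels: includes the dark region inside

# Each voxel has a physical size; slice thickness often differs
# from the in-plane resolution.
spacing_mm = (1.0, 1.0, 2.0)
voxel_mm3 = float(np.prod(spacing_mm))   # 2.0 mm^3 per voxel

# A volume estimate is just the voxel count times the voxel size.
vol_a = rater_a.sum() * voxel_mm3        # 16.0 mm^3
vol_b = rater_b.sum() * voxel_mm3        # 24.0 mm^3

# Inter-rater agreement via the Dice overlap: 2|A ∩ B| / (|A| + |B|).
overlap = np.logical_and(rater_a, rater_b).sum()
dice = 2.0 * overlap / (rater_a.sum() + rater_b.sum())  # 16/20 = 0.8

print(vol_a, vol_b, dice)
```

The same Dice score, computed between a machine mask and a human mask, is one of the evaluation measures discussed below; the inter-rater score sets the bar the machine has to clear.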
but you don't need to know the anatomy; you can just look at the contours. This rater decided that whatever that black region inside is should be part of the selection, and this rater decided that it shouldn't, and our machine will have to make a decision one way or the other. More seriously, if we have two raters and they disagree, then we can state that the machine is accurate enough (this is one measure of "enough") when the disagreement between the machine and either human is measurably no bigger than the disagreement between the humans themselves. So if we're doing something really precise, and if we have the luxury of multiple raters, that's one thing we can check. There are a variety of evaluation measures, and what we evaluate varies according to the task, according to what we have been able to identify. One of the most common things we do is look at where there is agreement. Here you are seeing, in the first column, what the human has measured: the ground truth. The ground truth is something we have to believe to be true; we know it's not perfect. If we knew it were perfect, we could call it the gold standard; we don't, so we just say this is some ground truth, some way of saying this is where the feature is. The machine also produces some output. Where they coincide, that's a true positive; where the machine has given us pixels that the human hasn't, that's a false positive; and you can work out the interpretation of true negatives and false negatives. We then have different mathematical ways of combining these into different evaluation measures and working out some percentage of success. So that's the classical thing that we do. And funnily enough, in the literature, people who work in this area but don't necessarily have a link to a clinical setting are still successful at developing algorithms, and there are measures
that we can all apply, and we can all compare our algorithms against each other: we can tell whether my algorithm is better than yours just by picking one of these evaluation measures and saying, I achieved 92.5 percent whereas your algorithm only achieved 89 percent. So there is a whole body of literature in the computer science and imaging fields that does exactly that: people download a database and some annotations, they run their algorithm, and they try to outperform the measures that have already been published. That's what we try to do too, and to a first approximation I think that's what we can do most reliably. So I'd like to talk about a couple of ways in which we do this, and before we look at the details, I'd like to explain the metaphor that we use. All of these scans give us an intensity. If we have a slice through the scan (once again you're seeing a slice through somebody's abdomen; you've now seen it enough times to recognise it), there is an intensity associated with it. If you turn that into a terrain metaphor, you have some x-y coordinates, and the intensity can represent an altitude; and then various algorithms exist that will process that terrain. We are aiming to find the peaks, or the troughs, or other features of that terrain that correspond to the boundaries between the different structures. I'm showing everything in 2D because it fits on the screen, so that 2D slice turns into a three-dimensional terrain map. But as I showed you earlier when I talked about acquiring the data, we acquire a three-dimensional volume, which, if we also take the intensity into account, turns into a 4D terrain. I apologise for not putting a 4D terrain on the screen; it's hard to visualise, but it's exactly the same idea. So whenever I point to x and y, you can in your head make the transition to a higher
dimensionality, and those of you who work with even higher-dimensional data can do the same in higher dimensions. You can always increase the dimensionality of a problem if it helps you solve it; that's one interesting thing about this metaphor. The other interesting thing is that instead of working with the terrain itself, we can look at where there are cliffs. What we're interested in is working out where the boundaries are between the features: we go to the foot of a hill and we say, look, there's a cliff here for me to climb, that's an interesting boundary. And that's something we can detect a priori, because if we take the gradient in that direction (a simple mathematical calculation) we already know that there is a cliff there. So as a pre-processing step, for all the data, instead of working with the terrain we work with the gradient of the terrain, and we feed that into the processes that we then trigger. And then we start flooding it. You're very lucky here, because you're at fairly high altitude; in Oxford we're very close to sea level and we get a lot of rain, so here are pictures that I've actually taken in Oxford myself. What you do is take two of these troughs, or all of these troughs: you start in two different valleys and you start flooding, and when those two lakes accumulate and meet to form a single one, they will always meet across a ridge. So if we can stop the process at that stage and detect the pixels involved in that ridge, those pixels will correspond to boundaries between regions of interest; we get exactly what the initial computer vision interpretation of segmentation was. We simulate this flooding, and we can stop it at any point: if we stop at different points in time, we get the shallower cliffs first, the shortest cliffs first, and then as the process progresses we will get bigger regions
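The flooding process described here is essentially the classical watershed transform. As a rough sketch, here is a toy one-dimensional version written from scratch (real implementations run on 2D and 3D gradient images and use priority queues; the function name and terrain values here are made up for illustration):

```python
import numpy as np

def watershed_1d(terrain):
    """Toy watershed on a 1D terrain: seed a lake at every local
    minimum, flood positions in order of increasing altitude, and
    record the positions where two lakes meet (the ridges)."""
    terrain = np.asarray(terrain, dtype=float)
    n = len(terrain)
    labels = np.zeros(n, dtype=int)
    nxt = 1
    for i in range(n):                       # seed each local minimum
        lo = terrain[i - 1] if i > 0 else np.inf
        hi = terrain[i + 1] if i < n - 1 else np.inf
        if terrain[i] < lo and terrain[i] < hi:
            labels[i] = nxt
            nxt += 1
    ridges = []
    for i in np.argsort(terrain, kind="stable"):  # flood lowest-first
        if labels[i]:
            continue
        neigh = {labels[j] for j in (i - 1, i + 1)
                 if 0 <= j < n and labels[j] > 0}
        if len(neigh) == 1:
            labels[i] = neigh.pop()          # the lake grows
        elif len(neigh) > 1:
            ridges.append(int(i))            # two lakes meet: a boundary
            labels[i] = min(neigh)
    return labels, ridges

altitude = [3, 1, 2, 5, 2, 0, 4]   # two valleys separated by a peak
labels, ridges = watershed_1d(altitude)
print(labels.tolist(), ridges)     # [1, 1, 1, 1, 2, 2, 2] [3]
```

The two valleys become two labelled regions, and the single ridge position (the peak at altitude 5) is exactly the boundary the flooding is meant to find.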
, bigger lakes, and bigger lakes still, and it progresses into a coarser partition, so the regions we get are larger at each stage. The other interesting thing about this is that several of those little regions get combined into a larger one, so there is in fact a hierarchy. Those of you doing computer science will think of this as a tree: a large region has a number of descendants, all of which are geometrically part of it, and the process is refined further down. So this is one of the data structures with which we work, and because the cliffs are natural cliffs in the image, what happens when we form those lakes is that eventually this data structure moulds itself around the contours of the features that we actually want to detect. You can see here, staying with the kidneys, that this is the kidney and that large thing is the liver; you're seeing some bright white regions where the ribs are, and you might see the profile of the vertebra in the middle of the image. So we are getting a natural representation of the anatomy. Even this alone helps a user: given this data structure, the user can click in some of these regions of interest and select the ones they think are useful, and this gives them a representation that then turns into a volumetric one. So the simplest thing is just to have the right data structure, and this already helps the user to segment and interrogate the data in ways that were not possible before, in ways that contouring alone does not offer. What I'm showing you here is a contour, but as we agreed earlier, in your head you're thinking of a three-dimensional structure, and from that you can get the volume. So this is how the application I talked about initially worked, where we
have part of a kidney, and this technique is enough to give you its volume, just by adding up the space occupied by the features of interest. So that's one thing you can do. Yes, it's an algorithm that will climb up the cliff, so I don't think I'd call it dilation necessarily. It's a way of saying: I am at a local minimum, I'm going to look around me and see where I'm climbing, and I'm going to climb in this direction and that, and hopefully when I see over the cliff I'm going to shake hands with the lake that's climbing up the other side. I guess you can call it that. It's simultaneously propagating from every local minimum, so it's not necessarily a convolutional process. So that gives you the data structure. Now, without that data structure, you can dilate; maybe that's what you're thinking of. If you know roughly where your feature is (staying once again with the kidney; I should have brought a kidney, not a spine), you can start a process of growing the region, and this is not necessarily growing it in the data structure. Without that data structure present, you can start from a point and just expand it into a little lake, then a bigger lake, and a bigger lake still. This process is known as region growing, or fast marching, and you have one of the world's experts in fast marching sitting here in the audience. There are a lot of mathematical considerations to take into account in this algorithm. It's important to know where to start; at the time we were seeding it by hand, I think, though you can have an atlas-based seeding process. It's important to know how fast to go: if you're in a hurry and you're jumping over all these cliffs, you might jump over a ridge that is of interest to you. And it's absolutely essential to stop at some point: are you going to let it grow? If you let it grow
too much, it might grow somewhere you don't want it. So there are mathematical considerations to be implemented in all of these aspects: the speed and the stopping criteria are all essential. What you're seeing here is a way of overlaying some of the results of these algorithms onto data that you have by now become familiar with, and it does well enough. It also detects unexpected things. I remember talking about this procedure at a conference on a different topic, and somebody said: hang on a minute, what is this front doing there? The reason that happened is that there are fasciae there. The kidney isn't just floating in thin air; there is a structure, a membrane called a fascia, that holds organs in place, and what the algorithm was detecting was the cliff at the point where that fascia holds whatever other tissue is there. So we were inadvertently discovering other anatomical structures that we hadn't even considered at the time. If we do this in two dimensions, we get the pictures you've seen; what we do in practice is build the region hierarchy and do the seeding in all three dimensions, so it can be a four-dimensional process (the three dimensions of our volume of data, plus one that represents the intensity), and grow from there. We can do it a pixel at a time or a region at a time, with different criteria and slightly different results. And then there are other mathematical considerations to apply at the end. If we end up with little gaps (I don't know how visible they are; the liver here maybe isn't completely filled), we can complete that with morphological operations; we can just say, if there is a gap, fill it. So that's completely deterministic. This work predates all the machine learning craze, it predates the existence of GPUs on a commercial basis, it predates all the modern technology that you might be
reading about now, but it worked well enough to feed into our clinical applications, and we got clinical mileage out of applying these algorithms. The clinicians had their own agenda, and we were delighted to contribute to it, but for us the excitement was that we were working with second-order derivatives and so forth; so the maths that you are being made to do, hold onto it. Of everything I'm saying today, supervised machine learning is perhaps the one thing that even the average person in the street knows about. You show the computer a number of examples (this is the training stage; you go through maybe hundreds, maybe thousands of examples of the thing it needs to see), and then you show it something fresh, and it gives you a prediction of where the thing you're looking for is. In our case, the algorithm looks at a mask. What you're seeing here is somebody's knee, knee facing that way, person facing that way, with the femur and the tibia, and because it's an MRI of a particular kind you're seeing the bones in black. No disease present; it's just the colour that that flavour of MRI produces. We've done different things with this. This crescent shape is the cartilage on top of the bone, which is of interest to us because of osteoarthritis, and this other shape is a cross-section through the femur. We generate enough of these masks, and this is the hard work: if we need to show the machine hundreds or thousands of examples, that's hundreds of hours of somebody's life. This is the poor soul, the poor medical student, who sits in the basement and draws contours and gets more and more tired. It's very expensive, financially, but even more so in the time and effort taken to produce these masks. So if we have these masks, then supervised learning does very well. The problem is that we don't usually have that
many masks, and the more we ask the clinicians, the less interested they are, because they have the clinical work to do; they don't necessarily have the stamina to be doing these things. I say here "design a learning model", but what you can do, if you have an interest in this, is take a model off the shelf: there are public data sets, there are models on shelves (there are shelves), and you can try it and see what you get just by applying the technology, and then this will hopefully get you thinking about improvements to the algorithms themselves. So we've tried this on a number of things. We tried lungs; when the COVID data started coming out, we did it with COVID data; we got different levels of accuracy on different things. Apparently people are interested in the back of the eye. I don't know why they need to apply machine learning there; the back of the eye is pretty round, and I think you could probably set a first-year project student to detect it. But anyway, we've tried it, and it works very well. What you're seeing here is also the transition from a single terrain to multiple terrains: the grayscale case is a single intensity, but for a colour image you get red, green and blue channels, so you have three different terrains, and then you have to do some work to combine the information you get from those channels. Nevertheless, you're seeing here the back of an eye, and some cells that we have selected; well, that somebody has put on a slide which we then photographed: it's the adrenal glands. So there are multiple interesting applications of this stuff. But what do we do when we don't have the clinicians, when there aren't enough masks for these things? We have to devise another way of going about it, and if we don't have enough clinicians but we have enough data, we can consider doing something called semi-supervised learning, which is that we have a large data set but only a few labels for it. Again, there are various models out there; people are trying more and more complicated things: they pool the data, generate something, and it comes back; there are models with equal-weight agents, and there are hierarchical, student-teacher-type setups; a variety of conceptually different ways of tackling the problem. But ultimately what happens is that the labelled part of the data set informs the learning process in the usual way, and then the unlabelled part kicks in at some point and generates some pseudo-labels, and we have to have a way of retaining or discarding those according to criteria that are no longer based on masks. If I have generated a label and I have the real label for that image, I can just compare them; if I don't have a real label, I need some other criterion for tackling that. Again, it can be based on similarity with the existing labels, or on other properties, such as the uniformity of the intensities; different criteria. And it's obviously not just us. What you're seeing here are our evaluation measures (the true positives, false positives, all that stuff we talked about at the beginning) when 10 percent of the data has been labelled. And we've done exactly what everybody else does: we've looked at who else has worked with this data and produced algorithms, and we have a table in which we say, well, now our numbers are better. Great: we've improved it by a whole three percent. And that's roughly where the computer science field is at; this is the typical setup. I also haven't said: this is on a cardiac MRI data set, so if you don't recognise the shape of this blob, it's because you won't necessarily have looked at the heart from that side.
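One simple version of that retain-or-discard step can be sketched as follows. This is a minimal illustration, assuming a model that outputs per-pixel foreground probabilities; the function name and thresholds are hypothetical, and the criteria used in practice (similarity to existing labels, intensity uniformity and so on) are more elaborate:

```python
import numpy as np

def select_pseudo_labels(prob_map, low=0.1, high=0.9):
    """Turn per-pixel foreground probabilities into pseudo-labels,
    keeping only pixels the model is confident about; uncertain
    pixels are masked out and contribute nothing to training."""
    keep = (prob_map <= low) | (prob_map >= high)   # confidence gate
    pseudo = prob_map >= 0.5                        # hard label
    return pseudo, keep

# Pretend model output on four pixels of an unlabelled image.
probs = np.array([0.95, 0.55, 0.05, 0.40])
pseudo, keep = select_pseudo_labels(probs)
print(pseudo.tolist())  # [True, True, False, False]
print(keep.tolist())    # [True, False, True, False]
```

Only the first and third pixels survive the gate, so only they would be fed back into training as pseudo-labels; the two uncertain pixels are left for a later round, once the model has improved.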
So we can improve how well these algorithms perform on a data set, or we can reduce the amount of labelled data that's required. What you were seeing earlier was the case where 10 percent of the data has been labelled; ours is the big thick line at the top, and we have improved it by three percent or so. What we are measuring in this graph is what happens when we try the existing algorithms, and ours, with five percent of the data labelled rather than ten (we are doing better), and then with one percent of the data labelled. For a very large data set this works well, and you can see that we're still getting seventy-something percent overlap. Sometimes that's good enough, sometimes it's not, but what we were getting with the other algorithms was not really usable. Whether seventy percent is usable is a different question and depends entirely on the clinical setup. But what we're saying here is that if one percent of the data is labelled and the data set is large enough, then we are getting somewhere. Clearly, if you have a hundred points in your data set and you've labelled one, you're not going to get seventy percent accuracy. So this also raises the question of having vast amounts of unlabelled data, and this is the case for semi-supervised learning, and also for weakly supervised learning, which I don't propose to cover today. In weak supervision you have labels for a small amount of the data, but the labels aren't even masks; they will be just some squiggles, and there is a whole theory around that. And I guess it's interesting to evaluate what is good enough; what are these things for? If I go back to my femoral bone (here's the cartilage, you can see it in the pictures), what we've done here is the study that looks at the health of the cartilage: the biochemical
composition of the cartilage, based on a T2 scan. We are producing a traffic-light system whereby in red we're saying that the T2 value, the marker of the biochemical composition, is low, meaning the cartilage is essentially on its way to drying out, and then we work out where on the femoral bone that region is located spatially. For that we don't need a perfect segmentation; it's more important to know where the tissue is and that we've captured most of it. What you're seeing here (I only have the bone in this picture) is the at-risk area of the cartilage overlaid onto the bone, so that the surgeon can work out exactly where it is. They go from a slice-by-slice view to a physical model they can hold in their hand and use to plan their surgery. If the surgery is radical, say a hip replacement, they don't necessarily need to know this, but for keyhole surgery they need to know where to enter the site and how to act on the cartilage to remedy the condition.

So this leaves us with the question of whether we are going to let machines do all this work or not. There have been multiple headlines in the daily press where the machine has beaten humans on some diagnosis; there's one on skin lesions that keeps going around, and I think we're going to see more of this, and people will embrace it as part of their reality. But however closely we look at those articles, and even at the published work, we are never told exactly what the machine was allowed to see and what the human was allowed to see: did they each have other information about the patient, or did they just make the decision from the image? I saw one recently on femoral fracture where the machine was only allowed to look at a subset of
the pixels, or some such, and then the humans were also only allowed to look at the same subset, because that was deemed a fair comparison between the human and the machine; the human didn't get to see outside that small subregion either. Again, this is more of an ethical and philosophical debate: is this a fair comparison, who beats whom, and what are we measuring? But where we are heading is this: if we claim that the machine has produced anything on a par with or better than humans, then the clinicians need to understand exactly why, when, and where the machine took those decisions. Explainability is an up-and-coming field, and rather than just reporting true positives and true negatives, we will increasingly see the medical profession asking to be shown the similar examples on the basis of which our machine has made its decision, or whatever other criteria it has taken into account. So it's important for us to build models which are mathematically robust, and it's important for us to explain their decisions, and for that I would like to stay in a world where we understand our models and don't come up with something so complicated that we can't then unpick what has happened. That's about it, thank you very much. Evidently I don't do all of this myself: Varduhi has played an important role in a lot of this work, and there are many other students and research assistants who have also contributed. Thank you. Any questions?

We tried to reduce the noise by smoothing the image in the first place, so there are some steps I haven't discussed: if you apply something like a Gaussian filter or anisotropic diffusion filtering (there are all these different things you can apply), then some of the small bumps will disappear, but ultimately we just have to accept that the information will not be smooth. There are no thresholds involved as such; we just say we want five layers in this hierarchy, or 15, or
whatever. So yes, any difference will be a difference, and the gradient will tell you how steep that difference is, and that's usually enough for the climb. To stay with the flooding metaphor, the water will still go up; sometimes it goes up more slowly where the terrain is shallow and sometimes quickly where it's steep, and if two basins have a common ridge somewhere, they will meet regardless of the speed at which they have grown. What I think is also important to note is that different modalities will have different gradients in some of the directions. For a CT scan, for example, you get very thick slices: a CT sends an X-ray through the patient to produce each slice of the volume, X-rays in the physical, electromagnetic sense, which irradiate the patient, so we can't afford to acquire very fine variations. Everything is an approximation, in the sense in which you originally asked the question. An MRI scan is also an approximation over each small cube: the signal is approximated and then interpolated across that cube. Everything is just an impression of the anatomy that is there, and so if in the dimension of the slices we get some really rough terrain, we just have to accept that that's the best we can get. Thanks for your question.

Hi, you said in your lecture that MRI could be very accurate. Would it be possible to combine MRI semantic segmentation with, for example, magnetic resonance spectroscopy, which has very high precision and accuracy, so that we could see the accumulation of cross-links in the brain, which accumulate with age (there are, say, 30 or 40 types, and they are very small, one to two nanometers in size), and differentiate these types of cross-links in a colored image? So, what is the acquisition like for the spectroscopy method? Is it a resonance acquisition?
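As a toy illustration of "the gradient tells you how steep the difference is", here is a one-dimensional terrain sketch in NumPy; the profile values are made up:

```python
import numpy as np

# Toy 1-D "terrain": an intensity profile across one scan line.
# The gradient magnitude is the steepness the flooding climbs:
# shallow slopes are climbed slowly, cliffs quickly, but water
# rising past a common ridge merges the basins either way.
terrain = np.array([0., 1., 2., 3., 10., 30., 31., 31., 30., 12., 4., 2.])
steepness = np.abs(np.gradient(terrain))   # central differences
print(int(steepness.argmax()))   # 4: the steepest "cliff" in the profile
```

A watershed implementation works on the full two-dimensional (or three-dimensional) version of this steepness map rather than on a single profile.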
Yeah, but where does the patient sit? Is it a separate acquisition step? Do you put a hat on the patient and then put the patient in the MRI scanner, or can you acquire both at the same time? What I'm thinking is that if we can acquire them at the same time and overlay them, then we have spatial co-location of the information. If we acquire them at separate times, then the first thing we need to do is align the two structures, and if we know how to align them, then yes, we can combine the information. But this alignment, which in the field is known as registration, is a really hard problem, and I think the jury is still out on how to do it in a way that matches, because there will be a difference in scale, some deformation, and all that. I don't know enough about the data acquisition for spectroscopy, but if we had a way of overlaying that information onto the structures we already have, then absolutely. In some sense fMRI, functional MRI, does something like that, except everything is co-located: we get the signal and it's all there, as four-dimensional information, I think. But yeah, let's do it.

Yeah, so if you think of it as a grayscale image (I don't know whether you're a computer scientist, but I'll assume you are, and you can tell me): you're getting the slice, and what you're getting from the scanner is this picture here. You don't get it quite so blurry; this is blurry because we're looking at a sized-down version, we've simplified it. For each x and y coordinate you get a grayscale value, say 200, and then you say that you are 200 meters above sea level at that pixel; the pixel next door might be at 254, anything between 0 and 255. So it's a collection of bars that tells you the altitude at every pixel, and you can run the algorithm on that, I'm
not saying you can't, but because of what we were discussing earlier it's quite bumpy, so we get better results if we do one more step, look at where the cliffs are, and then work on that gradient image rather than on the raw image. Does that answer your question? What happens in practice is even more complicated than that, because from the CT scanner you get on the order of 3000 values, and there aren't 3000 grayscale values. Sometimes we work with the raw Hounsfield units that come from the scanner, but that is too much, so if we know what we're looking for, say a kidney or a liver, we know that the information we want sits roughly between 70 and 90 Hounsfield units, if I remember correctly. So we keep everything that has come from the scanner between 70 and 90, and we compress everything else: the values go up to around 2000, and we can't fit the remaining roughly 1900 values between 90 and 255, but that's okay, because they will not be as interesting to us, so we compress them into the remaining grayscale values at the top. Then we do the same at the bottom: the values start from around minus 1000, so again we have over a thousand values to compress into the 69 levels below the window, but that's also okay, because that's not tissue of interest for detecting this soft tissue. For MRI it's even harder, because the intensities that come out of MRI don't represent anything we can intuitively understand; they're just numbers, and we need to work out how to use those numbers and turn them into something of interest.

Sorry, what about ultrasound? I don't think I have any ultrasounds in this talk; I'll put some into tomorrow's talk. There you go, this is like advertising: I can promise a lot of ultrasounds. Actually, this is ultrasound, and it's ridiculously noisy. What you're seeing here, this bright stuff, is the bone,
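The windowing described above can be sketched as follows. This is a minimal illustration, not the group's actual code: the idea of keeping the 70-90 HU window and compressing everything else follows the talk, but the scanner range of -1000 to 2000 and the interpolation choices are my own assumptions:

```python
import numpy as np

def window_to_uint8(hu, lo=70, hi=90):
    """Compress raw CT values (Hounsfield units) into 0-255 grayscale.

    The window of interest [lo, hi] is kept as-is; everything below lo
    is squashed into levels 0..lo-1 and everything above hi into the
    levels above hi. The -1000/2000 scanner range is illustrative.
    """
    hu = np.asarray(hu, dtype=float)
    out = np.empty_like(hu)
    below, above = hu < lo, hu > hi
    inside = ~below & ~above
    out[below] = np.interp(hu[below], [-1000, lo], [0, lo - 1])
    out[inside] = hu[inside]                      # window kept unchanged
    out[above] = np.interp(hu[above], [hi, 2000], [hi + 1, 255])
    return np.clip(out, 0, 255).astype(np.uint8)

print(window_to_uint8([-1000, 80, 2000]).tolist())   # [0, 80, 255]
```

Most of the discarded contrast sits in air and dense bone, which, as noted above, is not the tissue of interest for this soft-tissue problem.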
so it's the pelvis bone, but this time the baby is lying down; these are one-week-old babies, so they lie down that way. And then there's this darker, rounder thing. It's obvious to us that it belongs together, because we're good at knowing which bumps to ignore, so although the image is very noisy and has those white specks, we can see that it is a single structure: the femoral head, this thing here. But if we try to segment it, it's harder. What you're seeing in the middle is what the human has segmented (I'm only showing one human), and on the right is what the algorithm has segmented. For this problem we're not interested in the overlap: it's okay that there are gaps at the bottom, and this one has probably missed quite a chunk. What we're doing instead is fitting a horizontal line along the bone, and we want to know whether that line goes roughly halfway through the femoral head, in which case the femoral head is safely lodged inside the pelvis. If more than 50 percent of the femoral head is above that line, it means the pelvis has not developed yet. The treatment for that is very easy: the baby is immobilized or placed in a particular position in their nappy; sometimes they wear a harness, but a lot of the time it can be corrected just with how the nappies are worn. It really is that easy. So the question is whether we should be allowed to scan all babies. Again, translating this process into the clinic is a whole different can of worms, because we need all the regulatory approvals; we can't just go to the hospital and say, right, we've solved your problem, do it tomorrow. It's much more complicated than that, and adoption will take three or four or
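The halfway-line check can be sketched like this; a toy illustration with a made-up mask and line position, not the actual measurement code:

```python
import numpy as np

def fraction_above_line(head_mask, line_row):
    """Fraction of femoral-head pixels lying above the horizontal line
    fitted along the bone (smaller row index = higher in the image)."""
    rows = np.nonzero(head_mask)[0]
    return float((rows < line_row).mean())

# toy mask: a 6-row-tall femoral "head"; the bone line sits at row 5
head = np.zeros((10, 10), dtype=bool)
head[2:8, 3:7] = True
frac = fraction_above_line(head, line_row=5)
print(frac)           # 0.5: exactly halfway, safely lodged in the pelvis
print(frac > 0.5)     # False; True would flag possible hip dysplasia
```

This is why a rough segmentation can suffice here: the fraction is fairly insensitive to small gaps at the boundary of the mask.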
five or more years. But if we could get this implemented in every clinic, if we could allow the entire infant population to have an ultrasound, what we're aiming to do is say which infants can be discharged without further intervention. And we want to be conservative here: we definitely want the true cases to be caught, and if a case is marginal, maybe 50 percent, maybe 49 percent, we still keep them under observation; we maybe call a consultant and get the consultant to look at other factors, such as family history and all the things they know clinically about the patient that we don't see when we look at the images. What we want is for the algorithm to be conservative, to discharge an infant only if we can be absolutely sure they will not develop osteoarthritis too early in their life, and we do this at the cost of increasing the number of infants retained for consultation. So the number of false positives can be higher: to a marginal infant we can still say, well, maybe you have dysplasia, let's look at you for another week. If you're looking at a different problem, say breast cancer, that's not necessarily a good strategy. Giving somebody a false positive, telling somebody they have breast cancer, has a major impact on their psychology; they worry, and you might subject them to more invasive procedures such as a biopsy. So a false positive has to be administered with much more care there than in this problem. That's clearly my signal that I've spoken too much. Thank you very much.
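The conservative operating point described here amounts to picking a decision threshold that keeps sensitivity at (or near) 100 percent and accepting the extra false positives; a toy sketch with made-up risk scores, not the study's actual decision rule:

```python
import numpy as np

def conservative_threshold(scores, labels, min_sensitivity=1.0):
    """Pick the highest risk-score threshold that still catches at
    least min_sensitivity of the confirmed cases; everyone scoring
    at or above it is kept under observation."""
    positives = np.sort(scores[labels == 1])
    # index of the lowest-scoring positive we are still required to catch
    idx = int(np.floor((1.0 - min_sensitivity) * len(positives)))
    return positives[idx]

# toy risk scores: label 1 = confirmed dysplasia, 0 = healthy
scores = np.array([0.95, 0.80, 0.65, 0.60, 0.40, 0.30, 0.20])
labels = np.array([1,    1,    0,    1,    0,    0,    0])
t = conservative_threshold(scores, labels)   # 0.60: catches all 3 true cases
flagged = scores >= t
print(int(flagged.sum()))   # 4 infants retained: 3 true cases + 1 false positive
```

For a screening problem like this the extra retained infant is cheap (another week of observation); for breast cancer, as noted above, the same trade-off would be far more costly.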