I'm pleased to meet you, and I would like to start these couple of hours we spend together today with a first series of considerations on multimodal image integration, which is the first topic we have to discuss together. The second one will be motion management, and as you will see, there are many things that interplay between the two topics. So let's start with this one. Before going to more technical considerations, I would like to point out the difference between image integration and image registration, because it will be a very important part of our considerations, especially later in the second part of this first talk. By image integration, we mean the use of two or more image sets in a process such as treatment planning: simply using CT together with MR, for example. The physician can see the CT on one screen and the MR on the other, and make his or her considerations on how to put them together. This is not what we mean by image registration. Image registration is a mathematical concept: the process of making two or more image sets, which are generally three-dimensional, spatially coherent to each other, to be used, for example, for building a target for treatment planning. Another term which is very often used in this field is image fusion, which is very often confused with image registration. To clarify, at least for today, though it is quite common in the scientific literature: by image registration we mean the mathematical process that puts the images together; by image fusion we mean the simultaneous visualization of the two or more image sets, that is, how we look at them.
The fusion can be one image at the left and the other at the right, with a common pointer that moves together on the two images, or one image blended with the other so that we see them superimposed. So by fusion we just mean how we display the two images together, one overlaid on the other; by registration we mean the mathematical process of making two or more image sets spatially coherent to each other. What are the main modalities that we use for treatment planning? Of course, computed tomography is still the most important, and we will see in the next slide why this is still so today. Then another important modality is magnetic resonance, which is increasingly being used in radiation therapy in the treatment planning process, and also PET-CT, which has been increasingly used since around the year 2000, when PET-CT actually entered clinical applications. In some cases we also have ultrasound and other emerging modalities such as PET-MR and so on. But for the considerations we will make together today, we will always consider computed tomography as the primary data set, and then magnetic resonance and PET-CT. Why is CT still the most important modality for treatment planning? For these three reasons. First, CT is the tomographic modality that offers the best spatial accuracy, which means freedom from significant distortion and so on; geometrical accuracy is a matter of fact for CT. Second, CT is made up of a map of attenuation coefficients, which are very useful in most models that allow us to compute a dose distribution: many mathematical models that we use to compute the dose, and so to build a dose distribution, use a map of attenuation coefficients, which is provided by CT and not directly by MR. And then, especially in current, modern applications, there is a third reason.
The third reason is that in-room verification systems today are quite often based on X-ray transmission imaging, for example on cone-beam CT, which is a tomographic modality very similar to CT and very easy to register to CT. This is the third reason why CT continues to be of primary interest today in radiation therapy. But of course we know that MR can offer us details that are not seen in CT. For example, here we see a comparison between CT and MR in the prostate region, and we can easily see how soft-tissue details are more visible in the MR than in the CT, which is well known today. What is the mathematical difficulty of putting together two images like these? It is the fact that there is no direct correspondence between grey levels in the two modalities. Look for example at the bone details here and here, or at the soft tissue, the muscle, the fat, and so on: there is no direct, obvious correspondence between high levels in one image and high levels in the other, or low levels in one and low levels in the other. This makes it quite difficult to put the two images together in a mathematical way, and we will see in the second part of this talk what the most common tool used today for this registration is. We also have to keep in mind that MR is actually a multi-modality problem in itself. We do not have a single kind of MR: we have T1 and T2 weighting, for example, which correspond to really different imaging modalities. T2 enhances fluids, water in particular; T1 enhances muscle and fat, and so on. So even within what we call MR, we have images of very different nature, and this is once again a co-registration problem, even between one MR modality and another. And of course, this difficulty is even greater if we take into account functional information, which can be provided by MR imaging.
For example, by means of activation maps from the BOLD effect like this one, or from diffusion-weighted imaging, or even from spectroscopy and so on. These modalities are very difficult to put together with CT because they are characterized by low spatial resolution and very often by a low signal-to-noise ratio. And if we think, for example, of functional MRI, this is often reported on anatomical atlases for reference, which are not directly superimposable on the real anatomy: they are distorted with respect to it, and connecting them to the real anatomy of the patient is a much more difficult problem. In general, with these modalities, the registration to CT might be difficult because of the poor common information contained in the two data sets, and we will see how to put them together. Today, MR is much more than morphology and function in the classical sense. There is a common trend towards the multi-parametric use of MR imaging, which means the inclusion of not just morphology but, for example, maps of the apparent diffusion coefficient together with maps of the concentration of metabolites seen by spectroscopy, and so on. These are usually not employed in the treatment planning process, not yet at least. But if they are, they definitely need special attention because, once again, they are not directly comparable in terms of distortions and superimposition on the anatomy of the patient. When we deal with MR to be co-registered to CT, we often deal with the brain. For the brain, in my opinion, there is no question: we strictly use rigid registration. We do not deform one image to match the other, because in the brain we do not have significant deformations to be taken into account. So it is simpler, but above all it is more robust and much safer to use just a rigid transformation. And we will see later a few examples of why deformable registration can be very dangerous.
In the brain, there is no question: we use rigid transformation. That means we need to compute six parameters, six numbers, which are the three translations and the three angles of rotation that we have to use to move the moving image so that it is superimposed on the fixed, target image. So the outcome of a rigid registration will be a series of six numbers: a translation in the lateral direction, a translation in the anterior-posterior direction, and a translation in the superior-inferior direction, plus three angles of rotation around these same axes. But consider that even in brain applications we quite often have to deal with situations like this one here, which includes a portion of the neck. If we have a portion of the neck beyond the first cervical vertebra to be taken into account, we might have a deformation in that part, so a correction is needed. Very often we have to use tools like clip boxes, or something similar, that allow us to limit the registration to the region that we are sure is not going to be deformed with respect to the other modality. So, for example, if the patient is scanned in the MR with the head in extension like this, and in CT we use a mask and the head is positioned like this, we have to be sure when we perform the registration that the region we take into account is just the head, and does not include the cervical part, which can be very deformed with respect to the other modality and can lead to a mathematically wrong result for the registration.
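Those six rigid parameters can be sketched as a single homogeneous 4x4 matrix. This is a minimal illustration, not taken from any specific planning system, and the rotation order (z, then y, then x) is an assumption here, since conventions vary between vendors:

```python
import numpy as np

def rigid_transform(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 homogeneous matrix from the six rigid parameters:
    three translations (mm) and three rotations (radians) about x, y, z."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx      # one common rotation convention (assumed)
    T[:3, 3] = [tx, ty, tz]
    return T

def apply(T, point):
    """Apply the rigid transform to a 3D point (homogeneous coordinates)."""
    p = np.append(np.asarray(point, dtype=float), 1.0)
    return (T @ p)[:3]
```

For example, a pure 5 mm lateral shift maps the point (1, 2, 3) to (6, 2, 3), and a 90-degree rotation about the superior-inferior axis maps (1, 0, 0) to (0, 1, 0); a registration algorithm searches exactly this six-dimensional parameter space.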
Commercially available treatment planning systems, and other systems, quite often have this functionality already implemented today, and if you use registration between two modalities in the brain region, you must be aware of this and be sure that you can use the tools you have consistently. In any case, it is very important to obtain a similar initial orientation when we position the patient for the first scan and for the second scan, and to use patient positioning devices that allow us to start at least from a very good initial position, a good initial solution, so that we do not have to use very heavy deformations or tools that are quite dangerous to use. And of course, when you use a patient positioning device in MR, you also have to be aware of safety problems and pay attention to the MR compatibility of the system you use to position the patient. I'm not sure if this is going to work here; I'm afraid it's not. It was a movie showing a transformation that registers one CT to another CT. Maybe this one works, yes, it does. This is a movie that shows how a PET volume, in this case, is moved to be superimposed on the CT, which is the part you see here. As you can see, the registration, which is a rigid registration of the type we saw at the beginning, is very fast. I would say that ten years ago an operation like this one would have taken at least a couple of minutes. With modern registration software, at least for rigid registration, it is performed very quickly, and this allows you to take a trial-and-error approach, if you prefer, to see what the best solution is. PET is also important, as I told you before, for integration with CT in modern systems, not just MR. And PET is especially difficult because it is a modality that does not have very well-defined volumes or very well-defined surfaces.
So the problem of registering PET to CT is not trivial. But all modern scanners are in reality PET-CT scanners, not just PET. So the trick, which is mandatory in my opinion, is always to register the planning CT to the CT part of the PET-CT, which is an intra-modality problem, and then to apply the resulting transformation to the PET volume, which is transformed together with the CT it was acquired with. Another very important problem in the use of PET is the use of the standardized uptake value, the SUV, to define targets in radiation therapy, which is not trivial and requires strong standardization to be used without errors. Lesion motion is also a very important problem, which we will see in the next module. The use of the SUV to define biological volumes is difficult, as I said, because it is generally based on very rough algorithms. For example, if you use a fixed threshold for the definition of what is to be included in your biological target volume, like 2.2, which is a typical threshold on the SUV, you can get very different behavior between small and large lesions. Look at this image here. If you have a large lesion and you use a threshold like 2.2, you are going to get a boundary which is very close to the external boundary of the lesion. But if you use the same threshold on a small lesion, you are likely to get an underestimation of the volume.
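The large-versus-small lesion behavior of a fixed SUV threshold can be illustrated with two hypothetical one-dimensional Gaussian uptake profiles; the numbers below are invented for illustration, not measured data:

```python
import numpy as np

def suv_fixed_threshold(suv_map, threshold=2.2):
    """Mask of voxels included in the biological target volume by a
    fixed absolute SUV threshold, as in the simplistic rule above."""
    return np.asarray(suv_map) >= threshold

# Two hypothetical 1D uptake profiles on a 0.1 mm grid:
x = np.linspace(-30.0, 30.0, 601)                # position, mm
large = 8.0 * np.exp(-x**2 / (2 * 10.0**2))      # SUVmax 8, sigma 10 mm
small = 4.0 * np.exp(-x**2 / (2 * 4.0**2))       # SUVmax 4, sigma 4 mm

def extent_mm(profile, threshold=2.2):
    """Length of the profile segment captured by the threshold."""
    return suv_fixed_threshold(profile, threshold).sum() * (x[1] - x[0])
```

Taking 4 sigma as a nominal "true" extent of each lesion, the same 2.2 threshold captures around 80% of the large lesion's extent but only a little over half of the small one's: the small lesion, with its lower peak, is systematically underestimated.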
So some authors proposed in the past to use a percentage of the maximum SUV that you see within the image, which means that regardless of whether the maximum is two, three, four, or fourteen, you just take 40% of the maximum. This can help in some cases, especially if you have homogeneous lesions; but if the lesion is very inhomogeneous like this one, and you have, for example, a very strong hot spot here, 40% of that maximum always results in an underestimation of the volume. So be aware of these two simplistic algorithms that are usually implemented in treatment planning software today for using PET-CT in treatment planning, because they tend to underestimate the volumes, especially for small lesions and for inhomogeneous lesions. This is a problem to be really taken into account. We also have more refined algorithms that can solve this problem, for example algorithms based on the maximum gradient over the lesion, which means that I do not look at a single absolute value, or a percentage of the maximum, but at the point where the gradient is maximum going from outside to inside, and I define the boundary of the lesion there. This is a bit better than the other two, but still there is no recognized best-in-class algorithm for this operation. So just be aware that everything based on the SUV is still not optimal in any implementation we have commercially today. New algorithms are being developed, especially based on object recognition or classification techniques, but they are still at the investigational stage today. This is another movie that would show you this, if it worked; it doesn't, I'm sorry. I tested it today before leaving, but maybe I can show it to you separately later.
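The maximum-gradient rule can be sketched on a synthetic radial uptake profile (hypothetical numbers again). One reason it behaves better than an absolute threshold is that the location of the steepest gradient does not move when the whole profile is rescaled or sits on a uniform background, whereas a fixed 2.2 cut does:

```python
import numpy as np

def gradient_boundary_mm(profile, spacing_mm):
    """Radius of the steepest intensity change along a radial SUV
    profile: a sketch of a maximum-gradient delineation rule."""
    grad = np.gradient(profile, spacing_mm)
    return np.argmax(np.abs(grad)) * spacing_mm

r = np.arange(0.0, 30.0, 0.1)                  # radius, mm
sigma = 5.0
lesion = 4.0 * np.exp(-r**2 / (2 * sigma**2))  # SUVmax 4, edge near r = sigma
```

For this Gaussian profile the steepest gradient sits at r = sigma = 5 mm, and it stays there if the profile is tripled and shifted by a constant background; a fixed threshold of 2.2 would jump from about 5.5 mm to almost 11 mm under the same change.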
It was a movie showing how, in this case, a gradient-based algorithm is able to identify the boundary of the lesion without the underestimation problems we have just seen. Now, as an anticipation of the second talk, which is devoted to motion management: this is the problem you can get if you use the SUV to build a biological target volume in the presence of lesion motion. Here, for example, we see a very extreme case, the motion of a very small mass with a total excursion of roughly two centimeters. In this work, which we did in our center, we tested the optimal number of phases into which to divide the respiratory cycle; you will understand this better in the second talk. We saw that with no explicit control of lesion motion, we got a maximum SUV which was under two. And if you remember, 2.2 is often proposed as the threshold to define the volume, so in this case 2.2 is even above the maximum SUV we get if we do not control motion. If we control motion, by means of a gating technique in this case, we recover the SUV signal up to 4.7, 5.5, and even greater than six in some cases. So, in this part, the message is just: be aware of motion when you use these tools, because it can be very dangerous and misleading in the use of PET to define a biological target volume. And in fact, this is an analysis of what you get with the two simplistic algorithms we saw before. The blue line is the application of a fixed threshold of 2.2, with the estimated volume in the cases of no motion control and of gating; the red curve is what we get with 40% of the maximum SUV. As you can see, if we do not have any explicit control of motion, we get volumes which differ by a factor of ten, in this case, between one algorithm and the other. Once again, this is a very extreme case; it is not always like this.
But it is, in my opinion, very interesting to see what can happen: a volume which is ten times smaller if we use one algorithm with respect to the other. If we use motion control like gating, the two volumes tend to converge towards a similar value. But still there are cases like this one in which there is no trend towards a common value, which means that the two algorithms we are using, a fixed threshold and a percentage of the maximum SUV, are not good and not to be used in all cases, because there can be problems beyond motion itself. [Question from the audience.] No, no, this is a single coronal slice. It is not a maximum intensity projection, but a single coronal slice in four phases in this case. And now let's go to the second part of this first talk, which deals with the various methods we have to put two images together: the methods for image registration. Of course, spatial coherence between the two different images is, at least, thought to be a key factor for treatment success, so we have to be as accurate as we can in putting the two images together. This means that manual registration is quite often not a good tool, because we usually have to deal with three-dimensional data sets. It is not just putting two photographs together in 2D; it is a three-dimensional problem, which is very difficult to deal with if we just use manual translation and rotation of one volume with respect to the other. So we need an automatic method, and fortunately there are very good automatic methods for rigid registration implemented in modern treatment planning software today. As regards deformable registration, my personal opinion, at least, is that it is very seldom implemented well and always requires careful evaluation of the results. I will show you later a couple of examples of why deformable registration can be very dangerous.
Rigid registration, as we already saw, is described by six parameters: three translations and three rotations around the three axes. Deformable registration, which is this one, is usually locally rigid but leaves the volume free to deform with respect to the other one in any region of space. Between the two we have affine registration, which is a global registration: it can apply a deformation to the volume, but this deformation is applied uniformly over the whole volume. So we have scaling factors along the single axes, x, y, and z, and shear factors, which means an angle that is originally 90 degrees can be tilted like this, but over the whole volume. This is the affine transformation, which is described by twelve parameters: the six we are already familiar with, plus three scaling factors and three shear factors. Deformable registration is a different thing: it leaves any region free to deform with respect to the other one, generally with no regard for what is happening in other regions of the volume. This is the mathematical structure of a typical deformable registration algorithm; I will not go into its details, of course. It is generally composed of a similarity measure, this term here, which alone also constitutes the basis for rigid registration, plus a regularization term here, which penalizes improbable transformations. This is mandatory if we want to constrain the system not to deform in ways that are not natural or physically possible. I would like to spend a couple of minutes now to show you, and this is my personal interpretation if you wish, how similarity measures work.
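The "similarity plus regularization" structure can be written out as a generic schematic. This is a sketch, not any vendor's implementation: it uses sum-of-squared-differences as the similarity term (mutual information would replace it across modalities), the squared gradient of the displacement field as a smoothness penalty, and a hypothetical weight `lam`:

```python
import numpy as np

def registration_cost(fixed, moving_warped, displacement, lam):
    """Schematic deformable-registration objective:
    cost = similarity(fixed, warped moving) + lam * regularization(field).
    A jagged, physically improbable displacement field is penalized
    even when the images match equally well."""
    similarity = np.sum((fixed - moving_warped) ** 2)
    grads = np.gradient(displacement)          # spatial derivatives of the field
    regularization = sum(np.sum(g ** 2) for g in grads)
    return similarity + lam * regularization
```

With `lam = 0` there is no control at all, which is essentially the situation of the planning systems discussed below where the user cannot touch the regularization; a larger `lam` forces smoother, more anatomically plausible deformations at the price of a slightly worse image match.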
This is the joint histogram analysis, which underlies what can be regarded as the most frequently used algorithm for putting two images together: the maximization of the mutual information index. We will see it first from the point of view of analyzing images through their histograms, and then from a more mathematical point of view. I think it is important to go into some detail here, because this is really the universally recognized algorithm for putting together two images of different modalities like CT and MR. Once again, it is called maximization of the mutual information index, and we will come to that shortly. We start from a semi-qualitative point of view, if you wish. We have this image here, which is a CT, and we look at what happens in this region here, for example; the algorithm actually works on the whole image, but focus your attention just here. If we make a histogram of the values we see here, we have a lot of white, which is a high value: so we have a histogram which is high here, and low for the other values, because we do not have medium grey or black here. If we look at the same region, which is a bone region, in the MR and build the same histogram, we see that we have black here, which is a low value, with many occurrences, and few occurrences of higher values. So of course we cannot just put the two images together and tell the system to match white with white and black with black. Nor can we do the opposite: it does not work if we just say match black with white and vice versa. So what do we do? We put the two images together as they are, we focus on this region here, and we build a two-dimensional histogram for CT and MR, like this. This axis, for example, is CT and this is MR.
We still have high values with many occurrences for CT and low values with many occurrences for MR, but the two images are not put together properly: the superposition is not good, so we also have dispersions of values in the histogram, here and here. What the algorithm does is take random steps in the translations along x, y, and z and in the rotations around x, y, and z, recalculating at each random step whether this situation improves or gets worse. If it improves, it keeps going in the same direction; if it gets worse, it goes back in the opposite direction, until you get a situation like this one, with a perfect superposition of the two images and no dispersion of values around here. This is the basics, if you wish, of how we put together two three-dimensional images, even if they are of very different modalities like these. It can also be seen from the mathematical point of view, with the couple of slides I am going to show you. Consider image entropy, which is a measure of information; this is a very classical concept that I am sure most of you are familiar with. Take this as a very simple image made up of just five voxels. Each voxel contains a value which is always the same: three, three, three, three, three. This is a very predictable message: each step does not add any information to the previous one, because it is three here, again three, again three. So we say that this image contains the minimum amount of information. And in fact, if you make this calculation of the image entropy, taking p_i as the probability of getting the value three, or the value zero, one, two, four, five, and so on, you get exactly zero in this case. The opposite case is this one, where the image is very variable: you have one here, then five, then four. Here, each step adds information different from the previous one.
And in fact, if you make the same calculation to get the entropy of this image, you get a higher number. Then we also have intermediate cases like this one, in which we have a string of repeated values here and different values here, and we get an intermediate entropy. This is not yet a registration, of course; it is just how we measure the information within an image. But we need it to understand what follows. We define the mutual information index in this way, mathematically speaking: the mutual information index is the entropy, which is the information we have just defined, of image A, plus the information of image B, minus the information of the superposition of the two images. Now, in your opinion, does this superposed image have more or less information than this one, from the mathematical point of view? Not from the clinical or anatomical point of view. This one has more information: we have four eyes here and just two here, so to speak. So the misaligned superposition carries more information, and this joint information is what we subtract here. When we move the second image over the first one to get a good superposition of the two, the superposition adds the minimum of information; so we subtract the minimum, and the index becomes maximum. So what the algorithm actually implemented in modern treatment planning software does is this: at each random step of translation along x, y, and z, and at each random rotation around x, y, and z, the computer recalculates this index and decides whether the direction is the right one, or whether it has to go in the opposite direction, separately for each translation and rotation axis. This is the real algorithm implemented in the treatment planning software. Now, a couple of considerations on deformable registration.
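The entropy and mutual information definitions above can be written out in a few lines. This is a minimal sketch, using H = -sum over i of p_i * log2(p_i) over the grey-level histogram, and MI(A, B) = H(A) + H(B) - H(A, B) estimated from the joint histogram:

```python
import numpy as np

def image_entropy(values):
    """H = -sum p_i * log2(p_i) over the grey-level histogram
    of an image (here a tiny 1D 'image' of voxel values)."""
    _, counts = np.unique(np.asarray(values).ravel(), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=8):
    """MI(A, B) = H(A) + H(B) - H(A, B), estimated from the joint
    grey-level histogram of two equally shaped images: maximal when
    B is well aligned to A."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pab = joint / joint.sum()
    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    return entropy(pab.sum(axis=1)) + entropy(pab.sum(axis=0)) - entropy(pab.ravel())
```

On the five-voxel examples from the slides, `image_entropy([3, 3, 3, 3, 3])` gives exactly zero and `image_entropy([1, 5, 4, 2, 3])` gives the maximum, log2(5). Note also that MI rewards consistent grey-level correspondence, not equality: an image compared with its intensity-inverted copy (like bright CT bone against dark MR bone) gives just as high an index as comparison with itself, while any misalignment lowers it.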
Deformable registration, as I told you, allows an image to move freely in any of its parts, and you can, for example, get straight lines that become curves like this one, and so on. These kinds of algorithms are very useful in some cases. For example, they are very useful in adaptive radiation therapy, because if you have to compute a dose distribution that must be accumulated onto a previous one, very often you have to deal with deformations. So they are useful tools, but you have to constrain the algorithm, to force it not to introduce very strong deformations like these, and this is done through the regularization term I introduced before. For example, just to understand the problem: this is a deformable registration problem, with this image here as the original image. This is the same image registered to a target image with no control on regularization: you can see very strange details here and here, for example, which make no sense from the anatomical point of view. If we constrain the algorithm through a control on this regularization term, we get a better situation like this, but still we have deformations which do not make sense, like this one. And if we apply a stronger control on regularization, we can get a good result like this one. Well, what is the problem with the treatment planning software we use every day? The problem is that this kind of control over the regularization term is usually not implemented. So the user does not have any control on regularization; you just take what the software gives you. So you have to be aware of situations like this one. This is a real case we treated ten days ago in our center: a rectal tumor. This is the target image onto which we had to register this previously taken CT. The target means the image used for treatment planning, taken just before treatment.
The other one was taken clinically a couple of months before, I think, and you see that the situation is completely different. Here we have a flat table top; here, a flat top with a cushion; here we have a curved bed. We performed deformable registration to make this image similar to this one. What we obtained is this, which in my opinion is quite good: this is the deformed CT, registered to the CT taken just before treatment, and it corresponds to it quite well. And if we look at the deformation map within the region of interest, we see very low values of deformation; this is a scale that goes from cold colors up to hot ones. But if we go up just a few centimeters with respect to this slice, we get this situation here. This was the target image, this is what had to be deformed, and we have a completely different situation in this region; of course, we have the intestine here. And if you look here, we have very strange deformations which make no sense from the anatomical point of view; in fact, in the deformation maps you have very strong deformations here. So we did it, because the region of interest was this one and it was very useful to have this registered information, but be aware that a few centimeters above, you have a very unreliable situation like this one. So the message is: use deformable registration algorithms if you need them, but always be very critical towards the results that you get. Once again, deformable registration is very often, and I am speaking to physicists, so there is no problem in saying this, based on spline models. This means that the mathematical functions underlying the transformation are spline-based, functions that are continuous and differentiable at every point: very simple to implement and very fast to calculate. But in some cases they cannot describe anatomical discontinuities, as in this case here, where we see the motion of a lung tumor very close to the chest wall.
In this region, there is no continuous and differentiable function that can exactly describe what is happening, so this is another case where you can have problems with deformable registration algorithms. On the other hand, deformable registration is quite useful if you have problems related to dose accumulation, as I told you before. For example, if you have a dose distribution which is given in one reference situation like this one, then you have a deformed anatomy like this one, and you still irradiate the patient with the original treatment plan, what you actually get is this one. If you want to calculate the real dose distribution, this can be dealt with only through a deformable registration algorithm. So my message is not: please discard deformable registration. My message is: please use rigid registration anytime you can, and reserve deformable registration for very special applications like this one, always with a very critical approach. So, the take-home messages from this first part are these. Image registration is the process that makes two or more image sets spatially coherent to each other, as we have seen. Applications to radiation oncology include treatment planning and verification. Rigid transformation, as I already said, is to be preferred in any case in which you can use it. You can still use deformable registration, but please be aware that you need expert judgment of the results. Other considerations on image registration applied to motion management we will see in the next module. Thank you very much.