Well, as you see in the picture, I'm working at beamlines. Actually, the actual beamline is ICON at the Paul Scherrer Institute, where I'm a beamline scientist. And, well, we had an extremely rainy weekend, so I spent the whole weekend at the beamline instead. So, about me and my role. First of all, I'm an instrument scientist doing neutron imaging. But at the same time, I'm also working on image processing and developing software for analyzing neutron images. My background is not physics at all, I would say. I have a background in computer systems engineering and signal processing, and that actually brings me to the image processing side. I came in contact with neutrons as a postdoc at ETH, when I was doing image processing for soil physics. And that also gave me something to work with, in particular studying processes in porous media. Right now, I'm working as an instrument scientist for neutron imaging. And I'm also teaching image processing for experimentalists, essentially, at ETH in Zurich. And as I mentioned before, my research interests are in the area of porous media and image processing. So, that's a short introduction to who I am. Well, I'm actually Swedish, grew up on the Swedish west coast, and some 18, 19 years ago I moved to Switzerland. So, so much about me. Let's take a look at what I'm going to talk about today. Let's see, where is my screen? There it is. Yeah. This one, that one, and that one. So, let's see if this one works now. Oh, yeah, it does. So, I'm going to talk about an introduction to computed tomography. This, as such, is a topic which is both about physics, but also about mathematics and image processing. So, it's a very wide topic. And if you look at groups working on computed tomography, you find them in departments in all these fields. They can be strict mathematicians, they can be physicists optimizing the setups.
I'm sorry, there's something funny about your screen share that we see, sort of a small version of it. Oh, okay, sorry. I'll... Yeah, sharing is paused. Why? Do you see it now? I'll do it in another way. Okay, you only see this small one. Yeah, so, I hope this is better. Yes, good. So, let's go back again. So, I was in the middle of telling you where you can find computed tomography. This is the outline of what I'm going to talk about now, in principle going through all these topics. Actually, if I go back to this, a very relevant point is that it's theory and practical details for the experimentalist, and that is the focus of this lecture. So, what we want to do is to learn how the images are formed. Now, Robin already talked quite a lot about that, so there will be some overlap. You will also see that there are differences between different ways to reconstruct the data, and also the key parameters you have to tune and tweak in order to get the best images from the reconstruction. In the end, artifacts are something that is always present. As soon as you do experiments, you will have some kind of artifacts and noise, and you have to learn to deal with them in order to get your result. So, the problem is we have a solid item to investigate. And the first thing we do is to take it and look at it from the outside. Okay, it has an elongated shape, it has two buttons on it, and so on. A baby would probably continue by trying to put it in the mouth to check it out. Okay, it's also firm, can't chew on it. But still, we only know about the outside. So, the next step is to cut the thing in pieces. But often we don't want that, because if we cut it into pieces, it's already lost, because it's damaged. You can't use it again. And in some cases, for example, if you have very rare objects from a museum, I don't think they are so happy if you take the big saw and cut them into pieces. So, it's also not a good idea.
So, the next step is to use some kind of transmission image. And well, in this case, you see it down here. I only have a candle shining through a glass of wine; you see something of the menu behind it. But this is a very weak source, so you don't see very much, and it's not really penetrating some materials. So, a visible light source is maybe not the right thing to work with. We have different kinds of sources to illuminate our samples with. On one side, which you know already from the hospitals, are the X-rays, which most people know about. It's electromagnetic radiation with higher energies than visible light. It interacts with the electron shells, and then you get attenuation coefficients related to the electron density. With neutrons, we can also penetrate the sample, but we have different mechanisms here. First of all, it's a particle beam, and the neutrons interact with the nucleus instead. And here it's more about the constellation of how many neutrons and protons you have in each atom or isotope. And let's see, I got something on the chat. Okay, that was an old one. So, this will give us a new set of attenuation coefficients. But let's first step back a little bit, and instead of talking about attenuation coefficients directly, let's see what the transmission image actually is. So what we do is we have this ray that is going through the sample, and it's attenuated. You can see here, in the shaded gray, that it is attenuated more or less along the way through this object. And in the end, you will have some attenuation and an intensity profile corresponding to how much beam came through the sample. And this is described by the Beer-Lambert law, which Robin already mentioned. What you have here is mainly this exponential, which is raised to the power of the line integral of the attenuation coefficient through the material along the ray, and this tells you how much the beam is attenuated.
Actually, I see here that I missed a minus sign; it should be a minus here. So you have the incident beam here, which is attenuated, and this is afterwards what you actually see. In practice, this is measured like this: we have first this image, which I here call R, for the radiograph. From this we subtract the dark current, a technical detail, because the camera contains some dark current bias. So there is more or less a bias in the detector image of some hundred counts, which you have to subtract, otherwise you will have biases afterwards in your normalized image. You also have the open beam, or flat field, or incident beam image. There are many names for it, but it's essentially the same thing: it's the beam profile. The beam is mostly not completely flat, but it has some kind of profile depending on how the source is constructed. Here you can see it's a more or less roundish shape, but in other cases it can have a very different shape. And of course it also depends on the field of view how much of this curvature of the background you see. You also see the same thing in the image with the sample, like a halo around it. And this is what we want to normalize, and then afterwards you get here. Actually, this gives you the transmission image, and this here gives you the optical thickness. And the optical thickness is the attenuation coefficient times the thickness of the object. And this can be looked into. We have this integral or sum, depending on the material. In some materials you can actually decompose it into a discrete set of objects: you have a container, you have water, you have some solid material. And in the end you can just sum the three of them times their attenuation coefficients, and that makes life a bit easier. But for tomography we're actually interested in the continuous case, and then we let this delta x go towards zero, and then we have this integral instead.
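The normalization described above can be sketched in a few lines. This is a minimal illustration, not the actual beamline software; the function name and the use of flat lists instead of 2D detector images are my own simplifications.

```python
import math

def normalize(radiograph, open_beam, dark_current):
    """Pixel-wise normalization: T = (R - DC) / (OB - DC).

    Inputs are flat lists of pixel counts (a sketch; real code works
    on 2D detector images). Returns the transmission image T and the
    optical thickness -ln(T), i.e. the line integral of mu(x) dx.
    """
    transmission = []
    thickness = []
    for r, ob, dc in zip(radiograph, open_beam, dark_current):
        t = (r - dc) / (ob - dc)        # transmission in [0, 1]
        transmission.append(t)
        thickness.append(-math.log(t))  # optical thickness mu * d
    return transmission, thickness

# Example: 100 counts dark bias, 1100 counts open beam; the sample
# halves the beam, so the optical thickness comes out as ln(2).
T, od = normalize([600], [1100], [100])
```

Note how the dark current must be subtracted from both images before the division, exactly as discussed: otherwise the bias propagates into the transmission values.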
And then you can see that it can actually cope with gradual changes within the material. And this is what we are going to need now for the tomography. I talked a lot about the attenuation coefficients, and so did Robin before. And just to compare what you have, I use these two periodic tables. I have colored or toned the different elements by how much they attenuate when the beam goes through. And you can see here from the X-rays that it's gradually going towards the higher densities: the high-density elements are more and more opaque. With neutrons you don't have this logical progression. So for example hydrogen has a lot of attenuation, whereas down here we have lead, which you know from X-rays is good for shielding; lead is useless for shielding with neutrons. So you can see here the differences, and if we look at this camera, you can see that in the X-rays you see all the metal components. With neutrons you can see all the plastics, and you can even see the small notches from the film, which are used to feed the film forward. You can even see them through the camera. That shows you how sensitive neutrons are compared to X-rays when it comes to hydrogen-containing materials. Another example is looking at these attenuation curves. You can see how much you transmit through different materials. Here I have water, aluminum, iron and lead, which is the combination in this object here. This object is a fist-sized cannonball from the Battle of Bosworth, back in 1485. Back then they started experimenting with battlefield ballistics, and they used this kind of lead cannonball with some iron pieces cast in; in this case it was some kind of ring-like objects. With X-rays you can essentially forget about getting through a lead object like this. With neutrons you easily get through. It's also relatively easy to get through the iron. But where we got the problem was actually the corrosion.
The corrosion contains a lot of hydrogen, and these were the hotspots in our images. But with radiography, it's not always that we see very good contrast between things. If you have a very thick object with a little item in it and you want to see it, then in a radiograph the contribution from all the material surrounding or embedding this little item is so high that you don't see it. The contrast is too little. So we need something different. And the different thing is computed tomography, where you can actually get the information along the line. So in this slide. Let's see, where do I have the drawing? Yeah, so in this slide. I didn't get it. I said I wanted to draw. Let's see if it works. Yeah, it works. Good. So in this slide, what I'm doing is I'm pulling a ray through here and see which intensity I get. Then I take a ray through here and see what kind of intensity I get from there. You can see that in this case you have a little bit of the brighter blob here, and about the same amount of the dark one. And you see the difference here is not very much; it's something like 20-ish percent or even less, something around 20% contrast difference. But within the object, you can see the variation goes from 0.7 to 0. So you would see much better contrast if you could follow what's happening along this line. And that is the reason why we want to do tomography. The other thing is depth: you can see through here that you have something, but you can't tell at which depth it is. You can just say that, okay, along the green one, I have more or less of something. You would want to know that it's buried at about this place, but you can't tell that from just summing along the ray. Let's see if we can switch that off. And then we come to stereography, which was... I need to get rid of my... So that's it.
Robin also showed a little bit about stereography. To get the depth information, you could do the trick of looking at the object from two different directions. So the first one would be looking through this way. You have this clock, you look at it through the short end, and you look through the wide end. And you can see that you get profiles like this on the wide end and a profile like this on the short end. With this, we can now combine. We just more or less draw this profile throughout the whole image in this direction. Then the other profile would be drawn out like here. And you can see already that there are a few... Let's see if I can get... So you can see that there are a couple of corner points that tell us what extent this object has, but you can't tell much more. You can say that there are some lines that are crossing each other, but much more you can't tell. You can just tell more or less where it's located in depth. So we still need something new; it's not enough to do this stereography. And to continue, we have this example where we have a single projection, and you have very many solutions. If you have a very dense dot, you would get the same signal as if you had a long object which is less dense; they look very much the same, but you can't tell which one is which. Then you have yet another one. If you do a stereography, you could have this constellation: either you have four objects spread out with half the attenuation coefficient, or, as in this case, you only have two. And again, you can only get the unique solution if you have infinite information, which we never can get, because that would take much, much too long and we don't have the detectors for it. So we have to do something else. So now the question is, what is tomography? You have seen the title. I mentioned it a little bit. I hinted a little bit at it.
So it is a method to capture three-dimensional images, but it's an indirect method, because you can't measure 3D directly. You have to collect a lot of images, projections, in order to reconstruct what's inside the object. And tomography is a word constructed from Greek, where you have tomos, which is a section or cutting, and graphein, which is writing. So you're actually writing the cross-section of an object. This is a method that has a very long history. It started out in 1917 with Johann Radon, an Austrian mathematician who developed the mathematical foundation for the inversion that is required for tomography. But it took about 40 years before Bracewell came up with a relationship that connected this inversion to the Fourier transform, which makes it much easier to work with and to understand. And then it took yet another, what, seven years, and then came the first application, where they did some back projection and did some reconstruction based on Radon's result. And in 1970 the first publication with a CT image was published. And now we're coming to these guys: it was Cormack and Hounsfield who built the first CT scanner. What they did was essentially they had a point source and they had a single point detector, and then they just measured point-wise throughout the image. Then they rotated the object, and then they did the same scanning. So it was an extremely time-consuming task to get a CT image. Still, they had just some kind of slice of a brain, I think, in resin, so they had plenty of time to do this. But it's not very practicable in real medical applications. Anyway, afterwards, in 1979, Cormack and Hounsfield were awarded the Nobel Prize in Medicine for this machine. And today, computed tomography is an everyday method in the hospitals for diagnostics.
And to get a volume of some slices of the body takes a few seconds, and you've already got your 3D data. Well, actually the reconstruction takes a little bit of time afterwards, but the scanning goes in a few seconds. So I already talked about rotating and looking at different views. We looked at 0 and 90 degrees. But for tomography, we need a lot more. We have to rotate the sample and get images from different positions. And here you can see a fly, which I'm rotating on the turntable. Normally what we do is take something like between 300 and 1,000 projections; that's what we normally do in neutron imaging. I'll come back to how many projections you should take, but in general, it's several hundred projections that you need to do a good reconstruction. And the first attempt at reconstructing the data is to set up an algebraic solution. So in principle, what you have is the measurements of different intensities, and you also have positions throughout the image, and then you want to know what the actual information is. This results in a system matrix A, which tells us how the object is rotated, we have the measurements, and then we want to know the x. This is a linear equation system, so in principle it would be solvable. But first of all, you have extremely many equations. The matrix A is also very sparse, and there is no unique solution to this problem. I would say it's classified as a severely ill-posed system. The other way of doing it is back projection, and that was what I did when I had the stereogram. So let's just take a single projection. You plot it in, you paint it in, into the image. And then you see first, OK, I got a bump. If you have two projections, 90 degrees offset, then you can see, OK, I have a large blob and a little blob. And then when you add in more and more projections, you can see that with four projections, you can start seeing, OK, maybe it's roundish.
And the thing here is also roundish, but you see there are a lot of streaks. Adding more and more projections, in the end you get something very smooth: you can see that the object is round and there is a round thing within it. But it is very smooth, so something is still missing in the solution. And that is something we have to come up with in the reconstruction algorithm. In principle, the reconstruction is the reverse process of what the physics is doing for us. We have a lot of projections, and we want to find out what cross-section gave us all these projections. You can do the inverse Radon transform, or you can solve the equation system; those are the two options we have to get this cross-section information. The first thing we need, or actually a piece of terminology: we need a projection from each direction. And when you extract information from all projections, let's take the red one here, from each line, and put them together, then on one side you have the horizontal axis of the image, and on the other axis you have the angle at which it was acquired. And that gives us a sinogram. A sinogram is the information that is required to reconstruct a single slice. And some examples here: as you can see, we have three cases of sinograms. The first one here is from the region around the neck and the wings. You can see the neck is relatively dense, so that's this curly thing here. And then you have the wings, which are a little bit closer to the rotation center and also a little bit more narrow together; that's the double helix that you can see here. And now you can also see the reason why it's called a sinogram, because what you see is actually a sine curve corresponding to each point in the reconstructed image. In the main body, you can see there is a lot of attenuation, and you can see the two arms, which are following two separate lines here.
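The sine-curve shape of each point's trace can be made concrete with a tiny sketch. Assuming a parallel beam and a point at position (x, y) relative to the rotation axis, its detector coordinate at rotation angle theta is x·cos(theta) + y·sin(theta); the function name here is my own.

```python
import math

def point_trace(x, y, n_angles=8):
    """Detector position of a single point as the sample rotates.

    For a parallel beam, a point at (x, y) relative to the rotation
    axis projects onto the detector at s = x*cos(t) + y*sin(t), which
    traces a sine curve through the sinogram -- hence the name.
    """
    angles = [k * math.pi / n_angles for k in range(n_angles)]
    return [x * math.cos(t) + y * math.sin(t) for t in angles]

# A point at radius 5 from the axis: its trace starts at s = 3
# (the x coordinate at angle 0) and stays within amplitude 5.
trace = point_trace(3.0, 4.0)
```

Points far from the rotation center give wide sine curves; the rotation center itself gives a straight vertical line, matching the fly examples above.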
And this last one is around the legs, where you have this double helix that only has two dots, or two lines, in it. Now I'm coming to some equations. We already looked at the Beer-Lambert law, which gives us the projection information that we are using to reconstruct. So this is what we measure. It is described by this equation, where we have the object distribution, the spatial distribution of attenuation coefficients, and we have the observing ray, which is a delta function at some specific point. And this here is the position along the projection axis. This together is what we want to look at, and this equation is essentially the Radon transform of the object. The transform as such is what the physics is doing, but now we want to have the inverse of this. One way to get there is to use the Fourier slice theorem. And for that, we need to take the help of some Fourier transforms. Let's just start with looking at the projection. Oops, sorry. A single projection, say at direction zero, gives us this profile. Okay. You can see that there are some denser regions on the corners, something in the middle, and some gaps somewhere here in the middle. To this, we apply first a one-dimensional Fourier transform, and we get something like this. It tells us essentially that there is a lot of bias, so DC information, and some ripple at low frequencies, but almost nothing at the higher frequencies. Now, if we take the whole image here and compute the two-dimensional Fourier transform, we get this two-dimensional spectrum. In the middle you have a very bright dot; that's the constant-level information. And now the question is how these two are connected with each other. Let's take this single line through the middle and look at it. It's actually exactly the same as this one. And thanks to that, we can now actually do the Fourier transform in 1D and paint it into the 2D Fourier spectrum.
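This connection, the Fourier slice theorem, can be checked numerically in its discrete form: the 1D DFT of the zero-degree projection equals the zero-frequency row of the image's 2D DFT. A minimal sketch with a made-up 4x4 image, using a naive DFT rather than a real FFT library:

```python
import cmath

def dft(x):
    """Naive 1D discrete Fourier transform (fine for tiny examples)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

# A tiny illustrative "image" f[y][x]
f = [[0, 1, 2, 0],
     [1, 3, 1, 0],
     [0, 2, 2, 1],
     [0, 0, 1, 0]]

# Projection at 0 degrees: sum the image along the y direction
proj = [sum(f[y][x] for y in range(4)) for x in range(4)]
P = dft(proj)

# The ky = 0 row of the 2D DFT: DFT each row along x, then sum over y
row_dfts = [dft(row) for row in f]
slice0 = [sum(row_dfts[y][k] for y in range(4)) for k in range(4)]
# P and slice0 agree term by term -- the Fourier slice theorem.
```

The same holds at any angle with rotated slices through the 2D spectrum, which is exactly why painting in the 1D spectra, rotated, fills the 2D spectrum.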
And then, if we would have, for example, a projection in this direction, then you would have to draw it into your spectrum like this. Or in this direction, then you would have to draw it in like this. And then you can just take the one-dimensional spectra and actually paint them in, rotated. And after a while, if you fill in a lot of them, then you have actually filled in the whole two-dimensional Fourier spectrum, and you are ready to go back again by doing the inverse 2D Fourier transform, and then you have your reconstructed image. With the help of this, Bracewell was able to formulate the analytical solution to the inverse, and that is actually what we're going to use in our reconstructions. What we also need: when you put all these spectra together, there is a high data density in the middle and almost nothing out at the high frequencies. So what we need to do is to normalize with a ramp function to get a flat spectrum. This is what you see down here, this omega. You have to multiply by omega to scale the projection data in the spectrum. This can be moved over to the spatial domain: multiplying by some function in the frequency domain is the same as convolving with it in the spatial domain. So now we're coming towards doing the reconstruction, and the reconstruction is again the back projection, but we also need to filter the projections with this convolution kernel. Now, the shape of this kernel is just like a V around the zero frequency, and that acts essentially like a derivative of the projection. And that is what we are using as our reconstruction filter. In many algorithms, this convolution is actually done in the frequency domain, but you don't have to; you can also define the convolution kernel in the spatial domain. So it's up to the implementation how you do it.
It's pretty trivial to do it in the frequency domain, because the filter is already defined there, so that's what's often done. And typically, when we do our reconstruction, we have our projection data, where the images are in the coordinate system u and v. Over the angles, we create our sinograms, and we compute the negative logarithm to handle the Beer-Lambert law, and maybe remove some artifacts. Then we apply the filter during the back projection, and finally we have the reconstructed volume. Just to come back to the sinogram, I have some line integrals that correspond to different points in the data. So for example, if you want to reach this uppermost corner, then you follow this magenta sine curve and do the integral along that one. If you want to have this point, you see it's closer to the center, so it's going out just like an arc, like this. Oops. Come on. There it is. If you have the center point, it's actually just a straight line through the sinogram. Now, it's very inconvenient to actually follow these sine curves, so the algorithms do it a bit differently, but in principle, the contributions that reach these points are the ones that correspond to what you see here along these curves. Now, the reconstruction filter, let's take a little closer look at it. We have this derivative contribution, which is just a ramp, but the ramp amplifies high frequencies, and at high frequencies there is usually very little information left about the image, but there is a lot of noise. So what you do is apply an additional apodization filter, and they have different names depending on their shapes: there are Shepp-Logan, Hamming, Hann, Butterworth, Blackman, etc. There are many of these windows that are used for the reconstruction apodization, with the aim to reduce the noise, but at the cost of some sharpness. So let's take a look at the projection. We have the profile of it here. If we transform the profile, you can see that you just have this peak.
Then you multiply by this omega ramp, and you can see that there is a lot of noise in here, but you don't want this noise to appear, because that's inconvenient afterwards. So what you can do is this: here you can see the different filter functions. You have the Hamming window, for example, which is essentially just a cosine shape plus a bias. Then, if you multiply the two of them, you can see that the filter shape is something like this, and after applying this filter, you can see this part. You don't have so much high-frequency noise as you had over here, and that will help you to get nicer reconstructions afterwards. So first, this is what we tried in the beginning, when I said something is missing: we have a very smooth solution. But adding the Ram-Lak filter, you see, okay, now I got some nice sharp edges in the data, and you can start seeing features, but maybe it's a little bit too noisy. That's for you to tell, actually. So I decide here, okay, I need a Hamming filter with this or that cutoff frequency, and then we get rid of the noise. But you can also see that this comes at a cost: you may miss some of the features that could be relevant for your investigation. So when you apply filters, you always have to be careful that you don't remove information that you would need. Well, there is a set of different filters available within the software, so if you'd like to try them out and see the effects, you are free to do so. So this was the analytical way of reconstructing. It's still an underdetermined system, but we can actually cope pretty well anyway. But there are cases where the analytical solution has a problem, and that is when you have too few projections, so the system is under-sampled. Then you start getting a lot of artifacts. If you remember from this blob example, you could see that there were streaks coming out from the smaller blob. These streaks are due to under-sampling.
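The ramp-times-window construction can be sketched directly in the frequency domain. This is an illustration of the idea rather than any particular software's implementation; the FFT-style frequency ordering and function names are my own choices.

```python
import math

def ramp_filter(n):
    """|omega| ramp (Ram-Lak) sampled at n frequency bins, in FFT
    ordering (DC first, then positive, then negative frequencies),
    normalized to 1 at the Nyquist frequency."""
    return [abs(k if k <= n // 2 else k - n) / (n // 2) for k in range(n)]

def hamming_window(n):
    """Hamming apodization in the same ordering: a cosine shape plus
    a bias, full weight at DC, rolling off to 0.08 at Nyquist."""
    return [0.54 + 0.46 * math.cos(math.pi * abs(k if k <= n // 2 else k - n)
                                   / (n // 2))
            for k in range(n)]

n = 8
ramp = ramp_filter(n)
filt = [r * w for r, w in zip(ramp, hamming_window(n))]
# filt keeps the ramp's behavior at low frequencies but strongly
# suppresses the noisy high frequencies near Nyquist.
```

Swapping `hamming_window` for a Hann, Butterworth, or other window changes only the roll-off shape, which is exactly the noise-versus-sharpness trade-off described above.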
Another case which can happen is that you have an irregular distribution of the acquisition angles. There are some systems that actually produce this kind of data. The analytical solution doesn't really cope with that very well; you would get quite some ugly streaks from these reconstructions. Another one, which is actually also pretty common in medical applications, in particular for mammography, is limited-view tomography. As you can see, you have a very dense sampling in these few angles, but for the rest you don't have anything. The artifact you get looks like small hats on each object. Instead of the round object you would expect, what you get with this limited view is something that goes out like this. You will still get a reconstruction that tells you where the object is, but you can't really say how large it is, or its extent. But sometimes it's actually enough just to tell where something is. My typical example is that you have a slab with roots: it's good to know if the root is on the front side or the back side, and that would be sufficient information here. Another thing that can be a problem is when you have too low a signal-to-noise ratio, so extremely noisy data. That is also hard to handle; actually, it's always bad to have a low signal-to-noise ratio, because you can't really determine things in the data. Another one is that you have too few gray levels in very fast tomographies, which Nikola is going to talk about tomorrow. It can be that you maybe only count something like 10 to 20 neutrons per voxel or pixel in the projections, and that's very little. That also gives you low contrast. You can handle it a little bit by increasing the number of projections, but still, few gray levels is also not good. You want to have high dynamics in your images, and you want a high signal-to-noise ratio to get good data.
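For the under-sampling case, there is a common rule of thumb for parallel-beam CT over 180 degrees: the number of projections should be at least about pi/2 times the number of detector pixels across the object, so that the angular step at the object's edge matches the detector sampling. A small sketch (the function name is mine, and this is a guideline, not a hard limit):

```python
import math

def recommended_projections(n_detector_pixels):
    """Rule-of-thumb projection count for parallel-beam CT over 180
    degrees: ceil(pi/2 * number of detector pixels spanning the
    object). Fewer projections than this risks streak artifacts
    from angular under-sampling."""
    return math.ceil(math.pi / 2 * n_detector_pixels)

n = recommended_projections(400)   # a 400-pixel-wide field of view
```

For a field of view a few hundred pixels wide this lands in the several-hundred range, consistent with the 300 to 1,000 projections mentioned earlier for neutron imaging.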
If you have all these ugly cases, there are iterative methods that can help you get data which looks more reasonable. These are of two types. One is the algebraic set, which essentially gives you ways to invert this very huge matrix in a more efficient numerical way. The other way, if you have a very low signal-to-noise ratio, is to use statistical methods, where you model the noise, and with that you can also improve the reconstruction quality. In these methods you can even include physical models that handle scattering, beam hardening, and how the beam interacts. You can even have a Monte Carlo simulator included in the iterations; that's very time-consuming, by the way. With these methods you can also provide prior information. If you have an idea what the result should look like, then you can provide this, and often it's a good idea to provide a prior in order to get good performance. On the negative side, it's extremely time-consuming. What is done in each iteration is essentially a back projection and a forward projection. That's already twice the time of the filtered back projection, for each iteration, and you do this maybe 50 or 100 times, so of course it takes a lot of time. For that reason, people have implemented these reconstruction algorithms on graphics cards, and then they can speed them up and actually do iterative reconstructions in reasonable time. What we need to do is build up an equation system where you have the different weights, which are related to the orientation of the beam, and then the unknowns describe the material at the different points. Building this matrix is a big thing. If you have a thousand projections which are a thousand pixels wide, and you want to reconstruct a thousand by a thousand pixels, then you have a very large matrix. It's usually not feasible to bring it into a normal computer, and it would also not be very efficient to try to invert it with brute force.
So that is why you need these iterative methods. This A matrix is sparse, and it's ill-posed: even if you try to invert it, it doesn't provide a unique solution anyway. One of the methods that is presented in the literature is the so-called algebraic reconstruction technique, ART, which iterates over the equations of the A matrix together with the data, and then you have some kind of relaxation parameter for each iteration. You go on iterating, equation by equation, all the time, until you reach some kind of stability. Other methods are SIRT and similar variants: ART works equation-wise, while the others use whole projections at once. Then you have other regularization techniques, for example based on total variation. But the idea of all of them is that they want to find a solution that is stable and as noise-free as possible.
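The row-action (Kaczmarz-style) ART update can be sketched on a toy system. This is a minimal illustration on a two-pixel "image" seen by three overlapping "rays", not real reconstruction code; the function name and the relaxation parameter `lam` are my own choices.

```python
def art_reconstruct(A, y, n_sweeps=50, lam=1.0):
    """Kaczmarz-style ART sketch: sweep over the equations a_i . x = y_i,
    each time projecting the current estimate x onto that equation's
    hyperplane, scaled by the relaxation parameter lam."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_sweeps):
        for a_i, y_i in zip(A, y):
            norm2 = sum(a * a for a in a_i)
            if norm2 == 0.0:
                continue
            residual = y_i - sum(a * xi for a, xi in zip(a_i, x))
            c = lam * residual / norm2
            x = [xi + c * a for xi, a in zip(x, a_i)]
    return x

# Toy system: two "pixels" seen by three "rays",
# consistent with the true solution x = (2, 3).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 3.0, 5.0]
x = art_reconstruct(A, y)
```

For a consistent system like this one, the sweeps converge to the true solution; for noisy, inconsistent data, the relaxation parameter and the stopping point control how much noise leaks into the result, which is where the regularization discussion above comes in.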
Then, if you have statistical methods, you not only have this A x = y, but you also add a noise term with some noise model. Usually this noise model is Poisson-based, but you could actually add other noise models on top of it, because you also have noise from the sampling, noise from the camera, binomial noise, and Gaussian noise; you can put everything into one big model, but usually the main noise source is Poisson distributed. Then you have an iteration scheme where you do error calculations, update strategies, and compute probabilities; what you are actually doing is optimizing probabilities, which is the task of this likelihood maximization. So, that was a lot of math; now let's go to geometry, beam geometry in particular. We have different beam-line configurations in tomography. One is to work with a static beam line, which is the case when you have a large-scale facility: in the end you want images from different sides of the object, and it is much easier to rotate the object than to rotate a neutron source around it. Usually the object is only a few centimeters, so it is easier to put it on the turntable, let it rotate, and take images while doing that. The other configuration is a rotating beam line: a gantry carrying source and detector rotates around the patient, which is of course very convenient for medical imaging, because I don't think the patient would enjoy being spun at a couple of RPM during the scan, so it is much better to let the source and detector rotate. We also have different kinds of beam. The pencil beam is the very first step when you start doing basic acquisition experiments: a well-collimated pencil beam that you scan across a grid until you have the whole image. This is an extremely slow process of acquiring data, so it is mostly not used for tomographic purposes. This is how Hounsfield did it, as I already told, but it is not really practical for what we are doing. What we use is a parallel beam: the beam comes in, hits the detector, and you read out the whole image using some kind of camera or detector array. This is a nice kind of beam: you have no geometric unsharpness, you can use all the reconstructions, you can use the filtered back projection algorithm, and you are happy. In most cases, though, you can't have a perfect parallel beam. At synchrotrons you can get a very nice parallel beam; in neutron imaging we say we have a parallel beam, but we still have a slight beam divergence. The next step, with normal X-ray lab sources, is to use a fan beam: you collimate the central beam up and down, let a fan spread out, and use a line detector, or maybe a few detector lines. That is what is used in most medical systems; it is easy to have this small array. Going further, many lab-based X-ray systems use a cone beam, where the beam also opens up in the vertical direction. In cone beam, and also in fan beam, you have a magnification: with the sample placed between source and detector, you get different magnifications depending on its position. This is a pretty nice thing: you don't need a high-resolution detector with small pixels, you can just move the sample closer to the source and you have your magnification. The reconstruction, however, is non-trivial. One widely used approach is an approximation defined by Feldkamp. It does a pretty good job, but when you use this algorithm you should still try to have a very shallow cone angle: the wider the cone angle, the more artifacts you can expect, and with cone beam only the central slice is exact; the rest has some kind of distortion. This can be shown here: we have our object, a stack of disks, that we want to scan. This example is actually called a 'Feldkamp killer' because it is
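The fan/cone-beam magnification described above can be sketched numerically. The function and parameter names (source-to-object and source-to-detector distances) are my shorthand, not the lecture's:

```python
def magnification(sod_mm, sdd_mm):
    """Geometric magnification in a fan/cone-beam setup.

    sod_mm: source-to-object distance, sdd_mm: source-to-detector distance."""
    return sdd_mm / sod_mm

def effective_pixel(detector_pixel_mm, sod_mm, sdd_mm):
    """Detector pixel size as seen at the object plane."""
    return detector_pixel_mm / magnification(sod_mm, sdd_mm)

print(magnification(250.0, 1000.0))         # 4.0x when the sample sits at a quarter of the distance
print(effective_pixel(0.2, 250.0, 1000.0))  # a 0.2 mm detector pixel samples 0.05 mm at the object
```

Moving the sample closer to the source (smaller sod_mm) raises the magnification, exactly the trade the lecture describes; the price is a wider cone angle and stronger Feldkamp artifacts.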
showing the weak side of the Feldkamp algorithm. You want to see the stack of disks; the central slice reconstructs nicely, but as soon as you go further and further away from the central slice, even with a cone angle of 30 degrees, you get really ugly cone-beam artifacts. That is something you have to be aware of when you set up a cone-beam system: you should maybe try to get the magnification by moving source and detector further away, which gives you a narrower cone angle, and then you get a better image out of it. So this was not a nice solution; I'll come to the other one soon. Anyway, Robin already mentioned this penumbra blurring. With the pinhole optics that we use, we have an aperture, we put the source far away from the detector plane, and we still have a slight divergence; not much, we are talking sub-degree, but still enough to cause some blurring. This blurring depends on how far away the object is from the detector, and it is described by the collimation ratio, the L/D that we usually talk about. At many neutron imaging beam lines we are talking about an L/D in the range from maybe around 50 at low-end beam lines up to 1000 at high-end ones. For example, at the beam line I am working at we have, depending on the aperture, 150 for the largest aperture, which means a lot of neutrons but also a lot of blurring; our normal working range is about 350, and reducing the aperture size further we can get up to maybe 600, or even beyond, but usually we don't. So these are the ranges we are talking about. And this is what you can see happen with an object: this is a gadolinium sheet, an extreme absorber, and here I have an L/D up to 2000. What you see here is an image taken just 3 mm away from the detector; if I move the sheet 30 cm away, you can see the blurring you get: the same object, but just by moving
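The penumbra blurring above follows the standard collimation relation, blur = distance / (L/D); the function below is my numerical sketch of it, using distances close to the lecture's example:

```python
def penumbra_blur_mm(object_to_detector_mm, l_over_d):
    """Geometric unsharpness of a divergent (pinhole) beam line.

    Standard penumbra relation: blur = distance / (L/D)."""
    return object_to_detector_mm / l_over_d

# roughly the lecture's setup: L/D ~ 350, sample 30 cm vs 3 mm from the detector
print(penumbra_blur_mm(300.0, 350.0))  # ~0.86 mm of blur at 30 cm
print(penumbra_blur_mm(3.0, 350.0))    # ~0.009 mm at 3 mm, below a 13 um pixel
```

This is why the balance between L/D, object size, and pixel size matters: at 3 mm stand-off the blur is smaller than a 13 micron pixel, while at 30 cm it smears across dozens of pixels.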
sample, you not only rotate it but also translate it vertically. By doing that you can actually get an exact reconstruction of the object, and you don't have these Feldkamp artifacts anymore. This is a very good method for X-rays, where you have at least some beam divergence to talk about; with neutron imaging I don't think anyone has tried it, because the pitch would be so small that it is hard to motivate, but with X-rays it makes a lot of sense, and it is actually the technique used nowadays in medical scanners as well, because you get so much better quality in the reconstructions. Now, another problem when we talk about geometry is samples that are too large for your detector. The requirement is that you have projections over at least 180 degrees and that the sample is always fully visible on the detector. If the detector is too small, there are two cases. One is that the sample is too tall: it extends beyond the detector vertically along the rotation axis, and then you have no problem; you just do a scan, translate, scan the next piece, and everything is fine. However, if the object is truncated from the sides, you lose relevant information, and then we are talking about a truncated reconstruction. A truncated reconstruction introduces, for one thing, artifacts, and also biases. In some cases you say, okay, I can live with the biases, I only want to reconstruct that thing in the middle, but then the reconstructed attenuation coefficients are not quantitative anymore, because the parts outside the field of view are in the beam at some angles, and you see contributions from them. One way to handle a large sample is to translate the object: you have your reconstruction axis, and normally you would just rotate around that axis, but the sample is too wide, so you move the axis out towards the edge and rotate around that axis instead; then you have to do some stitching, and you can reconstruct everything. I have done some such experiments; it is very popular with X-rays and it is in principle feasible, but the problem is that the experiment time increases radically, and that may not be something you can afford; otherwise you introduce a lot of artifacts, because the signal-to-noise ratio goes down. Still, it is an option you can play with. A little more about truncated tomography: as already said, you get these truncation artifacts, like spikes on the edges. That is an effect in some reconstructors that do not filter in a nice way, and then you get real ringing artifacts along the edges. What I have done instead, in my reconstruction tool for example, is some smart padding of the data, which moves all these artifacts out to places where they are not seen. What is this brighter edge effect here, and also that it gets brighter around here? Okay, it is maybe not so strong in this case, but it can be worse, with really extreme high-intensity artifacts. By doing this padding trick you can see that these streaks along the border are reduced, and also here in this area, so you can actually get rid of some of the artifacts caused by truncation; but it is better to try to fit the object as much as possible within the field of view. Then, about the reconstruction axis: the object rotates around a certain point, as I showed before, and it is very important for the reconstruction that you know exactly where this point is in the data. If you reconstruct around the wrong point, you get artifacts like the ones in these two examples: this should be a round pin, but due to the centering misalignment you see these streaks going out; on this knot you can also see streaks going out, and the rest of it also looks a bit weird. So you will get a lot of artifacts
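The padding trick for truncated projections can be sketched like this; edge-replication padding before the filtering step is one common choice, and the exact scheme used in the lecturer's reconstruction tool is not specified:

```python
def pad_projection(profile, pad):
    """Pad a 1-D projection by repeating its edge values, so the FBP
    filter does not see a hard jump at the truncated borders (the
    ringing then lands in the padded region, outside the image)."""
    return [profile[0]] * pad + list(profile) + [profile[-1]] * pad

p = [5.0, 4.0, 6.0, 7.0]
print(pad_projection(p, 3))  # [5.0, 5.0, 5.0, 5.0, 4.0, 6.0, 7.0, 7.0, 7.0, 7.0]
```

After filtering and back projection, the padded columns are simply discarded, which is how the artifacts are "moved out to places where they are not seen."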
if you don't center well. If the centering is almost right, it instead appears as a kind of unsharpness of the objects, so if you expect the images to be sharper, you can tweak the center of rotation a little; we are talking about sub-pixel changes, so even half a pixel can make the image sharper. Here are some examples of what happens: here I have a center offset of minus 8 pixels, and you can see these really ugly streaks; if it is perfect, you see nice round shapes; and with a center offset of plus 8 pixels you get the streaks in the other direction. But this only applies if you have done a 180-degree scan. If you do a 360-degree scan you cannot distinguish between plus and minus 8 pixels, because both draw unsharp halos around the objects that look the same. Then it is really only to guess, plus or minus a little, and see if it gets better or worse; that is the only way to do it. The reason I do 360-degree scans in neutron imaging is the divergence: the divergence we have, given by the L/D, introduces artifacts that look like these centering artifacts, but if you do a 360-degree scan they average out and compensate for each other. So we mostly do 360-degree scans in neutron imaging, and then we allow ourselves to say we have a parallel beam; it is a bit of cheating, but it looks pretty okay, so we live with this little cheat. For finding the center there are different ways. What we do is take two projections: projection one and projection two, where projection two is acquired from the opposite view, so this one is taken at 0 degrees and this one at 180 degrees. To find the center of rotation, you mirror one of them, and then there are different techniques to find the point where they overlap, which gives you a position offset, and
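The mirror-and-match idea for finding the center of rotation can be sketched as below. This integer-shift, least-squares version is my minimal take; real tools refine to sub-pixel accuracy and may use correlation instead:

```python
def find_center_offset(p0, p180, max_shift=10):
    """Estimate the rotation-axis offset from two opposed projections.

    Mirror the 180-degree projection and find the integer shift that best
    matches the 0-degree one (least mean squared difference over the
    overlap); the axis is off the detector midline by half that shift."""
    mirrored = p180[::-1]
    n = len(p0)
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                err += (p0[i] - mirrored[j]) ** 2
                count += 1
        err /= count
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift / 2.0  # axis offset in pixels from the detector midline

# synthetic test object: a small bump seen from 0 and 180 degrees
p0 = [0.0] * 32; p0[16], p0[17], p0[18] = 1.0, 2.0, 1.0
p180 = [0.0] * 32; p180[6], p180[7], p180[8] = 1.0, 2.0, 1.0
print(find_center_offset(p0, p180))  # 3.5 pixels off-center
```

The quick manual fix from the lecture is the same arithmetic: the measured delta radius between the solid and streaky parts, divided by two, applied plus or minus to the current center.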
central slice that you reconstructed first: a cone of unsharpness towards the top, another towards the bottom, and these cones give you extremely unsharp, ugly images. What you have to do then is some kind of tilt correction. One way is to rotate all the projections by the angle by which your sample table is tilted, and you are corrected that way. If the tilt is, I would say, less than half a degree, you can also correct it in the reconstruction by moving the center of rotation a little for each slice, and that works pretty well too. I have a question here about why 180... let me go back. Let's see... no, it's okay, I think I just have to take the whiteboard. Let's see, can I do that? Not so easily. Okay, then I draw it on the side somewhere, see if I can find a place. So, why do we want to rotate over 360 degrees instead? If we have an object like this and the beam is slightly divergent, you get some unsharpness around the object from that side, which gives you the blurry effect you see down here. If you take the beam from the other side, that is, you rotate and take the contribution from the opposite direction as well, you also get some blurring, and it compensates the errors you get from one-sided imaging. It is an averaging effect: from one side the object gets a slightly too wide footprint, from the other side a slightly too narrow one, and together they average out, which gives you a better reconstruction than if you only did 180 degrees. Coming back to the tilted acquisition axis, there are also two directions in which the axis can be tilted. One is a tilt visible in the detector plane, perpendicular to the beam direction; that is the example I showed before, no problem, we can handle it by rotating the projections so they are straight up, or by correcting the center of rotation. It is when the axis is tilted along the beam direction that it is, first, much more difficult to detect, and also harder to correct. You can do it, but then you need a reconstructor that includes the whole geometry of how the sample is placed, and in a parallel-beam reconstruction tool that is usually not included. You can include it in a cone-beam reconstruction, but I haven't seen any tool out there that actually gives you this possibility; well, maybe you can do it in ASTRA somehow, but I haven't played with it myself, so I can't really tell how it works. Someone says it feels dangerous to do data manipulation when we are interested in small details, microstructures in metals for example. Well, right now it is not really data manipulation; that comes later. The reconstruction itself is just about setting the geometry such that you get the sharpest images, so it is not really a data manipulation. Of course, if you want to resolve very small details, you have to go for detector systems with high resolution, and then the objects are already so small that you can get close to the detector, and the penumbra blurring doesn't affect you that much anymore. So if you want to see small details, you usually have such small samples that you can get close to the detector, and the effect of the collimation ratio is less severe. This business of doing 360 degrees is a kind of averaging effect, but if you are looking at very small details it is not that severe anymore; the impact of a low L/D shows up more in cases where you have large objects at a large distance from the detector. The smaller the object, the smaller the impact of L/D. Sampling is the next topic after geometry, but let me take another question before that, about the tilt along the beam. It is also related to the rotation table; it is usually the table that is standing in the wrong way, or else I would say it is
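The per-slice center correction for a small in-plane tilt can be sketched with a linear model; this is my reading of the correction mentioned in the lecture, valid only for small angles, with all names my own:

```python
import math

def center_per_slice(center_row0, row0, tilt_deg):
    """Center of rotation as a function of detector row when the rotation
    axis is tilted by tilt_deg within the detector plane.

    center_row0: center (in pixels) measured at reference row row0."""
    slope = math.tan(math.radians(tilt_deg))
    return lambda row: center_row0 + (row - row0) * slope

cor = center_per_slice(1024.0, 1000, 0.5)
print(cor(1000))  # 1024.0 at the reference slice
print(cor(2000))  # ~1032.7: shifted by about 8.7 px a thousand rows away
```

At half a degree the center drifts by almost nine pixels over a thousand rows, which is why a single-slice center estimate produces the cone of unsharpness described above.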
mainly the table. The alternative would be that the whole beam line is tilted, but the construction engineers have usually made sure that it is actually pretty horizontal, so it is mainly the stage that is slightly tilted in the beam direction. Normally we can align the table very well, really below half a degree or even better, just using the plain old spirit level: when we set up our experiments, we take the level and align the table in the beam direction. What I usually do myself is also to rotate the camera: I first level the table, then rotate the camera so that a vertical line is as vertical as possible on the detector plane, and with this combination I am usually within 0.2 degrees of alignment between sample and detector. So it can be done pretty well, and this last fraction of a degree is acceptable to correct for; that is what we have to live with. Then there is another question, about finding the center of an irregular object. No problem, you do it the same way. What I normally do is some kind of correlation: I sweep the projection of the object across and look for a nice peak in the correlation, and that gives the position. It could be, for example, my hand: if you correlate the fingers, in the beginning there is not much information, and then at some point there is a perfect overlap; with only three or four fingers overlapping, the thumb being down here, there is less signal than with all five. So it is no problem to have irregularly shaped objects. I mentioned a while ago that we need many projections, and the sampling theorem tells us how many. Typically we need (pi/2) times N, where N is the number of pixels that the object covers on the detector. So, for example, if the object needs 100 pixels on the detector, you should in principle take about 150 projections, using the rounded factor 1.5, to get a fully sampled image. For a 2k detector that would mean about 3000 projections; sounds like very much, and it is very much. So let's look at what happens when you increase the number of projections. Here I have one projection; of course it doesn't give any good reconstruction. Two, neither; by four and eight you start seeing structures already, but there are a lot of these line artifacts due to the undersampling. As you increase further, 32 and so on, you can see that these artifacts get denser and denser. This image here is 256 by 256, so the ideal sampling would be 384 projections, and of course that looks fantastic, but you can actually go down to half the width of the object in projections and still not see much of these artifacts. That is what we do a lot: we are often in the range between half and a quarter of the projections that are nominally needed, and at the noise levels we have, we mostly don't see these sampling artifacts. So you can go down a lot, possibly at the cost of some noise, but the noise in the projections is mostly stronger than this sampling artifact. Why do you get this kind of artifact? Look at the sampling theorem in the Fourier domain: if you remember the Fourier slice theorem, we took the lines and drew them at different directions, and the idea is that we need to fill the whole Fourier space with these projection lines. If we don't, we get a sparser wheel like this, and out here you get all these gaps between the spokes, each of which produces lines in the reconstructed image. You can also see that for the low frequencies in the middle the coverage is better, so by reducing, or downsampling, the images, if you have too few projections
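The angular sampling rule above is easy to put into a helper. The exact factor is pi/2; the lecture rounds it to 1.5 (100 px -> 150 projections, 256 px -> 384), so the factor is left as a parameter:

```python
import math

def projections_needed(object_width_px, factor=math.pi / 2):
    """Rule of thumb: N_proj = (pi/2) * object width in pixels.

    Pass factor=1.5 to reproduce the rounded numbers used in the lecture."""
    return math.ceil(factor * object_width_px)

print(projections_needed(100, factor=1.5))  # 150, as in the lecture
print(projections_needed(256, factor=1.5))  # 384
print(projections_needed(2048))             # 3217 with the exact pi/2 factor
```

In practice, as the lecture says, a half or even a quarter of this number is often acceptable, because projection noise usually dominates over the undersampling streaks.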
you can downsample them until they fulfill the sampling theorem again, of course at the cost of pixel size and resolution. Anyway, that is a trick you can use if, for some reason, you need to scan fast or don't have time to get all the projections you need. There is a two-fold gain in it: if you downsample by a factor of 2, you have also increased your signal-to-noise ratio, so you get less noise, and you fulfill the sampling theorem. Which brings me over to noise and dose. Typically what we have in the images is some noise that comes along; calling it additive is not always true, Poisson noise for example acts rather multiplicatively, but anyway, if you look at the Radon transform of an image, the Radon transform as such is a linear, additive transform: if you have contributions of two kinds, they are added together in the final result. So adding noise to an ideal sinogram is the same as adding the inverse transform of the noise to the clean reconstruction, and then you get results like this. We have different noise sources: thermal noise from the electronics, which has a Gaussian distribution; algorithmic rounding and interpolation noise; sampling noise; and then noise from the radiation source itself, a counting noise, which is Poisson distributed. When I talk about dose, I mean the number of radiation events that hit the detector and are detected; the more you get, the better the signal-to-noise ratio, so in principle, the longer you measure, the nicer the images you get. As we are mainly talking about Poisson noise, the signal-to-noise ratio is related to the counts as a square root: the SNR is equal to the square root of the counts, and we have our dose, which is more or less flux times exposure time. To improve the dose we have different options. One is changing the beam intensity, which is a limited possibility: you can sometimes play with an aperture to get more or less, but as for tweaking the source itself, whether a reactor or an accelerator-based source, it is set to a maximum level that you usually can't go beyond; you can go lower, but that is not in your interest. You can work with the exposure time: since the SNR goes with the square root of the counts, doubling the exposure improves the SNR by a factor of the square root of two, and quadrupling it doubles the SNR, which is good already. You can also work with the number of projections: instead of taking 300 projections you can take 600 with the same exposure time each, and then you have doubled the dose. You can also change the detector; this is more about detector efficiency: what kind of scintillator you are using, how well it captures the neutrons, and also the camera, which has its own quantum efficiency; that also affects how much signal you get through. Actually, the whole story is not as simple as I tell it right now, because you have different types of noise sources already in the conversion process: for one, the conversion from neutrons into visible light, but then also how much visible light is produced, which is a second Poisson process, so you have different Poisson processes cascaded after each other. Contrast is something we also want to work with: how do we get better contrast in the images? We have different parameters to play with: the contrast you see in the slice is, to some degree, proportional to the contrast within the projections times the number of projections, relative to the width of the sample. You can see the effect in this sequence: I did a little numerical demonstration with a round object containing insets of different contrasts. One is kind of visible, mainly thanks to the color map I am using, and then I have the
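The counting-statistics relations above, SNR = sqrt(N) and dose ~ flux x time, can be sketched directly (function names are mine):

```python
import math

def snr_from_counts(counts):
    """Poisson counting statistics: SNR = sqrt(N)."""
    return math.sqrt(counts)

def dose(flux_per_s, exposure_s, n_projections=1):
    """Dose ~ flux * exposure time, summed over all projections."""
    return flux_per_s * exposure_s * n_projections

print(snr_from_counts(10000))  # 100.0
# quadrupling the exposure doubles the SNR:
print(snr_from_counts(dose(1000, 4.0)) / snr_from_counts(dose(1000, 1.0)))  # 2.0
```

The same square-root law explains the projection-count option: 600 projections at the same per-frame exposure deliver twice the dose of 300, hence sqrt(2) better SNR in the reconstruction.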
highest at 2 to 1, a really strong contrast in the data. I am working here with noise-free data, simulating acquisition at 6 bits, and the result is that with 6 bits, which is 64 gray levels in the projections, you get a lot of artifacts; this is 2:1, 1:1, 1:2 and so on, and you don't see much at this spot. If you double the number of gray levels, you can guess that this one is here, maybe; if you knew it was there, you could expect something around here, possibly. As you increase the number of gray levels more and more, you can see that you get contrast. That means, in principle, that with longer exposure times you also get more gray levels, and then you get better reconstructions; so it is not only the noise, but also how many gray levels you have in the data. Artifacts, of course, are something we have all the time. Typical artifacts in neutron images, and in tomography in general, are these. Ring artifacts: you see them in any tomographic reconstruction unless someone has done something against them. Line artifacts: something we see a lot in neutron imaging; we have white spots in the projections, and each white spot causes a line in the reconstructed data. High-contrast artifacts: a streaky kind of artifact going out from, for example, an inclusion with a lot of water or something like that; in the extreme case it saturates or starves the detector. Then motion artifacts: the requirement for reconstruction is that the sample is not moving while you are acquiring, and if it moves you get motion artifacts, the same as in a normal photograph when something moves fast during a long exposure, a kind of ghost shape around the moving part. Beam hardening is something that is not, to first order, relevant for neutron imaging; it can be included in a chain of corrections, but beam hardening is something you see very much in X-ray imaging. As you know already, neutrons are mostly scattered by most elements, rarely purely absorbed, and that introduces artifacts which I will show soon. The ring artifact looks like this: concentric rings in the reconstructed images. They are caused by stuck pixels or a badly cleaned open-beam image, and they can be cleaned in the sinogram, because there you see them as lines parallel to the angular axis. Here is an example of how you can correct it: in principle, you compute the average of the sinogram along the angular direction, extract the line pattern, and subtract it from the image; that is already a first approach to ring correction. There are much more advanced ones, which we will look at tomorrow during the training session, but this is a very basic one to show the principle. You can also correct ring artifacts in the reconstructed image: you do a Cartesian-to-polar transform, identify where the rings are, correct them, and transform back again. It is a good method if you want to try out different filtering strengths, but you need the additional coordinate transformations, which you may want to avoid. Line artifacts are something you see a lot in neutron imaging: all these white dots over the image, each of which will produce a line in the reconstructed image. The lines can look something like these, but it can also look like a haystack, a lot of cross-hatched lines all over the image; in some cases it almost looks like noise, but it is line artifacts. What you see here is an extreme case of a spot: one gamma photon that travels along the detector plane and paints a long
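The basic sinogram-based ring correction described above can be sketched as follows; the 3-tap smoothing of the column profile is my minimal choice, real tools use stronger filters:

```python
def ring_correct(sinogram):
    """Basic ring-artifact correction on a sinogram (rows = angles,
    columns = detector pixels): estimate each detector pixel's fixed
    offset as its mean over all angles minus a smoothed version of that
    mean profile, then subtract the offset from every row."""
    n_ang, n_det = len(sinogram), len(sinogram[0])
    col_mean = [sum(row[j] for row in sinogram) / n_ang for j in range(n_det)]
    smooth = [
        (col_mean[max(j - 1, 0)] + col_mean[j] + col_mean[min(j + 1, n_det - 1)]) / 3.0
        for j in range(n_det)
    ]
    offset = [m - s for m, s in zip(col_mean, smooth)]
    return [[v - offset[j] for j, v in enumerate(row)] for row in sinogram]

# a stuck detector pixel shows up as a constant stripe in column 2
sino = [[2.0, 2.0, 2.9, 2.0, 2.0] for _ in range(4)]
print(ring_correct(sino)[0])  # the stripe at column 2 is strongly reduced
```

Because a stuck pixel contributes the same error at every angle, subtracting the angle-invariant part of each column removes the stripe while (mostly) leaving the genuine, angle-dependent signal alone.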
spot it appeared on a single projection so that's a very extreme one but this is not a line artifact as such but it will produce a lot of line artifacts so here's another one you can see how it can look in a bad case see all these lines going in different directions and you can correct it here you can see the difference which is only lines in principle what we are doing is to compute to a medium filtered version of the image and with that we can detect where we have these spots in the projection and correct for them and that works pretty well so the general principle of detecting and replacing works pretty well there are different ways to do unit detection the medium filter is a basic one you can actually apply a medium filter on the projection only but then with that you will introduce additional blurring which you want to avoid the motion artifacts is something that looks like this what you see here is an assemble of small spheres which are shrinking with time and you can see here as long as time goes the water evaporates and you can see this streaky effect here and to get rid of it one way is to just acquire faster the other way is to use the golden ratio to define which angles you want to use for your acquisition normally the scan is just a sequence with small increments but with a golden ratio that's coming here then I can actually show it now so with the golden ratio what you do is actually you use the golden ratio to determine the next angle so the first one is at zero degrees and the next one is going up here to 42 degrees and I don't remember I think it's 153 whatever so you get a very jumpy sequence of angles but by using them you can surprise this and the other thing is by doing a acquisition in this way you can actually also after the experiment how long time frames you want to reconstruct and that's a very useful thing if you want to follow a process and you don't really know what time resolution you want to do at scan time then you can decide 
afterwards, with the help of the golden-ratio scan, what you want to reconstruct and how many time frames you want to see. That is a nice feature of the golden ratio.

Cupping is an artifact where the image is bright along the border and gets darker and darker towards the middle. X-ray people will tell you that this effect is beam hardening; in neutron imaging it is actually more about background scattering that adds a bias to the data, and that is what we are trying to correct in our current work. The way to get around beam hardening is to work with a monochromatic beam, but that is usually not feasible with most lab sources, because then the flux is too low. If you use a polychromatic beam, you get this dark central region. You can correct for it numerically, and you can also correct for it by adding filter blocks; in medical imaging they use something called a bow tie, a shaped filter you put in the beam, adjusted to the shape of what you want to look at.

Then, regarding the scattering, the attenuation law (Beer-Lambert) generally assumes that the intensity is absorbed, but with neutrons this is not true: most neutrons are actually scattered. The scattered neutrons continue to live, but they travel in a different direction than you would expect. So what happens is that out here you have a lot of sample scattering that goes all over the place. Usually it has a smooth shape like this, and it adds a bias to the data, so instead of seeing this red line you see this one, and that also introduces these cupping artifacts. If you look at hydrogen, it is mostly scattering with a little bit of absorption, and then you have all these others here; some metals have a little more absorption and still a lot of scattering, and there are a few pure absorbers, but they are very few. So in principle, for any sample, it actually makes sense to do a
scattering correction when you work with neutrons. Here is an example. If you want quantitative data, you really want to reconstruct attenuation coefficients, and then you need a scattering correction. Also, if you are working with image processing, you don't want gradients like this within the images. In our group we have tried a couple of approaches. One was done by René Hassanein back in 2005 or so, but that method did not fulfil all our requirements, so we decided to work on a new method. It is based on a grid of dots which produces a black-body pattern; we measure the intensity behind the dots, and with that we can recreate the background scattering from the sample and from the environment. Using the images with this grid, we can go from an image that looks like this to images that look like this, so you can actually remove this scattering artifact, or cupping effect, caused by the scattering. We have also done tests showing that we come within 5-10% of a gravimetric quantification of the water content in objects when using this black-body correction. So it is really helpful; as a comparison, you are 50-100% wrong if you don't do it, that is, without the black-body correction.

The black-body grid, sorry, is used to acquire reference images. The black dots are absorbers, so no neutrons should arrive behind them, but thanks to the scattering there are neutrons coming in behind them anyway. From these reference images we measure how many scattered neutrons we have in the image, and this information is then used in the normalization procedure. So instead of doing the normal Beer-Lambert normalization, you have a rather large normalization scheme in which we correct in different places for these scattered neutrons. As reference images we have one open-beam image with the BB grid in place, to get the
background scattering of the instrument, and one BB image with the sample in place, to also get the contribution from the sample itself. When we do tomography, we also rotate the sample so that we can see the contributions at different orientations of the sample, which is what you see here in this image. Here it actually makes sense to acquire the black-body references as a tomography: in one case you would have the sample like this, and in the other case like this, and those give very different scattered fields behind it. If you have a round object, it doesn't make so much sense to do a full tomography of the references; then you can just take them from one direction. But think of a statue with an arm sticking out: that also produces a very characteristic scattered field at different orientations.

Which brings me to the end. I have now talked for quite a while about tomography, which is an indirect acquisition method to get three-dimensional data. You can do it with different radiation sources; you can even do it with light if you have sufficiently transparent materials. I always play with the thought that I should take some of these transparent Lego bricks and do a tomography with them. I haven't done it yet, but it would be a fun example. For the perfect tomography you need many projections, and you also want well-illuminated projections, so that you have little noise in them. But you will still get artifacts in the data, because there is always something that comes in between, like all these spots or rings, but also the scattering, which is a kind of physical artifact that enters the data. And, well, that was all I had to tell about tomography.
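To make the basic ring-correction idea from earlier concrete (average the sinogram along the angular axis, keep only the high-frequency part of that profile, and subtract it from every projection), here is a minimal NumPy sketch. It illustrates the principle only, not the production code used at the beamline; the smoothing kernel width is my own assumption:

```python
import numpy as np

def remove_rings(sinogram, kernel_width=5):
    """Basic ring correction on a sinogram of shape (angles, detector pixels).

    Stuck pixels show up as columns that are constant along the angular
    axis. Averaging over all angles isolates that column bias; smoothing
    the average preserves real sample structure, and the high-frequency
    residual (the stripe profile) is subtracted from every projection.
    """
    profile = sinogram.mean(axis=0)                     # mean along angles
    kernel = np.ones(kernel_width) / kernel_width
    smooth = np.convolve(profile, kernel, mode="same")  # low-pass version
    stripes = profile - smooth                          # stripe estimate
    return sinogram - stripes[np.newaxis, :]
```

Because the correction is a single profile applied to all angles, it only touches structures that are constant in the angular direction, exactly the stripes that become rings after reconstruction.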
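The detect-and-replace spot cleaning can be sketched in the same spirit, with a median filter as the detector. This is a small self-contained illustration; the 3x3 neighbourhood and the outlier threshold are assumptions on my part, and real implementations offer more elaborate detection:

```python
import numpy as np

def median3x3(img):
    """Median of each pixel's 3x3 neighbourhood (edge-padded), NumPy only."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    shifts = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(shifts, axis=0)

def clean_spots(projection, threshold=0.5):
    """Replace only the pixels that deviate strongly from the
    median-filtered image; this removes gamma spots without blurring
    the whole projection the way a plain median filter would."""
    med = median3x3(projection)
    outliers = np.abs(projection - med) > threshold
    cleaned = projection.copy()
    cleaned[outliers] = med[outliers]
    return cleaned
```

The key design point is that the median image is used only for detection and as the replacement value at the detected spots; all other pixels pass through unchanged, so no extra blurring is introduced.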