This lecture is about bimodal experiments. Until now we have more or less talked about a single modality, neutron imaging, or maybe X-rays as a comparison. There are a lot of books you can read on this topic; I just want to give you a little list. Most of them are general image processing, but there are a couple of books about data fusion, image fusion and image registration, which are techniques that are very helpful when you work with more than one imaging modality. This is the outline of the lecture: first a little bit of motivation and the scientific goals we want to achieve, then I switch over to image fusion, and bivariate segmentation is the end of it. With neutron imaging we do a lot of different experiments: typically hydrology, soil and geology, cultural heritage (metal pieces combined with organic material), building materials, and also materials science. Usually there are experimental challenges with the image acquisition in these experiments. It could be that it is hard to do the right segmentation, or hard to estimate the water content. Some soils have the characteristic that they swell as water comes in and goes out. I think the soil in Skåne actually has a pretty high clay content, is that right? That is a soil that swells and produces a lot of cracks, and it is not easy to work with in an imaging experiment where you want all the solid material to stay in the same spot. If it is suddenly moving, you no longer know what is water and what is structural change, and that is a problem when you start doing the segmentation. You can also have ambiguous readings: for example, depending on the neutron energy it can be hard to distinguish between iron and copper.
At thermal energies there is no doubt, but in a colder spectrum they suddenly have almost the same attenuation coefficients, and then you don't really know what is what. So what we want to do is select the right modality to see what we want to see. Reasons to select a specific modality: you get good transmission, good contrast for what you want to see, all the relevant features are visible, and the distributed materials can be well identified. On the negative side, you may not get through the sample at all; at ICON, for example, more than five millimeters of water is black. You can also have low contrast, so you don't really see what you want to see, features that are simply not visible, and ambiguous responses where you don't really know what you're looking at. If we only use a single modality, this can be a problem. So we add a second modality, or several if you like, with the idea of extending the range of operation to better cover what you want to see. You may also want to extend the spatial and temporal coverage: with one modality you may get ten times higher resolution, which helps you identify the structures, and then the second modality delivers the functional information, which you can then limit to within the boundaries given by the first modality. With that you reduce the uncertainty of your measurements, increase their reliability, and get a more robust system performance for the whole measurement. In an imaging experiment, as I believe was already mentioned in previous lectures, you work with four different components. At the top of the pyramid is the application, which is the starting point for the experiment.
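Since the "five millimeters of water is black" remark comes up repeatedly, here is a quick back-of-the-envelope sketch of why. The attenuation coefficient below is an assumed, order-of-magnitude value for water in a thermal neutron beam, not a measured one; cold spectra attenuate even more strongly:

```python
import numpy as np

# Beer-Lambert law: transmission T = exp(-mu * d).
# mu = 3.5 /cm is an assumed, order-of-magnitude value for water in a
# thermal neutron beam (cold spectra attenuate even more strongly).
mu_water = 3.5  # 1/cm, illustrative only

thicknesses_mm = (1, 2, 5, 10)
transmission = {d: float(np.exp(-mu_water * d / 10.0)) for d in thicknesses_mm}
for d, T in transmission.items():
    print(f"{d:2d} mm of water: T = {T:.3f}")
```

Under this assumption only about 17% of the beam survives 5 mm of water, and with a colder spectrum even less, so the sample is effectively black, consistent with the ICON experience mentioned above.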
You need an application, and that application is based on some kind of physics: on one side what is happening within your sample or process, and on the other how this interacts with the beam you are working with. Then you have the acquisition, which is essentially a sampling system; the questions are how fast you can sample, at what resolution, and how to get the data into the computer. Finally you have the processing step, because once the data is in your box you have bits and bytes but you can't really interpret them yet. That is where you start doing image analysis to hopefully get quantitative information out of the data. One combination the neutron imaging community talks about is neutrons and X-rays. X-rays are of course well known; you see them everywhere in hospitals, in various lab sources, and sometimes at synchrotrons. Neutrons are more rare, as you have already heard, but they are another modality, and you can see from this periodic table, which I actually showed yesterday too, that the attenuation coefficients for neutrons and X-rays look very different and in some cases are even complementary. Hydrogen, which is very dark with neutrons, is very bright with X-rays, and many of the metals down here are completely black with X-rays while you can see through several of them quite easily with neutrons. Combining the two you can actually get a better picture of what you're seeing. One example is a metallic implant in bone: with X-rays you get horrible metal artifacts, because the metal attenuates the X-rays so strongly that artifacts appear in the reconstructed data, while for neutrons the bone itself may be pretty wet, which is a problem, but the screw can be almost transparent depending on what alloy it's made of.
We had different types of screws; some gave good contrast, others didn't. This makes it possible to get much closer to the screw: with X-rays you may get a very nice picture of the screw itself, while with neutrons you can see how the bone structures grow onto the screw. In medical imaging it is also quite popular to combine modalities, MRI, X-rays, PET, SPECT and so on. You get the structures mainly from the X-rays, plus functional imaging of the flow of different signal substances, in this case in a brain, but you could also look at the flow in a knee joint, for example. You can see in these images that PET and SPECT deliver rather low-resolution data, but when you combine and overlay them you can really nicely identify the boundaries within which the different functions occur. Another approach uses the same beam but adds more information: grating interferometry. There you don't only get the normal transmission image, which you already know; you also get differential phase contrast and dark-field contrast, and these show different physical properties of the sample, so if you combine all three you can probably learn a bit more about the sample. The question is what you want to learn and how you motivate setting up your fusion rules, but you could use this as a combination.
Another one, which you have probably heard about already, is looking at different neutron energies or wavelengths, where you can get Bragg edge spectra. This is not really multimodal in the strict sense, but you have a lot of data about the same sample under different beam conditions, and that can also be used for better segmentation. For example, you may find that one region has one texture and another region a different texture; you don't see these differences in the transmission image, but if you combine all the energies you can extract more information. By taking, say, 100 wavelength bins and putting them together with a good fusion rule, you can get new information about where the different regions are, which you can then drill down into in more detail. Until now I have only talked about data of the same dimensionality, 2D or 3D data looked at together, but there could also be single-point measurements: temperature, or temperature fields if you have a camera that registers temperature variations, or fluorescence information. Or, in a time-resolved experiment you may do one tomography at the beginning, one at the end, and a lot of radiographs in between; how do you fuse that information together? With that you can add information like temperature, flow rates or pressures within the system. How exactly you do the fusion is a good question, but if you are setting up an experiment like this you should also know a little about how to combine the data.
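As a hedged sketch of the 100-wavelength-bin idea, here is a toy example: two phases with the same mean transmission, invisible in the broadband image, are separated by clustering the per-pixel spectra. The spectra, geometry and noise level are invented for illustration, and k-means simply stands in for whatever fusion rule you would actually choose:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
nbins = 100                  # wavelength bins, as in the ~100-bin example
h, w = 32, 32
lam = np.linspace(0, 1, nbins)

# Two synthetic "phases" with the same mean transmission but different
# wavelength dependence (a toy stand-in for Bragg-edge texture differences).
spec_a = 0.5 + 0.2 * lam     # rises with wavelength
spec_b = 0.7 - 0.2 * lam     # falls with wavelength (same mean: 0.6)

stack = np.empty((h, w, nbins))
stack[:, :w // 2] = spec_a   # left half: phase A
stack[:, w // 2:] = spec_b   # right half: phase B
stack += rng.normal(0, 0.02, stack.shape)   # measurement noise

# The broadband image (mean over all bins) shows almost no contrast...
broadband = stack.mean(axis=-1)
contrast = abs(broadband[:, :w // 2].mean() - broadband[:, w // 2:].mean())

# ...but clustering the per-pixel spectra separates the phases cleanly.
features = stack.reshape(-1, nbins)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
label_img = labels.reshape(h, w)
```

The point is only that the stack of wavelength bins carries information the single transmission image does not; in practice you would motivate the clustering or fusion rule from the physics of the Bragg edges.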
The next thing you need to do, once you have the data from these modalities, is to fuse it. In principle, data fusion or image fusion is a set of tools and techniques used to combine data from different sources into a common format that carries more information. The aim is to improve the information quality of the images, and hopefully the information you get out of the fusion is more than 1 + 1 = 2; it should maybe be two and a half, so that you actually learn more about the sample thanks to the combination of two modalities. As for fusion approaches: in the beginning, when I started, there was always the question, "do you have the solution now for how to do the data fusion?" There is no single solution, no golden recipe that solves all your fusion problems; there are very many different strategies. You can do what I call here multivariate fusion, where the data sets are combined on the same level, more or less weighting and multiplying them together under the same concept. Then there is augmented fusion, where you use one modality for, say, structural segmentation and the other to see what is going on within the sample, and then combine them. There are also approaches for artifact reduction: as I mentioned with the screw in bone, we had artifacts from the screw in the X-ray images, and with the help of the neutron images you can say that this region shouldn't have all these ugly streaks, it should be fairly homogeneous. Then you feed neutron data into the X-ray reconstruction, and maybe X-ray data into the neutron reconstruction, and in the end you get a nicer reconstruction, without calling it cheating, because you are filling in the blanks using the other modality. And of course, if you can do one, why not combine two or three together and gain the most
information in the end. Exactly which strategy you use depends on the sample composition, on the objectives of your experiment and what you want to see in the first place, and to some extent on the condition of the data. If you have low signal-to-noise ratio and bad resolution on one side and high resolution on the other, you may have to choose a different approach than if you had fantastic images on both sides. So the choice of how to combine the data comes down to the data you have and what you are looking at; it is not obvious up front. There are also different levels of fusion. You can do fusion with data in and the same kind of data out, just with some enhancement or segmentation; or a lot of data can come in and nice features come out, say different bones or structures identified with the help of the two sources. One step higher, you analyze the images, reduce each image to some features, and combine those features with a fusion rule. Even more abstract, the features in the images result in decisions based on what you have in the two modalities, and finally you can fuse decisions with decisions. In machine learning terms that would be something like a random forest, which builds many decision trees; each tree produces a result by walking through its decisions, and the results are weighted together to get the best choice. That would be decision-versus-decision fusion. More graphically, you can look at the workflow like this: first the image acquisition, with images coming in
— they may not even be the same shape — and the first step is a registration, which I'll mention on the next slide. Then you do pixel-wise fusion, really pixel to pixel; you may need some alignment and calibration to get the images on the same scale, otherwise there may be saturation you don't want, or the result may be biased or skewed because of missing calibration. Then you can step up to feature fusion, extracting different features and checking them against each other for different properties, and finally the decision level, where you label the objects. The classic machine-learning example is saying "this is a dog" or "this is a cat"; I don't think we are talking about dogs or cats here, but maybe saying this is a screw of one type, this is a screw of another type with such-and-such materials. That is the kind of labeling you could do with the help of the fusion. From the fusion you continue doing more work: it could be that you want to present the result somehow, with colorful renderings for example, but you can also go towards statistics and modeling to get quantitative models based on the fused data. Then there is a term called catastrophic fusion, which is what happens when you bring data in, combine it, and the result performs worse than each individual method on its own. If you get something like that, you should go back to the drawing board, think about what you are doing, and redesign. It can be caused by selecting the wrong variables, fusing the wrong information, fusing a too-complex combination of variables in completely the wrong way, or by the sensor information cancelling out: one is showing this, the other is showing that, and when you combine them
you get a net zero. So stay away from the kitchen with too many chefs; that usually does not result in a good soup. Image registration, which I mentioned before, is a process you need when you start doing fusion. You have one image that is fixed, say my hand here, and another image that may have a different scale and is rotated a little, and you want to fit them onto the same grid. First you may need to flip it, and then you move it until the two fit perfectly over each other, with the same pixel size, rotation and translation. That is an optimization problem where you try to minimize some cost function, sometimes just a mean squared error. Registration is sometimes done within the same modality, but with different modalities it can be difficult for the algorithms to find good metrics that connect the two, because what appears as holes in one modality may appear as something very bright in the other. Sometimes you need landmarks in the images to get a good registration, and in particular when you combine complementary modalities it is very useful to begin from a good starting point: you manually rotate into a more or less correct position and only then start the optimization for the final fit, because this is an optimization problem with very many local minima you can get stuck in. There are tools for doing this, and libraries on the open-source side. There is a software called 3D Slicer, an interactive tool which is very helpful and guides you through the process. It is based on a library called ITK, the Insight Toolkit, which you can also use directly from C++ code or Python scripts. The thing is
that ITK is not so user friendly, so some people thought, let's do something specific for registration, and implemented a toolbox based on ITK called elastix, or SimpleElastix, which is a popular one to work with for registration. This really is an important first step: you must get the data onto the right grid, otherwise you can't do any comparison at all. In this book a lot of metrics are mentioned; the whole book is about registration, so if you want to learn more about how registration works, it is the go-to book. Once we have the registration we can work with the data. Here is an example done by my colleague David Mannes: a sword found in a lake here in Switzerland, very well preserved, and the archaeologists wanted to make a reconstruction to understand how the sword was made. They started with a general material analysis and concluded that it contains iron, of course, it's a sword; there was amalgam, which includes a lot of mercury, and mercury is bad for X-rays; there was wood, which is more or less transparent for X-rays but very visible with neutrons; and some other metals, I think the cap on the top was another metal and also filled with wood. In the end they used the combination of neutrons and X-rays to isolate the amalgam dots, which produced horrible artifacts in the X-ray images; it looked like needles streaking out from the grip, while with the neutrons it was easy to confine them to the small globs they actually were. We could also nicely see the annual rings of the wood, and I think they even tried to date the sword from these rings. The final part of the story is that an experimental archaeologist even made a
replica of this very sword, based on the neutron images that were measured. The segmentation in this case was done in VGStudio, a commercial software, so the whole work was carried out within VGStudio, which includes registration algorithms as well as partly automated segmentation and manual segmentation guidance. If we want to do this kind of registration and visualization together, we start with some data. Here we have a snail shell imaged with neutrons and with X-rays; you can even see some beam hardening in it. These are the data we want to combine, and you can see they are already registered, because this one is rotated so that they are on the same scale and position. Now we want to combine them in a few different ways. One way, which is very helpful when you do the manual registration, is a checkerboard visualization. If you want to do some coding or use this code, feel free: you can download this presentation as a Jupyter notebook, so the code examples can actually be executed yourself. We try the checkerboard with the images we had, the neutron image and the X-ray image, mixing the two modalities in checkerboard squares. It is a very good way to see whether the data fit well during the registration: with a good registration you can see that everything fits nicely all the way around, and you also see the contrast differences between the two modalities, a quick, qualitative view of what the two modalities can provide. This tool is available, for example, in 3D Slicer, where you can also select how big the squares should be. Another way of visualizing is color channel mixing. What many people typically do is take modality A as the red channel,
modality B as the green channel, and the blue channel as the average of the two. You can do it like this, but who says you need this particular permutation? You can use any permutation. Here I played with different ones: image A into red and image B into green looks pretty acid-like; then image A into blue and image B into green, and I don't know if that color coding makes much sense. The coding where image A goes into red and image B into blue is very useful in porous-media examples with soil, because with neutrons on the blue and X-rays on the red, the soil turns brownish, which makes sense, and the water turns blue. It's a very nice color combination for soil-and-water or geology examples. In the end we want to do something more quantitative with the data, and the first step is often a segmentation. The typical way to segment is to look at the histogram of one modality. Here I have the histogram of some data with two classes, foreground and background, which overlap heavily; you can see it is very hard to set a threshold that makes sense, and for that reason you get a lot of misclassifications, both void called solid and solid called void, which is not very useful. Just to point it out: material A is blue and material B is red. If we look at a second modality, what had low intensities in modality A now has high intensity in modality B, but there is still the same bad separability between the classes, the same general histogram shape, so it doesn't really help us. Now, what happens if we look at the bivariate histogram is
that the two classes which were hard to distinguish in the single histograms separate very well when you plot them bivariately. On one axis you have modality A and on the other modality B, and when you plot them together you see very nicely that the classes are separated; if you put a threshold between the two you would have very few misclassifications compared to before. That is what we are aiming at. Looking at the example with soil, which is my favorite, you can see some roots as these white blobs. The roots show up very well in the neutron image because they contain water and organic material. In an X-ray image of the same sample you see just a void; you don't see that there is actually a root inside, but you can see a stone, and that stone looks like a hole in the neutron image. It is a really nice example of how complementary the two methods are for soil, and it makes the technique very useful for soils with high clay content: when you know the soil is swelling and moving around, you can use this to capture how the soil moves and how the water moves, and separate the two. Looking at the bivariate histogram of this data, I use a logarithmic scale here because there are so few root pixels; there is a high class imbalance, and it is very hard to find them unless you use logarithmic counts. In principle you would otherwise just see this peak and that peak and nothing else. Here you can see the background, the soil, the container, and the roots; they are very nicely separated in this data, even though it is relatively noisy, so we can get a separation between the different classes present. As for how to do the segmentation, there are simple approaches like just drawing regions around each cluster
and using that as a segmentation map, but there are also more numerical methods. One way is to work with hypothesis testing and statistics: you say that hypothesis one has some distribution, hypothesis two another, and so on, for background, soil, wall, roots, etc. What is important for this bivariate segmentation is, first, that the images from the different modalities are registered, so you can do a pixel-to-pixel comparison, and ideally they should also be artifact-corrected. If we go back to this one, you can see that I did not do a good job of correcting the beam hardening in this image; it is much brighter at the boundary than in the middle, and that is always bad when you try to segment with single thresholds. As for methods, there are many ways to work with multivariate data. From machine learning you have k-means, which assigns each value to the closest centroid; k-nearest neighbors, where you have to do some training first; regression methods; and neural networks. What I am going to show now is Gaussian mixture models, a method based on the statistical distribution of the gray levels. Using our toy example: the probability map is a sum of Gaussian distributions, each with a centroid, a covariance matrix that describes the shape of the Gaussian, and a weight saying how much of it there is. In this example you can probably see directly that one thing is tilted a little in that direction and another is tilted in the other direction. Now, with the help of a Gaussian mixture model, if we fit a single class, it of course finds something that is
diagonal, which doesn't really make sense, as you can see directly. With two classes it actually looks reasonable. If you increase the number of classes further, which you can of course do, you are overfitting, trying to fit in more classes than are actually present in the data, so that doesn't make sense either. You therefore have to do some kind of verification of how sensible different numbers of classes are for your data. Once you have decided to work with two classes, you also get the parameters: the centroid and the covariance matrix describing the shape of each Gaussian. For the decision you then use something called a classification distance, and yes, it's a lot of ugly equations, sorry. In principle, the Euclidean distance is simply the distance between the centroid of a distribution and the value combination you have for a pixel: the pixel has one neutron value and one X-ray value, the centroid has its corresponding neutron and X-ray values, and you compute the distance. Between the classes you might see a distance of, say, 10 to one centroid and 100 to the other; the first one is closer, so that is your class. There are more advanced measures that also involve the covariance matrix of each distribution, and, to make it really horrible, you can even include the local covariance of the point you measure. Graphically, the Euclidean distance is the simple one: you just look at how close you are to the centroids and select the class accordingly. With the Mahalanobis distance you also include the shape of the distribution, and then a case that was not clear before, a point right in the middle between the two, becomes decidable: if you add the covariance matrix you see
that it is within the neighborhood of this one rather than that one, so you make a better choice. The Bhattacharyya distance, where you have two overlapping distributions and see which overlaps the most, is probably too complicated in most cases; I would say the Mahalanobis distance is already a pretty good choice. Then I took my root histogram, fitted distributions around each of the classes, and computed the Euclidean decision space. You can see lines cutting through it: everything in the green area is said to be root, everything in the blue area background, and then we have the soil and the container. Thanks to my beam hardening, unfortunately, some roots are said to be container, because we have in principle bad training data in our dataset. On the outside, let's see if I can zoom in a little, you can see something like a skin effect, and that is a typical problem when you segment multiple classes: with smooth edges, at some point you are between two classes, and to go from one to the other you have to pass through this intermediate region, and for that reason you get this brownish area. This is something to watch out for when you do multi-class segmentation; maybe you have to look into how to handle this case better. Another thing that would be nice is bivariate estimation. If we go back to the Beer-Lambert law and spell out what is in the attenuation coefficient, μ = ρ N_A σ / M, you have the density ρ, the atomic weight M, which is given for the different elements, Avogadro's constant N_A, and the microscopic cross-section σ,
which is the radiation interaction parameter. In principle you should be able to estimate, for example, the density from this. In theory it is possible, but it is not easy in practice; I tried a couple of times, and you need very clean and reliable data, so for now it is more of a thought experiment. Once you have started doing multimodal experiments, the next step, you saw that I had a cylinder with soil and roots, is usually to look at the process, how the water moves through the sample, and then you arrive at multimodal real-time experiments. There we combine bimodal acquisition with time-series acquisition, and this produces a lot of data, I can guarantee you; maybe not as much as a synchrotron, but for neutron imaging purposes you can easily produce a couple of terabytes in a few days. At ICON we have a setup, and similar setups have since been replicated at the ILL in Grenoble and at NIST in the US, with a cone-beam X-ray beamline across the neutron beamline. In one of our experiments we actually spin the sample up to high speed, I didn't listen to Nikolaj's lecture but he probably talked about that, and then acquire images at a given rate, and afterwards you can reconstruct time-series data from that. This is the direction we are going in many porous-media experiments. And that was the end of my lecture: in principle, the idea is that with the help of multiple modalities you are able to get more information, or more reliable information, about your sample. To get this
information, you first need to do a registration, then fuse the data in some way to reach the different levels of abstraction, and finally extract the quantitative information from your experiment. And well, that was it.
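As an appendix to the registration discussion: the full problem needs tools like elastix, but the core idea, minimizing a cost function such as the mean squared error over candidate transformations, can be sketched with a toy translation-only search. This is a deliberately minimal stand-in for the real optimizers, not how elastix works internally:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=5):
    """Brute-force search for the integer (dy, dx) shift that minimizes the
    mean squared error between `fixed` and the shifted `moving` image.
    A toy stand-in for the optimization loop inside real registration tools."""
    best, best_mse = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mse = np.mean((fixed - shifted) ** 2)
            if mse < best_mse:
                best_mse, best = mse, (dy, dx)
    return best, best_mse

# Synthetic example: a bright square, and the same square shifted off-grid.
fixed = np.zeros((32, 32))
fixed[10:20, 10:20] = 1.0
moving = np.roll(np.roll(fixed, -2, axis=0), 3, axis=1)

shift, mse = register_translation(fixed, moving)  # recovers (2, -3)
```

A real multimodal registration would use a similarity metric robust to the complementary contrasts (e.g. mutual information rather than MSE) and search over rotation and scale as well, which is exactly why a good manual starting point matters.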
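Likewise, the bivariate Gaussian-mixture segmentation with a Mahalanobis classification distance can be sketched end to end on synthetic two-modality data. The centroids, covariances and sample counts below are invented for illustration, and scikit-learn's GaussianMixture stands in for whatever fitting procedure you would really use:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic bivariate data: two materials whose single-modality histograms
# overlap, but which separate nicely in the (neutron, X-ray) plane.
n = 4000
cov = [[0.01, 0.006], [0.006, 0.01]]          # tilted Gaussian blobs
mat1 = rng.multivariate_normal([0.35, 0.65], cov, n)
mat2 = rng.multivariate_normal([0.65, 0.35], cov, n)
X = np.vstack([mat1, mat2])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)

def mahalanobis_label(x, gmm):
    """Assign x to the component with the smallest Mahalanobis distance:
    d_k(x)^2 = (x - mu_k)^T Sigma_k^{-1} (x - mu_k)."""
    d2 = [(x - mu) @ np.linalg.inv(c) @ (x - mu)
          for mu, c in zip(gmm.means_, gmm.covariances_)]
    return int(np.argmin(d2))

# Pixels near the two true centroids should get distinct labels.
lab1 = mahalanobis_label(np.array([0.35, 0.65]), gmm)
lab2 = mahalanobis_label(np.array([0.65, 0.35]), gmm)
```

On real data you would fit one component per expected class (background, soil, container, roots), verify that the chosen number of classes is justified, and then watch out for the mixed-pixel "skin effect" at smooth edges discussed above.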