You have a treat in store. Markus Axer is going to talk to us today. Markus did his PhD in physics, and then he did a postdoc in Geneva at CERN. Then he spent the last 10 years at the Jülich Research Centre working with Karl Zilles and Katrin Amunts, and he now leads the fiber mapping group in Jülich. And he's going to talk about a wonderful technology called polarized light imaging, 3D polarized light imaging. The structural connectome goes microscopic. Markus.

Thank you very much, Alan, for the nice introduction and for inviting me. Yeah. Let's go to neuroanatomy. I would like to introduce you to a technology which we call 3D-PLI, or polarized light imaging, and I will tell you briefly what we can do with this technology and what we plan to do with it. Our general mission is as follows: we would like to reveal the fiber architecture in postmortem brains based on whole-brain processing, whole-brain analysis, and reconstruction at microscopic resolutions, which in the end means that we have to deal with terabytes to petabytes of data.

Let me start with a wonderful example given by Alan and his group and by Katrin and her group a couple of years ago, three years ago. It's the BigBrain, and it's the very first brain model which has been reconstructed completely from traditional histology, from traditional cytoarchitecture, based on seven and a half thousand single sections measured at a few microns resolution. They were able to come up with a complete 3D volume at a level of something like 20 microns. And this is of course something we would also like to achieve with respect to the fiber architecture. However, you will see during the talk that we need a few other ingredients than those needed for this type of cytoarchitecture. In general, of course, the key elements are quite simple, right? You need to know the right technique to section the brain.
You have to deal with the imaging technology, whatever it looks like; for polarized light imaging we need a very specific type of image analysis, and as you will see, this will be a very visual type of presentation. Visualization becomes more and more important, especially for the high-resolution data sets, but it's also getting more and more challenging to visualize all the details we are able to reveal. The next step after single-section analysis is 3D reconstruction, of course, and afterwards we can start the 3D type of fiber orientation analysis. So I will go through this workflow in the next few minutes.

Just to let you know what we do with the brain: we cut it and we destroy it. We have to do cryo-sectioning, which means we have to freeze the whole brain down to minus 80 degrees. It is put into a freezer and cut with a very large microtome. You see, everything is done by hand, and for each section you introduce very individual deformations, cuts, whatever you can imagine. And this is really the main problem in reconstructing a whole human brain out of these deformed sections in the end. To give you some numbers: we section at 60 micron thickness. The sections are not stained at all; the technology utilizes the so-called birefringence of the tissue, and for this we don't need any staining. And we end up with something like two and a half thousand to three thousand sections per human brain, which we have to measure at the one micron level.

What we also do during the sectioning process is take so-called blockface images. You see one of these images here on the left-hand side. You see we have a barcode in the background; this barcode allows us to very precisely reconstruct later on all the single blockface images we have for each section. The result is shown on the right side. The pixel or voxel sizes are in the order of 60 by 60 by 60 microns.
And we use this type of dataset as a reference to later on reconstruct the deformed sections measured by polarized light imaging.

So what is polarized light imaging and how did it all start? We found one publication by Korbinian Brodmann from the early 1900s. He wrote in one of his publications (it was in German, but I translated it for you, of course): "So I stumbled upon the idea of formalin-fixed tissue much later and realized with satisfaction that formalin fixation does not impair the birefringence of myelinated nerve fibers. Therefore we can study nerve fibers hardened and conserved in formalin with polarization microscopy." And this is exactly what we do in 3D polarized light imaging. Why we call it 3D will become clearer in the course of my talk.

So we have set up in Jülich two different types of microscopes. One is called the large-area polarimeter, and this guy is a one-shot imager, which means we can image a large human brain section with one imaging shot. This means we have a limited resolution of 64 micron pixel size, and each file is quite small, just 30 megabytes. On the other hand, we developed, together with a small company close to Jülich, a real microscope, which allows us to target 1.3 micron sized pixels. This means each large human brain section will be composed of about 100,000 by 100,000 image pixels we have to deal with, and this also means we have 40 gigabytes of data per image. Times 2,500 sections, we collect a lot of data already in the image acquisition phase.

The large-area polarimeter looks like this: we have a tilting specimen stage, we put the unstained section into the setup, and we rotate different optical filters around this setup. If you look through this section, we start to see strong changes of intensity while rotating all these filters. And here's a movie of how it looks when we do the measurement. On the left side, you see the simple setup. This is composed of two polarizers.
They have a very specific orientation with respect to each other; we have a quarter-wave retarder, which is also rotatable; and we have the tissue, which is illuminated with an LED panel, and we capture all these states with a CCD camera. On the right side, especially in the white matter regions, you see very, very strong effects, due to the fact that the myelin sheath of the axons is heavily birefringent. I have put markers on three different points within the tissue, and you see the measurement underneath. In general, we measure sinusoidal curves, but these curves differ in amplitude and/or phase. And this is exactly what we can use to infer different orientations of fiber tracts, of single fibers, depending on the resolution.

To give you an example: when we measure one pixel, we see this sinusoidal curve down here. We do a fitting in the analysis, and the mean value of this sine curve is called the transmittance, which is shown on the top left. The transmittance is quite similar to something like a myelin-stained image; it's just an image of the light extinction in the setup. Then we have the phase of this curve, which is called the direction; it looks like the image in the middle. And the amplitude defines the strength of the birefringence of the tissue; it is shown on the top right and is called the retardation. Based on direction and retardation, we try to infer the three-dimensional local orientation of the fiber structures within each single pixel. And you see I've already indicated that we assign specific fiber orientations to specific colors in our images. That's the reason why the next images you will see look so pretty and impressive.

So this is an example of a result showing the 3D orientation of the fibers. On the left-hand side, you see the visual system in a vervet monkey brain. On the top right, you see a human brain, a complete human brain.
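The per-pixel fit described above can be sketched in a few lines. This is a minimal illustration, not the speaker's actual pipeline: it assumes the common sinusoidal signal model I(ρ) = I₀/2 · (1 + r · sin(2ρ − 2φ)) sampled at equidistant filter rotation angles, and recovers transmittance (mean), direction (phase) and retardation (relative amplitude) from the second discrete Fourier harmonic.

```python
import numpy as np

def fit_pli_pixel(intensities):
    """Fit I(rho) = I0/2 * (1 + r * sin(2*rho - 2*phi)) sampled at N
    equidistant rotation angles rho_k = k*pi/N.
    Returns (transmittance I0, direction phi in [0, pi), retardation r)."""
    I = np.asarray(intensities, dtype=float)
    n = len(I)
    rho = np.arange(n) * np.pi / n                  # filter rotation angles
    a0 = I.mean()                                    # mean value = I0 / 2
    a2 = 2.0 / n * np.sum(I * np.cos(2.0 * rho))     # second-harmonic cosine coeff.
    b2 = 2.0 / n * np.sum(I * np.sin(2.0 * rho))     # second-harmonic sine coeff.
    transmittance = 2.0 * a0
    direction = 0.5 * np.arctan2(-a2, b2) % np.pi    # in-plane fiber angle phi
    retardation = np.sqrt(a2**2 + b2**2) / a0        # amplitude relative to mean
    return transmittance, direction, retardation
```

With 18 angles (10-degree steps, a typical choice) the discrete Fourier sums are exact for this model, so a noise-free synthetic signal is recovered perfectly.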
The left one has been taken with the microscope at 1.3 microns, and you can immediately see and contrast single fibers inside the cortex, in the different layers of the cortex. And with the large-area polarimeter at 64 microns, we can look at the whole brain to see the long-distance connections between the different fibers. In the end, and this is very important and different from most other microscopic technologies: with PLI, we can generate contrast without staining, and we can assign specific orientations to this contrast. So you have more information within each thin section than you usually have with any kind of staining.

This type of analysis, fitting the different sinusoidal curves, is of course very compute-intensive if you consider 2,000 to 2,500 sections and millions, billions of pixels. So at some point we decided to utilize high-performance computing, to utilize supercomputing environments. Fortunately, we are located directly next to the Jülich Supercomputing Centre. So it was about four or five years ago when we started to think about how to organize analysis workflows on the supercomputer.

Everything currently starts at the lab. Of course, we collect a lot of information, metadata, just from the lab settings, and we collect a lot of images. These are currently all organized and managed in HDF5 file containers. This really helps us to keep the metadata and the imaging data together, and it helps us to access the data very quickly, because parallel I/O on HDF5 files is quite well established. Based on this lab data, which we can access from the supercomputing environment (in our case the JURECA multipurpose supercomputer in Jülich), we acquire the data and run the analysis section by section. This analysis includes the fitting of the data and the cleaning of the data; for example, we apply independent component analysis to get rid of noise and dirt in the measurements.
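The point about keeping imaging data and lab metadata together in one HDF5 container can be illustrated with `h5py`. The layout below (dataset path, attribute names, array sizes) is entirely hypothetical, a sketch of the idea rather than the Jülich group's actual container format; chunking is what makes tiled, parallel-friendly access efficient.

```python
import os
import tempfile

import numpy as np
import h5py  # assumed available; not part of the standard library

# Hypothetical layout: raw PLI frames for one section, with the lab
# metadata stored alongside as HDF5 attributes on the same dataset.
path = os.path.join(tempfile.mkdtemp(), "section_0042.h5")

with h5py.File(path, "w") as f:
    dset = f.create_dataset(
        "pli/raw",
        data=np.zeros((512, 512, 18), dtype=np.uint16),  # 18 filter angles
        chunks=(128, 128, 18),   # chunking enables efficient tiled access
        compression="gzip",
    )
    dset.attrs["section_thickness_um"] = 60
    dset.attrs["pixel_size_um"] = 1.3
    dset.attrs["rotation_angles_deg"] = np.arange(0, 180, 10)

with h5py.File(path, "r") as f:
    tile = f["pli/raw"][0:128, 0:128, :]  # reads only the needed chunks
    pixel_size = float(f["pli/raw"].attrs["pixel_size_um"])
```

Because metadata travels as attributes inside the same file, a compute node on the cluster never has to consult a separate database to interpret a section.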
We have to do some automated kind of segmentation, because we are not interested in having background in our data. And at the end, when we do the highest-resolution measurements, we measure tile by tile, with overlapping tiles, and we have to do a kind of stitching to end up with a whole human brain section without any holes. In the end we would like to come up with this colorful fiber orientation map.

Just to give you one example from the workflow, the segmentation algorithm: we have implemented a seeded region growing algorithm, which you see on the top left there. That's one tile of an image of a PLI measurement. The green part above is the background, and the tissue is highlighted in red. The algorithm tries to work out which image pixel belongs to tissue and which to background. In this context we tested different configurations, using just CPU-based supercomputing or using CPUs and GPUs together, and we saw that we can easily get a factor of 20 in performance just by using the GPUs for the matrix operations of this region growing algorithm.

When we do all this, it's not only interesting to look at the human brain. I mean, this is always the main goal, but usually you start with smaller brains, as everybody does. For us it's of course the rat or the mouse brain, shown on the left; the monkey brain, which is the next level of size and complexity for us; and of course the human brain. And all these brains exhibit this kind of birefringence, which is pretty nice.

So let me show you how a human brain section looks at the highest possible resolution. It's a visualization using MicroDraw, an open source project from the Institut Pasteur in Paris. And you see the incredible complexity of a single human brain section. The different colors again mean different orientations.
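The seeded region growing idea mentioned here can be sketched compactly. This is a simplified CPU toy, assuming a plain grayscale tile and a similarity-to-region-mean criterion, not the GPU implementation from the talk.

```python
from collections import deque

import numpy as np

def region_grow(image, seed, tol):
    """Seeded region growing on a 2D tile: starting from `seed`, absorb
    4-connected neighbors whose intensity lies within `tol` of the running
    mean of the region grown so far. Returns a boolean tissue mask."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1   # running sum and size of the region
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask
```

The inner comparison against the region mean is embarrassingly parallel over the frontier pixels, which is why offloading it to GPUs pays off at whole-section scale.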
When I switch to the gray-valued images, these are all the different modalities we can also extract. And this is what you have to deal with if you want to reconstruct the connectome at the level of a few microns, or even better. This is an example of the hippocampus, if you want to address this, and you see all the individual information you can get with polarized light imaging. You could even see in the previous image that there were some dark dots in the measurement. These can be seen in the transmittance maps, the light extinction maps, and they show cell bodies. So in principle we can infer both cell bodies and nerve fibers with polarized light imaging, without any kind of staining.

All this already takes a lot of computation time, but the next step is even worse: we want to rebuild the brain again. That means we have to take all these sections and find corrections, non-linear corrections, for the different sections. There are many approaches on the market. One has been very nicely demonstrated on the BigBrain and has even been improved since, and I just want to show you yet another one. But this is not an approach you can simply apply to your measured images; you already need quite good agreement between adjacent sections before you can apply this technique.

This technique is a global approach, and this global approach includes the blockface images you saw at the beginning of the presentation. We have the PLI measurements, and these PLI measurements are optimized to their corresponding blockface images, but they are also optimized within their local neighborhood. And we do this procedure for all sections at the same time on the complete supercomputer, so that we have a global, simultaneous optimization process. The deformation model we use is a B-spline model, and you can see in this movie what you can in principle do on single sections.
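The B-spline deformation model can be illustrated with a toy free-form warp: displacements are defined only on a coarse control grid and interpolated to a dense field with cubic splines before resampling the image. This is a stand-in sketch (using SciPy's `zoom` for the spline upsampling and `map_coordinates` for resampling), not the Markov-random-field optimizer from the talk, which additionally searches for the control-point displacements themselves.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def bspline_warp(image, ctrl_dy, ctrl_dx):
    """Warp a 2D `image` by a free-form deformation: per-axis displacements
    given on a coarse control grid (ctrl_dy, ctrl_dx) are upsampled to a
    dense displacement field with cubic splines, then the image is
    resampled at the displaced coordinates (a pull-back warp)."""
    h, w = image.shape
    # Cubic-spline interpolation of the control-grid displacements
    dy = zoom(ctrl_dy, (h / ctrl_dy.shape[0], w / ctrl_dy.shape[1]), order=3)
    dx = zoom(ctrl_dx, (h / ctrl_dx.shape[0], w / ctrl_dx.shape[1]), order=3)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Sample the image at the displaced positions
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode="nearest")
```

A registration scheme of this kind has few free parameters per section (only the control-grid displacements), which is what makes a global, all-sections-at-once optimization tractable at all.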
And it's immediately clear we have to be careful about where to move which pixel, because we have to recover the anatomy. It's not that we just want a nice brain volume; it should be as realistic as possible. That's always an issue, and it's always important to have neuroanatomists who are able to tell us whether the reconstruction is more or less realistic, as far as that can be said. But you see what you can really do. The optimization process tries to optimize this single section to the corresponding blockface image. The optimizer has been realized here by means of a Markov random field approach, because the optimization process has in principle so many free parameters that it is not computationally feasible with a simple one-to-one optimization process.

Here are some speed-up curves. Of course we have to go to the supercomputer, and we tried different types of GPUs to solve the problem. With today's GPUs we are able to reconstruct 180 sections within 18 minutes for just one optimization run, which is fantastic, because usually you have to do many iterations of the reconstruction to come up with a proper result.

So what do we gain after all this processing and reconstruction? Here's an example of the rat brain: a coronal section, the fiber orientation map. As I told you, visualization becomes more and more important for us. We realized that there is so much information inside a single section already that it makes sense to think about reducing this information for the observer. And this kind of, we call it spotlight imaging, is quite interesting: you can go to the data and manipulate the lookup table, the color code, by hand, and so you can pick out the orientations you would like to see. It's a kind of virtual tractography.

So this is the first 3D volume reconstructed at the level of 60 micron voxel size, and this is a virtual sectioning through the reconstruction. And you can see we can already quite nicely reconstruct.
For example, the corpus callosum comes out at a quite precise level. However, you still see misalignments. Dealing with this post-mortem reconstruction means continuously improving the accuracy of the setup. But at least it's the first time that we could make a kind of proof of principle to show that it is feasible to reconstruct the fibers.

The next step is that, as usual, you need a reference space to provide the data and to share the data, and for the rodent brain it's clearly the Waxholm space. This is a project in the framework of the Human Brain Project that we are doing together with Oslo University, with Jan Bjaalie's group. We managed to bring this PLI data set into the Waxholm space and, in the next step, to apply some anatomical segmentations to the data sets. It's shown like this. This is also a way of reducing the immense amount of information you get. And this is the reconstructed rat corpus callosum based on the segmentation in the Waxholm space. If you're interested in this type of work, you should meet my colleague Nicole Schubert tomorrow at the poster.

Next size: the vervet monkey brain. It looks like this, right? You can identify many, many anatomical structures. And a couple of weeks ago we were able to reconstruct, for the first time, 60 sections at 1.3 micron resolution. You see it here; it's a cut-out of the basal ganglia region. This is a reconstruction just of the amplitude of the birefringent signal, so there's no orientation in it, and there's no tractography done. And you can already see that in some regions, where the fibers are not too dense, you can identify single fibers or small fiber tracts.

So clearly the last step is the human brain. For this we did the first measurements on 180 coronal sections and reconstructed them. What you can see in the back in the next movie is the retardation image, which means the amplitude image of our PLI measurement.
And here you can see the extracted orientation vectors, also color coded. You see the next 10 consecutive orientation vectors in depth, so that you can see how the orientation changes in depth. We start with, let's say, MRI resolution at 1 millimeter. Then we go to one region and increase the resolution, because we do have the resolution to do that. And now we move into depth, and you can nicely see how the extracted orientations agree with the anatomy. But keep in mind these orientation vectors are in 3D; this is just a projection within the image. And now you see the combination of the two measurements, from the high-resolution microscope and the lower-resolution large-area polarimeter. We can go into the same section and do a much higher-resolution measurement, and now you see crossing fibers within the sagittal stratum. Usually it's a field of fibers coming out of the coronal section, but here you can still see crossing fibers within this region. So in principle, with polarized light imaging we can really address this idea of a "Google brain" in the future.

So high resolution is nice, but how do we integrate these data into well-known technologies such as diffusion MRI, which is much, much coarser, and which needs, in some cases, especially for tractography, some guidance? We thought about this problem too, and quite recently we came up with an idea: we can look into our high-resolution data. This is a cut-out here; you see a patch of fibers running in different directions, crossing each other. What we can do is compute statistics on single regions of interest. Statistics means we create orientation vector histograms on a sphere from the different orientations measured within, for example, this region here. Afterwards we fit this distribution with spherical harmonics, and so we end up with a very diffusion-MRI-like data set.
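The idea of summarizing a region's orientation vectors into an MRI-like quantity can be sketched with a simpler stand-in for the spherical-harmonics fit mentioned in the talk: the second-order orientation tensor, the rank-2 analogue of that expansion and the same object a diffusion tensor approximates. The function below is an illustrative assumption, not the group's actual analysis.

```python
import numpy as np

def orientation_tensor(vectors):
    """Aggregate unit fiber-orientation vectors from a region of interest
    into the second-order orientation tensor T = mean(v v^T).
    The principal eigenvector of T is the dominant fiber direction, and
    the eigenvalue spread distinguishes coherent from crossing fibers,
    giving a diffusion-tensor-like summary of the microscopic data."""
    v = np.asarray(vectors, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)   # normalize each vector
    T = np.einsum("ni,nj->ij", v, v) / len(v)          # mean outer product
    evals, evecs = np.linalg.eigh(T)                   # ascending eigenvalues
    return T, evals[::-1], evecs[:, ::-1]              # return them descending
```

Note that v and -v contribute identically to T, which is exactly right for fiber orientations, where sign is meaningless; a single dominant eigenvalue signals a coherent bundle, two comparable eigenvalues signal in-plane crossing fibers.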
And this will help us in the end to rescale, to find similar features in the different modalities. So, coming to the end: it looks as if we can fill the gap between the MRI world and the really ultra-high-resolution world of small samples with our PLI technology. And in the far future, of course, we will add more modalities. Receptor architecture is quite interesting; we would like to combine cytoarchitecture and fiber architecture with receptor architecture into, let's say, multi-modal, multi-scale brain models of different species. And with that I would like to thank you very much. Thank you to all my colleagues, and my team in particular. Thank you very much for listening.

That was fascinating and so entertaining, Markus. Questions for Markus?

So is that giving you a way of seeing how much you lose when you're doing diffusion imaging? I'm trying to think of ways of using that as validation, or not, of diffusion imaging, given the recent papers with the tracers in the macaque.

I think we get an idea of what we lose, if you would like to call it that. But it's even more interesting the other way around: when you measure some particular distribution of fibers in diffusion MRI, this might help us to understand why this is the case in the very specific region you are looking at. So it's really like a guidance, and maybe a way of understanding a bit more of the deep microstructure of the fibers.

So you look as though you need a major increase in computing power. Is this another major use of exascale computing, I'm guessing? Because you're simply not going to be able to do a whole human brain anytime soon, given current computing technology.

For the reconstruction it would already be possible to use the current setups; currently it's more a limitation in measurement time. But of course, in the end, if you want to do tractography on the high-resolution data.
That's the next step, and for this you certainly need the next, exascale generation of supercomputers.

Markus, when we did the BigBrain we had tremendous problems with 2D slice-to-slice registration, and the question has often come up whether we could do vessel tracking from slice to slice. The issue is always that those very, very high-resolution jaggies are very hard to get rid of unless you have massive local nonlinear warping. Do you have the same issues in PLI with fiber tracking from slice to slice?

Yes, if you want to do fiber tractography at the microscopic level, we are still not precise enough in this registration.

Maybe not at one micron?

Yes, I think that is possible. At this level, in specific regions or volumes of interest, it might be possible soon; I'm convinced of this. And you can even use vessels from PLI measurements; you see them very nicely in these images, so you can use them also as features.

Thank you very much, Markus.