Welcome everybody. Today we will talk about registration and stitching of large terabyte datasets. We will have with us David Hörl, Stephan Preibisch, Sébastien Tosi, and John Bogovic. And to help the panelists, we will have Julien Colombelli from IRB Barcelona, myself, Ofra from the Weizmann Institute, Anna from Uppsala University, and David Barry from the Crick Institute. And if the first speaker, David Hörl, is ready, I will ask him to share the screen. Thank you.

Okay, good. Then thank you for the kind introduction, Rocco. My name is David Hörl. I work at the University of Munich, LMU, and together with Stephan Preibisch, who's also among the panelists and a group leader at Janelia Research Campus and the MDC Berlin, I'm going to give a little introduction to the general concepts behind big data stitching and big data registration, and then focus on the software that we wrote together to address the various problems that arise with these tasks, which is BigStitcher. But I want to spend the first few minutes introducing the basics. In various disciplines of biology, for example developmental biology or neurobiology, scientists need solutions to image very large samples, such as tissues or entire organs or organisms, with very high resolution. And over the last few decades, several hardware solutions for these problems have been developed. The gold standard for a very long time has been confocal microscopy, which provides really good optical sectioning, so you can acquire three-dimensional image stacks of large samples. Confocal microscopy has a pretty high resolution, and it is somewhat quick, so you can also image live samples. Now, from this kind of middle point, you can go in various directions. If you want the best resolution possible, you might want to go to something like electron microscopy, where you can take a sample like the C. elegans here, slice it into ultra-thin sections, then acquire hundreds and hundreds of images on an electron microscope and combine them into a three-dimensional volume image of the whole worm at nanometer resolution. Or, going in the other direction, in the last 15 years or so light sheet microscopy has risen to prominence. Light sheet microscopy differs from conventional light microscopy in that you don't use the same objective that you use to look at the sample to illuminate it; rather, you have a secondary objective that shines a single plane of light into your sample. This gives you very good optical sectioning, like in confocal microscopy, but you can also image very quickly and with very low light doses. And this allows developmental biologists, for example, to image the development of, I think this is some crustacean larva, over days and still have very high resolution. I noticed that my video is a bit choppy, so if it's choppy for you as well, I apologize. But you can't only use light sheet microscopy to image living samples; you can also use it to image very large fixed samples by combining it with some clever sample preparation techniques like sample clearing. These are chemical tricks with which you try to make your sample transparent by equalizing the refractive index within the sample to its environment. That way it becomes transparent, and you can see through it and also image deep within it.
And likewise, a couple of years ago the technique of expansion microscopy was developed, where you take a tiny sample embedded in a hydrogel and then physically enlarge it, so that you can actually image it at super-resolution on a conventional microscope. And again, the samples that you create in that way can readily be imaged with light sheet microscopy. All the examples that I mentioned have one thing in common: they consist not of a single image but of many, many images. And for us to perform any kind of downstream analysis on them, we first need to take all of the images and essentially map them to a common coordinate system. This is the task that we want to solve with image registration, or image alignment, or stitching. And this is no longer a hardware task, but typically a software task. There exist many solutions for image alignment, but in principle they all follow a pretty common pipeline and workflow. It usually works like this: you start off by looking at all of your images in pairs. You pick two images and find out how those two images are transformed relative to each other. If you do this for all overlapping images in your dataset, you end up with a network of pairwise transformations, and then you usually perform a step of global optimization to reach a consensus if there are ambiguities in the system. You might be done at this point, but you can also repeat this process several times, for example to refine your alignment with more and more complex registration models: you might start off with just a translation of the images and then do a refinement step in which you also allow rotation or scaling, or even fully non-rigid transformations. And once you're happy with the transformations that you've calculated, the last task is typically to fuse the images, which is to combine them into one volume and save it to disk; but also just displaying the dataset can be a challenge. Okay, for the first part, finding out how two images are shifted relative to each other, there exists a variety of strategies, and I just want to mention the main ideas behind them. There are two ways of finding out how images are transformed relative to each other. One of them is intensity-based, where you look at the overlapping part of the images and measure how similar the intensities of all the pixels in there are. There are a variety of ways to do that efficiently, either by working in Fourier space, doing it iteratively with an optimization scheme, or using a multi-resolution representation of the data. But you can also do image alignment in another way, which is to detect interest points, key points, in both images and then match those points to each other and map the coordinates of those points so that they lie on top of each other. Again, there exist a variety of ways to do that. Some of them are automatic, where you automatically detect interest points, for example bright or dark spots in the image, and then construct a descriptor with which you can find them again in the other image; examples of this are SIFT or ORB for 2D images, or also the three-dimensional geometric local descriptors that we use in our multi-view reconstruction for 3D images. But you can also pick key points manually and then simply map those points to each other, and I think John will mention that with BigWarp, where you can do manual alignment of big data.
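To make the global optimization step concrete: below is a minimal Python sketch of the least-squares consensus idea for translations. It is not BigStitcher's actual solver (which also iteratively drops disagreeing links); the tile count, links, and measured shifts are hypothetical.

```python
import numpy as np

# Minimal sketch of global optimization for translation-only registration:
# a least-squares consensus over redundant pairwise shift measurements.

# Hypothetical measurements: (i, j, measured 2D shift from tile i to tile j)
pairwise = [
    (0, 1, np.array([498.0, 2.0])),   # tile 1 sits ~500 px right of tile 0
    (1, 2, np.array([501.0, -1.0])),
    (0, 2, np.array([1000.0, 3.0])),  # redundant link: lets errors average out
]
n_tiles = 3

# Unknowns: the x/y position of every tile. Each link contributes the
# equation pos[j] - pos[i] = shift; tile 0 is pinned to the origin.
rows, rhs = [], []
for i, j, shift in pairwise:
    for dim in range(2):
        r = np.zeros(2 * n_tiles)
        r[2 * j + dim] = 1.0
        r[2 * i + dim] = -1.0
        rows.append(r)
        rhs.append(shift[dim])
for dim in range(2):                  # pin tile 0 at (0, 0)
    r = np.zeros(2 * n_tiles)
    r[dim] = 1.0
    rows.append(r)
    rhs.append(0.0)

positions, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(positions.reshape(n_tiles, 2))  # consensus tile positions
```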
Okay, so these are the basic principles of how to align images, but especially with the data that we are facing, there are several challenges, and one obvious challenge is the data size. I just want to mention two datasets that we acquired during our work on BigStitcher. The first one is a cleared mouse embryo, and basically every colored square in this image corresponds to a volume of 2000 by 2000 by 1000 pixels; altogether it was around three terabytes. Or this example down here, where we have a Drosophila larval central nervous system that was expanded and then imaged on a light sheet microscope from various angles, and it totals around 700 gigabytes. This is more than you can fit into the RAM of even a powerful workstation, so you have to come up with strategies where you can do the work piece by piece, for example. And there are other challenges, because you might have different kinds of aberrations that you want to correct. The simplest case: if you just want to stitch a few tiles, the metadata you get from your microscope stage might not be accurate enough to simply place them next to each other, and you have to correct for that. But you might also have more complex phenomena, such as chromatic or spherical aberrations, that can't be corrected with just a shift, and where you have to use affine transformations or even more complicated ones. In light sheet microscopy, you often also have the possibility to rotate your sample and look at it from another angle, and then again it is an image alignment problem to register those multiple views of the dataset. And finally, in large datasets you might even have aberrations that depend on the sample itself. For example, if you illuminate from different sides or if you rotate the sample, some deformations might actually depend on the refractive index at some point in the sample, so you have to do a non-rigid registration, where you move one bit of the image differently than another bit. Furthermore, we want the tools that we use to register images to be as robust as possible: it's a time-consuming process, and we don't want it to fail because of one little error that happens along the way. You can do that either by making the algorithms themselves error-tolerant, or by giving the user the ability to check the progress during the work and manually intervene if something goes wrong. To meet all of these challenges, Stephan and I and colleagues wrote BigStitcher. BigStitcher is a Fiji plugin for reconstructing and registering image datasets, with a specific focus on cleared and expanded light sheet data, but it is also a general-purpose image registration tool. The first task that BigStitcher tries to solve, I didn't even mention it before: your datasets typically consist of many images; they might be in many files; they might be in a variety of formats. So actually just assembling all of the images into one dataset that you can work with can be a challenge on its own. With BigStitcher, we wrote a kind of automatic loader that uses Bio-Formats not only to read the image data but also to try to parse metadata, and place the images according to that metadata if possible. We also added the possibility to easily move images to a regular grid, for example if you don't have metadata available. And many of the subsequent steps can be done on downsampled versions of the images.
So we offer the possibility to re-save the data in a multi-resolution pyramid, where you don't just save the full-resolution image but also downsampled versions of it, in a format like HDF5 or N5. That way you can do a lot of the later work on a smaller version, which reduces compute time by a lot. Okay, once we are able to load our image data, the first task is to find out how pairs of images are transformed relative to each other. In the simplest case, where we only look at translations, we use a method called phase correlation, where you take two images and do a few manipulations of their Fourier transforms. In the end you get a so-called phase correlation map, an image that is essentially black except for one point, and the location of this point corresponds exactly to the shift between the two images. We also realized that you can detect this point with subpixel accuracy, which is really useful, because again we don't have to use the full-resolution image; rather, we can use a strongly downsampled version, for example an eight-fold downsampled version, and still get very low error in our registration. At the same time, if we downsample eight-fold in three dimensions, we reduce the data size by a factor of 512, and also reduce the compute time. What I also mentioned is that it's useful if the user can see the results of their calculations right away. So in BigStitcher we make extensive use of BigDataViewer, which was presented last week, I think, for example to preview the shifts of one image relative to all its neighbors. You can click through them, and if you see an error, you can also manually say: ignore this link. Once we have done that, we have a quite involved procedure for finding a global consensus, a global optimum, of all the shifts that we calculated, and our procedure is actually able to automatically remove links that disagree too much with their neighbors. We also have methods in place with which you can take patches of images that only have background between them and still place them correctly, according to some pre-registration that you have, like metadata. By combining all of this, you can take a dataset like this mouse brain slice, which would come out of the microscope with some obvious inaccuracies in the registration, and using BigStitcher you can easily align all of the tiles into one seamless volume. But this might sometimes not be enough. You might have classical optical aberrations, like chromatic aberrations, where two wavelengths have a slightly different transformation. So you might want to refine your initial stitching result, and in BigStitcher we do this with a method called iterative closest point (ICP). We detect points automatically in all channels, and if you have a tissue with some amount of autofluorescence, those bright and dark points are actually the same in both color channels. What you can then do is move the points on top of each other in both channels, allowing for rotations and scalings, and therefore have the channels snap into place even nicer than with just a translation. This is what it would look like on this dataset. Similarly, you might have things like spherical aberrations, where the right side of one image and the left side of the neighboring image are not transformed in the same way, and again, using this method, you can correct for such phenomena as well.
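Here is a minimal numpy sketch of phase correlation for a pure translation; BigStitcher's implementation additionally localizes the peak with subpixel accuracy and verifies several peak candidates against the actual image content, which is omitted here.

```python
import numpy as np

# Minimal sketch of phase correlation: the normalized cross-power spectrum
# of two translated images has a single sharp peak at the shift.

def phase_correlation_shift(a, b):
    """Estimate the integer shift s such that a == np.roll(b, s)."""
    fa, fb = np.fft.fftn(a), np.fft.fftn(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12           # keep only the phase
    corr = np.fft.ifftn(cross).real          # sharp peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, (7, -12), axis=(0, 1))
print(phase_correlation_shift(shifted, img))  # -> [7, -12]
```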
And if we go one step further, you might even have complicated transformations due to changes in the refractive index of the sample: even though you cleared it, it will not be 100% equal to its environment. Those errors might still be there even if you do an affine registration. So in BigStitcher we actually have two ways of further refining the registration. One possibility is that you can virtually split your images into smaller images and then just align those. Or we also allow for fully non-rigid registration using piecewise affine transformations, so basically every point in the image gets its own transformation, its own deformation. With these tools you can even correct for the errors that remain after all the previous steps. I want to mention at this point that BigStitcher shares its DNA with Stephan's multi-view reconstruction software, which you can use to align multiple views in a light sheet microscope where you can rotate the sample. But now, with everything that BigStitcher can do, you can image your sample from one side, acquire lots and lots of tiles, then rotate it and image it from the other side as well, and align everything using one software package, BigStitcher. For example, here in this mouse brain you can see we can do the stitching, but then also rotate the sample 180 degrees, essentially doubling the volume that we can image. And with the more and more complex transformations that we support, we could actually show on a test dataset that this truly improves the quality of the alignment, especially if you have multi-view or dual-illumination datasets. I think it's also nice to mention how long this process takes. This test dataset was around 170 gigabytes in size, and actually calculating all of the shifts between the images and the transformations takes a few minutes, so it's quite manageable on a standard workstation. The only really time-consuming step is saving the results to disk at the end, which can take several hours. But in many cases we might not even have to do that, because we have BigDataViewer, and BigDataViewer can display the transformed images in real time. So if you just want to look at your dataset and do some visual analysis of it, you can do that without even having to save the result to disk. As you can see here, you can seamlessly zoom through your sample, even look at it from the side, and this typically happens in real time using BigDataViewer. And more and more there are tools available to actually perform downstream analysis using the BigDataViewer framework; I think next week Jean-Yves Tinevez will present MaMuT and Mastodon, which are tools to perform tracking, visualization, and annotation in BigDataViewer. Those could be seamlessly integrated with the datasets that we reconstruct earlier. But if you want to save your results to disk, we also have a few optimizations in BigStitcher for that.
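The non-rigid refinement described above assigns each image region its own local transformation. A toy sketch of that idea, assuming per-block translations only (BigStitcher's non-rigid mode uses piecewise-affine transforms) smoothly interpolated into a dense deformation; the block shifts here are random stand-ins:

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

# Toy version of "every region gets its own transformation": local shifts
# estimated on a coarse block grid are interpolated into a smooth dense
# deformation and applied to the image.

rng = np.random.default_rng(1)
image = rng.random((256, 256))

# Hypothetical per-block shifts on a 4x4 grid of control points, e.g. the
# output of running phase correlation independently on each block.
block_dy = rng.normal(0.0, 2.0, (4, 4))
block_dx = rng.normal(0.0, 2.0, (4, 4))

# Upsample the coarse shift grids to a smooth per-pixel displacement field.
dense_dy = zoom(block_dy, 256 / 4, order=3)
dense_dx = zoom(block_dx, 256 / 4, order=3)

# Warp: sample the input at (y + dy, x + dx) for every output pixel.
yy, xx = np.mgrid[0:256, 0:256].astype(float)
warped = map_coordinates(image, [yy + dense_dy, xx + dense_dx],
                         order=1, mode="nearest")
print(warped.shape)
```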
Coming back to saving: on the one hand, we offer an optional brightness adjustment to account for things like bleaching in subsequent tiles and so forth, and we can also do deconvolution, like the multi-view reconstruction, but in BigStitcher we do it virtually, which means calculating the deconvolution on the fly, so you can do it even on a terabyte-sized dataset that would never fit into the main memory of your computer. I want to end the presentation part with a few results. This is the brain slice that I showed you earlier, and I just want to stop here and draw your attention to this zoomed cut-out: it's a histone stain, and you can actually see the heterochromatin accumulations in those mouse nuclei. So you have sub-nuclear resolution in a centimeter-sized dataset. Once again, you can image entire mouse brains and align the tiles using BigStitcher, and have cellular resolution in this huge dataset. And finally, I want to show this Drosophila central nervous system again, which was expanded eight-fold and then imaged on a multi-view light sheet microscope from four different angles in a tiled fashion. With BigStitcher we could align the tiles from each angle, and then all the angles to each other, and I think that's a quite impressive result. With this, I'm at the end of the presentation. If there are any pressing questions, I guess we could answer them now; otherwise I would give a short demo of BigStitcher.

I'm working on answering them, David; I think they can also be answered in the question and answer session.

Okay, then I'll continue with a small demo. The dataset that I'll be working on is this confocal dataset consisting of six tiles, again of a Drosophila CNS. It's available on the BigStitcher page of the ImageJ wiki if you want to play around with it yourself; you can scroll down to the example datasets. As I mentioned, BigStitcher is part of Fiji, so I'm just going to fire up my Fiji, and if you have not installed it already, you can easily activate the update site of BigStitcher by going to Help > Update. It takes a couple of seconds to check whether updates are available, but in this window you have the ability to manage update sites, and in this list you can just tick BigStitcher. I already did it, but then you would apply the changes, and the next time you start your Fiji, BigStitcher will be available. Now, to start BigStitcher itself, you go to Plugins > BigStitcher > BigStitcher, and you are presented with this window, where it asks you for your dataset file. As I mentioned earlier, a dataset can consist of many image files, and we save all the metadata and the registrations that we calculate in an XML file. Either you already have that, or you can define a new dataset, and I'm going to do this using our automatic loader, which takes care of a lot of the organizational things for you. This is what our dataset looks like: it's TIFF stacks, and they're called C-something, hyphen, 7-something, where the number after the C stands for the color channel and the other number is the index of the tile. The first step is to select which files we want to include in our dataset, and the easiest way, I think, if you have all of them in one folder, is to just drag your folder here, and it will include all of the files in the dataset. Alternatively, you could drag a single file in there and place wildcards in the file path, so we have C-something, which you indicate with a star, and then 7-something, and that way you can also build up your dataset.
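A small sketch of this wildcard/pattern idea in Python; the folder name and the exact "C{channel}-7{tile}.tif" naming are hypothetical approximations of the demo files:

```python
import re
from pathlib import Path

# Sketch of what a pattern-based loader does with file names like the demo's
# "C{channel}-7{tile}.tif": pull the numeric fields out of each name and use
# them as channel and tile indices. Folder and naming scheme are hypothetical.

pattern = re.compile(r"C(?P<channel>\d+)-7(?P<tile>\d+)\.tif$")

dataset = {}
for path in sorted(Path("raw_tiles").glob("*.tif")):   # hypothetical folder
    m = pattern.search(path.name)
    if m is None:
        continue                                       # skip unrelated files
    dataset[(int(m["channel"]), int(m["tile"]))] = path

for (channel, tile), path in sorted(dataset.items()):
    print(f"channel {channel}, tile {tile}: {path}")
```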
It will take a few moments, because now BigStitcher looks at all the files and tries to parse some metadata about what the different files represent: whether they are colors, whether they are different angles, and so forth. In this case, because these are plain TIFF stacks, it can't find out much, but it found that there are numerical patterns in the file names: the files are C-something, 7-something, dot tif. It asks us what the first pattern represents, and we can say those are our channels, and the second pattern represents our tiles. Now, BigStitcher also tries to parse the pixel size; in this case it says one by one, so that's not super helpful, but you can set it manually, and in this dataset the pixel size is actually 0.45 by 0.45 by 2 microns. And finally, since we don't have metadata, it also offers to move the images to a grid automatically; that makes sense, and we'll see what that looks like in a moment. Now, the next question is how we want to load the image data. We can just load the raw data as it is, but we can also load it virtually with caching, which is very useful if you just want to have a quick look at a very large dataset that doesn't fit into memory all at once. Or you can immediately re-save it in a multi-resolution format, HDF5 or N5, and we generally recommend that you do that, because it makes everything else faster: it makes the processing faster, and it makes it easier to display the data in BigDataViewer in real time. So I'll just do that. We can then manually set options for how this multi-resolution pyramid is calculated, but typically the defaults here are very sensible choices. Now it will load the dataset and immediately re-save a multi-resolution version; it takes a couple of seconds for these 150 megabytes. Once it's finished, the first thing that we see is that a BigDataViewer window pops up. Let me make that a little bit bigger and move our dataset into the middle. I think BigDataViewer was discussed in this lecture series last week, so I would assume there will be a recording where navigation in BigDataViewer is explained in detail, and I won't go too much into it. But essentially, on a Mac you can zoom by pressing Command and scrolling, and you can drag to move the image around. This is our dataset, and since we said earlier that we wanted to move it to a regular grid, these regular grid options pop up. It's a bit boring, because the default settings actually work nicely for this dataset, which was acquired in this zigzag pattern. But if you don't know how your dataset is arranged, you can simply click through all the possibilities, with a real-time preview, until you find how your images are arranged. You can also play with the overlap in real time to get a pre-registration of the dataset. When you click Apply Transformations, you end up in the main window of BigStitcher, with basically a list of all the images in your dataset. You can look at them individually, or you can select multiple; by default we group the color channels, but you can also un-tick this and only look at the red channel if you want. And a pretty useful little thing that we included: if you click into this window and press the C key, it will color the different tiles in your dataset differently, so you immediately see if there are any errors in the alignment of the tiles.
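The virtual loading with caching chosen in the dialog above boils down to reading blocks lazily and keeping recently used ones in memory. A minimal Python sketch of the principle (BigDataViewer's cached cell images do this properly, with multi-resolution support; the block contents here are fabricated stand-ins for disk reads):

```python
from functools import lru_cache
import numpy as np

BLOCK = 64  # block edge length in pixels

@lru_cache(maxsize=256)          # keep at most 256 blocks in memory
def load_block(bz, by, bx):
    # Stand-in for reading one block from disk (e.g. one HDF5/N5 chunk).
    print(f"loading block {(bz, by, bx)} from disk")
    rng = np.random.default_rng(hash((bz, by, bx)) % (2**32))
    return rng.random((BLOCK, BLOCK, BLOCK))

def read_voxel(z, y, x):
    block = load_block(z // BLOCK, y // BLOCK, x // BLOCK)
    return block[z % BLOCK, y % BLOCK, x % BLOCK]

print(read_voxel(10, 20, 30))    # first access: block is "read from disk"
print(read_voxel(11, 20, 30))    # same block: served from the cache
```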
Once your dataset is ready, you can proceed with stitching it in BigStitcher. One of the most important operations: you select the images that you're interested in, then you right-click, and you get a list of things that you can do with those images. In our case we want to stitch the dataset, and we prepared a kind of wizard for that, which will guide you through the process. It will ask us: we have multiple channels in this dataset, how should it treat them? It could use just one channel for the registration, or it could average all of them; in our case, let's just average them. It will also ask what downsampling of the data we want to use for the stitching, and as I mentioned earlier, it typically works nicely with downsampled images, in this case with a pre-computed 2x downsampling in the HDF5 file. So let's just use that, because then we can simply load it and don't have to do any redundant calculations. Okay, we didn't see it in the progress bar here, but it took a couple of seconds to calculate all of the pairwise shifts. Then we have the ability to go into preview mode. If we click that, we can select a single image, go through the images, and see how their neighbors are transformed relative to them, and in this window we can click through all of the pairwise shifts and manually check whether it worked. All right, here it did. But if something goes wrong, for example here we have very low overlap, so sometimes we can't automatically get a shift, or we get a wrong shift, we could right-click here and choose to ignore this link. We can also filter links based on the correlation of the overlapping intensities, or, if the shift is too big or too small, we can ignore it here in this filter panel. After that, we click Apply and run the global optimization. We have various strategies for the global optimization, but to be honest, I would just stick with the default here, which comes with all of the optimizations like the iterative removal of disagreeing links and so forth, because it typically doesn't increase the processing time much. So I just start it, it runs through in one instant, our tiles snap neatly into place, and this aligned dataset already looks pretty nice. But we could even try to refine this alignment with an affine transformation. Again, we select our images, and we refine with iterative closest point. In this panel, basically, what it will do is detect interest points and then try to map them to each other, and here you can select some presets, and you can set what kind of downsampling you want to use for the detection. I'm going to leave it at the default for now, but I will warn you, it will not work with the default settings. And I'm going to do tile registration, because in this case we don't have any autofluorescence or fiducial markers that are the same in every channel. So let's try the tile registration now. It takes a couple of seconds to detect all of the points, and then it tries to map them on top of each other, and in this case you can see it actually fails: it kind of bends one of the images to the side. The reason is that the dataset is quite small and we downsampled it eight-fold, so it was simply so small that we could not detect anything. But this is, I think, a good example of where the interactivity of BigStitcher comes into play, because you immediately see that something goes wrong.
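As an aside before fixing it: the core of the ICP refinement just demonstrated is easy to sketch. Here is a translation-only toy version in Python; BigStitcher's ICP fits richer (affine) transforms on detected interest points, with outlier handling, none of which is shown here.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal iterative closest point (ICP) sketch for two point clouds:
# alternate between nearest-neighbor matching and transform re-estimation.

def icp_translation(moving, fixed, iterations=20):
    tree = cKDTree(fixed)
    shift = np.zeros(moving.shape[1])
    for _ in range(iterations):
        # 1) match every moving point to its current nearest fixed point
        _, idx = tree.query(moving + shift)
        # 2) re-estimate the transform (here: the mean residual translation)
        shift += (fixed[idx] - (moving + shift)).mean(axis=0)
    return shift

rng = np.random.default_rng(2)
fixed = rng.random((200, 3)) * 100
moving = fixed + np.array([3.0, -1.5, 0.5]) + rng.normal(0, 0.05, fixed.shape)
print(icp_translation(moving, fixed))   # close to [-3.0, 1.5, -0.5]
```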
When that happens, you can just right-click, go down here to remove a transformation, and simply remove the latest transformation, and you are back where you were after the stitching. Now let's try the refinement again, this time using only 4x downsampling; you activate that by hovering over it for a second. Let's try it again: it detects interest points and maps them to each other, and while you don't see a strong effect in this small dataset, in green here, for example, it snaps into place even more neatly. We can go back into the main window and press C again to switch back to a standard color visualization. You could go through your dataset, but now you could also simply fuse it and do some downstream processing. Down here you have the possibilities for image fusion; you can quickly display everything that you have selected. I'm going to do this with some downsampling, and you can see it immediately pops up as a standard ImageJ image that you could then start to analyze. It works this fast, though you might see a little lag if you try to scroll through it, because everything is computed on the fly: right now we only compute this plane, and if we press the arrow keys to go through plane by plane, each one is calculated as you need it, essentially. But in the more advanced image fusion dialogs you can also set it to pre-compute the image if you want to save it right away, and so forth. You can choose the pixel type, whether you want it in 32-bit or 16-bit, and you can display the images in ImageJ or immediately save them to disk. So we have all of that in there, and I would say this is the end of the demo. I think I'm already a little bit over time, so if there are any pressing questions I can try to answer them; otherwise I would pass the baton on.

We had several questions, and actually Stephan and the other panelists were really good at answering them in the question and answer section. I just want a comment from you about the file format: you say data can be converted to HDF5 or N5, and you suggest it to increase the speed of calculation. Is it something that you suggest every user do at the beginning, when they start to import their data into BigStitcher, or something that you suggest doing only afterwards?

Let me rephrase it like this: if you really want to work on the dataset, I would always re-save it, but you can actually do it after the fact. You can start by loading the raw data, or loading virtually, and then, in this main menu, you can right-click and re-save the dataset as HDF5 or N5 later. If you just want to look at the images, then it's maybe faster to do it with virtual loading, because in BigDataViewer, for example, you are only looking at one slice through the dataset, and with virtual loading it would only load the one plane that you are looking at, so you can see the data really quickly. But once you do something like rotate the image onto its side, then all of a sudden it has to load all of the planes, and this is something that works much quicker with an HDF5 or N5 file, which saves the data in blocks: in that case, if we have a YZ cut, it would only need to load the blocks at the specific X location that we are looking at.

Okay, thank you. Thank you again for the nice presentation and also the live demo, and a big applause for you. I will switch to the next speaker, Sébastien Tosi, who will present the MosaicExplorerJ plugin.
Thank you. Hi everybody. I will first share my screen. Okay. So hi everyone, I'm Sébastien Tosi, and I work at the Advanced Digital Microscopy facility of IRB Barcelona, so maybe some of you remember me; I'm from the oldest generation of NEUBIAS teachers, but it's actually my first time participating in this very nice series of NEUBIAS Academy webinars, so I'm very glad to be here. My talk is on MosaicExplorerJ. It's actually not a plugin; it's an ImageJ macro to stitch 3D tiled microscopy datasets, specifically light sheet microscopy datasets. So in a way the application scope overlaps with, or is similar to, BigStitcher, but as a disclaimer, and I guess you will quickly understand it, our approach is really much simpler. In terms of software engineering it's orders of magnitude simpler, but it actually works well if it fits your application, and it's easier to use; I will try to highlight the advantages of this approach. It's not as flexible as BigStitcher; it's meant for some specific applications, and it also cannot correct for all the fine errors and adjustments, such as chromatic aberration et cetera, that David went through. The scope of the application is that your microscopy tiles should be organized on a regular 2D grid; it's not that you can place them anywhere, they should really be regularly spaced. Also, the only format currently supported is TIFF tiles: the tiles can be either 3D multi-page TIFF files, or TIFF series, that is, subfolders with 2D TIFF files inside. And we use a simple scheme of XY indices, so we need two numerical fields to locate the tiles in the grid. The big advantages of the tool are that it's simple to use, and also probably quite simple to customize, since it's an ImageJ macro, so if you need to add something that is not in there, you might be able to do it easily. There is no file conversion or duplication step; it can work natively with the 3D TIFF files, and it's actually designed to work this way. One big advantage is that you can use the tool while you are still acquiring the images, for instance to spot that something is going really wrong during your acquisition: if an image is not present, a given slice will just be black, so you won't see it, but the software won't crash. The alignment is computation-less: there are no computational or automatic optimization steps for the tiles; it's based on interactive, user-driven alignment, as I will demo later. What we use to make this possible are constrained transformations for the alignment, which I will also briefly describe in the slides. So it's much simpler and it cannot correct for every kind of error, as I will show, but for the microscope we use, a light sheet microscope, it works quite well. It supports dual-sided detection and dual-sided illumination, and also some illumination correction modes, for instance to balance the intensity between the two sides, and some flat-field correction based on reference images or adjustable shading models. Stitching of microscope images: I will go very fast on this, David already did a good job, but basically it's for when your sample doesn't fit in a field of view and you still want to image it with the current acquisition settings: you tile several fields of view to cover an extended region. The typical way to do it is to use a regular grid to tile the sample, with overlap between the tiles, to avoid gaps and missing information and also to simplify the reconstruction of the images.
The practical challenges: you cannot be sure where the tiles are exactly located, even if you move the sample in a very controlled way, due to physical imprecision of the motors, and also due to some microscope misalignments, for instance camera tilt or the alignment of the light sheet. The next question is how you should blend the intensity of the tiles, especially if you have non-uniform illumination or some local artifacts; there are different ways to do it, some signal-based, others deterministic, giving some weights to the tiles, and I will cover the approach we take in a further slide. One typical misalignment in microscopy is camera tilt: the translation system of the sample, the XY system in red here, is not fully aligned with the XY coordinate system of the camera chip. In this case, if you want to tile the images properly, you have to move the tiles; no rotation is actually involved, it's just a matter of 2D-translating the tiles to the correct position, so it's quite simple to correct for. And assuming that you acquire a stack of several slices, you can, at least to a first-order approximation, apply the same transformation to all the slices across the stack, and this is what we do in MosaicExplorerJ. Then, correction of the tilt of the light sheet, assuming it's coming from one side: if the axis that you use to build the image stacks is not perfectly perpendicular to the light sheet, in other words if the sheet is slightly tilted, you might also have an issue when you stitch the tiles of the mosaic, and this is represented graphically here. The way to correct for this is to shift the tiles in the Z dimension with respect to each other, so that you get a good match within a given slice of the whole mosaic. It's also a simple operation, because you can shift the whole tile by a fixed offset, and in MosaicExplorerJ there is a simple way for the user to adjust these shifts. Actually, you don't have to adjust them for all the tiles, but only for the first row and the first column, because to first order the same correction applies across the grid, so it's not a big manual job. Then, how do you find the right position of the tiles? David already covered this, but you essentially have two ways. One is based on the intensity signal and correlation between the images: you can compute the correlation in the overlap region between two tiles, pairwise, and then optimize this correlation to find the right location. The other method, and this is the approach we take, is to use reference landmarks. These landmarks can be either automatically detected interest points, as David introduced, or hand-picked, so you manually reference the landmarks and then use them to find the transformation, which you apply to the tiles. Since we use a constrained model to move the tiles with respect to each other, we can afford, so to say, to ask the user to mark these landmarks, because with only a few reference points you typically already get a quite accurate reconstruction of the mosaic.
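This constrained placement model can be sketched in a few lines: one nominal grid spacing for all tiles, one XY correction that grows linearly with the grid index for camera tilt, and one Z offset per row/column for light sheet tilt. A Python sketch of the idea, with all numbers hypothetical:

```python
import numpy as np

# Sketch of MosaicExplorerJ-style constrained tile placement.
tile_size = np.array([2048, 2048])    # tile width/height in px
overlap = np.array([200, 200])        # nominal overlap between neighbors
cam_tilt_y_per_col = 3.5              # extra Y offset per column step (camera tilt)
z_per_col, z_per_row = 4.0, -2.0      # Z offset per column/row (light sheet tilt)

def tile_origin(col, row):
    step = tile_size - overlap
    x = col * step[0]
    y = row * step[1] + col * cam_tilt_y_per_col  # same XY shift for all slices
    z = col * z_per_col + row * z_per_row         # whole tile shifted in Z
    return (x, y, z)

for row in range(2):
    for col in range(3):
        print((col, row), tile_origin(col, row))
```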
Then, how do you blend the intensity of the tiles, the signal of the tiles? You have again two categories, I would say. One is deterministic: for instance, you average in the overlap region, or you use some progressive blending, giving a ramp of weights that increases as you move through the overlap region toward the other image. Or it can be signal-dependent: a simple signal-dependent function is to use a maximum-intensity function between the tiles, so only the maximum intensity shows up; or you can use more advanced weighting modes or functions, such as the local contrast or the local sharpness, while you merge. I know that BigStitcher supports such modes; in MosaicExplorerJ we use only the simple modes shown on this slide. As I said, we initially started to develop this macro to work with a custom light sheet microscope we have in the lab, and then it was expanded, because it was working quite well in our hands and we wanted to make it available to a larger audience. Our microscope is represented here, it's a real picture of the microscope. As you can see, it has two detection sides, two objectives here, with the cameras that you don't see here, and also two illumination sides: the light sheets are formed with two cylindrical lenses, coming from both sides, and a system of pivot scanning, also on both sides. And then the sample: we use it for optically cleared, quite large samples, sitting in a quartz chamber that is completely independent of the system, so it can be positioned on the microscope, as you can see here, and we use air lenses to image. I won't go into the specifics of this microscope, because it's a bit unrelated to the macro itself; the optical correction is actually performed live while acquiring the images, because you get some defocusing when you move the chamber with respect to the objectives, since these are air lenses. But I won't go into those details here; just to say that it's a system with dual-sided illumination and dual-sided detection. The workflow we use when we process datasets acquired with this microscope is essentially four steps in the software, and I will demo some of them. We first stitch the four mosaics, from the left and right illumination sides and the two cameras, independently; for this we only use 3D translations of the tiles, following the model I illustrated before. Then we register the two cameras, mosaic by mosaic; here on this part of the slide you can see the process only for the left illumination side, but you would have two more mosaics for the right side if it's dual-sided illumination. This registration of the mosaics from the two cameras, after reconstruction of each mosaic, is based on a 2D similarity transformation, so at this step we account for rotation; some scaling, which is useful especially if the two objectives don't have exactly the same magnification; and of course some translation of the mosaic. Our approach is quite simple: we perform this registration in a single Z slice that we call the union slice, and then we only keep half of each stack from each camera, so the best side is kept and the other side is discarded, and the registration is only performed at the union between these two half-stacks. Of course, the same transformation is applied to all the slices of the camera that is registered to the other one. Then, in the third step, we align the two illumination sides, if we have two; this is very simple again, we just use a 3D translation of one mosaic with respect to the other. You can show that this only works well if your light sheets have the same orientation, the same tilt, or at least if they are not tilted along the horizontal axis of the image; in that case you would really need a 3D rotation, so it's a bit of a limitation at the moment, but still, for not-too-large mosaics, or if this tilt is not too high, it already works quite well.
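A minimal numpy sketch of the two simple blending modes Sébastien described, maximum intensity and a linear weight ramp, for two horizontally overlapping 2D tiles; the sizes and overlap are made up:

```python
import numpy as np

def blend(left, right, overlap, mode="ramp"):
    """Blend two tiles that overlap by `overlap` columns."""
    out_w = left.shape[1] + right.shape[1] - overlap
    a, b = left[:, -overlap:], right[:, :overlap]
    if mode == "max":
        mixed = np.maximum(a, b)              # only the brighter signal shows
    else:  # weights ramp 1 -> 0 for the left tile across the overlap
        w = np.linspace(1.0, 0.0, overlap)[None, :]
        mixed = w * a + (1.0 - w) * b
    out = np.zeros((left.shape[0], out_w), dtype=float)
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]
    out[:, left.shape[1] - overlap:left.shape[1]] = mixed
    out[:, left.shape[1]:] = right[:, overlap:]
    return out

rng = np.random.default_rng(3)
l, r = rng.random((64, 128)), rng.random((64, 128))
print(blend(l, r, 32, mode="ramp").shape)   # (64, 224)
```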
Finally, you can correct the illumination, by simply scaling the intensity of one side with respect to the other, and also, if you wish, applying some flat-field correction; you can use reference images for this correction, or try to adjust a simple model based on a linear correction of the intensity profile. Okay, that's it for the intro; I will now move to the demo. Just to mention: the code is available on GitHub, and it's very simple to install, since it's just an ImageJ macro. There's also an article that is under open review, so you can already read it, and importantly, there are five video tutorials covering the four steps I described before; they go into more detail than what I will show now in the demo. For the demo I will switch to another machine: I could run it on my laptop, but since all the datasets are really big, like over one terabyte, I don't have them there, so I will connect to a remote machine we have at the facility. Okay, it's here. The first thing you want to do is install the macro; here I already installed it, you see MosaicExplorer here. Then, the datasets we will use: let me show you; the first one I want to open is this one. It's only one illumination side, in this case, of a large dataset of over one terabyte, and as you can see the tiles are 3D, so we have all the slices in each file, and each file is about 20 gigabytes in this case. I will open this dataset to show you the interface. The first dialog box told me that there was already an alignment file in the same folder: each time you align your mosaic and exit the software, this alignment file can be saved, and it is automatically loaded on the next run. Here we see the mosaic in color mode; I will switch to grayscale mode. It's a regular ImageJ image window, so you can really use all the features of ImageJ. We only see the XY view; there is no way to interactively rotate in 3D, so that's always the view you see. If you want to navigate through the sample, you press Alt to open this navigation menu that I call the control panel. Here you can, for instance, change the Z slice you are looking at, and as you can see, the loading is quite fast. Here we have 16 tiles on each side, with left and right illumination sides, so it's already quite big. Here you have access to the different dimensions, in case you have two cameras and double-sided illumination; I will show you a bit more of that in the next demo. This part here is about the alignment of the tiles in X and Y, essentially to correct for the camera tilt; this part here is used to move the tiles in the Z dimension, to correct for light sheet tilt; and in this last part we have the intensity correction, and we can also change the blending mode of the tiles. So here I'm using, for instance, max intensity instead of the additive mode that I had before. I will now open a somewhat smaller dataset to show you some of the interaction for the alignment of a mosaic; it's this one, a single illumination side and a single camera, so it's a simpler case. As you can see, we only have three tiles; from the previous view I will zoom in a bit. Here the tiles are just laid out according to their file naming, so there is no overlap and no alignment performed yet. The way to align the tiles is quite simple: you just have to draw a line joining some matching features, so this would be your first landmark, and here the matching landmark; you press Alt and tick "register", and the X overlap between the images is estimated. Then you do the same for the Y overlap.
You have to be relatively accurate; you can actually zoom in a bit more, but I'm just going fast here. Now I perform the alignment of the tiles, and as you can see, it should blend to yellow if you're in a good position. You can also move in Z to check whether it's also good at other positions; it seems fine. Once you're happy with the alignment, you would switch to the grayscale mode, find the right blending, for instance max-intensity blending, and now you are ready to export the images. You do this simply by going to Export here; I won't export all the slices, only a few of them. Okay, let's export them here, in this folder. The images have been exported; I can exit the macro and open the exported images; the last folder that was used is remembered, so that's why it's already here. I will adjust the intensity and zoom in: now the images have been stitched and are simply open in a regular image stack viewer. We could export as many as we wish, and also several channels, if we have different channels in the stack; it's just a one-click operation. I will go quickly through the last demo. In this case the channel names are a bit different, so it's a different sample; we only have a small one-by-three mosaic, but it's dual-sided, left and right illumination, and we also have two cameras, so I want to show you how the cameras are aligned. The first thing I will do is use grayscale and max-intensity mode again, and zoom in a bit on a feature here, in this part of the image. The whole stack has 1200 slices, so the first thing I do is move to one of the extreme planes, and here you see the first camera, the one on the side that is close to the plane we are looking at. If I change to the other camera, as you can see, the quality is not as good, because now we are seeing through the sample, so the light is scattered. If I move to the other side, now the quality is good on this camera and slightly worse on the other camera, as you can see. So the whole idea, and I will go back to the middle slice, the whole idea is to register the two cameras in the plane where you want to make the switch between both cameras, and this is done with the same process: you just have to point-click landmarks to find the proper transformation. Once the alignment is performed, you can check it here; here we have the adjustment of the camera, translation, rotation, and scaling, and again, if it is properly aligned, you can see it blends to yellow, as we see here. Once you've done this, you can also export the images; in this case, again, I will export only a few slices, but I'm telling the software that it should switch the camera, in this case at the middle slice, which is slice 600, so it will automatically switch cameras there and, of course, apply the same transformation to all the slices of the camera that is registered. I export here; in this case it's slightly slower, because one of the two cameras is registered, so we have some 2D rotation and scaling, and it's not as fast as before. And here I exported both channels, so now it's doing the second channel, which I actually haven't shown before; it's the autofluorescence channel. Okay, I will exit and open the images. Since I have two channels, I should open them in a hyperstack viewer and also adjust the intensity. If I move to more or less where I was, as you can see, we see the camera switch, but it's quite smooth; it's around here and here.
Okay, we see there is some blurring on one side that is not exactly on the other, but the images are quite nicely registered. So with this I'm done. On the last slide I would like to acknowledge the people at the facility, Julien and Lydia, also for sample preparation, and all our collaborators; we actually have many more than the ones I named here, but the samples we used for the article and this presentation are listed here. The article is published in F1000Research, and this work is funded by the COST NEUBIAS Action.

Thanks, Sébastien. I'll ask you a quick question: is it possible to adjust the stitching by uploading a list of points, where these points are derived from another image analysis, maybe using some fiducial markers like gold nanoparticles or another sort of staining that you try to segment, locate precisely, and use to improve the stitching?

To answer the question properly: would these points be landmarks that could be used by the software, or the initial positions of the tiles?

Landmarks, I guess; fiducial markers, for example another staining of which you are really sure.

At the moment it's not something you can do by just importing a file, but if you can visually see these fiducials in the images, it would anyway be quite easy to point-click them in the same way that I have shown. That would be reasonably easy to do, but there is no way to import landmarks externally at the moment. The landmarks that you select are anyway very few, because the model is very constrained: it's just a 3D translation of the tiles, plus the camera tilt compensation and the light sheet tilt compensation, so in practice with only two or three points you are done, plus three points for the camera registration, a procedure I haven't described in detail, where you need to point-click three points to find the right scaling and rotation.

Okay, thanks again for your presentation. The next speaker is John Bogovic from Janelia Research Campus, and he will speak about BigWarp.

Thank you. All right, thanks again for the introduction, thanks for having me, it's a pleasure to be here. My name is John Bogovic, I'm part of Stephan Saalfeld's lab at Janelia, and I'm the author and maintainer of BigWarp. BigWarp is a tool for manual registration between large image data, and I'm going to jump straight into the demo. Actually, before I describe BigWarp, because N5 came up a couple of times, I wanted to point out that we've recently released an ImageJ plugin that enables easier saving and reading of N5 datasets in ImageJ, with some options for metadata. And I wanted to point out that N5 is compatible with a lot of other large, block-based file stores, such as HDF5 and Zarr, and this means that you can use the plugins that we wrote to write to Zarr files and HDF5 files, as well as to read from Zarr and HDF5 files. If you have Fiji, you have these already: you can import with File > Import > N5, and this will open up a window; you can learn more about the details on this GitHub page, github.com/saalfeldlab/n5-ij. As well, you can save any dataset using File > Save As > Export N5; it will give you some options, and again, these are described on that page. We hope that these plugins make things more accessible and a lot easier. Because it's tied to ImageJ, writing really huge datasets is still a bit challenging, but we're open to continuing work on this, so please get in contact with us.
Speaking of which, please join the image.sc forum if you've not already; this is a good place to ask me questions about anything else, and a lot of the information that I'm going to describe is on the BigWarp page of the ImageJ wiki. Okay, now let's really hop into it. If you have Fiji, then you already have BigWarp; it's accessible through Plugins > BigDataViewer > Big Warp. I'm going to start by demoing the plugin using sort of vanilla, ImageJ ImagePlus-style images, and then I'll show another demo that uses N5 at the end. This dialog appears, and it asks you which image will be moving, that is, which image will be the one that is transformed, and which image will be fixed. In this example I'm going to be using some correlative light and electron microscopy data, so we'll align this electron micrograph, using this light microscopy dataset as the target. This sample data is available publicly; I think the link will go in the chat, hopefully. If you press OK, two windows will appear, quickly, three windows actually: one of these shows the moving image, here the electron micrograph; another shows the light data that will be our target image; and there is also the landmark table. Here you click landmarks: pressing space enters you into landmark mode, and you can click landmarks like so; Ctrl-Z undoes things. I'm actually going to make these landmarks appear in a different color and much bigger; let me show you what I did. Hopefully you can now see larger, absurdly larger, landmarks; I went a little bit too crazy, perhaps. Okay, so in between describing how the tool works, I'm going to give some tips on good landmark placement. I'm going to be targeting these mitochondria here, and I'm going to find the thinnest part, for example, click on the thinnest part in the moving image and then in the target image, and you see that it became highlighted once the pairing happened; see how this landmark is brighter than this one, for example. Also important is to localize well in 3D when your images are 3D, so here I'm going to rotate the image. This is what makes BigDataViewer so powerful: it enables us to really make sure that we've localized well in 3D. Because my landmarks are huge, it's a little tough to see, but I'll leave it at that. I'm going to place a few more landmarks in a couple more places, so again I'm finding other mitochondria and placing a landmark in the moving image. Also notice that I'm not placing too many landmarks close to each other; it's generally wise to spread them out, so that the transform can extrapolate well. So I'll place one here, and then one more after this; I know exactly where I'm going, since I planned these landmarks ahead of time. Very good. I'm also skipping some of the details of how BigWarp's navigation happens, because I'm pretty sure this has been shown elsewhere in the webinar series. Okay, I'm going to place this final one. Let's also notice that all of these landmarks appeared in the table, and double-clicking on them will zoom the window to the respective landmark, which is pretty convenient if one gets lost, for example. And now, the other most important hotkey of BigWarp is T: pressing T warps the moving image, and if in this window I press F, that fuses the view, that is, it displays the images on top of each other. So it looks okay.
However, if I rotate this, the alignment is pretty bad in the Z direction, and that's because all four points that I've clicked so far are very nearly in a single plane, which means that while I localized well in this plane, I didn't localize well through-plane. So I'm going to undo the transformation by pressing T again, and place one more landmark in a different plane. You'll see that all of my landmarks are currently in approximately this plane, here, so I'm going to place one more over here; let's find a nice mitochondrion, undo this, and if I place one more, we'll see that if I transform now, it's a lot better in Z as well. I'm going to press Q, which aligns the other BigWarp window to the current window: here, if I, for example, drag and then press Q, you see that the other window moves along with this one, and if I rotate, you see that the absurd skewing that we saw in Z before is not there anymore, because we've placed another landmark. That's the general workflow. By default, BigWarp uses a thin plate spline transform, which is a non-linear transform; you can see that it can really warp the image a lot, which is good if one is willing to devote the time and care to placing landmarks carefully. But occasionally one doesn't want to place many landmarks, and then choosing a simpler transform, that is, an affine, similarity, rotation, or translation transform, can work well; it might be wise to use those options. We'll stick with the thin plate spline option for now, and I'm going to show you a couple more things before we move on to N5. The first is, of course, saving: Ctrl-S is the hotkey that lets you save landmarks, or you can go, from the landmark window, to File > Export landmarks, and save them straightforwardly as a readable CSV. The other useful thing is to export the warped image, and for that the hotkey is Ctrl-E, which brings up this dialog; I'll explain it briefly. With the default options, it's possible to just press OK and something reasonable will happen, and actually I'll do that first. The reasonable thing that it does by default is that it transforms and re-renders the moving image so that it is at the same resolution and the same size as the target image. I'll just have to wait a minute for that to work; these are medium-sized images. And here we go: first observe that this is the warped EM image, and it is the same size and at the same resolution as the target image; I'll show you that this is at about 100 by 100 by 200 nanometer resolution, as that one is. I'll show one other option here: suppose one wanted to render the result at a lower resolution; you can change this resolution option from "Target" to "Specified". Here, "Target" means render the result at the resolution of the target image; if we wanted it at, let's say, half the resolution of the target image, we might say 200 by 200 by 400 nanometers, for example, and if we press OK, it should take much less time for the result to appear, and indeed it does. Here we are: we see that the result looks approximately the same, but the image is about half the size of the target image. All right, very good. Okay, another nice option: when one collects very many landmark points, applying the transformation can become slow, so we have an option for exporting the transformation as a deformation or displacement field, and that is possible by going to the File > Export warp field option; you can generally just press OK there.
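To connect the two ideas, the thin plate spline default and the warp-field export: a thin plate spline interpolates the landmark displacements smoothly, and exporting a displacement field just means evaluating that transform on the pixel grid. A minimal 2D sketch with scipy (the landmarks are hypothetical; BigWarp's own implementation is in Java and works in 3D the same way):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark pairs in target and moving space.
target_pts = np.array([[10.0, 12.0], [80.0, 15.0], [20.0, 90.0],
                       [85.0, 85.0], [50.0, 50.0]])
moving_pts = target_pts + np.array([[3, 1], [-2, 2], [1, -3], [2, 2], [0, 4]])

# Fit target -> moving (the inverse mapping: for each output pixel in target
# space, where do we sample the moving image?). A thin plate spline passes
# exactly through the landmarks and bends smoothly in between.
tps = RBFInterpolator(target_pts, moving_pts, kernel="thin_plate_spline")

# Evaluating on the pixel grid yields exactly a displacement field.
yy, xx = np.mgrid[0:100, 0:100]
grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
displacement = (tps(grid) - grid).reshape(100, 100, 2)
print(displacement.shape, displacement[50, 50])  # ~[0, 4] at the center landmark
```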
We have some tools for using the output of this, but that's all I'll say about it for now, except that these deformation fields can also be stored as N5, which is convenient if, for example, one wants to transform points instead of images: then one does not have to load the entire displacement field. So again, this is a three-channel image, and each component describes the displacement, in physical units, for the transformation.

Okay, I'm going to stop this part of the demo for now. Since so far I've only used small images, that is, images that can fit on this old and small laptop, I want to show that it's possible to also use very large image data. In fact I'm going to use the same image data that I showed earlier, but as N5 images that are stored on Amazon AWS and that have been shared as part of Janelia's OpenOrganelle project. So this is openorganelle.janelia.org, and these data can be found on this page here, under the COS-7 cell overexpressing ER and mitochondria markers; you can view the CLEM data in Neuroglancer, or if you press this button you will get a URL, which I think will be shared in the chat as well. These data can be used in BigWarp too, and that's what I'll show now.

The regular BigWarp plugin that I showed, Plugins > BigDataViewer > BigWarp, only works for images that are open in ImageJ. There is also a BigWarp XML/HDF5 option, which Christian Tischer kindly provided, but for more general N5 image data sets we have scripts that are in the BigWarp repository: if you go to the BigWarp GitHub page, they are inside of scripts. What I'm going to show uses the BigWarp N5 script that I have open here. I will pause to say that I've locally saved one of the light image data sets to an N5 container on this computer, so I'm going to be using two N5 data sets, one stored on Amazon AWS and the other stored locally on my laptop. I also pre-clicked a few landmarks here. If I press OK, it'll take a second, since it's contacting the server, finding the metadata, reading the files that are local on my machine, loading some blocks, and getting ready to display things. And now it does. Because these images are not local, it didn't do anything smart with regards to brightness and color, but I can very quickly adjust and show you that indeed this is the same image data set, oriented a little bit differently. All of this image data is being fetched from Amazon and streamed onto my little laptop, and this is quite a large data set, many hundreds of gigabytes. Again, in this window we have the light data set that we saw before, and notice that here the moving image is the EM data set, which is enormous. I'm going to just transform it, press F just like I did before, and there we have it. It's a little bit messy, since I wasn't especially careful with how I clicked.
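For readers who want to try the same kind of cloud streaming outside Fiji, here is a minimal Python sketch of reading blocks of a cloud-hosted N5 dataset on demand. It assumes zarr-python 2.x with s3fs installed; the bucket URL and dataset path are hypothetical placeholders, not the actual OpenOrganelle layout:

```python
# Sketch: stream blocks of a public, cloud-hosted N5 dataset on demand.
import zarr

# Anonymous, read-only access to a public bucket (placeholder URL).
store = zarr.N5FSStore('s3://some-public-bucket/some_dataset.n5',
                       mode='r', anon=True)
root = zarr.open(store, mode='r')

# Multiscale levels are commonly stored as s0, s1, ... datasets; only the
# blocks you actually index get fetched over the network.
level0 = root['em/fibsem-uint8/s0']   # hypothetical dataset path
print(level0.shape, level0.chunks, level0.dtype)

crop = level0[:64, :64, :64]          # downloads just a handful of blocks
print(crop.mean())
```

This is the same pattern BigWarp relies on: the full volume never has to fit on the machine, because reads are lazy and block-by-block.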
John, sorry, a question about the moving image: why use the EM picture as the moving one? Have you tried the contrary?

Yes, that's a great question. Indeed, our collaborators tell us that because of the preparation that happens during EM, it's likely that the morphology of this cell is more deformed after the EM preparation than it was before that preparation happened. It would be computationally easier to do the reverse, since the light data are smaller, but part of what makes BigWarp useful, I think, is that it is not really harder this way: I hope you appreciate that warping this giant image is as easy as warping the small image. BigDataViewer, on which BigWarp is built, enables this, so one has to care less about these kinds of big-data problems. I hope that answers your question. And I'm going to just warp this on the fly a little bit, if I can see the landmark, and show you that I can really deform this giant data set more or less on the fly. You see that it's slower than it was before, because the data are bigger, but it's readily possible.

Okay, there's also another great question: part of it had to do with the button that offers the choice of transformation being broken. Yes, that should be fixed, though it's now moved to F2. So if you update your Fiji, F2 will open this window instead of F8; sorry about that. I'll follow up on these things on the image.sc forum; there was some discussion about it there.

Okay, I don't know how I am with time, but that is everything I wanted to show. Other things that I could have shown, and which I encourage people to try: there are capabilities I didn't talk about, such as transforming ROIs, which can be transformed either from moving space to target space or vice versa using another one of the scripts that we provide, and arbitrary point coordinates can be pretty easily transformed, also using scripts that we provide. This information is all on the ImageJ wiki. I think I will leave it at that.

Thank you; it was short, but it had all the information we needed, and we encourage the audience to ask more questions. Okay, so now just a curiosity on my side: you talked about the N5 file format. What do you think about supporting it over time? Do you think it's a file format everybody should adopt? Because, as I saw in one of your answers, it allows you to do parallel computing in a better way compared to HDF5. If you can comment more about this, thanks.

That's a great question. We use N5 all of the time, so we intend to support it, since it's what we use. Rather than think of it as its own file format, we try to think of it as a smart way to access blocks of data in a general way. What I mean by that is: the N5 file format is what I'm showing here, which is blocks in a folder on your file system. This is the light data set that I stored as N5, and there are a bunch of folders in here; it goes a few levels deep, and then there's a small file in here that contains one block of image data. This is what I call the N5 file format. HDF5 is a lot like this, but you can imagine that all of these folders and blocks are stored in a single file, and that is part of what makes it relatively easy for N5 plugins to read HDF5 files, since they're essentially the same thing, just with a slightly different storage mechanism. Zarr is popular in the Python world, and it is again very similar in spirit. So while we intend to continue supporting N5, because we use it a lot, we also want our software to play well and interact nicely with other software built around Zarr or HDF5.
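The "blocks in folders" idea can be seen directly on disk. A minimal sketch, assuming zarr-python 2.x (the container name 'demo.n5' is arbitrary): write a tiny local N5 array and list the files it creates.

```python
# Sketch: write a tiny N5 container and inspect its on-disk layout.
import os
import numpy as np
import zarr

store = zarr.N5Store('demo.n5')  # a directory on disk, not a single file
arr = zarr.open(store, mode='w', shape=(64, 64), chunks=(32, 32), dtype='u1')
arr[:] = np.random.randint(0, 255, size=(64, 64), dtype='u1')

# Expect attributes.json for the metadata, plus one small file per block,
# nested in folders named by block index (e.g. demo.n5/0/1).
for dirpath, _, filenames in os.walk('demo.n5'):
    for name in filenames:
        print(os.path.join(dirpath, name))
```

An HDF5 container holds the same kind of metadata and blocks, just packed inside one file, which is why reading either from the same code is comparatively easy.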
So while N5 will stick around, we also intend to share and work with other people who are doing the same thing, and we want to make sure that the file formats work together. If you go on the image.sc forum, there's a next-gen file formats tag that you should check out; Josh Moore, Christian Tischer, and many others are having discussions there about this stuff.

Writing many blocks at the same time to HDF5 is difficult, which is part of the reason we moved to the N5 specification, where blocks are in different files. The idea is that writing many files in parallel is possible and easy, whereas writing many blocks to an HDF5 container is not, as the sketch at the end of this transcript illustrates; that fleshes out a little more of what I said in the answer, and I hope that's clear. I will say one more thing, which is a downside of having many files: if one had a very huge N5 container, moving all of those files is slower than moving a single large file that stores the same amount of data. So the plus side of the N5 format is that writing is much faster; the downside is that moving an entire container can be slower. I think I'll leave it at that, but I will also just say that there's a lot of cloud support as well.

Okay, thanks a lot for your presentation. I thank again all the speakers and the panelists who, in the background, have done really nice work, and thank you, the attendees, for being with us.
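As promised above, here is a minimal sketch of why one-file-per-block storage makes parallel writing easy: several threads each write disjoint chunks of one N5 array, with no coordination around a single container file. It assumes zarr-python 2.x; the names and sizes are illustrative.

```python
# Sketch: parallel block writing to an N5 container, one file per chunk.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import zarr

store = zarr.N5Store('parallel_demo.n5')
arr = zarr.open(store, mode='w', shape=(256, 256), chunks=(64, 64),
                dtype='f4')

def write_block(ij):
    # Each task writes exactly one chunk-aligned region, i.e. one file,
    # so the threads never touch the same block.
    i, j = ij
    arr[i * 64:(i + 1) * 64, j * 64:(j + 1) * 64] = float(i * 4 + j)

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(write_block, [(i, j) for i in range(4) for j in range(4)]))

print(arr[65, 65])  # inside block (1, 1) -> 5.0
```

With HDF5, the same workload would need the writers to coordinate access to the single container file, which is the difficulty described in the discussion above.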