Okay, hello, my name is Stephan Saalfeld. I come from Dresden in Germany, from Pavel Tomancak's lab, and thanks a lot for inviting me to speak here; it's a pleasure. What I'm going to talk about is not about modeling and not about simulating at all, because we are not involved in that. We are working on software components that we designed to reconstruct connectomes, from insect brains in the first place, but they can be used for reconstructions of any kind. These reconstructions are performed at electron microscopy resolution, so the target of these software tools is essentially to handle very large data sets: to align them, to bring them together and register them, and then to provide multi-scale representations of the annotations that you paint on top of them.

These are three software components. The first is a web interface with the fancy name CATMAID, which stands for Collaborative Annotation Toolkit for Massive Amounts of Image Data, which, we are convinced, is very important, as you know. Then there is the desktop application TrakEM2, an ImageJ plugin to handle large serial section data sets and a bit more. And last but not least, our more recent project is ImgLib2, a Java-based image processing library that aims to simplify working with n-dimensional data with arbitrary pixel types, and to allow transporting a method implementation from a simple memory-based image layout into a hard-disk-based layout, so that problems at the infrastructural level are not exposed to the method implementation.

Let me start with the biological context that we're working in. The project is mostly driven by a person who is well known in this community: Albert Cardona. He's now working at Janelia Farm, and he is interested in reconstructing the connectome of a Drosophila early larval brain, which is what you see here. (Where's my mouse, actually? I don't see it. Because it is not connected. There you go.)

So this is a sketch of the brain. It has two brain lobes, like in a human brain essentially, and then it has what in vertebrates would be the spinal cord: this here is the ventral nerve cord, so it's on the front side of the insect rather than on the back side, and it consists of thoracic segments and abdominal segments. Albert has generated one very exceptional serial section data set at EM resolution of one of these ventral nerve cord segments, which you see in a massively downscaled version here. Now, in his lab, he's reconstructing essentially all cells that are visible in that volume, starting from the cell body, if it is visible in the data set, through all the neurites, ending up at the synaptic contacts.
Ah, yes, exactly, I forgot one thing: we not only want to reconstruct this data at electron microscopy resolution, we also want to correlate it with optical microscopy data. This is one example of how this brain looks under a confocal scanning microscope. That's a clone where single cells are made visible with a flip-out construct; it's from Janelia Farm, from the group of Gerry Rubin. The white cells that you see here are cells marked with an anti-GABAergic stain, so these cells are GABAergic cells. Maybe some of you have clones that are GABAergic and know a lot more about these cells.

Okay, let's talk about the software, and TrakEM2 first. Whoops. It looks like this. It offers you display canvases where you can watch your data, and it offers you a navigator screen. On top of this data you can browse your image data, and you can annotate whatever you see in there with a lot of geometric primitives. There are freeform primitives, as you see here, to mark the volume of cellular or sub-cellular components at will, but it also offers you more simplified constructs, like skeleton traces to just mark the neurites, and ball structures to mark, for example, a nucleus in a cell, and by that build up the ball-and-stick models that are very common in the community.

You associate all these volumetric annotations with a term from a freely specifiable ontology; this is a hierarchical ontology, so you can define your own terms. In this case we see a brain, we see cells in brains, and so on. That is the abstract ontology part, and in the end it finishes at a concrete term, which is then the annotation piece that you have on your screen.

TrakEM2 cannot only display this single-channel electron microscopy data; you can also join multiple data sets into the same canvas and overlay them using various graphical overlays. What we see here is a manually aligned data set from a confocal microscope. Its resolution is extremely poor compared to the EM data set, but you see structures that are otherwise not distinguishable in the EM data.

Okay, so what you see here is that this EM data set was not captured in a single run, and the main reason is that it is essentially too big for that, at least when using a normal CCD camera, which has a resolution of 2k by 2k or 4k by 4k pixels. So what you do is image all these sections in overlapping tiles, and then you have to stitch them together.

I actually forgot to mention that this is serial section data, so it's actually taken from a block of tissue. You generate all these sections, they are floating on a water bath, and then they are collected, by a very dedicated and careful person, on an electron microscopy grid. This person must be so dedicated that he must be able to stop his heart beating and stop breathing for an infinite amount of time, so as not to destroy all these sections, not to lose too many. His name is Rick Fetter, and he's also working at Janelia Farm. Nevertheless, by generating these sections you're applying a lot of deformation to the single sections, and while imaging you're deforming the single image tiles as well.
While the electron beam is making an image, it's heating up the section and deforming it, whatever. So we're getting a lot of deformation in this process, and my task in this business was to restore, essentially, the volume information from all these section images. This work is also available in TrakEM2, and we have two modes of registration.

In the first place there is a landmark-based registration that uses approaches from computer vision. It extracts invariant image features from all these image tiles, and it can not only bring them into alignment by using these features, but also recognize which image belongs where. So essentially you can start the montage by just throwing all images on top of each other, and it finds out how they are arranged. In the end you have a large set of corresponding points that spread across all these images, and you put all these point correspondences into an optimizer that minimizes the sum of squared displacements of all these points. What we see here is a minor extension of the paper that you see down here: it's using an affine transformation for each image tile, which is regularized with respect to a rigid transformation so that the solution doesn't shrink away to nothing.

[Question: so this is a collection of images you've taken, because you have to tile the surface?] Yeah. Let me start here. This is the size of one image; in this case it's 2k by 2k, right, and you have a lot of these images. Then, from each of the images, you extract image features. This is what Microsoft is using for its virtual tours, and it's what panorama stitching software uses for panorama stitching. Each of these features, these are actually the green points, whatever is green here is a point, is an interest point which has a specific location. [Question: interest points that are recoverable in overlapping images?] Exactly. And for each of these features you have a descriptor, and the descriptor is invariant with respect to some sort of transformation: it's invariant with respect to rotation, to scale, and also to some amount of affine transformation. So you can have all these points associated with each other.

[Question: do you initialize with a known configuration?] That's what we do in reality, but it's not required. In principle you could just start from scratch, but the movie looks so much better if you don't initialize it with a pre-known configuration, I'm telling you. What you would do in reality is compare only what you know belongs together, initialize with a nice configuration, and not let the optimizer start at this crazy point, because that is just weird; you would pre-align it somehow and then solve for the final solution. But this is really nicer, isn't it? Good. And, as you see, you don't do this only for one section, but for all of them, all at once, and then you have a global cost function.
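To make that global cost function concrete, here is a minimal sketch of the objective in my own notation (the published formulation differs in detail): each tile $i$ gets a transformation $T_i$, and we minimize the summed squared displacement over all sets $C_{ij}$ of corresponding point pairs between tiles $i$ and $j$, with a term pulling each affine $T_i$ toward its closest rigid transformation $R_i$:

$$\underset{\{T_i\}}{\arg\min} \;\; \sum_{i,j} \; \sum_{(\mathbf{p},\mathbf{q}) \in C_{ij}} \bigl\lVert T_i(\mathbf{p}) - T_j(\mathbf{q}) \bigr\rVert^2 \;+\; \lambda \sum_i \lVert T_i - R_i \rVert^2$$

The regularization weight $\lambda$ is what keeps the per-tile affines from shearing or shrinking arbitrarily, while still letting them absorb small deformations.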
[Question: this is a two-dimensional problem, right?] So, we have all these images, and the points have an x-y location, and we are also correlating them across the sections; but we are not tilting the sections or moving them around in z. We're using the coordinates of the reference points, and they have the resolution of the image, so you're getting a few hundred points per image. [Question: but it is a sparse representation that simplifies the problem?] Yes, exactly, and that's why we come to the next slide.

I mean, this looks nice, and from a global perspective it is beautiful, but it's only solving for a linear model per image. If the images are large and the local deformation is horrible, then you're ending up with a stack that looks like that. This is the topic of the next alignment procedure that we built: starting from that point, we find corresponding regions in adjacent sections, and in further and further sections, to make it a really dense connection; then we feed this into a spring-connected particle system, relax it, and in the end you come up with an alignment that is very close to block-face scanning, or maybe even FIB-SEM.

Okay, this has been done. Let's go back here. This is an ortho-projection, an ortho-slice through the stack, so every pixel line here is one individual section, and you also see artifacts in here. For example, this one here is a staining artifact, and all these black lines here are staining artifacts. Even with a manual alignment you see these black spots, and they are independent per section, so the system must be very robust with respect to all these differences; they're actually not interesting.

Okay, we have done this for the entire data set, and it looks like that. This is a global view; we zoom into it, and we zoom in further, and at this resolution you can actually see the synapses: these are the little black areas between the neurites. From that data, finally, reconstructions are generated. This data set, I must say, is not the most perfect one. Okay, this is an ortho-slice projection; I cannot actually play forward here, but you get what it is about, right? From the side it looks as it does from the top. Good.

Now at Janelia Farm they are generating even larger data sets, and this is a cross-section of a ventral nerve cord from an L3 larva, so from an older Drosophila larva. It's 200 image tiles per section and 20 sections, running forth and back, and they were lens-corrected; everything was nice, the protocol was very well set up. The interesting part is that in this data you can see the microtubules running through. Let's do it again: if you go in here, you see these small spots, the microtubules running through, and they stay nicely in place.
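Going back to the spring-connected particle system for a moment: as a rough sketch in my own notation (not the exact formulation of the published elastic alignment method), the corresponding regions define springs between particles in neighboring sections, and relaxation minimizes the total spring energy

$$E \;=\; \sum_{(i,j)} \frac{k_{ij}}{2}\,\bigl(\lVert \mathbf{x}_i - \mathbf{x}_j \rVert - l_{ij}\bigr)^2$$

where the $\mathbf{x}_i$ are particle positions, $l_{ij}$ is the rest length derived from the matched correspondences, and $k_{ij}$ is the spring stiffness. Stiff springs within each section keep it locally rigid, while cross-section springs pull the whole series into register.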
So, a good result, I hope. And what impact does this alignment have on reconstructions? This is a reconstruction of some neurons, done by Albert Cardona, on a very badly aligned data set, and you see all this jitter from section to section; from top down here you have the section direction. When we elastically align the whole system, these projections look much more realistic and actually represent what the neuron looks like. [Question: what is the remaining error?] This one here, I would say, is in the range of 10 nanometers, something like that; it can be more. It depends on how thick your sections are; I mean, you cannot really do better than what your section thickness gives you, and that is 40 nanometers in this case. So, 10 to 40 nanometers, maybe more at very bad locations. Yeah.

Okay, so this has a serious impact when you calculate the length of these neural profiles. What we see here, from the original jittered reconstruction, is the estimated cable length when you just measure from every point in the skeleton to the next. We compare this to a simplified, lower-bound length, where you replace all branch-point-to-branch-point connections in the skeleton by straight lines, so that it is relatively robust with respect to reconstruction jitter. We see that the elastic method brings these two measures very close together. They're not expected to be equal, because the branch-point-to-branch-point connection is a simplification, of course, but the difference is not that significant anymore, and it seems more reasonable.

Okay, the people involved: in the first place Albert Cardona, who made this software project; I wrote registration libraries for it, and many other people have contributed to it. It's embedded in the Fiji distribution of ImageJ, so you can just download it and try. It was supported by several hackathons, at Janelia Farm, at the Max Planck Institute in Dresden, at EMBL in Heidelberg, and elsewhere. And yeah, these are the websites; this one is the shorter one, if you want to look at it. Good.

Next project. TrakEM2 is nice; it has a lot of features for annotation and for registration, but it poses a significant problem when it comes to manual annotation, namely that it is a desktop application and stores its data in an XML file. Putting one single person in front of these massive data sets and asking them to reconstruct is unfeasible, so you need many people doing it. That has been done in the past, in Davi Bock's work at Harvard, by merging XML files and then making complicated operations to bring the data together, which is a complete nightmare in terms of data processing.

So we are now focusing on a different approach: we register the data with TrakEM2 and then make it available in a web interface, and this web interface is backed by a server-side database. The name of it is CATMAID. There we are. It has a thin browser client that allows Google-Maps-style navigation on this data; it is connected to the metadata and annotation server, and the images can be fetched from any other machine that wants to serve images.
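To illustrate why this kind of browsing stays cheap, here is a minimal sketch of Google-Maps-style tile addressing; the names, the URL layout, and the power-of-two zoom pyramid are illustrative assumptions, not necessarily CATMAID's actual scheme. Only the tiles intersecting the viewport are requested:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Google-Maps-style tile addressing (hypothetical names and layout).
public class TileViewport {

    static final int TILE_SIZE = 256; // tile edge length in pixels

    /** List the tile URLs covering a viewport of a given section. */
    static List<String> visibleTiles(
            String baseUrl,
            int z,                     // section index
            double xMin, double yMin,  // viewport origin in full-resolution pixels
            int width, int height,     // viewport size in screen pixels
            int zoom) {                // each zoom level halves the resolution
        double s = 1.0 / (1 << zoom); // full-resolution to screen scale
        int colMin = (int) Math.floor(xMin * s / TILE_SIZE);
        int rowMin = (int) Math.floor(yMin * s / TILE_SIZE);
        int colMax = (int) Math.floor((xMin * s + width) / TILE_SIZE);
        int rowMax = (int) Math.floor((yMin * s + height) / TILE_SIZE);
        List<String> urls = new ArrayList<>();
        for (int row = rowMin; row <= rowMax; ++row)
            for (int col = colMin; col <= colMax; ++col)
                urls.add(baseUrl + "/" + z + "/" + row + "_" + col + "_" + zoom + ".jpg");
        return urls;
    }

    public static void main(String[] args) {
        // A 1024x768 viewport on section 140 at zoom level 2 needs only these tiles:
        visibleTiles("http://example.org/stack", 140, 8192, 4096, 1024, 768, 2)
                .forEach(System.out::println);
    }
}
```

Scrolling through z then just swaps the section index in the requests, which is why paging through sections is quick.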
The interface looks like this. We're looking here at a serial section EM data set, with Google-Maps-style browsing; you can zoom in, and you have a navigator window, so you can jump around via the navigator window. It is relatively quick, particularly when you're on a local network, and it's very slow when you do it across the Atlantic Ocean, I noticed.

In this original version, which was developed back in 2008, essentially, we had very basic annotation tools. As you can see here, you can run through the z sections; that's also quick, because you only have to load the image tiles that are visible on the screen. The basic annotation tools that you had here are mainly text labels that have a location on the screen. Oh, I forgot one feature: you can make a URL, like in Google Maps, that says "point me to this location", send it to some collaborator, and they will see the same structure on their screen.

But now let's look at these annotations. In this web interface you have these text labels floating on top of the image data, and you can drag them around, delete them, and edit them. Because this data set is public, there are about 100 people editing crazy text terms onto it, so don't take anything written on it seriously. You can type the text on the screen, you can change the color information and so forth; it's very basic.

What the system also allows you to do is crop data, and this is the next tool that I want to show you. You can mark a region on the screen, mark a z range in there, and then press the apply button; it tells you, okay, I'm generating that on the server, and the server generates this cropped data element and provides you with a download link. Because this is an asynchronous operation, the CATMAID interface has a message system: whenever you log in the next time, you're getting a bunch of messages, like e-mails, and you can download the data. And here we see the cropped region seen through the ImageJ interface; we have cropped it at a different resolution.

CATMAID not only enables you to browse a single data set; you can associate several image data sets with the same project space. Essentially it has decoupled coordinate systems: one for each image data set, and one project coordinate system (a sketch of that mapping follows below). As you can see here, annotations like the neuropil tag are not in the image coordinate system but in the project coordinate system, so whenever you add a stack and register it into the project, the annotation appears at the right location.

Yeah, that's mostly the old version. Now Albert's group, driven in particular by Stephan Gerhard, has extended this interface to support more complex annotations, and they have transferred the full skeleton annotation tool from TrakEM2 into CATMAID. This is what you see here: you actually see Albert reconstructing skeleton traces from this data set. It provides you with a lot of information about these skeleton traces. You can not only annotate these skeletons, you can also proofread them, so it has a full proofreading system, because manual annotations are sometimes not perfect. Each skeleton and each annotation in this system is associated with the user who created it, and by that there is no ambiguity, as there was when merging those XML files, about what happened when and by whom.
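Here is a minimal sketch of what such decoupled coordinate systems can look like; the names, and the restriction to scale plus offset, are my assumptions for illustration, not CATMAID's actual code:

```java
// Sketch of decoupled stack and project coordinates (hypothetical names).
// Annotations are stored in project space; each stack carries its own
// mapping, so a registered stack shows annotations at the right place.
public class StackToProject {

    final double[] scale;   // e.g. nm per pixel in x, y, z
    final double[] offset;  // translation of the stack within the project

    StackToProject(double[] scale, double[] offset) {
        this.scale = scale;
        this.offset = offset;
    }

    /** Map a pixel coordinate of this stack into project space. */
    double[] toProject(double[] stackCoord) {
        double[] p = new double[stackCoord.length];
        for (int d = 0; d < p.length; ++d)
            p[d] = stackCoord[d] * scale[d] + offset[d];
        return p;
    }

    public static void main(String[] args) {
        // An EM stack at 4x4x40 nm and a confocal stack at 300x300x1000 nm,
        // both registered into the same project space:
        StackToProject em = new StackToProject(
                new double[]{4, 4, 40}, new double[]{0, 0, 0});
        StackToProject confocal = new StackToProject(
                new double[]{300, 300, 1000}, new double[]{1200, 800, 0});
        double[] tag = em.toProject(new double[]{1000, 2000, 100});
        System.out.println(java.util.Arrays.toString(tag)); // project-space nm
    }
}
```

As mentioned below, future versions generalize this association from scale and offset to full n-dimensional affine transformations.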
Yeah, that's mostly it, and we're looking forward to extending this with more advanced annotation toolkits. There's also a fancy 3D view using WebGL, likewise done by Stephan Gerhard, very nice, where you can look at what you have reconstructed on top of the image data.

Okay, I'm skipping forward. What is the database back end of these annotations in CATMAID? The ontology that we are able to express here is a bit more complex than the hierarchical system available in TrakEM2. We have made a Postgres representation of an ontology that consists of classes and class instances, so each class instance refers to a particular class. It has relations and relation instances; the relations can associate classes, and the relation instances can associate class instances. That means you can express a sentence like "all cells are surrounded by a membrane", and you can also express a sentence like "cell a is green", where green is then a class instance of the class color. All these concepts inherit from a concept table; we're making explicit use of Postgres table inheritance there. And Mark Longair has transferred a neuroanatomy ontology for fly brains into the system, so it is fully equipped with an abstract part of these ontologies.

That is the abstract domain; in the concrete domain you then create tables for your particular annotation tools. What we see here are the concepts that we need to express skeleton traces in this system. We have treenodes that have a parent-child relationship, and we also have single locations in space. All these single locations and treenodes can then be associated with an abstract term, like "is element of" a skeleton; the skeleton is a model of a neuronal arbor, the neuronal arbor is part of a neuron, and so on. So you can extract very informative things out of these annotations (the sketch below shows, for instance, how cable length falls out of the treenode representation).

What is coming in the future? CATMAID now has a relatively nice interface design, which is very simple but also flexible. We have a tiled window manager where you can drag windows around; it's actually very nice for working on very large screens and on very small screens. It has two bars and a bunch of widgets like sliders and input fields and whatnot. It separates project coordinates from image coordinates, as I've told you before, and future versions will extend this into an n-dimensional domain: currently it is three-dimensional, it is going to be n-dimensional, and the association of the image coordinates with the project coordinates will be n-dimensional affine transformations. The reason we are going to do this is that we want to use it, in the end, for tracing cells in four-dimensional images of Drosophila embryo development. I'm skipping the last part completely. There you essentially have an image of each cell and of where it moves in space over time, at very high resolution.
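Going back to the treenode representation: here is a minimal sketch, with hypothetical names rather than CATMAID's actual schema, of how a parent-child treenode table yields the cable-length measure discussed earlier:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of skeleton treenodes (hypothetical names, not CATMAID's schema).
// Each treenode stores its parent id and a 3D location; summing the
// parent-child edge lengths gives the cable length of the skeleton.
public class CableLength {

    record Treenode(long id, Long parentId, double x, double y, double z) {}

    static double cableLength(Map<Long, Treenode> skeleton) {
        double length = 0;
        for (Treenode t : skeleton.values()) {
            if (t.parentId() == null) continue; // the root has no incoming edge
            Treenode p = skeleton.get(t.parentId());
            double dx = t.x() - p.x(), dy = t.y() - p.y(), dz = t.z() - p.z();
            length += Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
        return length;
    }

    public static void main(String[] args) {
        Map<Long, Treenode> skeleton = new HashMap<>();
        skeleton.put(1L, new Treenode(1, null, 0, 0, 0));  // root
        skeleton.put(2L, new Treenode(2, 1L, 10, 0, 0));
        skeleton.put(3L, new Treenode(3, 2L, 10, 10, 40)); // next section
        System.out.println(cableLength(skeleton));
    }
}
```

The lower-bound variant from the alignment discussion would simply skip the intermediate nodes and connect branch points directly.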
Okay, future projects to come: we want to add volumetric annotations and different imaging back ends, and we want to connect the system with server-side image processing, to support automatic segmentation that gets feedback from user interaction, on n-dimensional data sets. That's an ambitious project; I don't know exactly whether we will achieve it, but it's an interesting idea. Also, this Google-Maps-style interfacing is still a bit slow when it comes to screen refresh, and people like to have 30 frames per second when running through a data set. The only way to do that, actually, is to display the canvas as a movie, and I think that's what will happen at some point: we just replace this display layer with a movie that is generated on the server, and what you're doing with your mouse is actually the remote control generating that movie on the server. Future music.

Okay, the people involved here: most importantly Stephan Gerhard, who has developed the new annotation toolkit and the WebGL viewer, and Mark Longair, who was very much involved in driving all these things forward. There has been one hackathon by now, at Janelia Farm, and the project is funded by Max Planck and by Albert at Janelia Farm.

So, the last part of my talk, which I will not get all the way through, is this generic image processing library. Because we are not seeing the presentation slides, I want to show you just a few examples. The aim of this library is to virtualize pixel access. It does so in the Java programming language, and it maps these pixel objects onto primitive Java type arrays, so you can store large data, and you can write new back ends; the front end does not see any of that, because we access these pixels through iterators and random accesses. By that you can actually achieve some very interesting portability of software.

Here is a simple image viewer. It loads an image, and the viewer essentially only knows that it displays a grid of pixels; that's very simple. So it loads an image, and this is a grid of pixels; so far so good. But if you want to move it around, this grid of pixels needs to be transferred into real space, and you do that by interpolation. The result is a function that is defined for every coordinate in real space; it's defined for all real coordinates. That's all you need to then feed it into a transformation, which again generates an image that is defined at all real coordinates, and nothing of that is happening in memory; it's all generated on the fly. We're transferring the coordinates on the fly, we're transferring the pixel values on the fly. You can see that here with nearest-neighbor interpolation, and we can switch to other interpolation types. There we are.
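Here is what that pipeline can look like in ImgLib2, to the best of my recollection of its API (a sketch, not the exact demo code from the talk): interpolation and the affine transform are both lazy views, and no pixel is computed until something asks for it.

```java
import net.imglib2.RandomAccess;
import net.imglib2.RandomAccessibleInterval;
import net.imglib2.RealRandomAccessible;
import net.imglib2.img.array.ArrayImgs;
import net.imglib2.interpolation.randomaccess.NLinearInterpolatorFactory;
import net.imglib2.realtransform.AffineTransform2D;
import net.imglib2.realtransform.RealViews;
import net.imglib2.type.numeric.real.FloatType;
import net.imglib2.view.Views;

public class VirtualViews {
    public static void main(String[] args) {
        // A small in-memory image backed by a primitive float[] array.
        RandomAccessibleInterval<FloatType> img = ArrayImgs.floats(512, 512);

        // Interpolation turns the pixel grid into a function defined
        // for every real coordinate; nothing is copied.
        RealRandomAccessible<FloatType> continuous = Views.interpolate(
                Views.extendZero(img), new NLinearInterpolatorFactory<FloatType>());

        // A rotation applied as a lazy view: pixel values are produced
        // on the fly when something asks for them.
        AffineTransform2D rotation = new AffineTransform2D();
        rotation.rotate(Math.PI / 6);
        RealRandomAccessible<FloatType> transformed = RealViews.affine(continuous, rotation);

        // Rasterize back onto a grid and restrict to a renderable interval;
        // still no pixels are computed until a consumer reads them.
        RandomAccessibleInterval<FloatType> view =
                Views.interval(Views.raster(transformed), img);

        // Reading one pixel triggers the whole chain for that coordinate.
        RandomAccess<FloatType> ra = view.randomAccess();
        ra.setPosition(new long[]{100, 100});
        System.out.println(ra.get());
    }
}
```

Swapping the interpolator factory is all it takes to switch interpolation types in this chain; the consumer at the end is unchanged.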
The interesting part: this is very boring, because it is just an image, but you can use the same kind of view to feed it with data that at no point in time lives in memory. So we have the same viewer, and we see a fractal here; we can zoom into it, and we can drag it around. Now there is no interpolation involved, but the result is again a function that is defined on all real coordinates and can provide you with a pixel value. And there you see that I'm not cheating: we can also change this function on the fly and generate new pixel values.

This is not only possible with this kind of data; we can also apply a nearest-neighbor interpolator, as we've seen before, to pixel data that does not live on a discrete pixel grid, where each pixel has an independent coordinate. The same viewer again: this is randomly sampled image data. Well, it's actually not random; it's a phyllotaxis pattern, because that looks very beautiful. And we're using the same viewer again. Just to explain what it does: it is not transferring this into vector coordinates or something like that; it is performing a nearest-neighbor search per pixel, for each pixel that we see on the screen, to generate this interpolated data (see the sketch below).

Okay, this works not only in a 2D plane but also for n-dimensional data, and the example that I show you here, wow, is again a fly brain. We can rotate in this volume, run back and forward, zoom in, change the way we interpolate it, and then rotate around arbitrary points, and only on the fly generate those particular pixel values; the volume is not re-created in memory. And then you can do other fancy things, like changing the algebra that the pixels are actually implementing and using basic operations on them.

[Question: could you do this on the GPU?] No, I mean, you could do it on the GPU if you want, but you don't have to. The point is that the end application, the end method which fetches a pixel value for the screen, doesn't care where it's coming from. Whether it is coming from memory or is generated by some other process is not interesting to it, because the access is virtual. You could generate it on the GPU, but in this case it's generated by the CPU. Good. [Say again?] In the end it comes into a buffer, yes, but with no intermediate steps, right? You can introduce intermediate steps to improve speed; that is possible, of course. Yeah, but look at this fractal, for example: the only thing that is stored in memory, in the end, is what you have on the screen; everything else is a function. Exactly.

And, well, actually a better example is here, where we have this CATMAID stack serving as a back end for the data. The image data that is seen here, in its whole 3D size, is 70 gigabytes decompressed, so that wouldn't fit into this machine; but still we can navigate in here. It's a bit slow, because it's fetching those tiles and the speed is limited by JPEG decompression, but we can have 3D navigation in this kind of data set, and the viewer is not caring where the data is coming from. Even if you do image processing on it, it is not caring where the data comes from; you just replace the back end. So it's a clear separation of concerns: you're not intermingling your method implementation with infrastructure. That's the nice part.
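That per-pixel nearest-neighbor search is close to one of the published ImgLib2 examples; here is a minimal sketch as I recall the API, with random samples standing in for the phyllotaxis pattern:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import net.imglib2.KDTree;
import net.imglib2.RealPoint;
import net.imglib2.RealRandomAccess;
import net.imglib2.RealRandomAccessible;
import net.imglib2.interpolation.neighborsearch.NearestNeighborSearchInterpolatorFactory;
import net.imglib2.neighborsearch.NearestNeighborSearchOnKDTree;
import net.imglib2.type.numeric.real.FloatType;
import net.imglib2.view.Views;

public class SparseSamples {
    public static void main(String[] args) {
        // Scattered samples: each value has an independent real coordinate.
        Random rnd = new Random(42);
        List<FloatType> values = new ArrayList<>();
        List<RealPoint> positions = new ArrayList<>();
        for (int i = 0; i < 1000; ++i) {
            values.add(new FloatType(rnd.nextFloat()));
            positions.add(new RealPoint(rnd.nextDouble() * 512, rnd.nextDouble() * 512));
        }

        // A KDTree makes the per-pixel nearest-neighbor search cheap.
        KDTree<FloatType> tree = new KDTree<>(values, positions);

        // Wrapping the search in an interpolator yields, once more, a
        // function defined for every real coordinate.
        RealRandomAccessible<FloatType> interpolated = Views.interpolate(
                new NearestNeighborSearchOnKDTree<>(tree),
                new NearestNeighborSearchInterpolatorFactory<FloatType>());

        // The viewer (or any other consumer) just asks for values at coordinates.
        RealRandomAccess<FloatType> access = interpolated.realRandomAccess();
        access.setPosition(new double[]{256.3, 128.7});
        System.out.println(access.get());
    }
}
```

The same viewer renders this exactly as it renders an in-memory image, which is the point of the virtualized access.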
Okay, and the silly example that I wanted to show is that you cannot only do virtual pixel access; you can also create new pixel types that just overload the algebra that you're doing here. In this case you see a poor man's simulation, where we have a pixel type that has a class and a value: each pixel is a population of a species, and the weight of the species is a floating-point number. The only thing we overload here is the addition operation. When you add two of these pixels, then, depending on whether they are of the same class, they either just add on top of each other or, if not, they fight against each other: the stronger one wins, and the weight of the weaker one is subtracted from it. Using this pixel type, you can then just execute Gaussian convolution on the image, and do it over and over and over again. We get a distribution, and at the borders between the species they are fighting, and in regions where it exceeds some threshold they all go extinct and start from scratch. That's essentially it. So you're taking an operation as simple as Gaussian convolution, just overloading the operators, and by that you can express interesting things, I think.

Good. So much about it. I think my time is over, is it? Good. Thanks for your attention.