Hello, my name is Bram van den Broek. I'll be the co-host today of the webinar by Robert Haase. Robert Haase is a computational microscopist in the Myers lab at the MPI-CBG in Dresden. I came to know him as the guy behind the search bar and the autocomplete function in the Fiji editor, but of course he's much more than that, and he will talk about the new developments of CLIJ today. Let me also introduce the other moderators, Marion Louveau, Romain Guiet and Mathias Arzt. They will answer questions as well, and with that I would say the floor is all for Robert.

Thank you, Bram, for this kind introduction, and thank you all very much for joining this NEUBIAS Academy@Home webinar about GPU-accelerated image processing with CLIJ2. I would also like to thank the NEUBIAS Academy, the people behind the scenes organizing all that and giving me the chance to talk here about some technology I maintain. By the way, I'm actually not the only one behind the search bar; Curtis Rueden and Deborah Schmidt were pushing this project forward, I was just doing a little part of that. So the session today is again moderated by these four nice people; thanks also to Bram, Romain, Mathias and Marion for supporting me here. You find here on top a link where all the material is available online, and I think one of the moderators will copy this link to the chat so you can download everything: the slides and the example scripts for the very end. Furthermore, the outline will be: I will give an introduction to GPU-accelerated image processing; I will show you how to use it from ImageJ macro; I will show you some example workflows, mainly image segmentation and applied graph theory, and I will show you what that means; and at the very end, some outlook on how it works on other platforms and what we developers plan to do next. So CLIJ, first of all, to introduce this term, because sometimes people ask me where the name CLIJ comes from.
It is a bridge between the so-called Open Computing Language (OpenCL), a language usually understood by graphics cards, and ImageJ. So this is where the name CLIJ comes from, and the basic idea was to enable GPU-accelerated image processing for people who use Fiji, for biologists, without the users of Fiji having to learn a new programming language. As you may have heard already, graphics cards allow accelerated image processing. So workflows like this one: you start from a three-dimensional light-sheet data set showing a developing Tribolium embryo, do some spot detection, compute Voronoi diagrams estimating where different cells meet, and derive meshes from that kind of data. And then you further analyze the meshes, for example to visualize the average distance between the nuclei. That's heavy computation actually, it's quite complicated and takes time. And when you run workflows like this on a graphics card, you can do something like that in under five seconds per frame. That was just not possible before. So we wanted to do this kind of analysis as fast as possible, and that's why we made CLIJ available for everyone who has similar tasks to do. Those of you who already know CLIJ and CLIJ2 know that you can ask the software how many commands there are. There are 124 commands in CLIJ, and CLIJ2, which is about to be released (so it is not finally done and not finally out), has at the moment 324 operations. That's more than a doubling; we have much more functionality in there now. And the focus of CLIJ2 was not just filtering and segmenting images, but also the next processing steps afterwards: everything which has to do with cells and connections between cells and neighborhood relationships and things like that. This is the major part of CLIJ2, and I will show you in much more detail what that means.
And here it should already be clear that CLIJ is some kind of programming library, which you can, for example, use from ImageJ macro. So you build your workflows by coding, in fact, but there are some alternatives; I will come back to this. And what's also very important: we spent quite some time making this accessible through a very interactive website. So it's not like you just enter code in a text editor and then you have your workflow; you interact with the website, which tells you what to do next and how a given command works. So you can read this in some kind of interactive documentation, which hopefully makes it as easy as possible to use. And then you gain some experience with using CLIJ, I'm working with it now for something like two years, and when you have some experience in coding, you can also do more advanced stuff. So workflows as such, going from an input image through processing, processing, processing to an output image, maybe with a for loop around it, are basically just the starting point. This is how you start using CLIJ, how you learn it. But as soon as you reach a certain level, and if you have some additional programming skills, of course, you can build quite complicated workflows, for example behind such a user interface as you see here. So I'm browsing here through a 35-gigabyte virtual stack of a whole developing Drosophila, and I'm just adapting parameters of my workflow. So I don't enter different numbers in my script, execute the script, wait five minutes and check if the result is okay. No, these parameters I can manipulate in real time and see the result in real time, because it is so incredibly fast. Are there questions until here? During my webinar I will from time to time stop on these kinds of slides and maybe take burning questions. So if there is already a burning question, I could take one now. Not yet? That's a good sign.
So I will just go ahead, and from time to time there will be such a slide where we can do a little interruption. At the very beginning, I will give a short introduction to GPU-accelerated image processing, and I mean just the very basics, right? So a computer has a central processing unit. This is our microscope workstation, which has two of them; not super rare, but it happens. And furthermore there are graphics processing units; here, some are mounted inside the computer and one is lying outside. Furthermore, when you open a laptop and take a very, very close look, you almost need a microscope, you will realize that basically every computer has a graphics card. In this case it's an integrated graphics card, which sits in the same chip as the central processing unit; GPU and CPU are built together in one chip. I'm just saying: you don't have to buy one of these expensive graphics cards. Basically every computer which has a screen has a graphics processing unit inside, because otherwise you wouldn't see anything on screen. So integrated GPUs are a potential alternative. And furthermore, I recently discovered these things, and I'm actually quite happy with them: you can buy external GPUs. So you develop your workflow on your little laptop, and as soon as you're convinced that the workflow does what it's supposed to do, you plug in an external graphics card and suddenly everything is something like 10 times faster. This is an opportunity, so to say. From an architecture point of view, there are some general differences between CPUs and GPUs, which finally result in the speedup we observe. When you look here on the left side: a central processing unit usually has some cores, four cores, eight cores, 16, maybe you have an expensive workstation with 32, but not thousands, right? So it's a limited number of cores. And they have access to the random access memory in the computer.
This is your CPU memory, and accessing it takes quite some time; that's why these arrows are drawn so long. So if you access a pixel, you read out the pixel, which takes some time, then you process this pixel, and then you write it back into the memory. And this is how image processing is done. When you do image processing on a GPU, this access between the compute units and the memory is much faster, something like a factor of 10, 15, 30 faster. And furthermore, there are many more compute units in typical graphics cards; there are graphics cards with thousands of compute units. The only thing you have to keep in mind when you want to exploit graphics cards is that in order to get an image into the memory of the GPU, you have to push it there, and you have to pull the result back. I will come back to this later on. But in order to exploit this very short way, this very fast access between memory and compute units in the GPU, you have to ship the data to the GPU. Otherwise, you cannot gain any speedup. This brings us to a little checklist. I put it at the very beginning because it's maybe the most important thing about GPU-accelerated image processing in general, and CLIJ in particular: when does it make sense to GPU-accelerate an image processing workflow? First of all, you have to have a workflow. If you don't know what to do with your images, CLIJ can unfortunately not help you with that. You first need to have a workflow for processing your images before you can speed that workflow up. Second of all, this workflow should be slow. If you observe that loading an image takes 10 seconds and processing the image takes one second, then it may not make sense to accelerate the processing, because you anyway have to load the data for 10 seconds. That's why it actually makes a lot more sense the other way around.
If you observe that loading the data takes one second or half a second and processing the data takes a minute, then it makes a lot of sense to GPU-accelerate the processing, because then you can bring processing and loading closer together from a timing point of view; you can actually make the thing faster. As a rule of thumb, you need to consider the size of the images you want to process in relation to the graphics card or the GPU you are using. And GPUs with a lot of memory are actually expensive; that's why I'm mentioning that here. So a single processing step, and with a step I mean something like one image which exists in your workflow, times four or so, should fit into the GPU memory. If you want to process an image which is one gigabyte large and you have a graphics card which has two gigabytes of memory, you will have a very hard time. So in order to process large images like one gigabyte, I would strongly recommend having graphics cards with eight, 10, 16 gigabytes of memory in order to conveniently program a workflow. Otherwise, it may really be hard to set something functional up. And, as I mentioned earlier, you have to actually translate your workflow. It's not like you just flip a switch and suddenly everything is fast. You have to re-engineer it, you have to rewrite it, and therefore you have to learn CLIJ and how CLIJ works. That's why I'm here today; I will show you in more detail. But you should have that in mind: unfortunately, it's not a switch you can just flip. And in the particular case today, you should know the basics of ImageJ macro. You should know how variables work, how a for loop works, how if statements work. If you don't know yet and you are watching this video on YouTube, there are also other videos where you see how ImageJ macro works, also from the NEUBIAS Academy; Anna Klemm was just introducing that. So maybe watch that video first and then come back here and continue at this point.
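To make that memory rule of thumb concrete, here is a small sketch in ImageJ macro. The image dimensions and the assumption of 32-bit pixels (4 bytes per pixel, as a worst case) are my own choices for illustration, not from the slides:

```
// Hypothetical example: estimate GPU memory needed for one workflow step.
// Assumes 32-bit pixels (4 bytes per pixel) as a worst case.
width = 2048; height = 2048; slices = 64;
bytesPerPixel = 4;
imageGB = width * height * slices * bytesPerPixel / pow(1024, 3);
print("One image: " + imageGB + " GB");
// Rule of thumb from above: budget about four times one image.
print("Recommended free GPU memory: about " + (4 * imageGB) + " GB");
```

With these example numbers, one image is 1 GB, so you would want roughly 4 GB of free GPU memory for that step.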
And yeah, if you want to get really the most out of it, if money does not play a big role (there are people like that, right?) and you just want to speed up your workflow as far as you can, then you should buy a graphics card with GDDR6 memory and a memory bandwidth of more than 400 gigabytes per second. You can read this in the specification of the card; usually it's on the box in the shop. And if you really absolutely don't know what that means, then just contact your IT support, and I'm pretty sure they can help you select a graphics card which fits these conditions. I think the graphics cards with the highest speed on the market at the moment have something like 600 or 700 gigabytes per second. So there are some opportunities there. For full disclosure: I don't get any money from any GPU vendor. I just have fun using this technology.

Questions? Yes, we have a few questions. One question is, sorry, it's already answered, but let's still do it: is this CUDA-based or something else? As I said initially, it's the Open Computing Language, OpenCL. And OpenCL runs on basically all graphics cards. That's why you can use it with cards from AMD, Intel, Nvidia and potentially other vendors which support it; these are the three vendors whose graphics cards we have tested well. Another question, or shall I just move on? Yeah, there's a question that somebody asked: is a stack considered as one image? Yes. I mean, if you have a big stack where every plane is super large, then you can also submit these planes individually to the graphics card, process them, and get the single planes back. You can do this. But in general, if you push an image stack to the GPU, then it exists there as one memory block, which might then block one gigabyte of your memory in the graphics card, for example. Okay, then I will just move on, right? What can we gain from GPU-accelerated image processing?
Let's put some numbers on that. In order to measure that in more detail, we compared functions which are available in CLIJ with functions in Fiji or ImageJ. And we did that on two computers: on a small 13-inch laptop and on the heavy workstation I showed you earlier in my talk. Here you see the actual specifications: the workstation has an Intel CPU and an NVIDIA Quadro graphics card, and the laptop has an Intel CPU and an Intel GPU. You will now see a lot of plots of this kind. You see here, for example, on the x-axis the image size, and on the y-axis you always see the time it takes to process such an image. So if I take an image of a size of 250 megabytes and apply a Gaussian blur to it, then one of my CPUs, namely the CPU in the workstation, takes six seconds for that. The CPU in my laptop was in this particular case a little bit faster, so it needs less time; that's why this curve is further down. And the graphics cards, in orange and in red, need even less time. These plots are characteristic for the kind of operation you are applying. So this is image size versus time in the case of the Gaussian blur. We did quite a number of these comparisons, and I think this was one of the most extreme cases: the minimum filter in 3D. The local minimum takes something like 15 seconds on my laptop CPU; in this case, the laptop CPU is slower than the workstation. And you can see here that what takes 15 seconds, or almost 15 seconds, on the one CPU (I'm not sure about the exact number) takes almost no time on the GPU. And this is the thing we would like to exploit. Furthermore, it does not just depend on image size; it also depends on the parameters you enter. And here you see that the ImageJ developers, whenever they did that, it might be quite some years ago, must have spent quite some time on optimizing the Gaussian blur.
Because you see here: when I change the sigma parameter of the Gaussian blur, which means more pixels are taken into account, at least theoretically, the time the Gaussian blur in ImageJ takes to process that image stays actually the same. While when you do it on a graphics card, in our simple, naive implementation, it takes more time depending on the sigma. I'm not sure when this green curve and this red curve will cross; that might be at a super large sigma. But you see here, in the case of my laptop (this is laptop CPU versus laptop GPU), they cross at a certain sigma. So being below or above that sigma might be decisive for whether I do this operation on the CPU or on the GPU. Nevertheless, the Gaussian blur is a bit of an exception, an exceptional case. And yeah, the minimum filter in this case: 400 seconds with a radius of 12. So 400 seconds on the CPU versus, again, basically nothing on the GPU. This is what we are trying to do. And this was approximately how we first put the paper on bioRxiv and how we sent it to the journal. And then we had some discussion with a reviewer. I think at that moment, when we had this discussion with reviewer number three, I was not so super happy about doing all this. But now, looking back at this slide and all the measurements we did on all the different operations, I really would like to thank reviewer number three for asking. He wanted to know more details, and today, looking back, I am super happy that I provided these details. I will not go into all these plots, but you may agree that for almost all the operations we show here, the CPUs take more time than the GPUs, independent of image size, at least above a certain image size. So I would carefully say that this is a general thing we can observe: GPUs are faster than CPUs when processing images of a significant size.
And you can summarize that a bit. I mean, all these plots are very interesting, and depending on which operation you want to heavily use, you may want to have a really close look into them. However, you can also summarize it in a table. You see here, compared to my laptop CPU, the laptop GPU has the speedup factors for the operations I listed here. So some operations are three times faster on the laptop GPU than on the CPU. As a rule of thumb, you can say, okay, maybe something like three or five is typically doable on the laptop, and in some cases you get more. On the workstation, where we have this more expensive hardware built in, you see speedup factors up to 180. And to make that clear: a speedup factor of 180 means that what others do in three months, you do on a single day before lunch. So I see some potential in this kind of technology. Nevertheless, we are talking about individual operations; it's just the processing of the pixels. What we haven't considered yet is that we have to push and pull the pixels to and from the graphics card's memory, and pushing and pulling takes time. These were quite sophisticated measurements, but I would say the take-home message is, as a rule of thumb: transferring one gigabyte of data to the graphics card or back takes about one second, at least on the systems we tested. These are numbers which may change with the next generation of computers or graphics cards, I'm pretty sure about that, but at the moment the rule of thumb is: one gigabyte takes one second. And if you process images on the graphics card in much less than one second, you still pay two seconds for getting the data there and getting the data back. So you have to keep that in mind before you put your whole workflow on the graphics card: you have to develop a workflow which earns back what you invest in the transfer.
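If you want to check this one-gigabyte-per-second rule of thumb on your own machine, a sketch like the following could do it. The timing uses ImageJ's getTime(); the image title and dimensions are my own choices, picked so that the stack is about one gigabyte:

```
// Sketch: time pushing ~1 GB to the GPU and pulling it back.
run("CLIJ2 Macro Extensions", "cl_device=");
Ext.CLIJ2_clear();
// 1024 x 1024 x 256 voxels at 32 bit = 1 GiB
newImage("big", "32-bit black", 1024, 1024, 256);
start = getTime();
Ext.CLIJ2_push("big");
print("push took " + (getTime() - start) + " ms");
start = getTime();
Ext.CLIJ2_pull("big");
print("pull took " + (getTime() - start) + " ms");
Ext.CLIJ2_clear();
```

Keep the warm-up effect mentioned later in mind: the very first push after starting Fiji may be slower than subsequent ones.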
And just as an example for that (this is also from the paper, actually) we implemented one example workflow. You see here an axis from top to bottom: that's time. The Gaussian blur, this quite thick block, takes time when you do it on a CPU. When you do it on a GPU, the Gaussian blur itself is faster, so you gain something, but again, you have to push the data there and you have to pull the result back. And that's why, all together, this distance here in time is longer than just applying the filter on a CPU. So the take-home message is: if you apply a single operation on a GPU, it may actually be slower than on the CPU. What you have to do is implement your whole workflow on the GPU, like here, for example, where you see a difference-of-Gaussian in combination with a cylinder maximum projection, in order to gain something. And the workflow you see here then produces this kind of data. This is a cylinder maximum projection: we take the surface of a Drosophila embryo and we unfold it, we roll it out, we make a two-dimensional image out of it. And if you do this for 300 frames of this Drosophila embryo (I think one frame was something like 150 or 180 megabytes large), that takes two hours and 44 minutes on the CPU of my laptop and 11 minutes on the GPU of the laptop. And when I then go to the more expensive workstation and run it there, it takes five minutes. And you may agree that if we go down from two hours 44 to something like five minutes, we gain a lot. So it's real fun to develop these kinds of workflows. But before I go deeper and show you how this technically works and how you can do it, are there any questions?

Yes, the questions are streaming in, particularly a few that I think you can answer, I hope. Well, the first one was: can you use CLIJ with cloud-based GPUs? Can you send the workflows to the cloud?
Yes. I have some collaborations, and some people also reported on Twitter that this technically works. But unfortunately, I have also heard, and I'm not sure which platform it was, that in some cases it doesn't. And then you have the problem that if you're using an online service, for example some free web space where you can also run operations on GPUs, you may have access to the GPU, but it does not support OpenCL, and you cannot install anything because you're using a free web service. That's unfortunate. But if you have access to the machine itself and you can install drivers, via Docker or whatever, then I'm pretty sure you can make these things work. There are people who did it already, so it's technically absolutely possible.

Okay, another question was: can you use multiple GPUs? You can use multiple GPUs. I actually do that now basically almost every day, but you cannot do it from ImageJ macro. The ImageJ macro language is unfortunately so limited that we cannot have an object (I'm now talking about object-oriented programming) representing the GPU. That's why it's not possible in ImageJ macro, but you can do it in Jython, in Groovy, in Java. If we keep this question in the list, I'm happy to put a link to example code where we did that. And it makes a lot of sense for some workloads. Yes.

Okay, there were also lots of questions about: is my graphics card good enough? Some special cards? Again, basically, I also work a lot with my integrated Intel GPU, because I don't want to carry around a heavy laptop or a workstation. So I work with my small laptop, and it was, from the graphics card point of view, not expensive. And I would say laptops produced in the last five, maybe even seven, years should be supported. Yes. And regarding graphics cards, it's similar.
I would just say, as a kind of disclaimer: if you go back something like seven years in time, or if you have a graphics card which is five years old, you may be able to use it with CLIJ, but it is much slower than recent CPUs. We had that in one case: an old graphics card in a modern computer makes no sense. So if you want to go for high-performance image processing, for GPU acceleration, you should invest some money and buy a modern graphics card. But the gaming cards you can use for that start at 300, 400 euros, so we are not talking about big money here. Yes. One more question, maybe?

Yeah, so maybe the last question is a technical one. Somebody was asking whether, in the graphs that you showed, you took the GPU loading time into account, or is it only the processing? It is only the processing; push and pull are not part of that. That's why I showed push and pull independently. And furthermore, maybe the question also goes in this direction: you have a so-called warm-up effect. You have it actually on the CPU and on the GPU, just in different orders of magnitude. When you execute an operation for the very first time, it may be slower than subsequent executions, because code is compiled (the code which will be executed later on the GPU) and shipped to the GPU, and that can actually take some time. I would now have to look into my own paper to find out how we handled that, whether we took the median over time or whether we excluded the first time point and reported it separately; I'm not 100% sure. But it's also one take-home message: don't process a single image on a GPU; process many of them, because the second, third, fourth iteration will be faster. You will see that later, and you can measure it yourself in the exercise.

Okay, I will just move ahead, Bram, right? Yeah. Cool. So if you use CLIJ from ImageJ macro, there are a couple of lines every such macro should have.
And just as a spoiler, I usually hardly ever type them. So it's not something you have to memorize; I will show you that these lines come automatically. You don't have to recall them, but you should be aware of them and you should know what they're doing. First of all, you should load data. For example, here I run a command to open a sample image in ImageJ. So you learn: CLIJ itself has no functions for loading images; we use the ImageJ functionality for that. And when we run this command, a window will pop up. Then comes the first CLIJ-specific thing: you have to initialize the graphics card, and for that you have these two lines. Actually, you only have to have this first line, but I strongly recommend also putting the clear command here; the clear command empties the GPU memory. I usually do that at the very beginning, because when you develop workflows and you iteratively try to develop a new workflow, this thing will crash, right? This is the way workflows are developed: it crashes, crashes, crashes, and at some point you have it, and then you are happy, because it was super hard. And when it crashes and you then restart the macro, the first thing you want to do is get rid of the old images. So this line, the CLIJ clear command, corresponds to ImageJ's "Close All": as if you would close all windows, but we are just clearing all images on the graphics card, because we don't see them as windows. Then comes the so-called push command. We have to send the image data from the window we see on the screen to the graphics card. The push command takes, in this case, a variable; you could also put a string here which corresponds to the title of the window. So you specify which image you want to push to the graphics card by the name of its window in ImageJ. It's quite convenient to explicitly submit one particular image and not just the active image or something like that.
Something like the active image doesn't exist in CLIJ; you always have to specify everything explicitly. And here, just as an example command, the Gaussian blur takes two images as parameters, the input image and the result image, and in 2D two more parameters: a sigma in X and a sigma in Y. If you would do this in 3D, there would be another sigma parameter for the third dimension. And here again, you have to specify input images and output images explicitly. I must say that in ImageJ macro programming I was always a bit annoyed about which image is the active image at the moment, and am I really processing the right image? That's why we did it like that: you always have to specify input images and output images explicitly. Furthermore, you may see that this result image variable here doesn't exist yet in the code. This was not possible in CLIJ1; in CLIJ2 this now works. You can specify an undefined variable here, and this undefined variable will hold the name of the result image after executing the operation. Then, and that's an optional thing, but especially important for the people who work with graphics cards without much memory: after you have processed the input image and made a result image out of it, you can release the memory of the input image. If your memory is really costly and you want to process big images, you want to clean up every image as soon as you don't need it anymore. So this is the rule: as soon as you don't need the input image anymore, release it. This corresponds to the close command in ImageJ macro. And furthermore, you may see that until now we don't have a result window open yet; we still see only the input image. In order to see the result image, we have to call the pull command, and then the result image will in fact be opened. That means CLIJ runs, per default, always in batch mode.
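Put together, the commands just described form a minimal macro like this sketch (the sample image and the sigma values are my own choices for illustration):

```
// Load data with normal ImageJ functionality
run("Blobs (25K)");
input = getTitle();

// Initialize the GPU and clear its memory
run("CLIJ2 Macro Extensions", "cl_device=");
Ext.CLIJ2_clear();

// Push the input image from its window to GPU memory
Ext.CLIJ2_push(input);

// Process it; 'blurred' is an undefined variable that will hold the result image name
Ext.CLIJ2_gaussianBlur2D(input, blurred, 2, 2);

// Optionally release the input as soon as it is no longer needed
Ext.CLIJ2_release(input);

// Pull the result back so it becomes a visible ImageJ window
Ext.CLIJ2_pull(blurred);

// Clean up GPU memory again
Ext.CLIJ2_clear();
```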
You don't see any images until you explicitly pull them back from the GPU, which actually makes an ImageJ window out of them so you can see them. And last but not least, clear the memory again. Then everything is fine, back to the starting point, and you can run the next macro. So these are the important commands, and as I already said, you usually don't type them. I will show you in the next step how they are made. Those of you who are familiar with ImageJ macro programming know the recorder already. I usually record the first half of the workflow I would like to do with the macro recorder, and the second half I type; here I will show you both ways. So you have a menu, Plugins > ImageJ on GPU, and CLIJ and CLIJ2 have independent menus, with submenus for filters and so on. You see there are a lot of entries with very complicated, long names, and they have very complicated, long names because I'm actually mostly using the search window: I want to type something like Otsu or threshold or, in this case, connected components analysis. This is how you call one operation after the other. And you see here I'm processing the blobs.gif image to do a segmentation, to do a labeling. Then I visualize the labeling with the glasbey lookup table; that is a normal default ImageJ command, in combination with CLIJ commands. And if you don't really feel a difference between calling ImageJ commands and CLIJ commands, then we were kind of successful; that was the goal. We don't want to give you a new recorder or a new way of writing code. Actually, the idea was to make everything as similar as possible to how it is in ImageJ, so that you turn on the recorder, you execute your operations like Gaussian blur, thresholding, connected components analysis, and afterwards you create the macro and you have the code. Let's just zoom in for a second: what did the recorder record?
So obviously, the macro recorder records the corresponding code for CLIJ, the code I showed you earlier. And here you again see, for example, for the threshold Otsu operation, we have an input image, image2, and an output image, image3. For the next operation, connected components labeling, we take image3 as input and we produce image4. And the next operation, exclude labels on edges (this is like the checkbox in the particle analyzer in ImageJ): we remove the labels which touch the image border and get a new label map out. And also here we specify the input image and the output image. It always works like that: always specifying input images, output images, and then the other parameters which are necessary for the particular command. So this is how you can use the recorder; it should be as streamlined as in ImageJ itself. The alternative to that is using auto-completion, and here the website I mentioned earlier comes into play. You start typing something like Otsu, you can click on Otsu in the auto-completion, and you go to the website, where you can read the details of what the threshold Otsu method does. And the most practical part of this might be that you can scroll down this website and it tells you what is usually done after threshold Otsu, or what is usually done before it, and you can click on those commands. For example, connected components is something you typically want to do after thresholding with the Otsu algorithm. And then you look: okay, what do you do after connected components? Exclude labels on edges. This is how you can build typical workflows. The database behind this website comes from all the examples I programmed at some point in a folder; that's why I'm saying these are typical things you can do. And again, as I mentioned earlier, you see also here that this is actually the same workflow as we did earlier: you specify input images and output images.
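Written out, this recorded segmentation workflow could look roughly like the following sketch (the blur sigma is my own choice; a recorded macro's parameter values and image names may differ):

```
run("Blobs (25K)");
input = getTitle();
run("CLIJ2 Macro Extensions", "cl_device=");
Ext.CLIJ2_clear();
Ext.CLIJ2_push(input);

// Blur a little, then threshold with the Otsu method
Ext.CLIJ2_gaussianBlur2D(input, blurred, 2, 2);
Ext.CLIJ2_thresholdOtsu(blurred, binary);

// Connected components labeling: every blob gets its own label number
Ext.CLIJ2_connectedComponentsLabelingBox(binary, labels);

// Remove labels touching the image border, like the particle analyzer checkbox
Ext.CLIJ2_excludeLabelsOnEdges(labels, labels_cleaned);

// Pull the label map back and visualize it with a normal ImageJ lookup table
Ext.CLIJ2_pull(labels_cleaned);
run("glasbey");
Ext.CLIJ2_clear();
```

Note how every operation names its input and output images explicitly, as described above.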
So hopefully it does not become too confusing now that there are two mouse pointers, sorry for that. So again, you specify input images and output images. Furthermore, there is this website, which you can also access without clicking in the script editor: you can just go there, click on the reference, and search for your commands. So this website is, again, as similar as possible to the website you know from ImageJ macro, where you have the macro functions listed. Here, these are the CLIJ functions. And again, you see which commands are typically run before and after, you see a lot of examples, and you also get code snippets for Java, for MATLAB, and for JavaScript in Icy. So today I mostly talk about ImageJ macro, but those of you who want to use it from MATLAB, from Icy, from scripting in ImageJ, or from Java directly, this is where you get the code snippets from. Furthermore, also from the starting page of the website, I programmed a lot of, let's call them, notebooks, where you see a little code snippet and what it does. And in these code snippets, you can also click on the commands and go to the reference. So you see that the CLIJ project is related to doing something with graphics cards, but it's also related to providing documentation, which makes it as easy as possible for the users to use the graphics cards for fast processing. So I would like to make this as easily accessible as possible. And if you don't understand something, or if you say this is too complicated, then please tell me, because I'm super happy to advance this way of putting documentation and software together in order to make it as easy to use as possible. That's one of my big goals in this project. One slide, or one word, about error messages. Those of you who have been using ImageJ for some time may have seen this window already.
And now, in recent years, computers have more and more memory and the images maybe did not grow so large so fast; that's why we don't see this error message so often anymore. Furthermore, it's grayish, black and white, and a bit boring. So we spent quite some time on redesigning error messages. And now we have a colorful error message, which is basically telling us the same. So you see here, it says "mem object allocation failure". It tells you it cannot allocate more memory for another image. So the error message on the right side is basically saying exactly the same as the one on the left: you are out of memory. But the error message on the right also tells you which operation it was that actually failed. So you can read there at the bottom: Gaussian blur 3D. And if you totally struggle with error messages and you don't know how to fix the workflow, then just send me the full text which is in this window, for example via the forum at image.sc. And then I can tell you: ah, yeah, sure, you likely did something with the Gaussian blur. So this error message helps you, but it also helps me in helping you. That's why this is an important piece of information for communicating what went wrong. And what I'm saying here is: an error message is not like, "bad, ah, I did something wrong, let's click it away". Read it, or at least read these two positions to find out what actually happened. And you can then react to the out-of-memory error message in two ways. You can buy more memory. This is a pure hardware limitation; there is no options dialog where you can enter a higher number and suddenly it will work. No, you have to buy a graphics card with more memory. Or, alternatively, you can change your workflow.
So for example, by building in this release command: when you don't need an image anymore, release it, like you would close a window in ImageJ. This is how you react to the error message on the left. And furthermore, also often underestimated: down-sample your data. If you're just hunting some nuclei, you don't have to have the nucleus in something like 50 by 50 pixels; maybe 10 by 10 is fine in order to locate it, right? So depending on what you want to do, down-sampling might be absolutely fine. Last but not least in this section, there are also cheat sheets. You find them in the folder of the session today, but you also find them linked from the website. My brain works very visually, and that's why I made these at some point. You find these cheat sheets sorted a bit by different categories, where you see: okay, Gaussian blur, I have to provide an image and two numbers, and then this is the result which comes out. Or, if I want to invert an image, I provide one image and this is the resulting image, and so on. And here you always see the command which corresponds to that. And there are different categories of operations: there's image filtering, there's math, there's binary operations, there's something with statistics, matrices, vectors and so on. And you find all that on these four different cheat sheets. And if you miss something on these cheat sheets that you think is super important information, please let me know. I'm happy to adapt them, no problem at all. And before we step into workflows: again, are there questions? Yes, many. But let's go for the first one. So, a question from Rocco: what are the improvements of CLIJ2 compared to CLIJ, and what is actually the future of CLIJX, the experimental one? I have slides about all that in a later section, so I will come back to this. The next one then: is there a limit in the dimensions, in channels, slices and time?
So the limit in dimensions definitely comes from the limit in memory. You cannot have arbitrarily large images. What I recently observed is that 2D images in ImageJ have a certain limitation: they cannot be larger than 2 gigabytes. I think this is the error that pops up if you try that. On the graphics card, they could be larger than 2 gigabytes, but you cannot push them there in one go. I have an example script about it; I'm happy to share the link. You can create super large images on the graphics card, push the data there tile by tile, and then process the super big image on the graphics card. I would question if it makes sense, if it is efficient to do that, but technically it is possible. Yes. All right. Then another one: can we integrate neural networks using CLIJ? So inside CLIJ we have nothing deep-learning specific, but everything which is compatible with ImageJ is de facto also compatible with CLIJ. So I think it actually makes a lot of sense to do some pre-processing or post-processing on the graphics card with CLIJ and have an intermediate step somewhere in your workflow which uses, for example, StarDist for segmentation, some deep learning algorithm. So that may make a lot of sense, and that's technically absolutely possible. You just have to, again, think about push and pull in order to get the image from ImageJ to the graphics card, and from the graphics card back. Then you do some deep learning, again maybe on the graphics card, but maybe also on the CPU, and that's technically absolutely feasible. Yes. I take one more question and then I move ahead. Okay. So the question is: how can we develop a plug-in using CLIJ2? Or, more specifically: is there much difference between IJ1 programming and CLIJ2 programming? I guess we'll see that. IJ1 or CLIJ1? It says IJ1. So plug-in, I guess, then, instead of macro. So plugins are written in Java.
Maybe it's possible to do that in Jython as well, but I would actually strongly recommend doing it in Java. If you want to really, really dive into graphics cards and program your very own algorithm, then you have to learn OpenCL, because this thing is OpenCL-based. So for CLIJ1, we have an example, a template, basically an empty plug-in, on the website. For CLIJ2, I don't have it yet, but I will definitely do it in the next month. CLIJ1 and CLIJ2 are almost the same: it's really almost the same strategy behind them, almost the same class names and everything. Compared to ImageJ1, CLIJ follows completely different rules. So you don't work with an ImagePlus, what you know from ImageJ. You can push and pull these things, yes, but when you work internally on the graphics card, you have completely different data structures and it's all different. Good. I will maybe just move ahead with image segmentation. That's a very short chapter, actually. So this is a workflow I showed earlier. You start, for example, from the blobs image, and you can apply a threshold, Otsu; the method for that is called threshold Otsu. Then you can call connected components labeling and you can then exclude labels. So this is the way it goes. You can also do it in a more sophisticated way, and now we come a bit to my research. So I look a lot at these Tribolium embryos. We see here on the slide a maximum projection of a dataset which in this workflow is in reality processed in 3D. So we start, for example, from such a dataset, maybe a 200 megabyte 3D stack, and we do some Gaussian blur. We detect the maxima, and then we apply some thresholding and we mask the image. This thresholding actually removes detected maxima from outside of the embryo. And then we get a spot detection result which looks approximately like that. And from such a spot detection result, you can do... I'm not sure how to call this algorithm correctly.
It's some kind of binary opening and closing applied to label maps instead of binary images, but it's also related to the Voronoi algorithm. So what we basically do is apply a specialized maximum filter which only overwrites zero pixels. So one label, one detected cell, would not overwrite another one, but background intensity is overwritten. We apply this maximum filter a few times, then some binary erosion, and the algorithm delivers results which look a bit like that. So you basically blow up these spots which were found until they touch, you blow them up a little bit more, and then you shrink the whole thing again, and you get, I would say, a reasonably good segmentation of such an embryo dataset out. This is, by the way, a notebook which you also find on the website. So for segmentation, I would say CLIJ provides basic functionality to do basic segmentation things; again, there's no deep learning, nothing fancy in here. Questions on this part? That's a short one. Maybe not super related to this part, but there are a few people who ask about other workarounds for the memory limitations. So for instance, can you process images in blocks or cut time lapses into short pieces? Maybe I should have mentioned that earlier; especially for time lapses it is obvious: you have to process them time point by time point. So that's the command push. If you, for example, have a big 2D image over time, the command push would push this whole thing, but you can also push the current slice, or you can push the current Z stack, and then you only have this current time point, 2D or 3D, on the graphics card, can process it, and get the results back. And processing time lapses, because I also do it a lot, is the thing where you save a lot of time.
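The zero-only maximum filter described above, the one that grows labels into the background without letting them overwrite each other, can be sketched in plain Python. This is an illustrative toy version, not CLIJ's OpenCL code; the function name is made up, and where two labels compete for the same background pixel, the higher label number simply wins here.

```python
def dilate_labels_once(labels):
    """One pass of a 3x3 maximum filter that only overwrites background
    (zero) pixels, so existing labels never overwrite each other.
    Repeating this grows the detected spots until they touch."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]  # labeled pixels are kept as-is
    for y in range(h):
        for x in range(w):
            if labels[y][x] == 0:
                # collect labeled neighbors in the 3x3 window
                neighbors = [labels[ny][nx]
                             for ny in range(max(0, y - 1), min(h, y + 2))
                             for nx in range(max(0, x - 1), min(w, x + 2))
                             if labels[ny][nx] > 0]
                if neighbors:
                    # toy tie-break: the highest label number wins
                    out[y][x] = max(neighbors)
    return out
```

Applying this a few times and then eroding the result back, as described above, yields the Voronoi-like segmentation shown on the slide.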
Again, the first execution of an operation, so the first frame, may be processed a bit slower, but then afterwards you process 5,000 frames which take, maybe not a millisecond, but something like 100 milliseconds each, and then you actually gain a lot. So processing time lapses time point by time point makes a lot of sense. Pushing and pulling tiles is not part of CLIJ2 yet. I will show later how you can exploit it, but it's not 100% finished yet. It's on the way, let's say it like that. More, maybe one more question? Yeah, I have to check. There are a few, but the moderators need to sort the questions first. Maybe I just move on, and then there's more time for asking questions at the end, right? Okay, so applied graph theory. Holy, what does that mean? Research-wise, I'm not so much into image segmentation. There are worldwide hundreds and thousands of people who do image segmentation, and there are great algorithms which are super efficient, and deep learning, and it's all awesome. I'm a bit more concerned about what you can do with a segmented image. What happens afterwards? And this is where graph theory comes into play. When I studied computer science, which is now years ago, quite some professors told us that pixels aren't necessarily squares. They can have any shape, and then one pixel can have three neighbors or five neighbors or seven neighbors, right? So technically, that might be imaginable. So let's look at an image like that. You see here an image with different intensities, and you see the pixels are not squares; these are arbitrary shapes touching each other. And you see that there's some noise, and you see also some kind of sharp gradient in the middle of this image.
If I now apply a mean filter to this image, the edges around the pixels stay, but I can average the intensities locally, and that means when I apply a mean filter, I get a little orange stripe here in the middle. So I lose my sharp edge while I'm averaging my local neighborhood of pixels, and with a mean filter with a wider radius, this orange stripe in the middle becomes wider. Alternatively, you can apply a median filter, which is edge-preserving, as we know. So you can still average locally while keeping this edge. So this is a technique you learn in computer vision courses at university, and I was for quite some years wondering: okay, what can I do with that? Why should I do this? And second of all, and more importantly: with which software can I do that? As far as I know, something like this was far away from doable in ImageJ, in Fiji, and that's why you can now do this with CLIJ2. So these kinds of operations, and there is a lot of stuff behind them, are the major new thing in CLIJ2. And let's call it applied graph theory, or let's call it biology, because these could all be cells, and the cells could have physical properties expressed as intensities, and we want to investigate how physical properties of cells are locally related to each other. I am convinced that for biology, for the life sciences, this plays a big role, and that's why CLIJ2 brings this functionality to the Fiji universe. So I'm not a graph theory expert, so I had to consult colleagues of mine, in this case Carl and Savos; from time to time it is so good to talk to theoreticians. These are theoreticians who do a lot of graph theory, also in the biology field. And I had some really mind-blowing meetings with them, which also led to these tools I will show now. So let's start from our segmented embryo. We have all the cells individually. So this is a label image, and every cell... so let's assume these are cells, right?
Every cell has a different number, so a different intensity in the three-dimensional image: cell number one has intensity one, cell number two has intensity two, and so on. We can now measure the distance between these cells and draw the distances, color-coded, in a graph, in a mesh. So we can have an image like that. Again, all this is 3D; we just look at maximum projections for presentation purposes. Then, when I know the distance between neighboring cells, so cells which are touching, objects which are touching, I can average that over the object. So I can say that all connections going to this particular cell are on average something like 25 microns. And then I can visualize that for every cell. Again, we are looking at a maximum projection. And then you will see it's a bit noisy. The segmentation may not have been perfect; maybe also finding out which cells are connected to each other in the graph was not perfect. That's why this dataset looks a bit noisy. But we can apply, for example, a mean filter. So this is the average distance to neighbors of neighbors. And we can also apply a median, which looks very similar in this case, but I bet especially at sharp edges there's a difference. And we can take the minimum and maximum, and we can also investigate the standard deviation of these properties in space. And the actual big benefit from that is... or maybe I come to that on the next slide. So in order to do this, you start again, for example, from an image showing these individual spots, these detected nuclei in 3D space. And there is an operation in CLIJ2 which is called spots to point list. So out of my three-dimensional image stack, I make a new image which is 1906 pixels wide and three pixels high. And these three pixels of height correspond to the X, Y and Z coordinates. And the 1906 width covers all the points in my original image.
So I'm reducing this big image stack, this 200 megabyte stack, to 22 kilobytes, a much, much smaller image which just contains the coordinates of these spots. And when I then take this point list and I calculate the distance from every point to every point, I can draw a so-called distance matrix, which is an image that is 1906 by 1906 pixels. So it's like you have all the points on the y-axis and you have all the points on the x-axis, and the intensity in this image corresponds to the distance between those particular two points. So this is one kind of data structure we now have in CLIJ2. Furthermore, when you start from the labeling, obviously these objects touch, and we still know which of these cells corresponds to which spot over there. And you can generate a so-called touch matrix, which has the same size as the distance matrix, but it's a binary image. You see again on the y-axis and on the x-axis there are all the spots from the original image. And if a pixel is white, it means that these two spots are touching, and if the pixel is black, it means that these two cells are not touching. And this is actually what we need to build up a graph. And then you can basically use the touch matrix and the point list again. And there is a method called touch matrix to mesh, which allows you to draw a mesh. This mesh is drawn into a 3D stack, and from a 3D stack I can do a maximum-x, a maximum-y, and a maximum-z projection to get these three images out of my original dataset. If you multiply the touch matrix with the distance matrix, you get out this color-coded mesh I showed some slides earlier. So this is typical functionality which you have now in CLIJ2, which wasn't there before in CLIJ. And most of the stuff, at least to my knowledge, was also not available in ImageJ or Fiji before. There are some more methods; you can also do some quantitative stuff.
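The data structures just described, point list, distance matrix, touch matrix, and their element-wise product, can be sketched on tiny 2D lists in plain Python. This is a CPU toy version for illustration; the function names loosely mirror the CLIJ2 operations but are made up here, and the real operations work on GPU images in 3D.

```python
import math

def spots_to_point_list(spots):
    """Collect (x, y) coordinates of all nonzero pixels, row-major order.
    (CLIJ2 stores these as a 2D image: one row per coordinate axis.)"""
    return [(x, y) for y, row in enumerate(spots)
                   for x, v in enumerate(row) if v]

def distance_matrix(points):
    """n x n matrix of Euclidean distances between all point pairs."""
    return [[math.dist(p, q) for q in points] for p in points]

def touch_matrix(labels, n):
    """Binary n x n matrix; entry (i, j) is 1 if labels i+1 and j+1
    share an edge in the label image (4-connectivity)."""
    m = [[0] * n for _ in range(n)]
    h, w = len(labels), len(labels[0])
    for y in range(h):
        for x in range(w):
            a = labels[y][x]
            for ny, nx in ((y + 1, x), (y, x + 1)):
                if ny < h and nx < w:
                    b = labels[ny][nx]
                    if a and b and a != b:
                        m[a - 1][b - 1] = m[b - 1][a - 1] = 1
    return m

def touching_distances(touch, dist):
    """Element-wise product: keeps only distances between touching
    neighbors -- the color-coded mesh from the slides."""
    n = len(touch)
    return [[touch[i][j] * dist[i][j] for j in range(n)] for i in range(n)]
```

The element-wise multiplication in the last function is exactly the "multiply the touch matrix with the distance matrix" step mentioned above.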
So for example, from the touch matrix you can actually count the number of neighbors every cell has. And if you have the count for every particular cell, you can take a label image and the count of touching neighbors for every label, and you can make a quantitative parametric image which expresses the number of touching cells for each cell. So we now have as intensity a kind of physical property of the sample. We go away from intensities coming from light and fluorescence from the sample; we go now to images where we have physical parameters of our sample expressed, in fact in 3D. So counting neighbors is a quite easy thing, and measuring the average distance of touching neighbors also makes a lot of sense. And this is actually the same view I showed at the very beginning, in this workflow over time. So the one thing comes from the touch matrix, and of course, if you want to know the distance, you have to go via the distance matrix. Before I go through this slide: breathe in and breathe out. It will be a bit more involved, but we will manage. We start from the label image again, a three-dimensional image where every object has the pixel intensity one, two, three, four, five, depending on whether it is object one, object two, object three. And from this, I call a method which is called statistics of background and labelled pixels. This more or less corresponds to the particle analyzer in ImageJ. You get a table out, and this table contains things like the bounding box, the mean intensity, and so on. And from this table, you take one column and you push this column back to the graphics card. Such a column, for example the column containing the pixel count, is a one-dimensional image you can handle on the graphics card like any other image. It's just a one-dimensional image, basically a vector of intensities: an image which has a width of 1,906 and a height and depth of one.
So it's like a one-dimensional image. And, as I just showed on the slide before, you can take the label map and these one-dimensional vectors and you can say "replace intensities" in order to draw these parametric images. And here is an important point: if you have a vector which contains information about the distance from each cell to the moon, if you can measure the distance of a cell to the moon, then you can visualize in 3D, in a parametric image, the distance of these cells to the moon. It's just about having these vectors, and then you can put anything in. Furthermore, you can threshold them. I could apply a thresholding to a one-dimensional image; I'm not sure if it makes sense, but you could technically do it, or you call a method like "smaller constant". So you hand over a vector, this is the vector containing the numbers, the small-objects vector will be the result of the thing, and a threshold. And there's also a threshold method which says something like "greater or equal constant". So you get two binary one-dimensional images out. And you can also, again with replace intensities, apply this binarization of the vector to the original label map. So you can segment the whole embryo consisting of these so-called superpixels, a term I also just recently learned, to get binary three-dimensional images out, which in this case correspond to separating the embryo into two parts: the embryo itself, and the serosa, which is wrapped protectively around the animal. And furthermore, you may have seen that the segmentation was not perfect. So you can now also, with the filters I showed earlier, propagate object properties to neighbors. This is like applying a mean filter to a neighborhood in these kinds of images. So again, we start from a vector, a one-dimensional image, and we take the touch matrix and we say "minimum of touching neighbors".
So I enter here my vector of anything, distance to the moon, and I enter my touch matrix, and then I get a new vector out. This vector may look different from the original, because for every label I look for the minimum intensity, or the minimum of whatever, of the touching neighbors. And then I can again, with replace intensities, draw a new image which looks cleaner. I was removing the noise by applying a mean filter, a minimum filter in this case. And then I get a cleaner binary image out. Last but not least about this particular step: you can binarize images of that kind, but you can also make a new label map out of that. So again, exclude labels is the command for that: you take the binary vector, you take the label image, and so this arrow should actually point a little bit more to the left, and then you get a new label map out which, in this particular case, only contains half of the animal. And if you compare this to the original image I had at the very beginning of my workflow, you may agree that this is actually a quite nice, reasonable segmentation of the embryo, of this left side where the cells are more dense than on the right side, which is the serosa. Graph theory. I'm not a graph theory expert, but are there questions about graph theory? About applied graph theory? Yes, there are questions. So there's a question: in graph-based filtering, do you take into account pixel area, or does it rely on the connectivity? It depends on the connectivity. But you could, for example, and I'm not sure if this question goes in this direction, you could also say: I get a vector out which contains the volume or area of my objects, and then I binarize this vector and say, okay, I divide it into the small objects and the big objects, and then I process this vector further. So this is how the technology basically works.
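The vector-processing pattern just described, per-label values in a one-dimensional image combined with the touch matrix, can be sketched in plain Python on tiny lists. Again illustrative only: the function names loosely mirror the CLIJ2 operations (count touching neighbors, replace intensities, minimum of touching neighbors) but are made up for this example; the last function previews the neighbor-of-neighbor trick of squaring the touch matrix.

```python
def count_touching_neighbors(touch):
    """Row sums of the touch matrix = number of neighbors per label."""
    return [sum(row) for row in touch]

def replace_intensities(labels, values):
    """Draw a parametric image: each pixel of label i+1 is painted with
    values[i]; background (0) stays 0."""
    return [[values[v - 1] if v else 0 for v in row] for row in labels]

def minimum_of_touching_neighbors(vector, touch):
    """Graph-based minimum filter: for each label, the minimum of its own
    value and its touching neighbors' values."""
    out = []
    for i, v in enumerate(vector):
        neigh = [vector[j] for j, t in enumerate(touch[i]) if t]
        out.append(min([v] + neigh))
    return out

def neighbors_of_neighbors(touch):
    """Boolean square of the touch matrix: connects each label to its
    neighbors' neighbors (two hops in the graph). Note that the diagonal
    also becomes 1 (every label reaches itself in two hops), which one
    would typically mask out."""
    n = len(touch)
    return [[1 if any(touch[i][k] and touch[k][j] for k in range(n)) else 0
             for j in range(n)] for i in range(n)]
```

Swapping `min` for `max`, a mean, or a median in `minimum_of_touching_neighbors` gives the other graph-based filters mentioned above.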
You process the parameters in these vectors, and then you go back into 3D space or into 2D space and you visualize them. I hope this answers the question. I hope so too. Let's see, we have another question on the graph theory. Is there a border-to-border distance for non-touching objects, and also, is there a way to get the distance from neighbor to neighbor of neighbor? So the distance of neighbors to neighbors, this is what I actually learned from talking to the experts: it is a matrix multiplication. So it's a bit fancy. I'm happy to provide some more detailed code later on, for example in the forum. In order to find out which cells are connected to the neighbor-neighbor cells, over two steps, you actually square the touch matrix. So there are books about that. It's really amazing to look into this, and you can do really amazing things by playing with these matrices, by multiplying them with each other. It's awesome what you can do with that. So yes, this is possible. The other question was border-to-border. I don't have something like that yet, but it might be possible with some tricks, by identifying individual borders in some way; I don't have anything in particular for this yet. Okay, we have a couple of more general questions. Would you want to do that now or at the end? I take one more question and then I move on. I mean, we are almost at the end actually. Okay, let's see. So there's a question about whether this can be implemented in CellProfiler. I forward this question to the CellProfiler developers; unfortunately, I don't know. But let's see what other platforms are supported. So let's see what this brings. So the marketing department was working on this animation for the whole year; it took them some time. So CLIJ is now compatible with Icy. The prototype was there already for some months, and I'm really confident now that this works.
And that means that inside Icy... I have to click on play here, and there's sound. Oh, awesome. There's a recorder in Icy. Especially, or actually only, unfortunately, so far for CLIJ commands. So you see here one of the typical Icy dialogs that look actually quite similar to the ImageJ dialogs, where you enter some parameters, for example, to do a top-hat filter, that's a kind of background subtraction which is very similar to subtract background in ImageJ. So I take my Drosophila 3D stack here, I was doing some background subtraction, and then I want to do a maximum projection, which also has a dialog. And you record all that in a fashion very similar to ImageJ. And then you create a script by clicking on this button. And then you have a script editor, which is also very similar to ImageJ. So this was all part of Icy already, right? I just added this little recorder and some plug-ins for all the CLIJ2 commands. And then you can execute this script, which is in fact written in JavaScript; it's not macro in Icy. And so, in a very similar way, you can build workflows in Icy. And furthermore, in Icy you can also build these so-called protocols, where you connect operations with each other by lines, and then you can enter the parameters and you can execute it again and see the difference and so on. I like this a lot. I think Alexandre Dufour was developing this kind of editing workflow, so big respect; this is an awesome thing, I must really say. I was just putting these CLIJ2 blocks in here, these windows. If you want to dig into this, let me just give you one hint, because I cannot go into this in detail today; I'm happy to show it to people who are interested. If you want to start putting these blocks together in Icy, one important thing is that there are release blocks. This is like releasing memory, right? You have to add release blocks whenever you don't need an image anymore.
So your input image goes here, it goes into the top-hat filter, and it goes out here. You don't need it anymore afterwards, and that's why you have to put a release block here, so that the memory is released on the graphics card. It is a little bit complicated, but we haven't found any other way of doing it yet, and that's why you have to put these release blocks for all images in your workflow. Furthermore, you can also use it in MATLAB. I haven't cut the final release of this one yet, but you can download it, you can try it; it's just not final. I will do this in the next month, and then you can also try it out in full detail. I will also update the examples for MATLAB. So the code is very similar to macro and to the JavaScript in Icy, and you can call these commands from MATLAB as well. And maybe I will just move to the outlook. So people ask me sometimes: okay, what's the next step? CLIJ was published in June 2019; the software came out in June 2019, the paper a little bit later. CLIJ2 is about to be released in something like five weeks, so we are pretty much on schedule. We will now put out the beta version so that everyone can test it. It's not fully 100% finished, but I'm very confident that everything is fine. And the question is what will happen next year. Some people know it already: the idea is to start a new project which builds on CLIJ. So CLIJ is breaking out, right? We are compatible with MATLAB, with Icy, and we want to go further. So, from a naming point of view, we have to get rid of the "IJ". And that's why the idea was to call the new thing clEsperanto. And clEsperanto is basically thought of as a multi-platform, multi-language thing for image processing on graphics cards. So you see here again the auto-completion, in Jython in ImageJ actually. And I just put some lines there, a little workflow. I executed it. And now I can copy these three lines from the script over to a Python 3 Jupyter notebook.
So this is all available. You can download this, you can try this. And I execute the same operations, running actually on the same OpenCL code on the graphics card; they're just called via the PyImageJ technology connecting Python and ImageJ. And then I copy it over to MATLAB and I can also execute it there. So, I'm an image analyst; I have worked in this business for something like 10 years. And I often have the issue that I find a super cool algorithm online which is unfortunately not in the same programming language as I'm working in, and so I have to translate. And I would like to avoid that by making it possible to call exactly the same commands in different environments: in MATLAB, in Java, in macro, in Python, in Icy, wherever. So this is the idea of the clEsperanto project. And there's a website where you can watch the video I just showed; you get links to all the software you need in order to try out Python, ImageJ, Icy, MATLAB, and so on. And I cannot do this alone. So I'm reaching out, basically; I did it for the first time at the NEUBIAS conference. I'm reaching out to everyone who wants to join efforts here. Let's do this together as a kind of community project. Let's remove the "IJ" from the name and let's make clEsperanto out of it. And I'm suggesting that we can do this in a year; I'm pretty optimistic that we can do this by June next year. But we can also talk about deadlines. I mean, I know that we all have more important projects to do. I'm just saying: if you're interested in joining here as a user or as a developer, and we need both on board to develop this thing in a direction where it makes sense, get in touch. There's a thread on the image.sc forum where we talk about what we want and what we don't want and so on. So get in touch about this thing. And that was a question asked by Rocco earlier: what is CLIJX? So CLIJX exists actually for some time already, and the X stands for experimental.
So when you go through the script editor while typing, you may, for example, hit a method which is called CLIJX captureWebcamImage. I programmed the capture-webcam-image function in the first week of the lockdown at home in order to implement this amazing research project: I wanted to move our eyes in a different way. For that, I had to read images from the webcam of my laptop into the graphics card memory. So that's an experimental function. You are very welcome to test it. You can, for example, do smart microscopy with that, with a crappy USB microscope: you can acquire dust and apply algorithms to it, and save the results in a folder as images. So you can work with webcams, with cameras. You can also process images tile by tile with that, which somebody asked about earlier. And it is in CLIJX because I'm not 100% finished with it, and I'm not 100% sure if this is the best way of doing it. So this is experimental code. The same goes for the TiltViewer, where you have synchronized windows and you can tilt a sample in 3D to get a better view on certain properties from certain directions. This is experimental code. You are free to use it; it's online, it's open source, you can have it. But be a bit careful with it: it may break, and Fiji may crash when you use these experimental methods. And also, for the sake of reproducibility of science, before you use one of these methods in a research project, just get in touch with me. Ask: is this push-tile thingy finished now, can we use it, or is there any issue? Then we can talk about it and maybe fix it for your particular workload, so that you are on the safe side. Feel free to play with these tools. I love some of them a lot, but they are not fully finished; this is experimental stuff. Okay, I would take maybe two or three questions before I give you your homework and release you. Okay, we have some questions. One question is: is there a possibility to register or align long time-lapses with CLIJ?
I have an example script for that. Motion correction is doable, I think, in simple ways. I think what I did was measuring the center of mass, or something like that, and aligning the data set with that. I also have an example script in the folder where all the examples are, actually, where I compare it with StackReg. It does a somewhat different thing than StackReg. StackReg is quite sophisticated, as you may know; it's not a simple, naive approach. So technically yes, but nothing super sophisticated; you may have to develop your own thing around it. But moving images in 2D is absolutely possible, an affine transform. And for registration you need a method for determining how to move the image, and you may have to develop this yourself. Another question is: you have these nice cheat sheets; do you also provide cheat sheets for object-oriented languages? No, I don't, or I don't do this yet. But I spent a huge effort on keeping the commands the same. In Macro you say Ext.CLIJ2_, and in Java you say clij2., and the rest is the same. So it's very, very similar, and if you are a professional Java developer, you will likely manage without me making another cheat sheet. It's not about making the cheat sheet; it's about maintaining the cheat sheet long term. Yeah. Okay, so maybe the last question for now: will clEsperanto replace CLIJ in the future? Yeah. So the idea is, I have established a so-called release cycle. I would like to bring out software which is reliable long-term, which people can use to build workflows. I don't want to have something volatile which breaks every two months because I changed something. So CLIJ came out a year ago, and it is now replaced by CLIJ2. But CLIJ and CLIJ2 will be there together in the same environment, compatible, I would say 100% compatible with each other, at the same time. And I will continue maintaining CLIJ for another year.
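A minimal sketch of the center-of-mass motion correction mentioned above, assuming CLIJ2's centerOfMass command (which, in this sketch's assumption, writes MassX/MassY into the Results table) and its translate2D command; for a real time-lapse you would loop over the frames:

```ijm
// Initialize GPU and push the current 2D frame:
run("CLIJ2 Macro Extensions", "cl_device=");
Ext.CLIJ2_clear();
input = getTitle();
Ext.CLIJ2_push(input);

// Measure the center of mass of the frame:
run("Clear Results");
Ext.CLIJ2_centerOfMass(input);
mass_x = getResult("MassX", nResults - 1);
mass_y = getResult("MassY", nResults - 1);

// Shift the frame so its center of mass lands on the image center:
getDimensions(width, height, channels, slices, frames);
aligned = "aligned";
Ext.CLIJ2_translate2D(input, aligned, width / 2 - mass_x, height / 2 - mass_y);

Ext.CLIJ2_pull(aligned);
Ext.CLIJ2_clear();
```

This is exactly the kind of naive approach described above: it corrects translation only, nothing like StackReg's more sophisticated registration.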
I'm not sure what I will do afterwards, but you have at least one year to transition between CLIJ and CLIJ2. And with clEsperanto I would follow the same strategy: as soon as clEsperanto comes out, I will maintain CLIJ2 for another year, and afterwards I'm not sure; I give no guarantees about that. But what we can definitely do, what I'm planning now, is that CLIJ and CLIJ2 stay there forever, until they break. And if they break at some point in ten years, and I'm pretty sure that I will not yet be retired in ten years, I may do something about it, but I may also not. I cannot give guarantees for the far future. But you have at least this one year of transition period from CLIJ to CLIJ2, and you also have a year between CLIJ2 and clEsperanto. This is what I'm happy to guarantee. Okay, yeah, we have many technical questions. I'm not sure if it's suitable to answer all of those, but Robert will probably do his best to answer them on the forum later. So the homework, I guess, then? Then I will just introduce the homework, and then we go through some final questions. Maybe not the super technical, detailed questions, because most of those may really need me to post a piece of code or link to something, so we can do this later on the forum. I'm super happy to provide examples for any kind of workflow. So, the homework: if you want to get started with CLIJ and CLIJ2, there are detailed installation instructions on the website, but basically you start Fiji, go to the Help > Update menu, and activate the clij and clij2 update sites. CLIJ2 needs CLIJ; one builds on the other. With that, you can execute basic scripts on basically all computers where the graphics card is compatible. Furthermore, you may have to do additional steps if you want to process super large images, especially on expensive graphics cards.
So of course you have to install the driver of the graphics card, but you also have to do some tweaking depending on which operating system you run. You'll find the details for that on the website. But for just executing these examples, activating the update site and having a driver for the graphics card installed, which is usually there by default, should just work. The first exercise, so that you get a bit of a view into the website: go to the CLIJ2 website, code examples, ImageJ macro, download benchmarking.ijm and execute it. The Log window of ImageJ should spit out something like this. You see here that I apply a mean filter on the CPU, which takes something like three seconds, and CLIJ2 on a GPU takes something like eight milliseconds. And we compare CLIJ2 with CLIJ. If somebody observes a difference between CLIJ2 and CLIJ, please drop me an email. I'm pretty sure that there are graphics cards or special installations where CLIJ2 is faster than CLIJ, but I don't have a full overview of which system, which case, and so on. So if you observe that CLIJ2 is faster than CLIJ, just drop me an email; I would be happy to know the specification of your system. You also see here that the first operation, the first execution of the mean filter, takes a bit longer. This is the compilation time I mentioned earlier: the first execution takes 60 milliseconds, and the second and all subsequent ones are faster. The same holds for CLIJ1. Then, use the macro recorder to record the workflow as I showed earlier. You see it here in the slides; it's the same video as earlier, so I will not show it again. Use the macro recorder to record a workflow to segment the blobs image, run connected components analysis, and, for example, exclude the labels on edges, then look at the recorded code and try to get familiar with it. This is how you get started.
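A recorded workflow for the blobs exercise might look roughly like this (the exact recorded command names can differ slightly between CLIJ2 versions):

```ijm
// Open the example image and push it to the GPU:
run("Blobs (25K)");
run("CLIJ2 Macro Extensions", "cl_device=");
Ext.CLIJ2_clear();
input = getTitle();
Ext.CLIJ2_push(input);

// Segment: Otsu threshold, then connected components labeling:
binary = "binary";
Ext.CLIJ2_thresholdOtsu(input, binary);
labels = "labels";
Ext.CLIJ2_connectedComponentsLabelingBox(binary, labels);

// Remove objects touching the image edges:
labels_inside = "labels_inside";
Ext.CLIJ2_excludeLabelsOnEdges(labels, labels_inside);

// Pull the label map back and clean up GPU memory:
Ext.CLIJ2_pull(labels_inside);
Ext.CLIJ2_clear();
```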
And then, more sophisticated workflows I program with auto-completion, but you can also continue with the recorder; it's a matter of taste, actually. The third exercise is quite an involved one. We have a workflow: we start from a Tribolium data set, actually maybe a third or a half of an embryo, a 3D stack, I think something like 200 megabytes large. We crop out a region, denoise it with a median filter, do some background subtraction with the top-hat filter (you see in the code how this works), do a thresholding and a connected components analysis. And there's an ImageJ macro which does that; it uses the 3D Objects Counter in order to separate these objects. And you can translate this workflow step by step to CLIJ and CLIJ2, in order to see if it pays off to run this workflow on a graphics card. If you translate the whole workflow and you don't have much experience with CLIJ2 yet, this may take some time, and that's why I provided an almost-translated workflow. You find this CLIJ macro segmentation exercise macro in the folder where these slides are as well. Most of the workflow is already translated; there are just four or five to-dos in this script, and you should find out yourself which operations to apply. For example, you want to do a connected components analysis, so you start typing "connected components" and see which commands come up. Furthermore, as a hint for the next operation afterwards: the result image must be called label_map. You also see that the maximum projected image should be pulled at the end of this workflow, so you have to make a maximum projected image, likely using a maximum projection; the hint tells you how this variable should be called.
And in this way you can actually practice that you always have to specify the input image and the output image, and then the next operation comes: input image, output image, and so on. And when you are through with this (I mean, it's homework, so take any time you want for it; you don't have to do it today and ask questions about it in five minutes), also go back to the website and check out the tutorials. We wrote these together with Darnie; I really have to thank her a lot, because she recently helped me a lot with the writing and proofreading of all these tutorials. You find the very basics of CLIJ described in such a notebook-like tutorial website. You find how to do binary image processing, labeling, and working with ROIs. So you cannot just pull images from the graphics card; you can also pull a binary image as an ROI, and you learn in this notebook how to do that. Then, as you have seen already, CLIJ2 brings a lot of matrix manipulation functionality. The whole graph theory part is basically working with vectors and matrices, and there is a notebook explaining how to do these things: how to process neighbors of neighbors, how to work with the relationships between cells. The slide I showed you earlier exists as a notebook. Then there are more practical examples, like the Drosophila embryo cell counting. This is basically the CLIJ2 version of the workflow which we had in the CLIJ paper. And also all the stuff I showed with my Tribolium embryo exists there as notebooks, so that you can reproduce all these steps. You find it in the browser and can read it step by step, but the files behind it are just ImageJ macro files; you can download them and execute them as they are, play a bit with the parameters, or replace one operation with another. And last but not least, there's also the superpixel segmentation; think of it as dividing the embryo into different parts.
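The input-image/output-image chaining convention described above, sketched with the variable names hinted at in the exercise (the concrete filter choices here are illustrative, not the exercise's solution):

```ijm
// Every CLIJ2 command takes its input image(s) first, then the output;
// the output of one step becomes the input of the next.
blurred = "blurred";
Ext.CLIJ2_median3DSphere(input, blurred, 2, 2, 2);

binary = "binary";
Ext.CLIJ2_thresholdOtsu(blurred, binary);

// The exercise hints that this step's result must be called label_map:
label_map = "label_map";
Ext.CLIJ2_connectedComponentsLabelingBox(binary, label_map);

// ...and that a maximum projected image should be pulled at the end:
maximum_projected = "maximum_projected";
Ext.CLIJ2_maximumZProjection(label_map, maximum_projected);
Ext.CLIJ2_pull(maximum_projected);
```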
If you need support: of course you can ask questions now, but I cannot answer all of them. That's why we will make a post in the image.sc forum, and you can post your questions in the image.sc forum at any time. For example, Nico de Francesco was at some point trying out CLIJ; he's an early adopter. Thank you, Nico, for providing feedback early, super helpful for me as the developer. So he tried out CLIJ and posted on the forum: Robert, I realized that CLIJ is super slow, can you help me? And I answered: yeah, sure, I'm happy to help. You are processing super, super small images, and super small images are more efficiently processed on a CPU than on a GPU. But if you process larger images, and this is what I suggested to him, you will see that the speed-up comes. And what I'm also always suggesting when I talk about the forum: always provide feedback on whether the solution works. Nico did, and then, yeah, sure, now we are talking. So this is how support for CLIJ works. And the more we communicate on the forum rather than via private email, the fewer private emails I have to answer, because usually the problems are the same: people run out of memory, or something is surprisingly slow. We then have similar discussions and similar solutions, and these solutions we collect in the forum, where they are accessible for everyone. And then we can build a kind of database of successful solutions for common problems. If you use CLIJ, please cite it. I would like to apply for grants for this at some point, and therefore I need to show that it's useful. The best way of doing that is scientific citations in scientific papers. So if you use it, please cite it, and then I can continue to maintain this thing. And with that, I would actually just summarize. CLIJ2 has some new functionality; especially in times of social distancing, it's a very cool tool to study neighbors.
Furthermore, it comes with very detailed documentation. Again, there's a link between the pop-up which the auto-completion brings up and the website. Use it, and if you don't like it, tell me; I would like to make this documentation-software mix as good as it can be. It is applicable in ImageJ and Fiji, from Java, but also from Jython, Groovy, JavaScript, everything you can run in Fiji. I think somebody also used BeanShell. It's all there; you can just use it. I still may have to provide some more examples on the website. You can use it from Icy and you can use it from MATLAB. The polished beta version of all this will come out in approximately a week, then we will have a beta test for a whole month, and the release is planned for mid-June. And just as a reminder from earlier: GPU acceleration works best with many steps between push and pull. You have to have long workflows to actually gain a speed-up. So don't push and pull your images all the time to have windows pop up; then it becomes slow again. Push one image, process it in many steps, and then pull the result back. This is how you get the most out of it. It makes sense to process many images, time-lapse data, stuff like that. And of course, it makes sense to invest a little bit of money in a dedicated graphics card. And with that, I would close and thank a lot of people. You see here some pictures of my closest colleagues, let's say. I feel I'm standing on the shoulders of giants; I could not do all this alone. So I thank all the people you see here, not just those on the left, but also those on the right, where just the names are listed. These are the people who provided feedback, who actually programmed parts of CLIJ2. Some filters were not programmed by me; they were programmed by students in Manchester and Dresden. Thank you all for providing support and for doing something for the community with this tool.
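The "push once, process in many steps, pull once" rule from above, contrasted in a short sketch (the operations and image names are illustrative):

```ijm
blurred = "blurred";
binary = "binary";

// Slow anti-pattern: a GPU round-trip per operation.
Ext.CLIJ2_push(input);
Ext.CLIJ2_gaussianBlur2D(input, blurred, 2, 2);
Ext.CLIJ2_pull(blurred);   // back to CPU memory...
Ext.CLIJ2_push(blurred);   // ...and over to the GPU again
Ext.CLIJ2_thresholdOtsu(blurred, binary);
Ext.CLIJ2_pull(binary);

// Fast pattern: one push, many processing steps, one pull.
Ext.CLIJ2_push(input);
Ext.CLIJ2_gaussianBlur2D(input, blurred, 2, 2);
Ext.CLIJ2_thresholdOtsu(blurred, binary);
Ext.CLIJ2_pull(binary);
```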
And last but not least, thanks also to the NEUBIAS society for bringing us all together so that we can talk about our projects, teach each other, and learn from each other. It's a great community effort. And with that, I'm done. Maybe I'll take some more questions. First, thank you, Robert, it was a wonderful talk. Thank you. Yeah, we have a few questions that may need some answering now. For instance, a slightly technical one: can CLIJ2 take ImgLib2 views as an input? Technically, yes. I think it takes random accessible intervals as input, or it can take random accessible intervals as input. But if you know how a view works: a view is an abstract representation of pixels, so the pixels are not there. At the moment when I have to push this image to the graphics card, these pixels will be made. So in that moment, I have to basically un-view the view and make an actual image out of it, to push the actual image to the graphics card. At this particular point, and it's a very interesting question, ImgLib2 and CLIJ use completely different concepts. But I'm strongly convinced that the points where software packages with different concepts meet is where it becomes powerful. So if you find a way to combine ImgLib2 and CLIJ for your particular project, you may actually gain a lot, because you can use the best from these two worlds. Okay, next question: do you have any plans for CLIJ or clEsperanto to process single-molecule localization microscopy images? This question came up earlier, I think, five times, from many people. I was thinking about implementing that at some point, but then I learned that the NanoJ library, maintained by the Henriques lab people, actually has functionality for that. It also runs on OpenCL, so it runs on the graphics card. I would just be reinventing the wheel. Maybe at some point we build a strong bridge between these two libraries. At the moment, it's certainly possible to combine them, because both run in ImageJ.
So you pull your image, you process it with NanoJ, and then you push the result back. This should be doable, but I have avoided the effort so far because it would be kind of a reinvention of the wheel. So we don't have this yet. Okay, then there's a question: how do we know when GPU processing makes sense? Is there a cheat sheet for that, or common sense? Do you just need to try? You definitely should try. Also, if you are already sure that it makes no sense, you should definitely try anyway and provide feedback. I showed a checklist at the very beginning of this talk where you can get a bit of an idea of what you actually need. Again, my major hint, as a rule of thumb: if processing the image takes ten times longer than loading it, then it might make sense to accelerate it. Before you start re-implementing a workflow for the graphics card, you should take a look at the CLIJ website to see if all the operations you need are there. Or you can also search in the search field of Fiji. Say you use thresholding, connected components, and some binary post-processing filters: you check if all these commands are there, and then you could translate it. And last but not least, you should definitely try and measure. The easiest way of doing that, and I think there's also a notebook where this is explained and shown, is using the getTime method in ImageJ Macro to measure the start time and the end time of a processing workflow. There are also more sophisticated, more detailed methods: you can actually debug a full workflow on a graphics card, see which is the slow part and which is the fast part, and then optimize the slow part. That's quite involved, but for those who want to really get the most out of it, that's the way to go. And again, there's a tutorial on the website on how to do this. Okay, a few people asked if there is a fast Fourier transform in CLIJ.
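The getTime-based measurement mentioned above, as a minimal sketch (the mean filter and its radius are illustrative):

```ijm
Ext.CLIJ2_push(input);

// getTime() returns milliseconds since epoch in ImageJ Macro,
// so the difference is the wall-clock duration of the step:
startTime = getTime();
blurred = "blurred";
Ext.CLIJ2_mean2DBox(input, blurred, 5, 5);
endTime = getTime();
print("GPU mean filter took " + (endTime - startTime) + " ms");

// Remember: the very first call includes OpenCL compilation time,
// so run the workflow twice and look at the second measurement.
```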
Not in CLIJ itself, but Brian Northan from the US, I'm not sure from where exactly, implemented something like that, and we made it compatible with CLIJ. So it exists in an experimental repository; you can find it via the image.sc forum, and I'm happy to provide a link there. You can download everything you need; basically, it is already on an update site. You can just run it to apply a Fourier transform, and we were doing deconvolution with that. So it is there, we have it, but it's not part of CLIJ2 yet. We have to finish development a bit, make it robust and stable, and then it may become, for example, part of clEsperanto at some point. That's right. So we're running quite late, I guess. I'm sorry for talking so much. No, it's fine. So maybe the last question. Ophra actually asks once more about homework: do you have any tips, any suggested homework, for the graph-based analysis? There are notebooks about that. When you look, I think there is a category among the notebooks for the graph-based stuff. Basically, the neighbors-of-neighbors notebook could be a good starting point, because there you see how it's done. The superpixel segmentation and the Tribolium data set, the processing of these two things, those are graph-based operations. I'm not sure if the term is exactly right, but yes, there are notebooks for that. You should just download the notebooks, download the ImageJ macro files which correspond to them, play with them, and, for example, try to apply them to your data. And if you struggle, let's chat on the image.sc forum about how it goes. Okay, so maybe the very last question, which was also asked a couple of times: is it possible, and how, to run another Fiji plugin in CLIJ? So, Fiji plugins in CLIJ: is this question going towards translating Fiji plugins to run on a GPU? I think so, yeah, but without a lot of work, just running them.
You just contact the developer of the Fiji plugin and suggest to him or her to use CLIJ. And I'm happy to help; actually, we are working on translating some of the plugins. In fact, one of the exercises shows how to translate the 3D Objects Counter to the graphics card. So we are working on some of these plugins already, but this is a lot of work; in the very worst case, we would have to translate the whole of Fiji, and maybe we will not do that. So yeah, that's unfortunately the way to go, yes. Okay, so I think we will stop here. Thank you, Robert. Thanks to all the moderators, and especially also thanks to you, the viewers at home, for attending. There are still a lot of open questions, lots of technical ones as well; we will try to answer them in a thread on the image.sc forum. With that, it was a pleasure. Thank you; thank you, Bram, for moderating all this. And again, to all the others in the background: I know there are quite some people involved in organizing all this, and it's a lot of emailing, a lot of paperwork. Thank you all for making this academy happen. Yes, thank you.