Welcome, everyone, to this last webinar of the NEUBIAS Academy of the year. We're very happy to have you here for the ImJoy webinar, and we're excited because we've now had almost 30 webinars throughout this pandemic period. Since the pandemic is slowing down, we're looking for topics and formats to continue with. So before we start the webinar, we'll share one link with you: please open it and suggest what kind of events you would like to see next year and what topics you would like us to cover. That would be great.

Today we have with us Wei Ouyang, a researcher at KTH in Stockholm who is specialized in deep learning and image analysis. He is the lead developer of the ImJoy platform and of the BioImage Model Zoo. As moderator we have Trang Lê, from the same lab at KTH Stockholm, Professor Emma Lundberg's group, and four of us from NEUBIAS Academy will help answer your questions. So please be active and ask all your questions in the Q&A window, and we will try to answer all of them. This webinar is being recorded and will be available on the YouTube channel, and the questions together with the answers will be posted later on the image.sc forum. So welcome, Wei, and I'll stop sharing.

Thank you. All right, I'm going to share my screen. Can you see it now? Yes, yes. So, it's a great pleasure to do this webinar. NEUBIAS has a great lineup of, as you said, 30 webinars; I think they're great, so it's a pleasure to contribute our introduction to ImJoy. Today I'm going to focus on introducing ImJoy and deep-learning-based image analysis, particularly in the web environment, in the browser, because that is where we work a lot. I'm also going to introduce some of the latest development we have been working on, for example model serving from the server side.
Please feel free to ask questions and interrupt if you need to. There will be three main parts. The first is an introduction to ImJoy, with some demos. Then I will show, step by step, how to develop your own ImJoy plugin, and then use our recent server, called the BioEngine, to show how you can train your own deep learning models and connect them to ImJoy plugins.

I would like to start with a list of trends in bio-image analysis. Everyone knows that deep learning has become a dominant approach in bio-image analysis. It's super powerful, and most models, for image segmentation or classification, use a U-Net, for example. There is also a trend toward big, sometimes massive, models, and a particular architecture called transformers, originally used for natural language processing, is now finding its place in image analysis as well. That's definitely a trend to observe. Besides that, everyone is trying to work with massive datasets, not on a laptop but stored on a server, in scalable file formats such as Zarr and N5, and NGFF from the OME team, which they recently described in a Nature Methods paper. So there are a lot of exciting trends in the community, and also a lot of challenges, specifically in AI for bio-image analysis, which we try to address.
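To make the "scalable file format" idea above concrete: formats like NGFF store an image together with progressively downsampled copies (a multi-resolution pyramid), so a viewer can fetch only the level and chunks it needs. The sketch below shows just the downsampling step in plain NumPy; the real formats additionally store each level as a chunked, compressed Zarr array, and the function name `build_pyramid` is ours for illustration, not part of any of these libraries.

```python
import numpy as np

def build_pyramid(image, levels=3):
    """Build a toy multi-resolution pyramid by repeated 2x2 mean-downsampling."""
    pyramid = [image]
    for _ in range(levels - 1):
        a = pyramid[-1]
        # crop to even dimensions, then average each 2x2 block
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(a)
    return pyramid

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
levels = build_pyramid(img, levels=3)
print([lvl.shape for lvl in levels])  # [(64, 64), (32, 32), (16, 16)]
```

In an actual NGFF/OME-Zarr file, each of these levels would be written as a separate chunked array (e.g. chunks of 64x64), which is what lets a browser-based viewer pull in only the visible tiles.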
Here are just a few. For example, usability: we want to build software that is usable and user friendly, with a GUI to operate, but that is challenging because at the same time you need a lot of storage and compute power. You also want a lot of flexibility, because you have to handle very different data types, and these goals often conflict: with more flexibility, the GUI becomes complex. In the end I'm going to show a demo of what I see as a potential way to address this problem. There is also interactivity, because you want to annotate data, especially images; scalability, which is also a major issue because you want to work with remote storage and compute resources; and privacy, which is important when you work with biomedical images.

In the meantime, there are a lot of opportunities on the web. With cloud computing you have basically unlimited storage and compute power; you basically just need a credit card if you use a cloud service like Amazon's. There are many modern UI frameworks for building beautiful web pages and also mobile apps, and the web is a very mature ecosystem. There are also exciting developments in web standards, for example WebAssembly and WebGPU, which enable very powerful computation in the browser; later I will show that we essentially have the scientific Python libraries, NumPy and SciPy, entirely in the browser, which is also exciting. There is also scalable storage such as S3, typically used with file formats like Zarr.

So one thing we try to focus on is building a progressive web app. ImJoy is essentially a progressive web app, and it has very nice features: you can work with rich, interactive UIs, but unlike a standard web application it can also work offline, so you don't need the internet to run
ImJoy plugins, for example. You can also do computation in the browser plus the cloud: with WebAssembly and WebGPU, which I mentioned earlier, the browser becomes super powerful for image analysis. The browser is essentially becoming an operating system itself, and combined with cloud compute resources it's a very powerful combination.

This is the basic framework of ImJoy. We have this progressive web app, accessible from imjoy.io, and it's a plugin-based framework, heavily inspired by ImageJ, for example, so you can develop plugins for it. We support multiple languages, for example Python, Java, and JavaScript, and it is extendable to other languages with the help of WebAssembly. In the meantime, our recent development has shifted to the server side, because we think that is the way to solve the scalability issue. Specifically, for production deployment of ImJoy plugins we have been working on the ImJoy App Engine, which allows deploying ImJoy plugins to the cloud with the help of Kubernetes, and, in collaboration with Craig from EBI, on Binder deployment. This kind of cloud solution will also work for on-premises deployment, for example at your institution or on your lab workstation.

I just want to briefly mention a few key concepts within ImJoy. The first is that every plugin is sandboxed: it is either a Docker container or a sandboxed plugin environment in the browser. Plugins talk to each other through a technique called remote procedure calls (RPC). It's a bit technical, but basically it enables transparent function calls between plugins, across programming languages and across locations, for example if you have one plugin running in the cloud and another one in your
browser. The other key concept is workflow composition through asynchronous programming, which enables very flexible workflow composition; and then open integration, which we have been focusing on so that you can easily integrate ImJoy with other software, other websites, and so on.

I'm going to briefly show these. The first is how we do remote procedure calls. On the left you have an ImJoy plugin in Python that runs in the cloud. When I click Run, this code is sent to a server in the cloud through MyBinder, a free Jupyter notebook service which we use as our backend. You can see the activity going on: it requests a Docker container, installs the ImJoy libraries, and then runs the plugin for you. Now it's ready. On the right you have a JavaScript plugin, and what we want to demonstrate is that you can get the Python plugin on the left by its name and call its function `add`, which, as defined over there, adds two numbers together. As you can see, you can call this function as if it were defined in JavaScript. If I click Run here, it creates a user interface in JavaScript, and when I click this button, the computation is actually done in the Python plugin in the cloud. So we have this RPC core, which makes it transparent whether computation happens remotely or in the browser with another plugin.

The other concept I want to briefly illustrate is asynchronous workflow composition. It's a technique that allows you to schedule work flexibly. Say you have two plugins, each with a function called `process`, and you have two inputs. If you add `await` before a call, it means it will wait until
this `process` function finishes and you get the output, and then you wait again. So it's sequential: you only execute the second line after the first process is finished. But what if you want to run the two processes in parallel? What you can do is leave out the `await`; then the call does not return the actual output but an object called a promise. It's just like ordering coffee: you get a ticket, and with the ticket you can later pick up your order. That is exactly what we do: we order two coffees and get two tickets, simultaneously, without waiting, and later we await the two tickets and get our actual coffees. As you can see, just by choosing where to add `await` you can make the workflow sequential or concurrent; and imagine these two plugins on two computers in the cloud, then you can do parallelization this way. This is just a demo of how you get a flexible way of doing workflow composition.

The other thing I want to briefly show is what we call open integration. Basically, an ImJoy plugin doesn't necessarily have to run only inside ImJoy. What I'm showing here is the web application Kaibu, which you will see later; it is itself a standalone web application, and you can open it and use it in standalone mode. But if you reference it from another ImJoy plugin, just like this (the code you are seeing here is part of an ImJoy plugin), you can call `api.createWindow` and pass the URL, and this Kaibu web application becomes an ImJoy plugin. You can then access the functions inside it, for example to view an image or add an annotation layer. If I click Run here,
you basically see that you can customize this behavior by adding this image layer; then you have an annotation layer, and you can easily do annotation. So basically, with these three lines you can make an annotation tool, essentially; of course you need some more code if you want to do more complex stuff.

I will just go quickly through a few demos, but feel free to interrupt me if you have questions. The first demo I want to show you is visualization with Vizarr, a tool developed by Trevor Manz. Vizarr is a very powerful tool to visualize Zarr images, if you're familiar with Zarr, a scalable file format for large images, such as this one: when you zoom in, the corresponding chunks are pulled into the browser. This is one of the ImJoy plugins you can use in your workflow, for example.

The other thing I want to show is one of our other projects, an interactive annotation tool, which we made together with Trang. The idea is that we use Kaibu as the image viewer and do the annotation inside the Kaibu plugin, while in another plugin you run, for example, the training of a deep learning model; specifically we use Cellpose to train the model in the background. We have a Colab demo; if you watched one of the past NEUBIAS Academy videos, you will have seen this demo. Basically, you have Colab running the Cellpose training code while the user annotates simultaneously. The idea is that you first use Cellpose to make a prediction; if the prediction is not perfect, you correct it, send the corrected annotation back to the model, and train it again. So you can continuously improve the model and get a sense of how it works. Here is a quick movie, fast-forwarded, so you can quickly see how it
works. You have four images initially in the training set, and we just start the training loop; you can see the loss. The idea is to first click this button called Predict. This is the prediction, and the result is not perfect, so you modify it; in this case we just cut two cells that were stuck together into two. Then you send it back for training, get another image, make a prediction, make corrections, and keep doing this until the model satisfies what you would like to achieve; here, for example, we just add new annotations. The essential advantage of doing this is that it is much faster, and it gives you a sense of when the training is enough, because if you do it offline you don't know how much training data is enough. There are other benefits too; for example, this is also good for building trust between AI models and the human user. If you want to use this, search for "ImJoy interactive annotation" and you'll find the F1000 paper under the NEUBIAS gateway on F1000Research. I'm going to skip ahead.

There are a few other demos I want to show. Beyond image analysis, after you extract features, for example, we also have a tool that allows you to make interactive charts and plots. This code snippet shows that, again, you call `api.createWindow` and, similar to Kaibu, you pass another link, charts.imjoy.io, which essentially turns this web application into an ImJoy plugin; with the plugin object you can call the functions in the application. In this case we just load a CSV file, hosted on Dropbox. If you run this code, you see this interface; here is essentially a UMAP, for example, and you can make a scatter plot, for
example. The table is preloaded, so you can select and define x and y, which are column names in this CSV table. The nice thing is that you can easily customize the color based on another column, for example the location code, and easily make this kind of scatter plot. The other thing is that, with the ImJoy plugin, you can also register a callback for when the user's cursor hovers over a point: you have this information layer where you can hover and see the image, and you can also click. In the click event, in this case, we fetch an image from the Human Protein Atlas, and you see the corresponding image. Some of them are broken because they are not available anymore, but most of them are, and you can see different information. This kind of interactive tool is very useful, for example for exploring single-cell data, or just a map of features produced by another neural network; for example, morphological features from your segmentation can be visualized this way, and with this information layer you can go back to the cells. This entire thing is also an ImJoy plugin which you can control through the API, just like this, for example `load`, and there are other API functions you can customize.

By the way, what you have already been seeing, the presentation tool, is called ImJoy Slides; it's available at slides.imjoy.io, and you can use it to make interactive slides, for example running ImJoy plugins within your presentation. We're going to see another demo later with ImageJ.JS, but let me briefly introduce what ImageJ.JS is. This is one of our most popular services, because it's also linked from the original NIH ImageJ website. What we did is take ImageJ,
which is written in Java, and compile it to JavaScript using a tool called CheerpJ; this basically enables running ImageJ in the browser. It also supports tablets and mobile phones, which the original ImageJ cannot; you can share macros through a URL; it integrates easily with web applications; and it's suitable for teaching, as you will see later. Let me quickly show you an example link, for instance if you want to share a workflow, a macro, with others. This is a macro made by Jérôme who, if you know him, is very active in the ImageJ community. What this does is encode a macro in the URL, and you can see that when I open it, it runs this entire segmentation analysis and also produces these area measurements. So this is very useful for sharing (sorry, I think I closed my slides), and also for a paper, for example if you want to publish together with a macro.

For teaching it's also very useful, because you can share the slides with the students. For example, if you want to teach ImageJ macros: in the left window you have an ImageJ macro listing, and if you run it, it runs on the right side. Here is a basic macro for doing segmentation: you make the image binary and then analyze the particles. The students can modify it, and if they make some mistake, a syntax error, and run it again, they get the syntax error reported. So this is very useful for interactive teaching and sharing with students, if you want to teach ImageJ macros, for example. But this is just one example.

There are many ways to use ImJoy itself. One is through our extensions for Jupyter notebooks, Colab, and JupyterLab, and you
can easily run and develop your ImJoy plugins in Jupyter notebooks, which enables, for example, the 3D visualization I'm showing you here. We also have a growing list of plugins; I already showed you most of them. And there are different ways of running ImJoy. You can go to imjoy.io; there is also a Lite version, which is smaller and faster to load. We also have a VS Code extension; this is new, and I'm going to show you later how to use it. Basically you go to the website and it opens VS Code; for users familiar with it, I think you're going to like it, because VS Code is a very powerful IDE for developers, and we have an extension to support running ImJoy plugins there. There is also JupyterLite, at jupyter.imjoy.io: a Jupyter notebook running in the browser, powered by WebAssembly, essentially the Pyodide project, which enables running Python directly in the browser. So you don't need any local or remote server for the notebooks; they just run in the browser. Go there and you'll have it; we'll also show it in the first demo later. And there's Colab: basically you `pip install imjoy` and you'll be able to run ImJoy plugins in Google Colab. We have a notebook extension and a JupyterLab extension, for example. These are just different entry points, part of our open integration, so users can use ImJoy everywhere.

Now I'm going to show you a demo of how to develop your own ImJoy plugin. The conventional way is to go to imjoy.io and start creating plugins there, but today I'm going to show you another way. Under the ImJoy team on GitHub, I made this tutorial, ImJoy Tutorials. It is a tutorial repository
to which I added a few plugins. To try it out, you just press the dot key ('.') on the repository page, and GitHub will open its built-in VS Code integration, which automatically loads everything in the repository. Here, for example, is an ImJoy plugin. I have a couple of folders; for this first part, only focus on the first folder, on plugin development. The first time you load it, you will need to install two extensions: the first one is ImJoy and the other is the Pyodide-based Python extension, vscode-pyolite. You just click Install; because I already installed them, I don't see the button, but otherwise you'll see an Install button for the ImJoy extension, and after that you'll be able to use it.

Then you go to the Files view, open this folder, and look at this file. This is an ImJoy plugin: it has a `<config>` field, a JSON block with some meta information for the plugin; the important parts are the name and the type. We have different types depending on the programming language; for example, for JavaScript you use the 'web-worker' plugin type. The other part (I can close this one) is the `<script>` block, the core part of your plugin. This is a very simple plugin that does nothing but call `api.prompt`, which asks the user for an input, for example "What is your name?"; you get the name the user types, and then `api.alert` shows a message box saying "Hello" plus that name. To run it, at the bottom you have "Run ImJoy Plugin"; if you click here
ImJoy opens on the side, and you can see it already asks "What is your name?". I type "Wei", press Enter, and it shows "Hello Wei". So we just made our first plugin, with very simple features.

Let's open another one. The second one I want to show is the itk-vtk-viewer, for 3D visualization. This is also an ImJoy plugin; as I said, itk-vtk-viewer is also a standalone web application. If you just open the link, you can open a local image and visualize it; I will skip that, but it is a standalone web application. To make it an ImJoy plugin, you just pass its URL to `api.createWindow`. In this case we have a Python plugin (I think VS Code has some difficulty detecting the Python language in this file, but it doesn't matter; this is essentially just Python). We import NumPy, and in the `setup` function we create a random NumPy array with shape 10 by 10 by 10, so a 3D image array. Then we create the viewer with itk-vtk-viewer and call a function called `setImage`. How do you know there is a function called `setImage`? You see it on the website: this is the itk-vtk-viewer site, and in its documentation you can find the `setImage` function described. Also, importantly, this plugin has the type 'native-python', so in this case it is going to launch on MyBinder; basically it launches on a remote server.
I can also click here and say Run Plugin. What it does is launch this plugin by first launching a server on the MyBinder website. Let me quickly show you: this is MyBinder, a website for launching Jupyter notebooks, which we simply reuse. So I launch it, and you can see what we did: we created an itk-vtk-viewer and passed this 10 by 10 by 10 image array with random pixel values from 0 to 255, and that is exactly what you are seeing here now, a 3D volume. If you have volumetric data you can do the same; I just kept it quick with 10 by 10 by 10, but you can see that you can display real volumetric data with this.

The next example (let me close this plugin window) is the Kaibu annotation tool, which I think you already saw in the slides. You can also drag this window here to get a side-by-side view. Again, you just click Run. What we did here is use the Kaibu app: if you open it, what you see is the Kaibu standalone web application. How do you use it as an ImJoy plugin? You click on the documentation and see exactly how to use it with ImJoy: there is a Kaibu API documentation, and you can go through the basic usage. Basically, you pass the URL either to `api.createWindow` or to `api.showDialog`. You can even try it right there, because the documentation is interactive, with ImJoy built in: you can run it and even edit the code there if you want to do some testing. For example, what we are
doing here is creating a viewer first and then viewing an image from the Human Protein Atlas; if you want to switch to another image, you just change the URL, click Run, and you get the viewer. Essentially the same code you can take back and use here. (Sorry, I think I went to the wrong one; this is the one we were just looking at.) Here, for example, we work with JavaScript, because it's a web-worker type plugin, so this is JavaScript. Just notice that the documentation sometimes shows Python and sometimes JavaScript; if you want to create a JavaScript plugin, you just click on the JavaScript tab, and you can basically copy and paste that code to make it work inside here. So copy the code here, say you want to run this plugin, click here, and the window opens. But it doesn't enable annotation yet; that's because we only added the image layer and there is no annotation layer. To add an annotation layer, you just read through the docs: it's called a shapes layer, and there is a function called `add_shapes`. Again, you can see the example there, but it is in Python; it doesn't matter, we just change it to JavaScript. You need to rewrite it a little bit: for example, add a new line here, and change the Python-style call to JavaScript style, which is basically like this. Here is the initial set of shapes; I don't have one, so I just pass an empty list, and I can modify it later and run it. If I run it, you see we have this triangle layer, which is what we defined; it's
called "triangle" and it's empty, but I can just add new shapes here. You can see a lot of examples in this API documentation; I want to quickly point out a few. You can add buttons, in what we call a control widget. (Because this one is a Python plugin, it will launch on MyBinder, so you need to wait some seconds until it is established.) You can also create forms; we have a very rich form UI which you can use to get input from the user. Let's see, this one is also Python; you can also show a file tree, you can show a chart, and there is a lot more API to go through, but I'm going to show you this one. Now it's ready: you have two buttons. Essentially, you call the function `viewer.add_widget`, say the type is "control", and then add a widget of type button; the button has the label "Say Hello" and a callback function, defined here as alerting "Hello". When you click it, you get this message, because of the callback function we set. Just to give you a bit of an idea: if you load an image and you have a button labeled "Process", you just change this function to your processing code, and there you can do segmentation, classification, or whatever you want from the UI.

I think I need to wrap up this part, because we're going to show more later. Let me just quickly check if I have other examples. OK, I'm going to switch away from VS Code; if you like that kind of environment, you can just stick with it. I'm going to show you another way of developing ImJoy plugins: you go to Jupyter
Lite, at jupyter.imjoy.io. This is JupyterLite running in the browser, so you don't need to start any Jupyter notebook server or anything; by default it doesn't connect to a server at all. What it essentially does is run Python in the browser. You can create a new notebook; it has less compute power, because it just runs in the browser, but for most of the things we do it is already enough. Let me quickly show you a few examples, for instance the same itk-vtk-viewer demo as before. Here the advantage is that you can display the plugin inline. This is the same demo you have seen before: you create a random 3D volume, create the window plugin, get the viewer, call the `setImage` function, and if you press Shift+Enter, you get the result inline. This is great for development, because you change something, say I change the size to 20, and you get the result immediately. You can also slide through the slices and change the intensity. One note about this ImJoy enhancement: when you run ImJoy plugins here, make sure you see the ImJoy icon, which means the extension is loaded, because otherwise you won't see the window here.

That was one example. The other one I want to show (sorry, let me open it) is the Vizarr demo, which addresses one common question: since you work in the browser, how do you deal with massive datasets? This example shows that with this kind of file format and interface, ImJoy can support display and annotation of massive datasets, for example tissue images. This is just a quick demo which I converted from the Vizarr example, and then you can
quickly see here that we define a function called run_vizarr, which essentially wraps an ImJoy plugin called Vizarr. If you open Vizarr directly, it's just a web application; here we wrap it as a plugin by calling api.createWindow, you get a viewer, and you can add images to the viewer. So you run this, and here we just create an OME-Zarr file in the NGFF file format: we build a multi-resolution pyramid as an example. For your own data you would already have this, so you don't need that step; we create an example Zarr file just for demonstration purposes. Then you use zarr to open this NGFF file, a multi-resolution astronaut image, and because we defined run_vizarr earlier, you can just get the image and visualize it in your browser. The difference is that with a massive image, when you zoom in, only the corresponding chunks are loaded into the browser. If you watch here, the black tiles mean the data is still being sent; if I browse to another region, you can see it blink, which means it dynamically pulls chunks into the browser based on the view you're looking at. This is the way to scale: if you have massive data you want to visualize, you use this kind of Vizarr plugin. We also have some ongoing work to support this in Kaibu, but it's not supported right now, and there's ongoing work in itk-vtk-viewer as well; it has some support already, but not the NGFF file format yet, though that should come soon, I think. So these are the demos I want to show for the first part.
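To make the chunk-loading idea concrete, here is a small plain-Python sketch of the logic behind Vizarr's lazy loading: only the chunks that intersect the current viewport need to be fetched. The helper `chunks_for_view` is hypothetical, purely for illustration; it is not part of the Vizarr or zarr API.

```python
def chunks_for_view(x0, y0, x1, y1, chunk_size=512):
    """Return the (row, col) indices of the chunks a viewport overlaps.

    This mirrors the idea behind Vizarr's dynamic loading: as you pan or
    zoom, only the Zarr chunks intersecting the view are pulled into the
    browser. (Illustrative helper, not a real Vizarr function.)
    """
    rows = range(y0 // chunk_size, (y1 - 1) // chunk_size + 1)
    cols = range(x0 // chunk_size, (x1 - 1) // chunk_size + 1)
    return {(r, c) for r in rows for c in cols}

# A 600x600 view at the origin touches four 512x512 chunks:
print(sorted(chunks_for_view(0, 0, 600, 600)))
```

The same calculation is repeated per resolution level in a multi-resolution pyramid, which is why zooming out stays cheap: the viewer switches to a coarser level instead of fetching more chunks.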
See if I have anything left... I think that's it for the first part, so we can pause for questions if you have any about developing plugins in ImJoy, or anything related to ImJoy. Yes, there is one interesting question: it says that in many examples the data needs to be transferred to web servers; what about local data, or local intranet web servers? Yeah, that's a good question; let me see how to answer it. The thing is, we use a lot of URLs in the demos because we want something reproducible, but that's not required. For example, this itk-vtk-viewer demo is native Python, which means you can connect to your local Jupyter notebook running on your own computer or another workstation. There you have your data and your compute together, and the ImJoy plugin just provides an interface for it, so in that case you have no problem transferring data. For other cases, where your compute runs on a remote server and your interface is here, we do have APIs that allow you to upload your local data. For example, in the Kaibu example there's a files widget that lets you build a UI where the user can select local files, and then in your code you can send them to the server and run things there. We will also see how we deal with that, local files plus remote compute, in the second part. Thanks. One more question: do you have documentation on implementing ImJoy plugins? Oh yes, I think I forgot to show this, but you can go to imjoy.io/docs.
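The button-and-callback pattern demonstrated earlier can be sketched in plain Python. The real calls (api.createWindow, viewer.add_widget) only work inside an ImJoy runtime, so this snippet just simulates the callback wiring; the dictionary layout is an approximation of what Kaibu accepts, not a verbatim API reference.

```python
# Simulated Kaibu control widget: a button with a callback, as in the
# "Say Hello" demo. In the real plugin, clicking the button in the
# viewer invokes the stored callback over ImJoy's RPC.

messages = []

def say_hello():
    # In the demo this was api.alert("Hello!"); here we just record it.
    messages.append("Hello!")

widget = {
    "type": "control",
    "name": "Control",
    "elements": [
        {"type": "button", "label": "Say Hello", "callback": say_hello},
    ],
}

# Simulate the user clicking the button:
widget["elements"][0]["callback"]()
print(messages)
```

Swapping `say_hello` for a function that segments or classifies the currently loaded image is all it takes to turn this into the "Process" button described in the talk.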
Basically, you have this documentation with a quick start, and one tutorial I recommend is the i2k workshop tutorial: if you go to Tutorials and click i2k, you get a very complete, step-by-step tutorial on plugin development. That's what I recommend, but for the more complete API documentation you can look at the ImJoy API page, where you can see all the API functions we expose from ImJoy. For example, you have seen a lot of api.createWindow; you can find it here with its full documentation. Feel free to browse through imjoy.io/docs. A lot of other features, which I'm going to show later, are not documented yet, but that's in progress. Thank you. Thanks, I think we can move on, and if there are more questions we can answer them later. Yes, sounds good. OK, so I'm going to move to the next part, which is closely related to our development of the BioImage Model Zoo. I won't give too many details, but I'll quickly show you that this is a collaborative community project, in collaboration with the ilastik team of Anna Kreshuk, with Florian Jug, and many other groups, for example the ZeroCostDL4Mic group. Right now we are upgrading the website, but you have the old website as well. The idea is that we have this model repository for sharing AI models: you can find a lot of pre-trained models, which you can download and use in different software. But one of the key features we want to develop is that, in the end, you have a button you can click to run models with the help of ImJoy plugins. I'm going to quickly show you one example, within the Human Protein Atlas. Let's see if I can do it... yes, for example this one. The idea is to click a
button, and then you launch an ImJoy plugin directly from the website and you are able to run deep learning models. For example, here is a classification model: when I click Predict, the neural network runs in the browser and makes a prediction on, for example, where the protein in the green channel localizes. You get different classes of prediction, and when you click on one you can also see the attention map of the neural network, for analysis. This example is implemented with TensorFlow.js, which means the network runs in the browser, but in most other cases a model cannot easily run in the browser. That's why I want to come to our second part, a newly developed server we call the BioEngine. The idea is to have AI models and applications served from the server side, and when we say server, it can be a workstation in your group, a cluster at your institution, or some commercial cloud provider. We provide this server software that lets you serve AI models and applications, and the idea is that it will fetch models from the BioImage Model Zoo; eventually all the models will be available there. This is a work in progress, by the way, but I want to give you an overview of how things work. It also provides a web API for model training and inference: you call the web API and you don't have to care about the dependencies, which are often problematic when installing deep learning software on your own laptop or workstation. With the BioEngine, the BioEngine manages that for you; you just access the API, and you'll see demos later. It supports running models, so, as I said, on the Model Zoo website you can click the Run button and it runs through this server, and it also runs ImJoy
applications, such that you can build, for example, an interactive annotation tool combined with AI models. It's also suitable for deploying AI workflows for institutions, facilities, and labs, because you can run your own server and host a set of ImJoy plugins with fixed workflows. This is ideal for facilities: you have fixed dataset types and a list of plugins, for example segmentation, that will be used a lot, so you can host this kind of server plus ImJoy applications for the users of your facility or lab. I'm going to show you how it works, mostly through this interface at jupyter.imjoy.io. I'm going to share these notebooks later, but for now you just need to watch what I'm showing you. The notebooks are in this folder called BioEngine; there are several different notebooks. If you download them and drag and drop the notebook files into jupyter.imjoy.io, you will be able to run them. I'm going to show you how it looks; let me close this first. The first one I want to demonstrate is this BioEngine demo; that's just a name, and I'm going to fix it to avoid confusion. You will see a lot of code, but I'm going to explain. These are a few helper functions, because we want to run a classification model: here are the predefined class labels, which you'll see later, and this is a function to fetch images from the Human Protein Atlas website, again just for demonstration purposes. In case you have local files, you don't need this; you just
read from your disk. So we have this helper function to fetch images from the Human Protein Atlas. When you run this notebook, make sure you see this icon, which means the ImJoy extension is loaded; in this case we don't use it, but just make sure you have it. In the first part we fetch an image from the Human Protein Atlas by ID; this is the image ID, and again this is just for demonstration, you don't need it if you use your local data. So we show an image: this is one cell. We have a large image, but we crop a small region, because this model only works on a single cell. We're going to run this classification model, developed by bestfitting's team, who won the Kaggle competition on the Human Protein Atlas; that's just some side information, but this is the name of the model. And this is the address of the BioEngine server, ai.imjoy.io/triton. This is a development server, so you might experience some instability for now, because we're continuously improving it, but later on we're going to provide a more stable server for public use. So basically you provide the URL of the BioEngine server, you provide a name for the model, and then you call a function to execute this model: you just pass the image to it and you get the classes back. If I run this notebook, remember it runs in the browser, which means this cell doesn't have much compute power; the reason we can still run InceptionV3, which is a pretty big model, is that it runs on the server side. What actually happens is that this image is local, in your browser, or read from your disk, but it's sent to this server, and
then the server runs the model and returns the result with these classes. What you see is cytosol 0.78, which means that, for this cell, the green channel has a probability of 0.78 of being located in the cytosol, which is true. The same server can run not only one classification model; it can run multiple models. In this case we just use another model, Cellpose. If you do a lot of segmentation, you must know Cellpose, a recent approach from Janelia; Carsen Stringer is the first author. It's a state-of-the-art segmentation model. Running it is easy, because the model is already on the server: you fetch an image, and again you just execute the model by passing the image, together with a diameter parameter set to 30, which is one of the Cellpose parameters. You run it, and this part just displays the result: you can see the input image and the segmented mask. And again, you can just change the model name to StarDist, another segmentation model with good performance, for example when you have more concave cell shapes. You only need to know the name, pass the data, and run. Because most of the computation is not done in the browser, you don't really need a powerful machine locally; you can even work from the browser, as we are doing right now. So this is the first demonstration; hopefully it gives you a taste of how the BioEngine works. Basically we have a list of models we support, the list can grow, and later we're going to link it to the BioImage Model Zoo website, such that most of the models in the Model Zoo will be available this way.
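The "cytosol 0.78" readout comes from the per-class scores the classification model returns. A small post-processing helper like the following maps raw scores to ranked (label, probability) pairs; both the label list and the function are illustrative sketches, not part of the BioEngine API.

```python
import math

# Illustrative subset of the predefined class labels from the demo:
LABELS = ["nucleoplasm", "cytosol", "mitochondria"]

def top_predictions(scores, labels, k=2):
    """Softmax the model's raw scores and return the top-k labels.

    Hypothetical helper: the BioEngine demo does this kind of mapping
    to turn the classifier output into a human-readable prediction.
    """
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda t: -t[1])
    return [(name, round(p, 2)) for name, p in ranked[:k]]

# A cell whose green channel the model places mostly in the cytosol:
print(top_predictions([0.1, 2.0, 0.3], LABELS))
```

The actual demo uses the full Human Protein Atlas label set, but the ranking logic is the same regardless of how many classes the model emits.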
You just need to know the model name, and then you can call it. The second demo I want to show you is training models. Training is often considered a much harder problem because it's much more computationally demanding: if you only do inference, you can run on a CPU and the speed is OK, but for training you definitely need a GPU, which makes it much harder. There are other solutions, like Google Colab and hosted Jupyter notebooks, and I'll come back to a comparison with those, but here I just demonstrate how to run the training. In this specific case we use the BioEngine to train a model for Cellpose segmentation. Again, these are a few helper functions, and I'm going to skip them since they're very similar; the key part is this train function. You pass the URL of the BioEngine server and the model name, cellpose-train. It's not the same as before, where we just executed a model, because training is a process that involves a sequence of operations: you need a training loop. That's why we use this sequence executor from pyotritonclient, a very lightweight client that basically makes HTTP API calls; we just wrap it in this sequence executor. You get the executor, and then you have a for loop over your samples. For each sample you feed the image and the labels, that is, your image and the annotated label image, the mask with the cell ID as the pixel value. You send them to the server and call this function called step: you feed the input and run a step, and for every image each step runs 16 mini-steps. On the server side, each image
and its labels go through data augmentation 16 times: for example, the image is randomly rotated 16 times, and 16 iterations of training run on the server side within one call. So one step consists of 16 mini-steps, and then you have an outer loop over the epochs; depending on how many epochs you want to train, you set this epochs parameter. Similarly, you have this predict function: once you train the model, the model is still on the server, so you just pass the model ID you set during training, run the prediction, and you can use it instantly. I'm going to quickly show you how this works. We just download a small sample dataset, with four train samples and four test samples, and this shows one example of how it looks: these are Human Protein Atlas images with manually created annotations, and this is the mask image. To do the training, you call this train function, you pass the training samples, and you set the epochs, let's say 10. It runs for 10 epochs, each epoch consists of the four samples, and each sample runs 16 mini-steps, where the image and mask are randomly rotated together each time. Here you can watch the loss, and you can see it's improving, because it decreases. On purpose I set the pre-trained model to none here, but you can set it to cyto, the standard Cellpose pre-trained model, and start from that. Each time you can just train this model, with all the computation done on the server side, while you control your training loop here. Once it's trained, you can optionally download the model.
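The loop structure just described (epochs on the outside, samples inside, 16 server-side mini-steps per `step` call) can be sketched schematically. The `step_fn` parameter is a stand-in for the pyotritonclient sequence-executor call, so the signature here is illustrative, not the real API.

```python
def train(samples, epochs, steps_per_sample=16, step_fn=None):
    """Schematic BioEngine training loop; returns total mini-steps run.

    In the real demo, step_fn sends (image, labels) to the server,
    which applies 16 random augmentations and runs 16 training
    iterations before returning the loss.
    """
    total = 0
    for _epoch in range(epochs):
        for image, labels in samples:
            if step_fn is not None:
                step_fn(image, labels)  # one HTTP call per step
            total += steps_per_sample
    return total

# Four annotated samples, ten epochs -> 4 * 10 * 16 mini-steps:
samples = [(f"img{i}", f"mask{i}") for i in range(4)]
print(train(samples, epochs=10))
```

Counting mini-steps this way makes the cost model explicit: doubling the epochs doubles the server-side work, while the client only pays one lightweight HTTP call per sample per epoch.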
It generates a model package in the BioImage Model Zoo format, which is essentially a zip file, and you can extract the PyTorch weights along with an additional YAML file that describes the weights, the names, and so on. So you can download the model, but because we are running in the browser, you don't need to run this cell; if you do, you'll get a memory error, because the browser has limited memory. If you download this notebook into your own notebook environment, you will be able to download the model, write it to disk, and use it elsewhere. But this step is optional: you don't need to download the model in order to use it, because the model is on the server. You call this predict function, set the same model ID you used for training, send the data there, and you get your predictions. So this essentially provides a model serving and model training service: you send a couple of annotated examples, which you can generate in ImJoy as we saw before, and then you can... oh, I think something is wrong; there were some changes in the code. Essentially it's this issue; you just need to remove this line, but it doesn't matter, it finished, so you can just ignore this error. You can skip the model download and just use the model for prediction to generate results. Sorry, I think this line was commented out; that's what went wrong. So you can just run the prediction; you get this result, and here we also get this result. I think it's the same thing, but it runs through all the models one by one and you can see the results here. So far we didn't use any ImJoy plugins; I think the most powerful thing to do is to combine the BioEngine server with ImJoy plugins, because that's
the way to enable scalability: you have the computation on the server side and your UI in the browser. This is a basic demo of how to achieve that; I'm going to show a quick demo I made before. Here again I define a few functions, including display_image, which is just a helper to display images side by side, and then I define this cellpose_segment function, which essentially does the same thing as before. It's just another way to call the BioEngine, in one function instead of the two I showed earlier. You pass the URL, you pass the model name, cellpose-python, which is the official Cellpose model, you pass the image, and you get the resulting mask. Having defined this function, we fetch an image from the Human Protein Atlas; you can see it here, this is the image we're going to work on. If I just call cellpose_segment, you get this predicted mask. Now what I want to do is combine this step with an ImJoy plugin using Kaibu. How do I do it? I create a viewer and view this image, and we also add an annotation layer; that layer would let me do annotation, but in this case we don't annotate. Instead we add a button: we add a widget whose type is "control" with a button labeled "Do Segmentation". When you click on it, it calls this function, do_segmentation, which in turn calls cellpose_segment, gets the mask, and converts the mask into a GeoJSON object such that Kaibu can show it again in the viewer. Let's see how it looks. Because we added this button, you can see it in the viewer.
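The mask-to-GeoJSON conversion mentioned above can be sketched minimally. The real demo traces the actual cell contours; for brevity this version emits a bounding-box polygon per cell ID and works on plain nested lists instead of an image array. The function name is hypothetical, chosen to mirror the workflow described in the talk.

```python
def mask_to_features(mask):
    """Convert a label mask into GeoJSON-style polygon features.

    Each nonzero pixel value is a cell ID; here we emit one
    bounding-box polygon per cell (the demo traces real contours).
    """
    bounds = {}  # cell id -> [min_row, min_col, max_row, max_col]
    for r, row in enumerate(mask):
        for c, cell_id in enumerate(row):
            if cell_id == 0:
                continue  # 0 is background
            b = bounds.setdefault(cell_id, [r, c, r, c])
            b[0], b[1] = min(b[0], r), min(b[1], c)
            b[2], b[3] = max(b[2], r), max(b[3], c)
    features = []
    for cell_id, (r0, c0, r1, c1) in sorted(bounds.items()):
        # Closed ring in [x, y] order, as GeoJSON expects:
        ring = [[c0, r0], [c1 + 1, r0], [c1 + 1, r1 + 1], [c0, r1 + 1], [c0, r0]]
        features.append({
            "type": "Feature",
            "properties": {"label": f"cell-{cell_id}"},
            "geometry": {"type": "Polygon", "coordinates": [ring]},
        })
    return features

mask = [
    [0, 1, 1],
    [0, 1, 1],
    [2, 0, 0],
]
feats = mask_to_features(mask)
print(len(feats), feats[0]["geometry"]["type"])
```

A feature collection like this is what a Kaibu annotation layer can display and, crucially, what the user can then edit by hand before sending corrections back to the server.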
It also loaded this image. Now when I click this button, what happens is that it calls this function, the function calls the BioEngine to do the segmentation, and then it calls another function to convert the mask image into a GeoJSON object so we can show it as another layer, in red; we set the color to red and call the layer "prediction". So if I click here, the image is sent to the server and comes back with a mask image, and the mask is converted into a red annotation layer. You can even edit this: you can delete some cells, for example, if you want to correct the result. This gives you a basic UI for doing segmentation with the server. I'm going to quickly show you another demo, which goes a step further, to interactive segmentation, the same process I showed in the video before: you use Kaibu to annotate, send the new annotation to the server to train, and the result comes back. Let me show you quickly how it works. These are the same functions, except that we now add a function called train_once: you pass an image and labels, and the model trains once, where once means 16 mini-steps on the server side, with the augmentation also done on the server side. OK, I'll pass over this, and again we fetch an image, the same image from the Human Protein Atlas; if you want to read a local file, just read your local file. If we call the segment function, it's the same as before, but now we also test this train_once function to show what it does: you pass an image and mask, it trains once and produces the loss value of the model, and afterwards you can also predict. That's just a test of these two functions. What really brings it together is this ImJoy plugin. In this plugin we add two buttons: one does prediction.
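The annotate, train, predict cycle just described can be sketched as a toy loop. `predict`, the user edit, and `train_once` are all simulated here; the "loss" simply halves each round to show the shape of the workflow, not real learning, and the function name is hypothetical.

```python
def interactive_annotation(images, rounds=3):
    """Toy simulation of the predict -> correct -> train-once cycle.

    In the real demo, predict and train_once are BioEngine calls and
    the correction happens by editing the GeoJSON layer in Kaibu.
    """
    loss = 1.0
    history = []
    for image in images:
        for _ in range(rounds):
            mask = f"predicted-{image}"    # server-side prediction
            corrected = f"edited-{mask}"   # user fixes the mask in the UI
            loss *= 0.5                    # stand-in for train_once(...)
            history.append((image, corrected, round(loss, 3)))
    return history

log = interactive_annotation(["cell_A"], rounds=3)
print(log[-1])
```

The point of the structure is that every human correction immediately becomes training data, so the model improves while the annotation session is still running.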
In this do_prediction function we call the predict function, get the mask, and convert it into a GeoJSON object so we can show it in Kaibu. The idea is that you do a prediction first, then the user can edit it to improve the annotation manually, and then they take the improved annotation and click the other button, Train Once, with this improved version of the annotation. That calls the train_once function, the annotation is sent back to the server, the server runs train_once on the BioEngine, and you get the loss value. If you run this, for the sake of time you won't see much, but I'll show you how it looks. You first do annotation; let's see, I think the model is already quite good because I trained it a lot during testing, but just in case, if this one, for example, is wrong, you just do some editing. You can edit the annotation; let's assume this is the edit we want. Then you just click Train Once: the new annotation is sent back to the server, the server can improve the model a bit, and you can do the prediction again, now with the new version. As you can see, you can change to another image, do a prediction first, make corrections, and then train once, repeating these steps. I won't go through the whole process, but as you can see, you can easily build this kind of interactive annotation plugin by combining the BioEngine and ImJoy plugins. So I'm going to quickly move to the other part, which is, briefly, the difference between the BioEngine and Jupyter notebooks. The advantage we try to achieve with the BioEngine is scalability. What I mean is this: if you have used Jupyter notebooks or Colab before, you know that a Jupyter notebook is essentially designed for a single user. If one user
opens a notebook and it claims a GPU, that GPU is fully occupied even if they don't do anything, and other users cannot use it. The same goes for Colab, although Colab is a different story because Google provides a lot of GPUs; but imagine you work with private data and don't want to work in the cloud. GPU utilization is really a problem, and that's why we propose the BioEngine to solve it. We solve it by allowing multiple users to share the same GPU, and not only multiple users but also multiple models: you can have many models on the same GPU, with many users using it simultaneously. Imagine this interactive segmentation process: annotation is very slow, so with the BioEngine you can essentially share the same compute infrastructure among many users, because annotation is slow anyway, and the training takes some time but is much faster than human annotation. This means multiple users can annotate simultaneously within this kind of plugin while sharing the same server, and the server alternates between training different models for different users or groups. So this essentially allows multi-user, multi-model serving, including both training and inference. The other advantage is that when you start a Jupyter notebook or Colab, you need to wait some time for dependencies to install and so on; here you don't, you use it instantly: you connect to the server, send the data, and get back the result, so there is zero setup time. So I think that's it for the main part, but I want to take the remaining minutes to show you another thing I'm very excited about: AI-assisted bio-image analysis. This is ongoing work I've been doing, and the technology is
basically enabled by these massive models, GPT-3 and Codex from OpenAI. GPT-3 is a natural language processing model with 175 billion parameters, one of the largest models ever made, at least as of a few months ago. Hopefully this gives you a taste of the future. I'm going to quickly do this demo, with just a disclaimer first: please pause the recording if you are recording, because under OpenAI's policy we are not allowed to record this demo. ... Sorry, where were we? We can restart the recording now, thank you. So, I have been showing you image analysis on the web, I showed you how to make ImJoy plugins, where we essentially aim for scalability and interactivity, we built the BioEngine for AI model serving, and at the end I also showed you this AI-assisted bio-image analysis, hopefully giving you a taste of the future. I think this is the end of my presentation. I would like to give thanks: most of the work I've been doing is carried out in the Cell Profiling group headed by Emma Lundberg at SciLifeLab and KTH. ImJoy is a project with many users and developers, Martin Craig and many others. If you are interested, follow us on Twitter; the BioEngine is not yet released, or rather, the API is released but the software itself is not yet, so stay tuned and follow us on Twitter, where we will announce it once it's ready. I also want to thank OpenAI for providing beta access to the GPT-3 and Codex models; just to clarify, the demos during this talk have not yet been approved by OpenAI, and although we are not allowed to record them, I'm going to post a link later with the same or a similar demonstration for future viewers of this video. Thank you. Thank you, that was a great
presentation, really awesome demos. Everyone, before you leave, please answer the feedback survey, we appreciate it a lot. And there's one question that wasn't fully answered, regarding the chart plugin in ImJoy: you mentioned that if we measure features on ROIs in an image, can we then see the corresponding image for a specific data point? Sorry, what was that about the chart plugin? Yes: if you measure the features of an image and then click on a corresponding data point, will it take you to that image, can you see the image data? Yes, that's exactly what the plugin is meant for, but it doesn't do it out of the box; the idea of providing that plugin is not to offer a finished tool you use as-is, but to provide an API that enables developers to build exactly that. This is one of the use cases we are aiming for. For example, in this kind of case, Codex generates a table, and we can display that table with the ImJoy chart plugin; then you get an interactive plot, and when you click on a point, you go back to the individual cell, for example. And Trang, I think, did exactly that for our single-cell model: she made some modifications so that when you click on one of the scatter-plot dots in this chart plugin, it shows you the bounding box or the boundary of that cell. So the general answer is yes. Great, thanks a lot. There are no more questions, so we can stop the webinar here. Sorry for running over time. No problem, thanks so much for the amazing webinar. I'll stop then, thank you.