All right. So I'm going to let you get started with your talk, which is about CoronaNet. That sounds interesting.

All right, so welcome to my talk, CoronaNet: fighting COVID-19 with computer vision. Before we begin, a little self-intro: I am a 17-year-old student from Hong Kong. I'm really interested in computer vision, specifically generative models, as well as scientific computing, especially modeling partial differential equations with neural networks.

A brief intro on the background of this project and why I came to pursue it: I started this project in approximately late February to early March. I was very much inspired by the doctors from Wuhan, whom I saw on TV, and how they were working 72 hours straight to battle the virus and keep everyone safe. And I was thinking there was possibly something we could do, as more technical people, to maybe help them out. That's how I got the inspiration for CoronaNet. I was also very much inspired by Johns Hopkins University, who created the COVID dashboard, which was very helpful to a lot of people globally. They showed me that we can leverage technology to help battle the pandemic, and I wanted to do my part as well.

Regarding my background, I have also previously done a project on tumor boundary resection for lower-grade glioma, which also involved segmentation, and I was thinking perhaps I could transfer that experience to help out with the coronavirus.

So here's my problem statement, which is basically how I would like to help with the problem. We see that hospitals all around the world are overwhelmed with COVID-19 patients, and there aren't enough doctors or equipment to go around. So we can see there's a manpower shortage: there aren't enough doctors to attend to all the patients and all the CT scans.
Radiologists are mostly the ones most knowledgeable about these scans, and there aren't a lot of them. There's also a supply shortage, in that ventilators need to be reserved for the most vulnerable. So I was thinking to automate CT diagnosis confirmation with AI. With CoronaNet, I was hoping to automate the diagnosis confirmation: the AI can view the CT scan and then make a prediction on the probability that the person has the coronavirus. This can possibly help relieve the labor shortage and help with patient triaging and the allocation of resources, because by gauging the extent of the segmentation and the positive pixels, you can know which patient has the most severe case.

CoronaNet is made up of three tasks. First is binary classification, which is basically zero or one: infected or not infected with COVID-19. Second is binary segmentation: given a CT image, we identify all the pixels which are affected by a symptom. The symptoms include three kinds, which are ground-glass opacity, consolidation, and pleural effusion. With binary segmentation we do not differentiate between them, whereas with three-class segmentation we do differentiate between them, as you can see from the colors. Three-class segmentation is obviously the most difficult, which accounts for its slightly lower accuracy.

As this is a Python conference, let's first go over the technologies I used. The language is obviously Python, and I utilized the NumPy library. For AI, I used PyTorch, and I also used Albumentations, torchvision, scikit-image, and Matplotlib for image processing. NumPy is very fast compared with vanilla Python, because it utilizes techniques such as parallelism and vectorization. It also has better support for matrices, tensors, and math operations, which is very useful when you're dealing with AI.
It also retains the benefits of Python itself, for example the elegant syntax. So you might ask, what do I use for image processing? How I would compare them is: Matplotlib is more general-purpose; it has some really good functions and works very well for, say, saving and loading an image. scikit-image has more advanced algorithms, such as optical flow. And torchvision is very tightly integrated with PyTorch, so it's very ideal when you're doing augmentations in the dataloader with PyTorch. Lastly, Albumentations is specifically suited to biomedical imaging; it has functions such as shift-scale and elastic deformations, which are very useful for biomedical imaging.

And why PyTorch? Well, firstly, it has more research support, in that most papers release their code in PyTorch, so if you want to integrate the most up-to-date techniques into your code, PyTorch is more ideal. It also offers more customizability, and I heard from my friends that TensorFlow gets a little bit tedious with dataloaders. Also, it was designed with dynamism in mind, so it offers dynamic computation graphs. I heard that TensorFlow 2 also has this, but, well, I stick with PyTorch.

In terms of the model architecture used, my consideration is that my task is multi-label segmentation. As a brief overview, in computer vision there are three main tasks, which are, respectively, classification, detection, and segmentation. Classification is normally considered the easiest: given an input image, you output a single scalar label representing the class of the object in the image. For example, if I had an image of a car, the network might output the number five, and five would correspond to the class "car".
In terms of detection, you're not only concerned with the scalar label; you're also concerned with some degree of location information. You have to identify the boundaries of the object of interest, so that you can output a scalar label together with a bounding box around the object. And lastly, segmentation is considered the most challenging. Given an input image, you not only have to retain spatial and semantic information; you also need to perform pixel-wise classification, so that you can identify the whole extent of the object and output a pixel-wise mask.

Firstly, let's go over the techniques for classification. As you can see, given an input image you output a class label; this is typically done with a fully connected layer. Vanilla convolutional neural networks already perform fairly well at this task, but when people were trying to obtain really high accuracy, they turned to deep CNNs. However, researchers found that there was an accuracy saturation and degradation problem, because of difficulties in learning identity mappings. So they introduced improvements: ResNets to help with identity mapping, as well as feature pyramid networks. ResNets basically introduce a shortcut connection to relieve the pressure on the added layers when learning an identity mapping, whereas FPNs have a lateral, top-down hierarchical connection structure: you combine the feature maps at different scales so that you retain hierarchical information from each scale. In conventional CNNs, as you stack more convolutional layers you sacrifice spatial information for semantic information. However, if you do this lateral top-down fusion of feature maps at different scales, you can retain both kinds of information and make better predictions.
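The shortcut connection idea can be sketched in a few lines of PyTorch. This is a minimal illustration of a residual block, not the actual CoronaNet code; the class name and channel count are just for demonstration:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = F(x) + x (the shortcut connection)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        # Adding x back means the stacked layers only have to learn a
        # residual, which makes identity mappings easy to represent.
        return self.relu(out + x)

block = ResidualBlock(16)
x = torch.randn(1, 16, 32, 32)
print(block(x).shape)  # torch.Size([1, 16, 32, 32])
```

If the extra layers are not needed, the convolutions can simply learn to output zeros and the block falls back to the identity, which is what relieves the degradation problem.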
So, in particular, in CoronaNet I utilized the EfficientNet architecture, which is basically the SOTA on ImageNet and a few other benchmarks. It introduces a novel compound scaling method, which revolutionizes the scaling of the network depth, width, and input resolution parameters. Their insight is basically that depth, width, and resolution influence each other, so you cannot scale them arbitrarily or individually. They actually performed a grid search, as well as some other mechanisms, to find the ideal base coefficients, and then scale up the computational resources accordingly, by roughly two to the power of the compound coefficient. As you can see, it outperforms a lot of other baselines with minimal parameters. This gives a unique benefit to me, because I don't have access to a lot of computational resources, and it also balances accuracy with speed and efficiency.

Now moving on to segmentation. Segmentation, as we covered earlier, is basically performing pixel-wise classification, and here's a more algorithmic description of it. I utilized the fully convolutional network (FCN), which is basically an encoder-decoder model. What's different about FCNs is that with conventional CNNs, you learn a classification function; if you want to obtain a mask from that function, you have to implement some additional mechanisms, such as class activation mapping. That's an extra step, and it's not ideal. Whereas with FCNs, you can do it directly by learning convolutional filters end to end. As you can see from this U shape, this is U-Net: it downsamples and then upsamples, and it learns the mapping through this encoder-decoder network.
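The compound scaling rule can be written out very compactly. This is a sketch using the base coefficients reported in the EfficientNet paper (α = 1.2, β = 1.1, γ = 1.15), not code from CoronaNet itself:

```python
# EfficientNet-style compound scaling: depth, width and resolution are
# scaled jointly by one compound coefficient phi, under the constraint
# alpha * beta**2 * gamma**2 ≈ 2, so FLOPs grow by roughly 2**phi.
alpha, beta, gamma = 1.2, 1.1, 1.15  # base coefficients from the paper

def compound_scale(phi):
    depth = alpha ** phi        # multiplier on the number of layers
    width = beta ** phi         # multiplier on the number of channels
    resolution = gamma ** phi   # multiplier on the input image size
    return depth, width, resolution

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```

The point is that the three dimensions are tied to a single knob, instead of being tuned arbitrarily and individually.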
So here I used the U-Net model, which basically introduces a symmetrical contracting and expansive path; you can see that the dimensions are largely symmetrical. It attained SOTA performance in the ISBI challenges in 2014 and 2015, I believe; it's also tailored to biomedical imaging and has had a lot of success in that area. And lastly, an advantage of the encoder-decoder model is that it successfully fuses local and global information. It's a bit based on the feature pyramid network, and the principles are similar: it retains both global and local information so that it can make better predictions.

In terms of the data I used, this was actually a huge challenge for me at first, because I started a bit early, in early March, and there wasn't a lot of data out. There actually was no data out except for the COVID-19 CT segmentation dataset. So let me show this. When I first started, it only offered 100 axial CT scan slices, which is nowhere near enough for training an AI, as you probably know. Progressively they introduced more and more data, which I am training on, and for now my training set includes approximately 1,000 slices, which is still not enough. This means that although CoronaNet has attained fairly good performance, it's nowhere near applicable to real life yet. So if you have data and you're willing to share, feel free to contact me; I would love to improve CoronaNet.

Because of the very limited dataset size, augmentation is very crucial for better generalization. In particular, I used elastic transformations and shift-scale to simulate the natural deformations of human biological tissue, as well as random normalization and random rotations.
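To give a flavor of what an elastic transformation does, here is a minimal NumPy/SciPy sketch of the classic approach (a smoothed random displacement field, as in Simard et al., 2003). This is an illustration of the technique, not CoronaNet's actual augmentation code, and the `alpha`/`sigma` values are arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_transform(image, alpha=34.0, sigma=4.0, seed=0):
    """Displace every pixel by a Gaussian-smoothed random field,
    simulating the natural deformation of soft biological tissue."""
    rng = np.random.default_rng(seed)
    shape = image.shape
    # Random per-pixel displacements, smoothed so neighbors move together.
    dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0          # a toy "lesion" square
warped = elastic_transform(img)  # same shape, smoothly deformed
```

In practice, Albumentations provides this kind of transform ready-made, which is why I used it in the pipeline.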
So, let's examine the performance of CoronaNet. As you can see, I evaluated the performance, following convention, with the loss and the Dice coefficient; for the Dice coefficient, the ideal is one, and the worst you can get is zero. As you can see, in binary classification the network performs a lot better, because it's a less challenging task and also because there was a lot more data available for it: binary classification was trained on a dataset of approximately 1,000 images, whereas multi-class segmentation was only trained on about 80 images, augmented to 1,920. As you can see, data size is a huge problem for any open-source project like CoronaNet. The best accuracies were achieved using the Adam optimizer, which oscillates less than, say, SGD.

Now, in terms of future development, I would love to further pursue CoronaNet. I would love to leverage it to recommend personalized medicine and treatment: based on the number of pixels identified as positive, that is, infected with COVID-19, I would use this as a metric to assess the severity of COVID-19 in each patient, as well as use the exact types of coronavirus symptoms identified. For example, if one patient has, say, 80% ground-glass opacity and 20% consolidation, this could be used as a consideration to tailor the medical treatment to each patient.

Also, there is the limited data, and for segmentation, if you're trying to predict a mask, the data annotation process is actually very labor-intensive, because a radiologist has to hand-label each pixel. That's not ideal, because radiologists are very busy with the COVID-19 pandemic and everything.
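For reference, the Dice coefficient used for evaluation is simple to compute. Here is a small NumPy sketch (my own illustration, not the exact evaluation code from the repo):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|): 1 is perfect overlap, 0 is none."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 0.667
```

The small `eps` term keeps the metric defined when both masks are empty, which happens on healthy slices.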
So I would suggest trying a weakly supervised segmentation approach, for example using global average pooling, class activation mapping, or object region mining. This can bypass the need for labor-intensive mask annotations.

Lastly, I would like to go over a bit of the code. This is actually posted in my GitHub repo, which is here. Oh, no. Right, let me just find it. Sorry. Right, it's here. Perhaps I'll send out the link later in the chat. In this repo, I report the accuracies of classification as well; as you can see, with EfficientNet I obtained very good results for binary classification, which is a less challenging task. Also, the data can be downloaded directly from this website. I did some preprocessing, but it should be easy enough to figure out.

So, CoronaNet. As you can see, I used PyTorch for this task, and I used the U-Net model. I actually included six contracting and upsampling paths, as opposed to the vanilla model, which had four. Possible improvements in terms of accuracy for this model would be, firstly, adding dilation, and also adding attention mechanisms to U-Net.

And briefly, on how to use PyTorch: as you can see, it's very easy to load the data. I wrote a data loader; this is how I loaded the data, and these are the augmentations that I do. And I normalize my images. It's very important to normalize your images in machine learning, because it's easier for the optimizers to converge when the features are properly scaled.
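A PyTorch data loader with per-slice normalization looks roughly like this. This is a hypothetical sketch with made-up names and random toy data, not the actual CoronaNet loader:

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class CTSliceDataset(Dataset):
    """Illustrative dataset of CT slices and binary masks."""
    def __init__(self, images, masks):
        self.images = images  # list of HxW float arrays
        self.masks = masks    # list of HxW {0,1} arrays

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img = self.images[idx].astype(np.float32)
        # Per-slice normalization: zero mean, unit variance, so the
        # optimizer sees properly scaled features and converges faster.
        img = (img - img.mean()) / (img.std() + 1e-7)
        mask = self.masks[idx].astype(np.float32)
        # Add a channel dimension: (H, W) -> (1, H, W).
        return torch.from_numpy(img)[None], torch.from_numpy(mask)[None]

images = [np.random.rand(64, 64) for _ in range(4)]
masks = [(np.random.rand(64, 64) > 0.5) for _ in range(4)]
loader = DataLoader(CTSliceDataset(images, masks), batch_size=2, shuffle=True)
xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([2, 1, 64, 64]) torch.Size([2, 1, 64, 64])
```

Augmentations would be applied inside `__getitem__`, before the normalization step, so each epoch sees a differently deformed copy of every slice.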
PyTorch also provides a lot of built-in functions, such as the binary cross-entropy loss, as well as nn.Sequential, which is a very useful function for connecting layers. So let's have a look at the code; it's all freely available on GitHub. As you can see, you can very easily define a network. I basically used some U-Net blocks, which are defined below; basically, it's a deep, fully convolutional network. If you want further information about the code, feel free to fork the repo and tinker with it yourself.

Here are some references which I referred to in my work, for example the ResNet paper. If you have any questions about CoronaNet, feel free to contact me at my Gmail or my socials, I guess.

And last but not least: through CoronaNet, I identified an opportunity to help with the pandemic through technology, and I think the capabilities of technology are not really limited to just biomedical imaging or something like that. We can actually help the world in a lot of ways through technology. Another example would be through education. I have actually co-founded an initiative for AI education with my friend, and we are planning to run a contest which deals with AI education, and we will be using Python primarily. We are currently recruiting organizations, projects, and mentors, so feel free to check us out; it's included in the EuroPython slides.
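To show how nn.Sequential connects layers into a reusable U-Net-style block, here is a minimal "double convolution" block of the kind U-Net is built from. Again, this is a generic sketch, not the exact blocks from my repo:

```python
import torch
import torch.nn as nn

def unet_block(in_ch, out_ch):
    """One U-Net-style 'double conv' block, wired up with nn.Sequential."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

block = unet_block(1, 64)
x = torch.randn(2, 1, 32, 32)
out = block(x)
print(out.shape)  # torch.Size([2, 64, 32, 32])

# For binary segmentation, the final 1-channel logits would typically be
# trained with the built-in binary cross-entropy loss:
criterion = nn.BCEWithLogitsLoss()
```

The full network then stacks these blocks along the contracting path (with max pooling in between) and mirrors them on the expansive path.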
So, last but not least, I hope that through this talk you learned a bit more about AI techniques and how we, as people who know Python, can leverage Python and its libraries, such as PyTorch and NumPy, to build an open-source project that can actually help with global crises, so that we can try to do our part, especially in this very trying time. So this is the end of my presentation. Thank you so much.

Thank you for that presentation.