Yeah. Okay, great. Thank you. I'll try to get us back on time a little bit. Natesh is not able to attend, so I will give the talk for him. The slides are available at the URL shown, via BioC 2022.

What are GPUs? Why do we need them? How are they useful for Bioconductor, and where can we find GPUs? I'll also do a little bit of a live demo with an actual GPU-equipped machine in the room. How many folks are actually computing on GPUs? Not all that many. All right. Well, the idea is that we're going to be seeing more software that does a better job if a GPU is available, and so we want to be prepared to use that type of software. In fact, Bioconductor packages are coming online that make use of it, and we will show you one of those.

A graphics processing unit has many cores, is capable of parallel computation, and is different from a CPU. A GPU is built from streaming multiprocessors, which share the same global memory, and each streaming multiprocessor is itself a multicore machine: it contains streaming processors, which are the individual cores running threads. Those technical issues one can learn about on one's own time. The idea is that if the computation is set up properly, it can go faster. There are now well-exercised frameworks for statistical learning, Keras and TensorFlow, that are becoming more prevalent. A couple of Bioconductor packages, and some new ones being submitted, take advantage of GPUs. They do this by using reticulate, the R-Python interface, to talk to TensorFlow or Keras. And you can get GPUs to try these things out fairly easily from all the cloud providers. I'm going to talk about one interface to the Google cloud provider that was developed with NHGRI funding. It's called AnVIL.
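The execution model sketched above, with blocks of threads mapped onto streaming multiprocessors, can be illustrated with a toy Python sketch. The block/thread indexing mirrors what a CUDA kernel would compute for a SAXPY operation; the function name and block size here are made up for illustration, and of course real GPU code runs the "threads" in parallel rather than in a loop:

```python
def gpu_style_saxpy(a, x, y, threads_per_block=4):
    """Toy illustration of the GPU execution model: each 'thread'
    computes one output element, located by its (block, thread) pair
    exactly as a CUDA kernel would compute its global index."""
    n = len(x)
    out = [0.0] * n
    num_blocks = (n + threads_per_block - 1) // threads_per_block
    for block in range(num_blocks):              # one block per streaming multiprocessor
        for thread in range(threads_per_block):  # cores/threads within the SM
            i = block * threads_per_block + thread  # global thread id
            if i < n:                            # guard the ragged final block
                out[i] = a * x[i] + y[i]
    return out
```

The point is that every element's computation is independent, which is exactly the shape of work that thousands of GPU cores can do simultaneously.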
AnVIL stands for Genomic Data Science Analysis, Visualization, and Informatics Lab-space. AnVIL has a very refined user interface in which, once you've logged in and started a workspace, you can configure a machine to use. One of the things you can do is say, I'm going to enable GPUs, and I will select a particular type of GPU; there's a whole family of them available here. I'll tell the system how many GPUs I want, and then I get the whole thing going by pushing Launch. We'll do a little bit of that in just a moment.

This is a view of a workspace that Natesh made that helps with all these demonstrations, and there's a Jupyter notebook within it that will demonstrate this; we'll jump over there in a moment. Now, it turns out that in Terra, which is the underlying system for AnVIL, it is not yet possible to use RStudio with GPUs. So this is done with a Jupyter notebook, but that will be remedied in the near future. We are also going to develop a new container family that has machine-learning tooling, including the necessary support for NVIDIA GPUs, embedded in it, so you don't have to do these installations yourself. If you have other questions, you can ask Natesh on Slack or pose questions on the support site.

I will just show a little bit of this live. This is a Jupyter notebook running in Terra. The system that we have configured is the default R version, and I've asked for four CPUs and two Tesla T4 GPUs. That is going to cost me 91 cents per hour to operate. And we have a notebook here, and we can run some code in it. For example, we can check the Bioconductor version, and we can then verify that I installed all of these things previously, so Keras is available. We have a full Python configuration report here telling us what is there. And then, when we want to start to use TensorFlow, we can ask whether the GPU is known to it.
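As a rough sketch of that kind of check: here is one way to ask TensorFlow about visible GPUs from the Python side, guarded so it degrades gracefully when TensorFlow is not installed. The helper name `gpu_devices` is my own; `tf.config.list_physical_devices` is the standard TensorFlow call that the notebook reaches via reticulate:

```python
import importlib.util

def gpu_devices():
    """Return the list of GPUs TensorFlow can see,
    or None if TensorFlow itself is not installed."""
    if importlib.util.find_spec("tensorflow") is None:
        return None
    import tensorflow as tf
    # Each entry is a PhysicalDevice; an empty list means no GPU is visible.
    return tf.config.list_physical_devices("GPU")
```

On a machine configured as above, this would return a two-element list, one entry per T4; on a CPU-only machine it returns an empty list.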
Sometimes that's a bit of a cumbersome thing to establish. In this case, TensorFlow knows that the GPU is there, and we can ask it to list all the devices, the CPU and the GPU. So this is just using reticulate to operate with the TensorFlow Python packages and run certain functions and methods that are available there. And here we can ask, again, is the GPU available? And it says it is.

Now, I'm not going to run this code; it has to do with the generation of some data. But here is what it looks like when you actually want to fit a variational autoencoder. This function is defined in the VAExprs package. The different components, which are autoencoder and deep-learning concepts implemented as layers, rectified linear units, sigmoidal activation functions, and so forth, are all a very high-level expression of how we are going to build up this autoencoder and run it. I'll run that one there; I think that should work okay. And then there's a very nice little way of visualizing the model that was actually specified. That takes a second to run, but it's a nice way of showing the actual components of the deep-learning model that is going to be used with the GPU.

So I'm going to stop there, because all of this is available for your inspection in the open, thanks to Natesh's work. If there are questions, I might be able to answer some, but I hope this is sufficiently explanatory that you could get going on your own if you wanted to. Anybody have a question for Natesh?

Can you just go back to the URL at the beginning of the slides? Sure. These slides have the URL at the end; one moment here. Something's wrong with the desktop here. Thank you. Okay. I see a question from online for an earlier speaker. We do have a question for Qian, from Levi. Levi asks: have you thought about submitting reused metadata or data to Zenodo through its REST interface, for example through zen4R? Is Qian still on? Is Qian still here? Maybe not. We need access to the slides.
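Looping back to the variational autoencoder demo for a moment: the step that makes a VAE trainable by gradient descent is the reparameterization trick, where the latent code is sampled as z = mu + sigma * eps with eps drawn from a standard normal. A plain-Python sketch of that one step is below; the function name and the list-based representation are illustrative only, since a real implementation (such as the Keras layers the package builds) would operate on tensors:

```python
import math
import random

def reparameterize(mu, log_var, rng=random.Random(0)):
    """Sample z = mu + sigma * eps, with eps ~ N(0, 1) and
    sigma = exp(log_var / 2). Keeping the randomness in eps makes
    the sample differentiable with respect to mu and log_var."""
    eps = [rng.gauss(0.0, 1.0) for _ in mu]
    return [m + math.exp(0.5 * lv) * e
            for m, lv, e in zip(mu, log_var, eps)]
```

With a very large negative log-variance, sigma collapses to zero and the sample reduces to the mean, which is a handy sanity check.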
Someone can't get access to the slides. These slides? Yeah. Yeah, the slides are not accessible. Okay. Qian is now on the call. Hi, Qian. Welcome back. There was a question for you. Okay. Sure. From Levi: he says, have you thought about submitting reused metadata or data to Zenodo through its REST interface, for example zen4R?

Zenodo, yeah. I think that can be one of the places that we share the data sets. We have considered sharing the data to some cloud space. So Zenodo could be a place that we deposit the data, but I think we are more likely to deposit the recipes, which are more lightweight, so people can generate the data locally using our functions. There can be different possibilities.

Anybody else have a question for any of our speakers? Is there a possibility in the future for, say, SparseArray to work better with GPUs, to speed up loading and operations on some of the big sparse arrays? So you're saying, SparseArray working better with GPUs in the future? I was thinking about this during the GPU presentation: maybe, if there's a possibility, we could improve the performance of those objects even more by using GPUs. Yeah, why not? I'm not there yet; you can see my list of things I still have to do before that thing works. [inaudible]

All right then. I think we're going to end the session, and we'll see you at the next one. Thank you.
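On that closing question about sparse arrays and GPUs: the kernel people typically want to accelerate is sparse matrix-vector multiplication. A compressed sparse row (CSR) version can be sketched in pure Python; on a GPU, each row's dot product would be handled by its own thread or warp rather than a sequential loop. The names and layout here are illustrative, not taken from any particular package:

```python
def csr_spmv(indptr, indices, data, x):
    """Compute y = A @ x for a sparse matrix A in CSR format.

    Row i's nonzeros live at positions indptr[i]:indptr[i+1] of
    `data`, with their column numbers in `indices`. On a GPU this
    per-row loop is what gets distributed across threads.
    """
    y = []
    for i in range(len(indptr) - 1):
        acc = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * x[indices[k]]
        y.append(acc)
    return y
```

For example, the matrix [[1, 0, 2], [0, 3, 0]] is stored as indptr=[0, 2, 3], indices=[0, 2, 1], data=[1.0, 2.0, 3.0], and multiplying by [1, 1, 1] yields [3.0, 3.0].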