We are talking about the new Nectar Research Cloud GPU service and reservation system. I'm going to be your facilitator for today. I've also got our key speaker, Paul Coddington, who is one of our directors at ARDC for the Nectar Cloud, and also Sam Morrison, who will be conducting the demo for you all today to show how the system works. So Paul, could I ask you to go to the next slide? Before we begin, I just want to give an acknowledgement of country, acknowledging and celebrating the first Australians on whose traditional lands we meet today, which for me is the Wurundjeri Woi Wurrung people of the Kulin Nation. We pay respect to the lands on which we meet and to their Elders past, present and emerging. So today, to give you the scope of what we're covering: we're going to give a very brief overview of the Nectar Cloud itself, because we're aware that you may have heard about this service but not actually used the Nectar Cloud before, so we wanted to touch on that. We'll talk about why we brought in this service, the processes involved in using it, and the hardware specifications we're offering. Then we'll launch into our demo, talk about the user support available to you, and leave some time for Q&A. And of course, a little bit of Zoom housekeeping: please keep your microphone on mute for the duration of the presentation. If you have any questions during the session, feel free to put them in the chat and we will address them as we go, or you can save them for the Q&A time at the end. And of course, we are recording this session. So now I will hand over to Paul, who's going to kick off the information about the Nectar Cloud and the processes behind the new service. Thanks, Sonia. Yeah, so I'm Paul Coddington. I'm at the ARDC, and I'm responsible for the Nectar Research Cloud.
So just briefly, the Nectar Cloud has been around for 10 years now. It's run by the ARDC, the Australian Research Data Commons. It's actually a federation of several partner organisations, or nodes as we call them. The nodes are responsible for running the infrastructure, the compute and storage hardware, including the GPUs and large memory servers that we'll talk about in a minute. And the ARDC core services team is responsible for running the central cloud services that glue it all together into a federation that looks like a single cloud service. So it's quite a large resource. As you can see on the left there, there's a large number of physical and virtual CPU cores, quite a lot of storage, both object and volume or file storage, and a large number of GPUs, some of which are already there and some of which are still on the way, and I'll talk about that in a minute. In terms of access, the Nectar Research Cloud is Australian national research infrastructure. You log into the Nectar Cloud and access it through a web dashboard, using your university username and password through the Australian Access Federation. Anyone who has an AAF account can just log in and try out the Nectar Cloud with a six month trial project. Then, if you meet the appropriate criteria, you can make a project allocation request to get longer-term, larger amounts of resources. Essentially the criteria are that you meet the national merit allocation criteria, which basically means you have a research grant or you're part of a project that's supported by ARDC or some other NCRIS facility; or else, if you're a researcher associated with a particular Nectar node, that node can provide what we call a local node allocation. There are other avenues, but those are the main criteria.
So basically that's how to use the Nectar Research Cloud at a glance, but first you should read the user guide. There's a cloud basics user guide and a getting started tutorial on our user support web page. So on to the new services I want to talk about. We are providing a GPU and a large memory service through a reservation system, and I'll talk a little bit about why we're doing it that way. There's growing demand for high-end compute infrastructure in the research sector. The Nectar Cloud has been running for 10 years now and we've mostly just provided generic compute resources, CPUs or virtual CPUs, but there's a lot of demand for higher-end infrastructure: particularly GPUs, mostly for machine learning but also for image processing and for simulation and modelling; and also large memory machines to handle very large data sets, large scale analysis and simulation, high-resolution modelling, things like that. We've already seen and supported this demand in the research platforms projects that ARDC supports, where we have provided some dedicated GPUs and large memory servers. Things like the Characterisation Virtual Laboratory, the drone platform being developed, Galaxy, the bioinformatics platform, EcoCommons and several other platforms are using this sort of high-end compute infrastructure provided by ARDC, but it's dedicated to those specific platform projects. We've also seen our node partners in the Nectar Research Cloud provisioning some GPUs and large memory servers for their own researchers. But what we haven't provided in the past is a national service that anyone who meets the national merit criteria can use. That's what we're aiming for, and what we have done now: basic infrastructure as a service for GPUs and for large memory servers.
Now, there are some issues around this, one of which is that these things are expensive, right? GPUs and large memory servers are quite expensive, and the Nectar Cloud infrastructure is provided at no cost to researchers, so we want to make sure they're utilised effectively. We don't want the situation where a project fires up a virtual machine with GPUs or large memory and only uses it intermittently; we want to make sure we get good utilisation. Now, that doesn't happen in commercial cloud, because these things are very expensive there and you pay for usage. And it doesn't happen in clusters or HPC systems where people use GPUs, because you submit to a queuing system: your job runs when the resource is free and the resource is released when your job finishes. But that doesn't allow you to do interactive work, and it means you have to sit in the queue and wait for an arbitrarily long period of time before you get access to your GPU. So we want to provide people with access for interactive use at a time that they can specify, while making sure we get reasonable usage of this expensive infrastructure. The way we're doing that is essentially twofold. One is that we're virtualising the GPUs, so multiple users or projects can get access to a single GPU; if one of them isn't using it at this very moment, the others can use the additional resource. The other is a reservation system: you book the particular time that you want to use the service, and once that time is up, the resource gets freed up so other people can use it. You can't just sit on it if you're not using it. Okay, so we had to do a couple of things to enable this. One of them, which those of you who have used the Nectar Research Cloud will be familiar with, is a change to quotas: a project in the Nectar Cloud gets a resource quota through its project allocation.
In the past, that was a maximum limit quota. It was essentially: I have a quota of 10 virtual CPUs, which means I can never use more than 10 virtual CPUs at any given time, and that's pretty much it. But we wanted to make it more flexible so we could support additional resources, GPUs, large memory machines, etc., that cost more, or are higher end, than just the generic virtual CPUs. We also wanted to allow more burst use. Most of the time someone might only want to use 10 virtual CPUs, but they might want to use 50 for a day or a week or something like that, so we wanted the flexibility to allow that. So we introduced the concept of service units. Service units will be familiar to anyone who's used HPC systems: a service unit is basically a unit of resource usage. In commercial cloud, the equivalent of service units is dollars, right? I pay a certain number of dollars based on how much resource I actually use. Essentially, that's what we're doing here. We're asking people to base their quotas on an estimate of how much they want to use, in service units. A virtual CPU has a certain number of service units; a GPU will have a larger number; a virtual CPU with a lot of memory will have a larger number. So each of the flavors of instances that we have in the Nectar Cloud has a certain service unit cost associated with it. We've made some modifications to the dashboard to support this. If people ask for a quota in service units, they'll see in their dashboard a line that says: if you burn through your service units at a constant rate of usage, here's the straight line that entails. And then there's the blue line, which is what your actual usage is over time.
So you can see whether you're burning through your service unit allocation quicker or slower than you should be. We also show the usage on each day, so you can see your daily usage and how it goes up and down. That helps people estimate how much they should be asking for and see how much they're using. Now, that's been in place for a while; we've had these service unit allocations for a few months now in the Nectar Cloud. But now that we've introduced a reservation system, we had to add an additional component, which is a reservation system allocation. So, as well as your standard service unit quota, your project needs to ask for quota in the reservation system. That basically covers: how many reservations would I like to make in the period of my allocation, which is usually a year or a few months, and how many total days do I want to use the reservation system for? We've made reservations in units of days; we figured people would mostly want at least a day to use the system. And we've given them a maximum of two weeks. Now, that maximum can be varied for different flavors. The standard flavors that we provide in Nectar have a maximum of two weeks, but nodes can provide private special flavors if you really need it for longer than that. Reservations start at 00:00 UTC, which is about 11 o'clock these days in Australian Eastern Daylight Time, and last for at least a day, in units of a day. So you need to ask for your reservation quota: the number of reservations, and the total days across all of the reservations that you want.
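The booking rules just described (whole-day units, a start at 00:00 UTC, a default two-week cap) can be sketched in a few lines of Python. The function name and the validation are illustrative only, not part of the Nectar dashboard or its API:

```python
from datetime import datetime, timedelta, timezone

MAX_DAYS = 14  # default maximum for the standard flavors

def reservation_window(start_date: str, days: int):
    """Return the (start, end) of a reservation as UTC datetimes.

    Reservations are booked in whole days and begin at 00:00 UTC,
    which is about 11 am Australian Eastern Daylight Time.
    """
    if not 1 <= days <= MAX_DAYS:
        raise ValueError(f"reservations run for 1 to {MAX_DAYS} days")
    start = datetime.strptime(start_date, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return start, start + timedelta(days=days)

# A four-day booking starting on 1 November runs to 00:00 UTC on 5 November.
start, end = reservation_window("2022-11-01", 4)
```

Nodes can offer private flavors with a longer cap, which in this sketch would just mean a larger `MAX_DAYS` for those flavors.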
Once you've requested reservation quota, plus the standard service unit quota that you need to do anything in the Nectar Cloud, and it's been approved, you can go into the dashboard and book a reservation. You basically just pick a date that you want to use the GPUs or the large memory servers, see if there are any free slots at particular nodes or for the particular flavors that you want, pick a flavor for a particular time slot, click the button to book it, and away you go. We'll have a demo of exactly how to do that shortly. In terms of requesting the reservation quota, here's a screenshot of the allocation form. You just go into a new request, the standard form you would fill in for an allocation request on the Nectar Cloud, and there's an extra section: you can click a button to say I want some allocation for the reservation service. You'll see that's switched on there; the button shows green because I've clicked it on. And now I can say, well, I want maybe five reservations, and I'm thinking of using each for roughly 10 days, so I request a total duration of 50 days. And you can say whether you want to use GPUs, or large memory, which we call huge RAM flavors, or both; here I've said I want to use both. Then, when I go into the reservation system to actually reserve a slot, it will show me my reservation system quota, which in this case is 20 days, and my service unit budget, which is 3,000 service units in this case. So if I try to book a slot for four days, it'll say: yep, that's good, four days takes you from the three days you've already used to seven days, and you've got a quota of 20, so that's all fine. But if you use up all of those four days, you'll have used more than 3,000 service units, which is your budget.
So sorry, you're not eligible to do that; you'll have to either ask for a smaller reservation or ask for more quota. And in the Nectar Cloud you're allowed to ask for more quota at any time: you just go in, revise your request form, resubmit it, and you can ask for more quota. One thing we've done with the service is to come up with standard flavors for the GPUs and the large memory servers. There's already a bunch of existing standard flavors in Nectar, from tiny through balanced: balanced is about two gigabytes of memory per virtual CPU, RAM optimised is about four gig, and then we have some huge RAM flavors, which are bigger than that. So we added some GPU flavors as well. There are two different kinds of GPU flavors, based on the two different kinds of GPUs that we make available: the A100s, which are essentially for compute, and the A40s, which are essentially for visualisation or rendering and showing things. You can't use the A100s for visualisation, so the A40s are really for that. And then we've got standard flavors for huge memory, which are similar to the existing huge memory flavors we have in the Nectar Cloud, but they just go bigger. Also, all of these flavors, the huge memory and the GPUs, have very large amounts of fast local disk. We figure that if you want to do big computations on large datasets, you want large, fast local disk to make the IO fast. So any time you want to fire up a virtual machine in the Nectar Cloud, you'll be shown this list of the standard flavors we have available, with a bit of info about what all the flavors are, and you can pick from them. And the bottom box there, with the green bars, is essentially what you'll see in the reservation system.
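Paul's worked example (a 20-day reservation quota, a 3,000 service unit budget, three days already used) amounts to two checks, which can be sketched like this. The numbers and the flat per-day rate are illustrative only, since real service unit costs vary per flavor:

```python
def can_reserve(days_requested, days_used, days_quota,
                su_used, su_budget, su_per_day):
    """Check a reservation request against both quotas, roughly as
    the dashboard does: first the reservation-days quota, then the
    service unit budget. Rates here are made up for illustration."""
    if days_used + days_requested > days_quota:
        return False, "exceeds reservation-days quota"
    if su_used + days_requested * su_per_day > su_budget:
        return False, "exceeds service-unit budget"
    return True, "ok"

# The example from the talk: 3 of 20 days used, 3,000 SU budget. Four
# more days pass the day quota, but fail if the flavor burns 800 SU/day.
ok, reason = can_reserve(4, 3, 20, su_used=0, su_budget=3000, su_per_day=800)
```

With a cheaper flavor the same four-day booking would pass both checks, which is why asking for a smaller reservation or more quota both resolve the rejection.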
Here are some dates; the green shows that there are free slots on those dates for the particular flavors shown on the left. So you can see there's a G2 small, an H4 large, and so on, for different sites: Monash, Queensland, Tasmania, et cetera. You basically pick a flavor and pick a node or site that you want to run on. Just quickly, a little bit more detail about the flavors; all of this detail is on the Nectar user support site. For the G1 instances, which are the A40s essentially, we have three sizes: small, medium, and large. These are just different degrees of how much we've virtualised, or split up, the A40. A40s have 48 gigabytes of GPU RAM, so a large is basically saying we're putting two larges on a card, because each one has 24 gig; a medium is splitting it up into four virtual GPUs per physical GPU; and a small is six. It's basically how much you virtualise it. And then we have different amounts of vCPUs and standard RAM associated with those as well. We have the usual 30 gigabytes of root disk that we provide for all standard Nectar flavors, but you can see we've got this ephemeral disk as well: that's the large, fast, local disk, essentially fast scratch disk that you can put your data on and read and write to very quickly. You'll see we've also got the service unit cost for the different flavors. For the G2 flavors, the A100s, the cards have 80 gig of GPU memory, so we can slice those up in a few more ways, from two virtual GPUs per physical GPU all the way up to 10, each of which has eight gig, making up the 80 gig. I should say that behind each of these standard flavors, obviously, we have physical GPU servers and large memory servers that we run this on.
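The splits just described are straightforward arithmetic on each card's GPU RAM, which a short sketch makes concrete. The card sizes and split counts come from the talk; the function itself is just illustrative:

```python
# GPU RAM per physical card, in GiB, as given in the talk.
CARDS = {"A40": 48, "A100": 80}

def vgpu_memory(card: str, vgpus_per_card: int) -> int:
    """GiB of GPU RAM each virtual GPU gets for an even split."""
    return CARDS[card] // vgpus_per_card

# G1 flavors (A40): large = 2 vGPUs per card (24 GiB each),
#                   medium = 4 (12 GiB), small = 6 (8 GiB).
# G2 flavors (A100): splits run from 2 per card up to 10 (8 GiB each).
```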
Every physical server, the way the NVIDIA licensing for the virtualisation works, can only have a single flavor. So one physical GPU server might just support the G2 extra large flavor, another one might support the G2 large flavor, another one the G2 small flavor. At the moment we have split them up across a number of different servers, so we don't currently support all of these flavors, but we do support most of them, and eventually we'll be able to support all of them once we get some additional servers added to the infrastructure. And then finally the huge RAM flavors. The current huge RAM flavors, the H3 class, only go up to about 360 gig; these ones go up to 480 and 960 gig. So that's almost a terabyte of RAM per virtual machine, with 128 virtual CPUs. It's a pretty grunty virtual machine, with two terabytes of fast NVMe ephemeral disk as well. So yeah, that's a pretty substantial virtual machine. We have procured 16 GPU servers and several large memory servers to go into this reservation system, to be available for national use. Some of the nodes have also bought their own infrastructure to provide for their own local researchers. The GPU servers have either two or four of the latest NVIDIA GPUs, so I think in total we have about 44 GPUs that we're providing into the system; as I've already said, they're a mixture of A100s and A40s. We also needed licences for the virtualisation, so we use the two different kinds of licence that NVIDIA provides for the two different types of server. The vCS licences basically allow you to split things up in a more efficient way than the free way of doing it. You still split the card up based on memory, but if one virtual machine is not using the GPU at a given time, the other virtual machines that are using it can basically steal that unused compute resource.
Whereas in the previous way that we've done the virtualisation, MIG, it's all statically split up, so if someone is not using the compute, it just goes to waste essentially. So the virtualisation licences make for more efficient use of the GPUs. Now, I should say we don't at the moment have all the servers in place: due to COVID, global supply chain issues, etc., there have been delays in procuring the infrastructure. We do have some of it in place, and the rest will be coming soon. So, we've got a reservation system set up to try to provide fair and equitable allocation of the resources and make sure they get used efficiently. You can also use the reservation system for things like training: say you have a training course in machine learning and you need to reserve a number of virtual machines with GPUs for a couple of days, you can do that through the system. We've designed the different flavors, as I just described, based on what we think researchers' requirements are, and on the obvious ways to split up or virtualise the infrastructure that are supported by the NVIDIA virtualisation licences. And it's probably worth pointing out that these GPUs are the latest and greatest kind. You can't actually get access to them on any of the standard big platforms, Amazon, Google, et cetera, in Australia yet. You can get them overseas, but in the Australian availability zones they don't have these size flavors for these GPUs at the moment. Okay, so that's pretty much it. I should say that we had a pilot launch of the system in August with a small number of users to test it out, and it seemed to be working fine, so we launched it as a beta release in September, about a month ago, with only a small amount of resources.
As I said, we've struggled to procure some of the infrastructure in the timely fashion we were hoping for, but we will gradually add to that capacity over the next couple of months. We added four new GPU servers last week, so we have six GPU servers, about 24 GPUs, at the moment. We expect to get six more in the next month and to have all of them by the end of the year. So by the end of the year, given the way we want to do the virtualisation, at least initially, we'll have roughly 250 virtual GPUs available for use. Quite a significant resource for people to access. As I said, they're available to any project that meets the national merit criteria. And some of our nodes will also have allocations for projects that don't meet the national merit criteria but can be used by their own local researchers, because they paid for them. It's worth pointing out that this is a first pass. The flavors that we have, the limits such as the two-week maximum, etc.: this is all an initial setup based on what we think is wanted from early discussions with users, but we will review it periodically. We expect, for example, that we will need to change the mix of flavors. As I said, each hypervisor can only have virtual machines of a single flavor, so we're starting with a mix of them, but we may find that most usage is for particular kinds of flavors, in which case we will provide more of those and fewer of the others. That's something we'll learn as we go along. So we are looking for feedback on the service; please let us know if you have any good suggestions for what we could do differently or better.
So these are the participating nodes. ARDC has invested in the infrastructure, and the nodes have provided co-investment in this infrastructure as well. The ones on the left are the University of Tasmania, Monash, Intersect in Sydney, and QCIF in Brisbane, which has most of the Queensland universities as members. Swinburne and Auckland also have their own GPUs that they're expected to add into the system for their own users shortly. So I think that's pretty much it for me. I will hand over to Sam now to give us a demo, so I'll stop my screen sharing. Yep, Sam is here. Thanks, Sam. Just while we're doing the changeover and Sam is setting up: we did have a question in the chat from Afnan, who asked, is this only applicable to GPU allocations or for other Nectar users as well? I'm not sure what you mean by that. I was referring to the service units and that kind of structure. Yeah, no, service units are for everything. Whatever you want to use on the Nectar Research Cloud, you need to specify a service unit budget; it's just that GPUs have a higher service unit cost than the standard vCPUs. So would it be wrong to say that the whole structure has been modified, so that everything now will be based on service units? It had already been modified a few months ago to be based on service units, in anticipation of new services such as this one, and others that would really need service units to be able to function at all. So yes, it was set up like this a few months ago; what's new is that we've now got these standard GPU flavors with associated service units attached to them. Thank you. I see there's another question which I might as well try to answer while Sam's setting up. So yeah, all of that stuff is up to the user. We're just providing infrastructure as a service, so it's up to you to provide the software, and you can run essentially whatever OS you want.
We provide standard default virtual machine images for certain operating systems, including Ubuntu and a couple of others, I think; others might want to chip in and correct me here. But yeah, you can run pretty much anything you like except Windows, because of licensing issues. I was just going to mention that for the GPUs, we provide an Ubuntu image that's hooked up into our licence service, because you need a licence to run the instance. Yep, that's the NVIDIA licence. But what about a licence server? Things like Metashape, where you've got your own floating licences: can this connect back into your institution to pull a licence from that licence server, or are there blocks in terms of ports, etc.? That's probably on a per-institution basis, so I'm not sure exactly how that would work. Yeah, it should be okay; we do have some sites using licensed software. That's something you'd have to talk to the node about. So if you're going to run it in Melbourne or QCIF or wherever, you'd have to talk to them about it. There's no problem in principle with opening ports to a licence server or whatever; it's just that sometimes there are gotchas in the licence, like, if it's a University of Melbourne licence, you must run it on University of Melbourne infrastructure, stuff like that. So in principle it's doable, but the details you'll have to sort out with your institution and the node that you want to run the GPUs on. Thanks. All right. My demo is going to be maybe a bit boring for people who are familiar with the dashboard, but if you're not, then maybe you'll get something out of it. I'm just going to go through some of the changes that we've made, particularly around the reservation system, and how you would go about getting a GPU instance. The first thing is you'll need an allocation; we don't give this to project trials.
So you'll need to either request a new allocation or update your existing allocation. I'm not going to go through this in full, but essentially you'll see there's a new section here, reservation service. You'll need to enable that and ask for a certain number of simultaneous reservations and how many days you want to reserve in total, and say whether you want access to the GPU flavors or the huge RAM flavors. So that's your first step: go get your project some reservation quota. Once you do have reservation quota, you can go into our new tab here, Reservations, under the Compute tab. It's very similar to anything else in the dashboard. I've already created one, but I'll talk about that later. We'll go and create a new one: Create Reservation. We've got some information here about what your quota is. I've already created one reservation for six days, so you can see there what I've used and what I've got available. And then down here, we've got all the flavors that we support at the different sites. You can filter by whether you just want a huge RAM type or a GPU type, or if you want something specific; say you want a Monash GPU, you can see here Monash is supporting three of the flavors. You'll see the flavors are available to schedule or reserve from today right up until the first of December. At the moment we're not allowing people to reserve too far into the future, just because we might want to change the flavors, and we'd have to reconfigure the hardware and things like that. But essentially, at the moment, all of our flavors have plenty of capacity, because they're all showing up as available. So it's not that exciting; although, oh, you can see these flavors here aren't available for as long as these ones.
You click on the time period you want. So if I want to get it starting from now, maybe I'll do a Monash G2 small, and I can start it from now. I've got enough quota; actually, I'm going to change this, because I want to make sure I have a little bit more quota. Say we'll run for four days, and tick reserve. That's going to go and create a reservation for you, making sure the hardware's free, and you'll notice it goes into Allocated status. That just means the reservation has been allocated to you, and when the time comes to start that reservation, it'll change to Active. You can see here I've got a reservation that is active already. Once the reservation is active, you'll get an email to remind you that it's active, and then you can launch. So once the reservation becomes active, we can boot an instance. Come in here, call it test, and we're going to choose the NVIDIA GPU image that we have. That's the one that's hooked into the licence server, as I mentioned before, and it has a few libraries installed. We're still open to adding more stuff to this image if we need to; if you think it's not as useful to you as it could be, let us know and we can add more things to the image. And then this is where the reservation comes in: once the reservation is active, a new flavor is created for you and it's dedicated to you. It will be called reservation-, and then a UUID. I'll launch it first, with that reservation flavor; you can see its specs are what I chose. We'll just launch that. It's building now; we'll come back to that.
You can check here: if you click on the reservation, it'll tell you what the name of the flavor is going to be, the specs if you need them, and when it's reserved to and from. All right, so we've got our GPU flavor. The one other thing you can do, once your reservation is active, is extend the reservation, if you need it for a bit longer. All flavors have a maximum time that they can be reserved for at a time; I think this reservation here can be reserved for 14 days at a time. But because I've already been using it for a day, I think, when did I do this? Yesterday? Yeah, I can extend it out a few more days. I'm coming up against some quota here, as you can see, but I can push it out another nine or ten days. From a dashboard point of view, really the only other function is to delete a reservation. If you're not using a reservation, then please delete it early, so it frees up that capacity for other people. So yeah, that's me. I guess we can open up to questions, and Sonia can take over, if I haven't forgotten anything. Thanks, Sam. I'll just quickly talk to some of the support things we have available to you as a user of this service and of the Nectar Cloud, and then we'll launch into questions. Actually, I did forget one thing that's quite important. Once the reservation ends, it will delete the instance. You'll get a warning email before that happens. How far in advance the warning comes depends on how long the reservation is: a quite long reservation might give you a week's warning, and a shorter one might give you a day's warning.
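Sam's rule of thumb for the end-of-reservation warning email can be sketched as follows. The exact thresholds here are illustrative assumptions, since the talk only says long reservations get about a week's notice and short ones about a day:

```python
def warning_lead_days(reservation_days: int) -> int:
    """Days of notice before the instance is deleted at reservation end.

    Assumed thresholds for illustration: two-week reservations get
    about a week's warning, shorter ones about a day. The real
    service's cutoffs may differ.
    """
    return 7 if reservation_days >= 14 else 1
```

Whatever the exact lead time, the warning is the cue to move anything you need off the instance (or onto a volume) before the deletion happens.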
So we really recommend that if you're generating data, you write that data to a volume. Then when the instance is deleted, you keep your data on the volume. Should I answer this question, or do you want to go? Go for it. From a service-unit usage point of view, we will charge you from when the reservation starts, even if you don't boot an instance, until either the reservation ends or you delete it. So to answer your question: if you don't use all of it and you delete it early, then yes, you won't be charged for the remainder. And yes, you can use any image you want with the system. With a GPU flavor we just want to make sure it can get the license, which is why we provide an image. It's quite tricky, because the drivers in the image need to match up with the drivers on the hardware, and the licensing as well, so I think it would be quite difficult for anyone to do it themselves, though the power is there for advanced people to do so. And if the image we provide isn't suitable, we can help you there, or make a different image, and things like that as well. So I'll hand over to you, Sonia. Thanks, Sam. I'll just bring back some slides. Do you want me to stop sharing? No, I think we're all good. So just a quick final wrap-up on what we can do for you from a user-support perspective, both for this service and anything else on the Nectar Cloud. We have a range of support articles covering this topic and connected topics, as well as things like launching an instance.
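To pin down the charging rule Sam just described: service units accrue from the moment the reservation starts, whether or not an instance is booted, and stop at the scheduled end or at an early deletion, whichever comes first. A minimal sketch of that rule (the dates are arbitrary examples):

```python
from datetime import datetime

def charged_hours(start: datetime, end: datetime, deleted_at: datetime = None) -> float:
    """Hours charged for a reservation: from its start until the
    scheduled end, or an early deletion if that comes first.
    No instance needs to be running for charging to apply."""
    stop = min(end, deleted_at) if deleted_at else end
    return max((stop - start).total_seconds() / 3600.0, 0.0)

start = datetime(2024, 5, 1, 9)
end = datetime(2024, 5, 5, 9)                       # a four-day reservation
print(charged_hours(start, end))                    # 96.0: fully used
print(charged_hours(start, end,
                    deleted_at=datetime(2024, 5, 3, 9)))  # 48.0: deleted early
```

This is why deleting an unused reservation early matters twice over: it frees the hardware for other users and stops the clock on your own service-unit usage.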
If you haven't done that before, we have detailed tutorials and articles about it. We have a support team available to answer tickets between 7am and 6pm Australian Eastern Daylight Time, and also an online live chat function if you want to talk to someone in real time and ask questions. So we are definitely here to help with whatever your needs are. Specifically for the GPU service, we have documentation on how to make a reservation, which is basically what you witnessed in the demo, and then on how to use the GPU or large-memory resources: how to launch that instance and then dive in and use it. And for those tables you saw with the specs of each of the flavors we're offering, we have detailed specifications on our Flavors page. We will send a link to all these articles, and at some point a link to the recording, after this session has concluded. So now that we've covered what we wanted to cover today, we're into question time. If there are any other questions, feel free to unmute or write in the chat and we can address them as they come up. Over to you, everyone. Laura asked in the chat: is there any tutorial on how to log into the instance using SSH? Yes, we have a generic "how to access your instance via SSH" tutorial; I can include that link in the post-event email for everyone. With the reservation, say I reserve something for Thursday, and I get hit by a bus on Wednesday, or something less drastic, and I'm unable to do anything on that Thursday. Because it's reserved and taken, and for some reason I can't delete it or do anything with it, that reservation time gets used up and I've lost those credits? Yes, that's right. Hopefully you don't get hit by a bus. Yeah, I said it could be something less drastic. There's a question from Rosalind about national merit allocation; I'll answer that one.
So in the user guide there is a document specifying the allocation policy, which has the details. But essentially, national merit is two things. You either have a competitive research grant, or you have a project that's supported by the ARDC, like an ARDC Platforms project or some other ARDC project, or by one of the other NCRIS facilities: National Collaborative Research Infrastructure Strategy facilities, like the ARDC, that support national research infrastructure. Those are the two main criteria. There are a couple of others where the National Allocation Committee may support you; for example, if you're hosting, or want to host, a service that supports a large number of national researchers, that's another criterion. But essentially, if you have a research grant, or an industry grant where you're funded to do work with industry, then you meet the national merit criteria. And if you don't meet the national merit criteria but you are a researcher at one of our node institutions or node members, then you can talk to them and they may be able to provide you with an allocation as well. I've popped the link to the National Allocation Scheme policy in the chat; the section on the merit criteria is 7.1. Thanks, Jo. Do we have any other questions? We do. Esma asked in the chat: is the University of Adelaide or La Trobe University a node? The University of Adelaide, yes, they are associated with Intersect. La Trobe I'm not sure about, actually; I'll have to check and get back to you on that one. We can include that information in the post-event email. But again, even if your university is not associated with one of the Nectar nodes, as long as you meet the national merit criteria, you are still able to use the infrastructure.
Do we have any further questions from the crowd? If you think of questions later, after the fact, there's a link in the chat to our support site where you can simply submit a ticket if something comes up. Sorry, I have a question. Oh, go for it. In regards to adverse events: in the case that there's a brownout or outage on the server currently being run, or some other kind of technical issue, what happens with regards to credit for the period the system is down, or for cases where things take several days to run and it crashes at the end? Will you be refunded for the time the server is down, or for the preceding days it took to run the model? Yeah, that's a good question. Look, I think we can sort something out there. As I said, you can always ask for more quota at any time. If you submit a new quota application and note that something went down or there was some issue, then it wouldn't be a problem for the allocation approvers to give you more quota to compensate for that, basically. Excellent, thank you. Also, Jo says in the chat that La Trobe is a member of Intersect; thank you, Jo. So, opening up to any final questions, final call. As it seems we don't have any further questions, I'm going to cap off the session here. But again, if anything comes up in the next few seconds, feel free to jump in. Wayne, did you have another question? Oh, sorry about that, Wayne, do you want to put it in the chat? Unfortunately I can't unmute you. Ah, looks like we do have another question. Can you have multiple workflows? There are situations where you'd want the desktop experience so you can look at your models, deform them, do some work with them, then take that and send it through to be computed.
So is there an easy way to send a workflow from the desktop experience through to the compute side, or would you just have to make two reservations and spin them up as you need them? You should be able to do both on the one virtual machine. You can set up a virtual desktop interface to the virtual machine, and you can also obviously run compute on it, so you should be able to do both with your single reservation. Yeah, but because you've got the A40s and the V100s, there are different levels of compute. Say you're looking at something heavy that's going to require more grunt: an A40 slice of, say, four gig may take 10 hours to do the compute, whereas a V100 may only take two hours. So you're trying to plan your allocations: over the entire course of the project I'm only going to spend so many hours, by maximising the compute capability of each of the different cards. Ah, right, I see what you mean. Yeah, if you want to do compute and then do a visualisation of something, that would be an issue. Two reservations, wouldn't it? Yeah, two reservations. You'd get two different instances running and copy between them? Yep. Okay, cool. One thing you could do: use a volume, boot from the volume on the visualisation flavor, do your visualisation, then delete the instance, boot again from the same volume on the A100 and run it; things like that, maybe. Yeah, okay, cool. Thanks. Okay, seeing as we're getting close to time, I'm going to cap off the session here and thank everyone for coming today. I'll also thank our presenters, Paul and Sam, and again thank our participants for being active. Appreciate it. So thank you so much, all, for coming.
Again, if you have questions, feel free to go to our support site. There will be a follow-up event email with the resources mentioned, the links, and at some point a recording once it's processed and uploaded. Otherwise, thank you so much for coming.
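As a footnote to Wayne's A40-versus-V100 question: the trade-off he raises can be made concrete with a back-of-the-envelope comparison. The runtimes below come from his example (10 hours on an A40 slice versus 2 hours on a V100); the per-hour service-unit rates are invented purely for illustration, so check the Flavors page for the real figures.

```python
# Hypothetical per-hour service-unit (SU) rates; real rates are on
# the Nectar Flavors page.
rates = {"a40-slice": 1.0, "v100": 4.0}

# Hours the same job takes on each card, from Wayne's example.
runtimes = {"a40-slice": 10, "v100": 2}

def job_cost(card: str) -> float:
    """Total SU cost of running the job to completion on one card."""
    return rates[card] * runtimes[card]

for card in rates:
    print(card, job_cost(card))
# With these made-up rates the V100 finishes 5x faster AND costs
# less overall (8 SU vs 10 SU), so reserving the bigger card can
# pay off for compute-heavy work.
```

The general point stands regardless of the actual rates: what matters for your quota is rate times runtime, so a faster card is not necessarily the more expensive choice for a given job.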