Right, I'll get started. Thanks for coming, everyone. My name is Paul Coddington. I'm the Associate Director for Research Cloud and Storage at the Australian Research Data Commons, or ARDC, so I'm responsible for the Nectar Research Cloud, and that's what I'm going to talk to you about today. If you have any questions, please put them in the chat, or wait and we'll take questions at the end.

First, I'd like to welcome, acknowledge and celebrate the first Australians on whose traditional lands we meet, and pay our respects to the elders past and present.

Let me very briefly give you some context for the Nectar Research Cloud. The Nectar project started about 10 years ago with the aim, as its name implies, of providing research tools and resources to support research collaboration. It had two components. The first was a national research cloud, which at the time was quite innovative: cloud was quite a new thing, and there was actually no commercial cloud infrastructure within Australia when the research cloud was started. The idea was to provide an efficient and simple way of delivering compute and storage resources to researchers in a self-service way, making collaboration beyond institutional firewalls easy, for example. The second part of the Nectar project was the virtual laboratories program, which supported the development of domain-oriented online environments, many of which were hosted on the research cloud. Nectar was a facility of NCRIS, the National Collaborative Research Infrastructure Strategy, and the 2016 National Research Infrastructure Roadmap for NCRIS called for the merger of three existing projects: Nectar, the research data storage project RDSI, and the ANDS project.
So that happened, and the ARDC, the Australian Research Data Commons, was born in 2018. ARDC now supports the Nectar Research Cloud through its storage and compute theme, and it continues to support the idea of virtual laboratories through its platforms program. ARDC is also part of a broader group of nationally supported e-research infrastructure providers, which includes NCI, Pawsey, AARNet and the AAF, providing nationally coordinated e-research infrastructure.

Okay, so what is the Nectar Research Cloud? The main concept is that it's a federated model of providing cloud infrastructure: a partnership between a number of institutions and research organizations who cooperate to provide a federated national cloud infrastructure using open source tools and technologies, in this case OpenStack, which is essentially the de facto standard for open source cloud infrastructure. So we have a number of partners that cooperate to provide a national OpenStack research cloud. The Nectar cloud has some stand-out features. It's a national federated research cloud, which requires national standards around how the cloud is operated, so all the different organizations that contribute can do so in a standardized way: researchers see a standard interface, standard APIs, and standard policies and processes across all the partner institutions that make up the cloud. We wanted to make access easy, so people can just log in using their institutional user account through the Australian Access Federation, the AAF. It's a self-service model. It's open source, so we can tweak things how we like, and it's specifically customized for research use. We provide expert user support at both the national level, through the ARDC, and at the local level, through what we call the nodes of the cloud, the partners that provide the federated cloud infrastructure.
And in particular, it's a very cost-effective way of providing cloud infrastructure, and the federated model leverages co-investment from our partners in return. Although it's a federated model made up of infrastructure at different sites, it looks to the user like a single national cloud, with a web dashboard and standard APIs. Some of the sites, or nodes, in the cloud have national infrastructure funded by ARDC for nationally prioritized allocations; some also provide their own locally funded infrastructure that's prioritized for their local users; and many provide both. So a user can ask for infrastructure at a particular node of the cloud, or they can just say "I don't care, please run my VM wherever" and the system will do that.

The operation of the cloud is similarly federated. There's a central team within ARDC, what we call our core services team, that runs the central core services for the cloud: the dashboard, the standard APIs, authentication, the image repository, and an app store that makes it easier to deploy certain applications on the cloud. So there are a number of standard centralized things required to operate the cloud that ARDC runs, and then there are the federated services that are supported by our node partners and the ARDC. In particular, all the actual hardware, the compute, the volume storage and the object storage, is hosted at our partner sites. In the brackets here you can see the names of the OpenStack projects, the components that make up the dashboard, the authentication mechanism, the compute, the volumes and so on. There are also a number of more advanced OpenStack features, things like advanced networking and load balancing, that allow projects to set up high-availability services that might cut across multiple virtual machines, or even multiple nodes and multiple sites.
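The bracketed names on the slide follow standard upstream OpenStack naming. As a quick reference, the services mentioned map roughly like this (these are the standard OpenStack project names, not anything Nectar-specific):

```python
# Standard OpenStack project names for the services mentioned in the talk.
# This reflects upstream OpenStack naming, not Nectar-specific branding.
OPENSTACK_COMPONENTS = {
    "dashboard": "Horizon",
    "identity / authentication": "Keystone",
    "compute": "Nova",
    "image repository": "Glance",
    "volume storage": "Cinder",
    "object storage": "Swift",
    "advanced networking": "Neutron",
    "load balancing": "Octavia",
    "container orchestration (Kubernetes)": "Magnum",
}

for service, project in OPENSTACK_COMPONENTS.items():
    print(f"{service}: {project}")
```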
So if you want to run a high-availability service in the cloud, you can, and there are higher-level services like Kubernetes, through the OpenStack Magnum component. We're also just now trialling preemptible, or spot, instances in the cloud. So there are a number of these higher-level services that the cloud provides. And of course we also provide user support, a help desk, tutorials and training, in collaboration with our partners.

So how do you use the Nectar cloud? Basically, the cloud can be used by anyone who has an AAF account. You just log into the Nectar Research Cloud dashboard, your account is set up automatically, and you're ready to go. You tick the terms of use once, when you first log in, or whenever it changes, which it has done recently. You automatically get a small amount of resources, what we call a project trial, and you can start using it straight away to try things out. If you want a longer-term or larger amount of infrastructure, you have to apply for it: you fill in an online form in the dashboard and ask for a project allocation, which can be for three, six or twelve months. Those project allocation requests are reviewed by an allocation committee against a set of criteria for what we say yes to, and we aim to assess and make decisions on them within a couple of weeks. Users can request an amendment at any time: ask for more, ask for less, or ask for an extension of time. There's no direct cost to the researcher. Obviously it's not free, someone is paying for it: ARDC typically covers the capital expenditure, at least for the nationally prioritized allocations, and our nodes cover the operational expenditure. So there is typically no direct cost to the researcher for using the research cloud. One important distinction here is that there are two categories of cloud infrastructure and project allocations, what we call national and local.
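As a minimal sketch of the "run my VM wherever" idea mentioned earlier: in OpenStack terms, a user either names an availability zone (a particular node) or leaves it unset and lets the scheduler place the instance. The helper and zone name below are my own illustration, not a real Nectar or OpenStack API:

```python
# Illustrative only: assemble the parameters for a server-launch request,
# optionally pinning it to a particular node (availability zone).
# The function and the zone name are hypothetical, not Nectar's actual API.
def build_launch_request(name, image, flavor, availability_zone=None):
    request = {"name": name, "image": image, "flavor": flavor}
    if availability_zone is not None:
        # Pin the VM to a specific node of the federation.
        request["availability_zone"] = availability_zone
    # With no zone set, the cloud's scheduler picks any site with capacity.
    return request

# Pin to a (hypothetical) zone:
pinned = build_launch_request("my-vm", "ubuntu-22.04", "m3.small",
                              availability_zone="melbourne-zone")
# Let the scheduler decide:
anywhere = build_launch_request("my-vm", "ubuntu-22.04", "m3.small")
```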
So the national infrastructure is the infrastructure funded by ARDC through its NCRIS funding, and it's accessible to nationally prioritized projects. As I said, nodes also add their own infrastructure that they pay for, and obviously they get to decide how that is allocated. Projects are eligible for a national allocation, using the ARDC-funded infrastructure, if they meet essentially one of three criteria. First, they have a national competitive research grant, like an ARC or NHMRC grant. Second, they are supported by, or are part of, an NCRIS facility. That obviously includes ARDC, since we're an NCRIS facility, so if you have a project that is funded by ARDC, you are automatically eligible for a national allocation for the lifetime of that project's ARDC funding. Now, we don't guarantee this, but our aim is certainly to continue to support those projects once the ARDC funding finishes. So if, for example, you're a platforms project funded for two or three years by ARDC to develop or improve a platform, we'll certainly give you a national allocation on the Nectar Research Cloud to support that, if you want one. And once the project finishes and the ARDC funding ends, you're still providing an important national service, hopefully an important national platform, so we aim to continue to support you on the Nectar Research Cloud with a national allocation beyond that time as well. We're certainly doing that at the moment with a number of projects that were funded by Nectar, for example to develop virtual laboratories, and that are still operating those virtual laboratories as national platforms. Third, there's a catch-all category with a number of criteria covering other reasons why we might still support a national allocation, so you can get one in certain circumstances even if you don't meet the first two criteria.
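A rough sketch of those eligibility routes, as described above. The function and field names are mine, purely illustrative; this is not ARDC's actual assessment logic:

```python
# Illustrative only: the three routes to a national allocation described
# in the talk. Names are hypothetical, not ARDC's actual process.
def eligible_for_national_allocation(project):
    # 1. Holds a national competitive research grant (e.g. ARC, NHMRC).
    if project.get("has_national_competitive_grant"):
        return True
    # 2. Part of, or supported by, an NCRIS facility (ARDC included).
    if project.get("supported_by_ncris_facility"):
        return True
    # 3. Catch-all: other circumstances approved by the allocation committee.
    return bool(project.get("committee_approved_special_case"))

print(eligible_for_national_allocation({"has_national_competitive_grant": True}))  # True
print(eligible_for_national_allocation({}))  # False; such a project may still
                                             # get a local allocation via a node
```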
Now, if your project isn't eligible for a national allocation, if it doesn't have a national research grant, for example, then if you're associated with one of the nodes of the Nectar Research Cloud, they may provide you with a local allocation, but you need to make that arrangement with the node.

Okay, so the cloud has been around for quite some time and we have a lot of users. We have more than 1,700 research projects using the cloud at the moment. At any given time, there are more than 7,500 virtual machines running in the Nectar Research Cloud. It supports hundreds of services hosted on the cloud, with, we estimate, more than 50,000 users of those services, both in Australia and worldwide. There are some examples there of some of the larger users of the cloud: a number of NCRIS facilities and a number of virtual labs or platforms. Now, not all of these use only the Nectar cloud; some may use commercial cloud or other infrastructure in addition, while some use the Nectar Research Cloud solely for hosting and operating their service. Drilling down a bit more into those numbers: at any given time, we have over 50,000 virtual CPUs in use on the Nectar cloud, four petabytes of storage, and about 2,000 users running virtual machines. And last year, in total, about 3,500 people actually fired up virtual machines in the research cloud. Now, it's hard for us to figure out how many people are actually using those machines. A common circumstance, for example, is that one person in a research group fires up virtual machines and several people in the group use them; we can't easily track those. And of course, virtual labs, platforms and NCRIS capabilities are running services that may have hundreds or thousands of users as well.
We've had more than 18,000 registered users in the cloud since it started in 2012, and we get about 200 new users signing up every month. We've supported over 4,000 projects since the cloud started, with, as I said, more than 1,700 currently active. We supported over 700 research grants just last year, and a number of virtual labs, Cooperative Research Centres, Centres of Excellence and so on over the time the cloud's been running.

Just briefly, in terms of the strategy for the cloud, because it has changed a little since ARDC took over running it: first of all, ARDC is committed to continuing to support the Nectar Research Cloud. We've spent four and a half million dollars to refresh the infrastructure at the sites, and we've committed to spending $3 million a year for at least the next couple of years on additional infrastructure and development of services in the cloud. In the past, Nectar really was just a basic infrastructure-as-a-service cloud, but the strategy with ARDC is to expand that: to provide higher-level, platform-as-a-service offerings as well; to prioritize ARDC activities, particularly ARDC-funded projects like the platforms projects; and to prioritize more innovative, leading-edge infrastructure, things like GPU servers, very large memory servers, and other types of services and functionality. Part of that is developing national services, for example supporting containers and Kubernetes. We've started coordinating a national collaborative project to do that, which I'll talk about in a minute, along with standard approaches to supporting analytics platforms like Jupyter. We're also partnering to develop approaches for supporting commercial cloud; again, I'll talk a little about that in a minute.
We're also trying to make the expertise we have within our core services team available externally, for consulting or assistance, particularly for ARDC-supported projects. And we try as much as we can to align with what's happening internationally. For example, in the European Open Science Cloud they have the EGI Cloud Federation: again a federated OpenStack model, similar to what we're doing in Australia, but at a much larger scale, obviously.

I'll just briefly step through some of the things I've mentioned. First, refreshing the cloud. One of the problems with the research cloud over the last few years is that we haven't had significant funding to refresh the old infrastructure, which we do now. Much of the central services infrastructure has already been refreshed, with more to come later this year, and the nodes are all having their cloud infrastructure refreshed. At the University of Melbourne node that's already happened. At the Tasmania node the infrastructure is in place and testing is happening now, so that should be available within the next couple of weeks. Similarly at the Monash node, some of the new infrastructure is already up and running and the rest should be available in the next week or two. The New South Wales and Queensland nodes will have new infrastructure coming online in the middle of the year. Once that's all done, the capacity in the research cloud for national allocations will almost double: we'll be able to support at least 42,000 virtual CPUs, with a significant increase in storage as well.

But there's more. That's just refreshing the existing capacity. As I said, we're also spending $3 million a year, this year and for the next couple of years, on providing more infrastructure and new services, and the aim there is to prioritise support for ARDC-funded projects like the platforms projects. So this year we're prioritising the 2019 platforms projects.
So for example, there are the ones that have already been using the cloud for a while, the BioCommons, Galaxy Australia, EcoCommons, the Characterisation Virtual Laboratory, and a number of new ones: the drones platform, imaging, machine learning and so on. It won't be surprising that a number of these, the imaging, characterisation and machine learning platforms, need lots of GPUs, and some need high-end infrastructure such as large-memory machines. So that's been the primary focus of the new infrastructure we're funding this year: lots of GPUs, a number of very large memory servers, and some generic additional compute and storage capacity to meet the requirements of all these new platforms. This new infrastructure should all be deployed by roughly the middle of the year, June or July. And we have more to come in the next couple of financial years. If you're part of a 2020 platforms project, you'll be aware we've already started gathering your storage and compute requirements through a survey; we'll have follow-up meetings shortly to see how we can assist with providing the infrastructure you need for your project.

I'll just briefly go through some of these. The GPUs first. We do already have some GPUs in the Nectar Research Cloud, but they're not offered as a national service; some nodes have GPUs that they provide to some of their local users. The current round of infrastructure is essentially to provide GPUs that are dedicated to particular platforms, so you can access them through those platforms, through the machine learning platform or the characterisation platform, for example. Next financial year we're looking to expand that, to provide more generic GPU infrastructure-as-a-service that can be used by any project: you'll be able to apply for the use of GPUs and reserve some capacity.
We've also implemented the ability to virtualize GPUs, so some of the higher-end GPU cards can be sliced into smaller, more digestible chunks, I suppose.

The other project we're working on at the moment is essentially about making the cloud easier to use: providing virtual desktop infrastructure, so people can log into the cloud just as if it's a virtual desktop that happens to have a bit more capacity. And of course there's been a huge uptake of things like Jupyter and RStudio for data analysis. People can and do run Jupyter and RStudio in the cloud, but we want to make it a lot easier for them to do that, with a simple web interface, for example. Some of the platforms projects already do this; we want to provide it as a general service that can be used by anyone, including platforms projects that want to incorporate, say, a JupyterHub service or virtual desktop infrastructure into their platform. We've been working with some of the projects, particularly the EcoCommons project, on essentially tweaking what they've already done so that ARDC can support it. We're looking to have a pilot around the middle of the year, with a production service for both virtual desktops and Jupyter by the end of the year.

I mentioned containers and Kubernetes briefly. The use of containers on the cloud, and of Kubernetes for container orchestration, has been growing rapidly, so we're looking to provide national coordination of support for containers and Kubernetes across the various e-research platforms in Australia: not just the Nectar Research Cloud but NCI's and Pawsey's clouds, local infrastructure, commercial cloud and so on. The aim is to allow people to containerise applications and run them on different platforms or different clouds.
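As a toy illustration of the GPU slicing mentioned above: a virtualized card's memory is divided into fixed-size virtual GPU profiles. The card size and profile size here are made up for the example, not Nectar's actual vGPU offerings:

```python
# Toy illustration of virtual GPU slicing: a physical card's memory is
# divided into equal fixed-size vGPU profiles. Sizes are hypothetical,
# not Nectar's actual offerings.
def vgpu_slices(card_memory_gb, profile_gb):
    if card_memory_gb % profile_gb != 0:
        raise ValueError("profile size must evenly divide card memory")
    return card_memory_gb // profile_gb

# A hypothetical 40 GB card sliced into 10 GB vGPU profiles:
print(vgpu_slices(40, 10))  # 4 virtual GPUs per physical card
```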
So we've set up the ARCOS project, the Australian Research Container Orchestration Service. ARDC is coordinating this, but it's a national collaboration of pretty much all the main e-research providers in the country. We've set up a working group, which you're welcome to join if you're interested in using containers or Kubernetes, plus a couple of specific working groups around technical issues and around container registries and things like that. We've been gathering requirements from projects and e-research infrastructure providers and working on implementing solutions, to provide a more standardised approach to how we do this nationally and how we support it. So please feel free to join these things if you're interested or you want help.

The other thing we'd like to do, and we've been thinking about this for quite some time but still haven't settled on an approach, concerns commercial cloud. The research cloud is great for lots of things, but commercial cloud is also great for lots of things and can provide things we can't. People do use the commercial cloud as well as, or instead of, the Nectar Research Cloud, and for some use cases there are real advantages in doing so. So we want to make it easier for people to do both, and we've been exploring approaches for that. We're still figuring it out; we're hoping to focus on this in the next financial year as part of our planning. We can't currently give people much help with using commercial cloud alongside the research cloud, because we don't have a finalised strategy for how we're going to do that, but the hope is that we will within the next year. So please talk to us if you're interested or have particular use cases.

And just finishing up, so we have some time for questions: if you are engaging with the Nectar cloud, here are some of the people involved.
Carmel Walsh is my boss; she's the Director of eResearch Infrastructure Services in ARDC, which encompasses both the Nectar cloud and the data retention project. I'm responsible for the Nectar Research Cloud. Joe Morris is the user support manager for the cloud. Senghor is starting on Monday as our new cloud services development and operations manager, replacing Wilfred Brimblecombe, who some of you may know, who retired at the end of last year. Sam and Andy are our technical leads in the cloud and have a lot of experience with OpenStack. Kieran is the ARCOS technical lead, around Kubernetes and containers. Then there's the rest of the core services team: Adrian, Jake, Jacob, Stephen, Rocky. Most of the core services team are located in Melbourne, some of us are in Brisbane, and I'm in Adelaide, so it's a pretty distributed team just within ARDC. We also have the node operators at the different node sites around the country and their support staff, who also help organizations and projects that are using the Nectar cloud. And we're just recruiting a cloud skills specialist to ramp up the focus on skills and training in the research cloud. We have a distributed help desk too, staffed by node staff around the country.

For more information on how to use the cloud, you can go to the ARDC website, where you'll find information about the Nectar cloud, and we have a support site and an email address for the help desk. The best approach if you have questions about the cloud is to lodge a ticket with the help desk, and you'll be directed to the relevant person to answer your question. And you can just go to the Nectar dashboard, log in, and start using it if you wish. So that's it for me. Thanks very much. That's a brief overview of the Nectar cloud and how to use it.