Well, welcome all. Thanks for coming along. Today I'm presenting a somewhat higher-level user story. The Nectar project has been running a cloud program for quite a while now; we were really early adopters of OpenStack and have been running it for quite some time. This talk is going to be more about the research uses of the cloud: how research communities are using and engaging with the cloud, and our experiences of bringing those communities on to actually use it and to bring their software infrastructure and software applications onto it. I'll talk a little about the Nectar cloud that we've built and been operating for some time now. There are a number of people from Nectar here at the conference who can certainly answer much more detailed questions about architecture, operations and how we've been running the infrastructure. My name's Glenn Maloney. I'm the director of the Nectar project.

So, a little bit of background. Why is a research community building a cloud? Part of it is that there's been a strong history of investment in research infrastructure in Australia through what's called the National Collaborative Research Infrastructure Strategy. The view is that you can't get everything you need efficiently just by giving money to individual universities to build their own stuff; some infrastructure needs to be coordinated, deployed and run on a national scale, and that's the program we were born out of. There was a round of investment from 2009 to 2014 that included a number of investments in e-research capabilities. E-research is advanced IT infrastructure that supports research needs and is particular to the needs of research communities, and Nectar is one of those. You can see there, in bold, that we got $47 million out of that round of funding, but there were other very substantial investments as well. This was a very big surge of investment by the Australian government, relative to the size of the Australian economy at the time. It was motivated partly as a response to the global economic downturn, the financial crisis, and it was part of a suite of stimulus investment by the Commonwealth, so research got a share of that as well.

The key thing is that there were investments in the traditional e-research infrastructure: high performance computing, the supercomputing infrastructure, and building and improving the capacity of the research networks that link all of the universities and research facilities in the country. There was also investment in a program to help improve the management of research data and to prepare the institutions in Australia and the research community to deal with the data deluge emerging out of the research sector. But Nectar was a new bit. Nectar was an investment to do something new; it wasn't more money to do things we'd done before. Partly it's a focus on software infrastructure for research, and the other major component is a cloud infrastructure as well, cloud computing for research. In the context of all of that, the rest of the infrastructure forms a layer that interoperates. They're not all discrete: we all build our projects separately, but research communities and researchers come and use bits and pieces out of all of the infrastructure that we build across all of our platforms.
And it's expected that Nectar, partly through the cloud but in particular through some of our software infrastructure investments, builds the integrating platforms that tie all the other bits together underneath. I'll talk a little more about that later, because we call those our virtual laboratories. So in a sense, Nectar sits above those other capabilities and faces the research communities directly.

This is also a slide we used when people would ask: why on earth are you building your own? Why aren't you just paying someone to run one for you? Well, there were several answers to that. One of them was that we couldn't use the type of funding we were given at the time to pay for services; we could only use it to develop infrastructure. That's not a good answer. The good answer is that we think there's a lot of value in embedding at least some fraction of the cloud computing infrastructure that supports Australian research within, and co-locating it with, the other high-performance capability: co-locating cloud with high performance computing and supercomputing, embedding it on the national research network backbones, and co-locating it with the big research data repositories, the big research instruments and so forth, and the universities. So that was our starting point.

I'll focus mostly on the research cloud. We actually ran four programs in Nectar, but we group them broadly into two categories, the software infrastructure parts and the platform parts, and the research cloud is the main part of the platform side. The motivation for cloud computing was this: there had been a long history of investment in supercomputing for research on a collaborative national scale, and that could meet the high performance computing needs of researchers, the simulation, the large-scale data analysis and so forth. But there's an awful lot of computing need in research that doesn't fit into that category, and that was being left to individual research groups and institutions to do the best they could to support communities with very diverse needs. Cloud gave us the opportunity to invest at scale in a single national platform that could support all of that disparate use of IT infrastructure by research communities in the universities. We also see it as an entirely appropriate platform for some of the computing load that was being done on the HPC facilities; it's a more cost-effective platform than the big HPC facilities for the kinds of jobs that can be well supported on it, and that frees up the big resources for the high-priority, big-science compute jobs that need to run on that infrastructure. The other thing we wanted to do was give the research community the same benefits that cloud has provided to a lot of the commercial tech startup community: the ability to innovate, to reduce the cost of rapid deployment, and to reduce the cost of failure for people who want to deploy infrastructure on the cloud. And it is, we think, a world first.
It's actually a partnership between eight different organizations in Australia. Some of them are universities; some are incorporated, member-based groups owned by universities that specialize in supplying services and infrastructure to the research communities in those universities; and some of them are the big high performance computing facilities as well. We have two of those, the national peak facilities, which also host cloud nodes, and the others are run by universities. It's a single, OpenStack-based cloud running across eight different organizations. So they're not just eight different sites: they're independent organizations operating their own IT infrastructure within those sites as well.

We essentially selected the platform we would use for our cloud in April 2011. At that stage we were trying to project which of the competing platforms we could deploy was most likely to support our cloud computing needs into the future, and we picked OpenStack. That was a risky choice at the time, but it's been borne out. A lot of the reaction from people in the sector at the time was: what's OpenStack? Why did you choose OpenStack? But it's clearly been borne out. We chose it for the direction, the governance and the industry backing that were starting to build behind OpenStack at the time.

We run a single OpenStack fabric across all of the different nodes of the cloud, the different sites, but each of the sites has quite a bit of freedom to differentiate in the way they support their research users. Despite the fact that we require interoperability at the software layers, they can deploy quite different hardware underneath. Some of the nodes specialize in higher-performance machines to support high-throughput computing in the cloud; others are focused on getting good-value compute to support more common research workloads. That's one of the nice things we've been able to use OpenStack for: ensuring we get that interoperability while allowing each of those organizations to work in different ways with that common fabric.

We've been operating since January 2012, based on the very first node, which was commissioned at the University of Melbourne, and for a long time that was the only node we had. Steven Manos is in the room, and he heads up the team that runs the cloud node at the University of Melbourne. We're up to 18,000 computing cores online now, and we're in a big rush to get to 30,000 by the end of the year. We have three more cloud nodes commissioning very significant amounts of infrastructure over the next two months. We've had some delays in getting some of those organizations commissioned, ready and set to deploy their infrastructure, but by the end of the year we'll be hitting 30,000 CPU cores. Importantly, users are continuing to come on. All we have been offering so far is infrastructure as a service, with no great value-adds above that at this stage, although we're starting to move beyond it.
We're starting to work on that, particularly through Heat and some other areas as well. What we've seen nonetheless is sustained user growth throughout that time, and the growth has really kept pace with demand; at times demand has exceeded supply and we've run out of infrastructure at various points. We're still getting up to 250 new users registering every month. If you count it collectively, we now have more users than any other research computing infrastructure in the country, more than any of the high-performance computing facilities. And we're on target, we think, to see that curve continue to grow, particularly as we start to offer the higher-value services that come around the cloud as well.

One of the keys to that was lowering the barriers to access. What we did was implement sign-on using the Australian Access Federation, which is a Shibboleth-based identity federation of all of the universities in Australia. That means researchers can just log in with their university username and password and get immediate access. On the basis of that we give them access to two cores straight away so they can start to work with it. So there are very low barriers to initial engagement and initial access, and then we have an allocations process where people can request more resources to sit behind that. That was critical to getting that early engagement and getting people comfortable to actually try it without having to submit allocation requests up front.

Importantly, what we're seeing is what we'd hoped to see: uptake across the whole breadth of the research domains supported in Australia. Compared to what you would see in an HPC facility, this is a much broader range of uptake, with benefit delivered to more research communities across the breadth of all of the institutions in Australia as well. In fact, you can see there's very large uptake by the biological sciences. They are groups that have active users of HPC facilities, but you wouldn't typically see them as dominant users there. The cloud platforms are very appealing to these groups because of the ability to deploy their own environments and the software infrastructure and tools they need.

So why should researchers use the cloud? Essentially, how does Nectar support Australian researchers to bring their research onto the cloud? What did we do to help drive that uptake and bring people across to the cloud? Well, the first thing we did, as I explained, is we built it and they came. That actually did work to a certain extent. Now, the question is whether people are using it as efficiently as they can on that basis, and whether we can do better to deliver more efficiencies; that would be great. But many research communities are perfectly capable of supporting their own use of an infrastructure like the cloud, or of accessing the IT support from within their own networks to support their use of the cloud. There was considerable skepticism initially that research communities wouldn't be able to deal with the complexity of the cloud, but we've actually found strong uptake by early-adopter groups and also by those groups that have their own embedded IT capabilities in their research communities and research centers.
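To give a flavour of what that federated sign-on involves, here is a minimal sketch of the kind of attribute mapping that has to sit between a Shibboleth identity provider and an OpenStack cloud. The talk doesn't describe Nectar's actual integration, so this uses Keystone's generic OS-FEDERATION mapping-rule format purely as an illustration; the attribute names, the group ID, and the use of Keystone federation itself are assumptions, not Nectar's deployed configuration.

```python
# Illustrative only: a Keystone OS-FEDERATION mapping rule that turns a
# Shibboleth-asserted university identity into a local cloud user.
# NOT Nectar's actual configuration; attribute names and the group ID are
# placeholders introduced for this sketch.

mapping_rules = [
    {
        # What Keystone should create locally for a matching federated user.
        "local": [
            {"user": {"name": "{0}"}},                   # username taken from the first remote match below
            {"group": {"id": "TRIAL_USERS_GROUP_ID"}},   # placeholder group carrying the small default quota
        ],
        # What must be present in the attributes released by the university IdP.
        "remote": [
            {"type": "HTTP_SHIB_EPPN"},                  # e.g. eduPersonPrincipalName, released via Shibboleth
            {"type": "HTTP_SHIB_AFFILIATION",
             "any_one_of": ["staff", "student"]},
        ],
    }
]
```

Whatever mechanism actually implements it, the point is the one made in the talk: a researcher's existing university credential is enough to get a small default allocation immediately, and the heavier-weight allocation process only kicks in when they ask for more.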
We did run some early-adopter training and workshops around the country, and we were very careful how we pitched those. We didn't pitch them as something for all researchers to come along to; they were for the technical community who support researchers on the cloud, the people who will run things for researchers on the cloud, but also for technically proficient early adopters, and there are a lot of communities out there like that. My own background is in particle physics, and that community certainly has a high enough level of technical proficiency to leverage that kind of infrastructure itself.

But we also provided funding to the nodes of the research cloud to run research application migration programs, where their stakeholders could come to them and get support to migrate applications onto the research cloud. Remember that most research software tools are either desktop-based or they run on big HPC facilities. Getting those applications into a cloud environment, or accessible through a cloud environment, is one of the big hurdles we have in delivering benefit to the research community; it's what you need to do to transition the software culture in the research communities. And we also ran quite large software infrastructure programs: only $15 million of that $47 million went to the cloud program, and the rest went predominantly to the software programs. I'll talk a little about those and how we've used them to bring people on board.

So Nectar has what we call research software infrastructure programs. This was dubious at the beginning: it was considered that software doesn't count as infrastructure in terms of the Commonwealth government's view of where we could invest money. You can't call software infrastructure; hardware was okay. But we made the case for the critical importance of software as the basis of the infrastructure for researchers. We know that in the rest of the world software is eating the world, and it's doing the same thing in the research community and research sector as well. One of the advantages of the emergence of cloud platforms is that it has focused minds on the importance of the software infrastructure by decoupling it from the hardware infrastructure for the research communities.

Software is very important to research and research communities. Based on my own experience, I know that there are far too many under-qualified researchers writing software for research communities, and those communities would benefit substantially from access to professional software developers and skills, and from partnering with them. But some of the software is always going to have to be written by a researcher. Software is where we encode human knowledge: when people write climate models, they're coding our understanding of how the climate works into a computer simulation, and that needs a scientist to do it. But they can work with people to build frameworks to run those models, to validate those models, and to ensure better sustainability and reliability of the software infrastructure. One of the key things with research software is that it is often driven by a need to upgrade quite quickly.
One of the challenges for research communities working within a university is that the university IT department isn't well equipped to manage rapidly evolving needs; it has usually evolved to manage enterprise infrastructure. What we've seen is that researchers often have rapidly changing needs. They're often contributing to the software development themselves; they need to deploy the latest versions of the software frequently and rapidly; and they often need multiple versions of the software deployed and in production, running models and simulations and doing comparisons across all of those streams at the same time. In that complexity there's a challenge in improving the reliability of the software researchers use. And that's where what we're seeking to do, as I'll talk about, is build and foster partnerships between the professional software development community and the researcher-developers as well.

So what we're doing is funding a program called virtual laboratories. We ran an open call for research communities to come to us with proposals for what they would like funded; it was researcher-led. What we did was put constraints around the program. We said that, to be competitive, each virtual laboratory had to be highly collaborative in nature: we needed participation by large numbers of researchers across large numbers of institutions within Australia, within their research domain. Remember that the needs of research communities are quite different. The software and infrastructure needs of a marine scientist are very, very different from those of a researcher in the humanities, an astronomer, or a particle physicist. They use completely different software toolsets, so there's a need for very specialized knowledge and understanding of the niche, domain-specific software tools.

But our virtual laboratories were intended to be integrating platforms: platforms deployed on the cloud that integrate access to research data, research software tools, compute (including high performance computing), and also the instruments and infrastructure deployed around the universities across Australia. One of the key requirements was that we wanted to support automated workflows within the virtual laboratories to improve research efficiency, given that most research IT infrastructure is highly fragmented. When I talk through some of the stories, we'll see some of the benefits in terms of savings of researcher time from being able to tie together access to all that disparate infrastructure in one place.

As an example, I'm going to run through a few of our virtual laboratories and how they do these sorts of things. The Marine Virtual Laboratory is led by the University of Tasmania, but it has a high level of participation by essentially all of the marine science community in Australia, and they're also heavily engaged with international marine science; it's a very international activity. In fact, they're running a pilot for the global community, hosting and analysing data on the cloud, because we happen to have a cloud.
It was an opportunity for them to pilot running things on a cloud infrastructure, and such infrastructure will become available in Europe and the US in the near future. One of the key things the Virtual Laboratory does is bring together the research data. There has been previous activity to collect significant marine data held and collected by researchers and agencies around Australia, including the Southern Ocean; there's a whole fleet of measuring instruments measuring water salinity and so forth and collecting data into large repositories. One of the challenges has been that that data is not as heavily used as you would want it to be. What the Virtual Laboratory does is combine access to that data with the oceanographic simulation models that oceanographers want to use to study and predict effects in the oceans, and also the data analysis tools they would typically use on their desktops, deployed instead into their virtual environment on the cloud. It puts it all there, accessible from one place.

A typical example is Ian Coglan, who was studying coastal erosion and was an early adopter of these Virtual Laboratories. The Virtual Laboratory projects are really just coming to an end at the moment, and we're seeing the first uptake and use. What we're seeing is that it saved him about three months, because previously he would have to go and find the data, perhaps get the data from somebody and ask permission for access to it. He would then have to install the oceanographic models he wanted to run: find some servers, install the models on those servers, get them running, and then separately get his PhD student to run some of the tools, which are actually automatable. It's a fairly simple process, but they don't inherently have the skill sets to automate and workflow those things. What the Virtual Laboratory does is bring that software infrastructure expertise together to tie these things together and automate them. So, as an example, he was able to save three months just in the setup time for running his experiment.

The Genomics Virtual Laboratory is quite different; it's not one where we're talking about running modelling and simulation. This is a cloud-based infrastructure that leverages a well-known tool in the bioinformatics community called Galaxy, although it is actually a platform that supports multiple tools, Galaxy being one of the key ones. Galaxy is a bioinformatics workflow engine, and it runs on the cloud, so we can scalably access resources on demand and deploy them onto the cloud to support bioinformatics research. What this is doing is making it feasible for biologists to do bioinformatics. That was one of the challenges: there are not now, and never will be, enough bioinformaticians in Australia to support all of the bioinformatics work that biologists want them to do.
One of the key things this does is lift capability: it creates a single integrated environment, with canned, presented workflows, so that the biologists can start doing the bioinformatics for themselves. That's a huge time and resource saver for a lot of the major medical research institutes and so forth in Australia.

We also have the Endocrine Genomics Virtual Laboratory. Although it says genomics, this is more about supporting clinical research. This virtual laboratory is again leveraging the cloud to provide access to data; they've built a registry of 6,000 cases of adrenal tumours, which is one of the examples they're using. One of the key things this project seeks to do is bring together all of the data and tools on rare diseases in Australia, and it's working with Europe as well, so that you have the statistical power to draw real scientific conclusions when you study clinical outcomes and compare them with genetic studies. That's what researchers have missed before: they've only had access to small databases, small cohorts. This gives them the power to get statistical significance in their results.

Another example is the Virtual Geophysics Laboratory. This is a community that, through their previous investments, had a very mature and very solid infrastructure for sharing data: really sophisticated models for describing data, for making it discoverable, for web-deployed data services and data access. What was missing was easy access to the tools researchers need to actually use that data. So again, a virtual laboratory, enabled by the cloud platform, lets you deploy an infrastructure that makes the solid-earth modelling tools developed by the academic geosciences research community available, as well as the data analysis tools researchers use. Instead of cobbling these things together, instead of having to maintain large amounts of desktop-based software on the desktop of every geoscience researcher in Australia, they're putting it in the cloud and managing it in a coordinated way. That's what's transformational here. Again, it's another example where researchers have said they've been able to complete work in hours instead of weeks.

The All Sky Virtual Observatory is astronomy. It's another example of a virtual laboratory that emerges where you've got modelling and simulation and data, and in the research communities these are sometimes completely different groups. In some domains, like astronomy, you have researchers who just do the observations, the observational astronomers, and then you have the theoretical cosmologists, for example, who work almost as separate communities. What the virtual laboratory does is bring together the toolsets that both of those communities use into one place, so they can collaborate and communicate more effectively. Again, it's the cloud that makes that possible for what we're trying to do. The Human Communication Sciences Virtual Lab I'll skip over a little. The Biodiversity and Climate Change Virtual Laboratory is one led by Griffith University.
It's essentially a group of scientists trying to understand the impacts on species distribution of human-induced global climate change. They have quite complex needs, because they're integrating the climate change data from the climate community, from the people who run the big climate models; they've got their own species distribution modelling toolkits; and they also have quite sophisticated spatial distribution analysis and observational data about where species have been observed. Again, this was a very complex case: doing research in this area used to take a long time just to draw the pieces together. The Virtual Laboratory draws them all together, and you can see it's gone from two months to five minutes to set up a particular experiment you might want to do. I happened to be there for one of the early tests with a master's student, and the student said, "That does what I spent 80% of my time doing during my masters." So they could spend their time doing science instead.

And just to show the breadth, we also have a virtual laboratory in the humanities. The humanities don't tend to run big modelling and simulation; what's important for them are very sophisticated analysis tools for studying relationships between people. What this project does is bring together 28 different cultural data sets from around Australia and give researchers the tools to explore and define linkages between those data sets, to identify people who appear in different data sets, and to make relationships between those people as well. And there are more: a Climate and Weather Science Virtual Laboratory, a Characterisation Virtual Laboratory, and an Industrial Ecology Virtual Laboratory, amongst others. I haven't mentioned that we also funded a bunch of research tools projects as well.

One of the key outcomes of all that investment is that we've been able to really engage the research community in thinking about their software as infrastructure, and about what you can do when you understand the value of treating it that way; as was said in the previous talk, every company is a software company now. It's the primacy of the software, and the management of the software, that gets efficiencies out of research. What we found from the virtual labs is that they do support research collaboration. We got this marketplace or meeting-point effect: the fact that we brought the tools, the modelling and the data together actually created new collaborations between the researchers who used them. That was an interesting effect that emerged out of the projects. People were held apart by the fact that their infrastructure wasn't connected; by connecting the infrastructure, we created the opportunity for these new collaborations to bloom.

I just want to finish up with a few points. Not all of the use of the cloud is through our virtual laboratories or the research tools projects we funded. In fact, the majority of the use of the cloud is people coming in off their own bat and bringing their own tools onto the cloud, with the support of people like Steven's team to help them bring that in.
Some examples of things running on the cloud at the University of Melbourne: an Army evacuation simulation, again running modelling on the infrastructure, and interactive cell exploration. What we're seeing is all these research use cases starting to emerge onto the cloud platform. And we're not just happy to let researchers bring their own stuff on; what Steven's team is also doing is identifying those high-priority toolsets that would have particular value to lots of researchers and starting to build those as services on the cloud. MATLAB is one. As I said, a lot of research domains have their own tools, and they all have to be unique: this group wants a different version from that one, and so forth. But there are some tools that span many communities, and MATLAB is one of those. It's usually used by researchers on the desktop, but giving them access to a MATLAB deployment and a MATLAB service in the cloud is giving them new efficiencies.

So, the challenge. We've run these programs, and the funding for that period of building our cloud and building our virtual laboratories finishes at the end of this year. We have new funding to continue in a different kind of phase, but not to do more of what we did. One of the challenges we're going to have is sustaining that strategic approach to research software infrastructure and making sure the benefit beds in. One of the things we're interested in is that each of our virtual laboratories emerged very differently from different groups, but we're now looking at reusing them for other research domains as well, and we're starting to see some of that and have started to fund some of it as a starting point.

One of the advantages of those projects was that we were able to build expertise in the research communities by having them work with professional software developers, and also to build relationships, because to be effective the software developers really have to understand something about the research. Research needs are complex and you have to understand them. A particularly successful collaboration was in the astronomy project, where the researchers have built a long-term relationship, which they will continue, with a particular team of software developers who really understand the needs of astronomy now and can support them as they move forward. What we're seeking to do is continue programs that improve the software capabilities and the software thinking within the research communities, and Software Carpentry is one example of those: an approach for researchers to take up and learn some practical programming, not deep programming, but practical programming that lets them string things together on the cloud. It's interesting because the cloud platform actually makes Software Carpentry a much more successful approach.

We do have additional funding: we've got $9.4 million to continue for another year, and that's really focused on improving the operational side of what we've done. We were in a mad rush to build. The funding we had for building the cloud couldn't be used for operations.
It could only be used for creation and development of infrastructure; operations had to come through co-investment. This new funding allows us to put some more significant tied funding into improving operations, which is where the maturity of our cloud is perhaps not as strong as it could be. We're also very interested in improving interoperability with emerging cloud infrastructures, and we want to operate support services for our virtual laboratories to keep them running. One of the aspects is supporting greater uptake and reuse of the software infrastructure that was built under Nectar, so that more of it starts to be reused by other communities.

On the research cloud side, our next steps, as I said before, are to add value up the software chain in the cloud, moving beyond just infrastructure as a service. We've been deploying Heat and we have researchers using Heat. As the previous speaker indicated, it's perhaps not fully production-ready for all researchers, but we certainly have early adopters using it, and we have people engaged in helping researchers use Heat to build, deploy and manage infrastructure in the cloud. We want to accelerate the deployment of other services. Database services are of huge interest and huge value-add for the research community. A lot of small research groups just want to rapidly put a database up and share it with the world: they've got collaborators in Africa or the US or Europe that they want to share an important research database with, and they just want to put the data up there quickly. University IT infrastructure is usually too slow to respond when you need to do that quickly, but cloud gives them the opportunity to move fast.

Finally, we want to broaden our partnerships, and we want to look at federation. Interoperability is really important: we don't want a siloed research platform in Australia. Just as the Nectar cloud has enabled researchers to improve their collaboration within Australia, we want to ensure we support and maximise the opportunity for collaboration internationally as well. So we certainly have a close eye on what's emerging elsewhere in the world. The EU is funding the EGI Federated Cloud, which is operating at the moment, and some form of cloud infrastructure for research is emerging in various sectors. We really want to work with people to make sure we have interoperability, and even potentially federation, between all of these infrastructures. But we're also interested in ensuring we have good levels of interoperability with commercial cloud providers. It's our view that the scale of the cloud infrastructure we have in Australia to support research is just the beginning; the benefits can be much larger. Future capacity growth can be mixed: the Nectar cloud could be a mixed offering of infrastructure we own and run ourselves, for the reasons I gave, plus commercial cloud providers who can support the needs of the research community. We're very interested in pursuing that, and clearly those vendors and operators engaged in the OpenStack community make that easier in terms of interoperability as well.
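To make the Heat direction mentioned above a little more concrete, here is a minimal sketch of the kind of thing an early adopter might write: a single-VM stack described in a HOT template and launched through python-heatclient. This is illustrative only, not a Nectar-provided template; the endpoint, image, flavor and key-pair names are placeholders.

```python
# Minimal sketch: stand up a single analysis VM via Heat.
# All names (auth URL, image, flavor, key pair) are placeholders, not Nectar values.

import yaml
from keystoneauth1 import session
from keystoneauth1.identity import v3
from heatclient import client as heat_client

template = """
heat_template_version: "2013-05-23"
description: Single research VM (illustrative only)
parameters:
  key_name:
    type: string
resources:
  analysis_server:
    type: OS::Nova::Server
    properties:
      name: analysis-vm
      image: ubuntu-14.04            # placeholder image name
      flavor: m1.small               # placeholder flavor
      key_name: { get_param: key_name }
outputs:
  server_ip:
    value: { get_attr: [analysis_server, first_address] }
"""

# Authenticate against Keystone and talk to the orchestration (Heat) endpoint.
auth = v3.Password(auth_url="https://keystone.example.org:5000/v3",  # placeholder endpoint
                   username="researcher", password="...",
                   project_name="my-project",
                   user_domain_id="default", project_domain_id="default")
heat = heat_client.Client("1", session=session.Session(auth=auth))

# Create the stack; tearing it down later is a single stack-delete call.
heat.stacks.create(stack_name="analysis-stack",
                   template=yaml.safe_load(template),
                   parameters={"key_name": "my-keypair"})
```

The appeal for researchers is less the single VM than the fact that the whole environment becomes a reproducible artifact: the same template can be torn down and recreated on demand, or grown into a description of a whole virtual laboratory deployment.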
And we also want to strengthen relationships with industry partners. We certainly have some of those; we've had a lot of engagement, discussion and planning with various commercial operators in the OpenStack cloud space. But now that we've got a stronger focus on operations, we're interested in really strengthening the opportunities to partner with people who can help us run our cloud better and plan for the future better. That's the sort of thing we're going after. So thank you. Questions?

Yes. Yeah, one of the challenges is that typical HPC facilities are fairly closed, so you don't have ready access: if you've got data or software tools sitting inside the HPC facility, you can't readily access them from the cloud. In some parts of the world, grid infrastructures have emerged that provide some external accessibility into those high-performance computing facilities; in Australia we don't have a grid any more, for example. So that's a challenge. Certainly some of those operators, like the two high-performance computing centres, are deploying their cloud nodes and their HPC facilities with access to shared infrastructure like data stores and so forth. But the cloud does trigger a whole different way of thinking about security. When we first started talking to universities and research centres that had previously run HPC, they thought of their cloud node as something that would sit inside their firewall. I said: no, no, no, you want to put this outside your firewall. It has to sit outside your infrastructure; you've got to protect your infrastructure from the cloud, as it were. So there are challenges in how you integrate HPC and cloud facilities, but you can certainly do it, and people are doing it; we have some of those activities underway at the moment. Was that the kind of thing you were asking? Oh, and also, in terms of toolsets, we're supporting virtual clusters on the cloud as well. For people who have existing toolsets that work in HPC facilities, software bases and deployment scripts and so forth, we're running virtual clusters in the cloud that emulate that experience and that environment, and that's an early way in.

A small hit. Once upon a time that was true, and it was particularly true if you were highly IO-bound, but that's certainly not our experience now. Yes, there is a hit, but it's much smaller than that.

Well, one of the challenges we have: we talk about a model where we define differing service levels that people can access within the cloud. We don't have that yet, so we run into problems; it is a problem for us. Part of the new operational funding will help us address some of those problems. The federation is also quite deliberately structured so that each of the nodes can build their own business model for future growth and future operation, while staying within the federation. We anticipate that some nodes will be able to offer value-adds through higher service levels on particular infrastructure: there's a baseline service level, but they can offer a higher service level for higher IO needs or whatever is required. We're not in that position at the moment, but it's important to us.

Yes. Oh, yeah. Do we charge money? No. I'd say the usage at the moment is not as efficient as we'd like it.
I'd say we do have a problem with poor resource usage, but we're still growing the community, as it were; it's about bringing people on. One of the key things we plan to do is structure our allocation processes to create drivers: if people are under-utilising their resources, that will certainly affect their ability to come back for more when their allocation ends. One of the challenges we've had is that we need sophisticated reporting systems in place to be able to understand that. That's there now, but it's only relatively recently that we've had that kind of reporting. We also see a future where some access will be merit-based but people will pay as well; there'll be research centres that pay. It will be subsidised, because that's the purpose of the Commonwealth investing in this way, but people will pay as well.

So there are eight sites, as it were, and the way we manage that is using OpenStack cells. Cells was developed by Rackspace, primarily to address scalability issues, but we were a very early adopter: Sam Morrison, our technical lead in the Nectar cloud, was a very early adopter and deployer of cells. Cells came along just as we had to commission our second node, so it was just in time for us. Some sites have multiple cells as well, so we use the hierarchical cells capability. At the University of Melbourne, for example, the equipment is split across two geographically distributed data centres, so each of those is a cell in its own right.

No, no. We keep the common servers in the centre: a single dashboard, a single Keystone, single API servers at the centre. So it's one cloud, and that's really important to the users, but they can access the different regions, and the nodes are the different cells.

Yes, for the... Pardon, sorry? Yeah. It's probably not at the image level; we're not at the point yet where we essentially act as custodians of the versions of the software that a community has. That's something those communities are managing themselves at the moment. In some cases, I know of one group that is doing it by managing reference images that hold the different versions of the software, but for many of them it's managed in their own systems, as it were, without reference to particular reference images; it's an installation kind of process. But it's an interesting thought. Okay, no other questions. Thank you.
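Picking up the cells question from the Q&A: from a user's point of view the federation of sites is invisible, because there is a single dashboard, a single Keystone and a single set of API endpoints at the centre. A minimal sketch of what that looks like programmatically, assuming (as is common in cells-based clouds, though the talk doesn't spell it out) that the individual sites are exposed to users as availability zones; the endpoint, project, image, flavor and zone names are all placeholders.

```python
# Illustrative sketch only: one cloud endpoint, many sites.
# The auth URL, project, image, flavor and zone names are placeholders, not real Nectar values.

from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client as nova_client

auth = v3.Password(auth_url="https://keystone.example.org:5000/v3",
                   username="researcher", password="...",
                   project_name="my-project",
                   user_domain_id="default", project_domain_id="default")
nova = nova_client.Client("2", session=session.Session(auth=auth))

# One API, one token -- but the compute capacity behind it is spread across
# the federated sites (cells). If sites are surfaced as availability zones,
# a user can list them and optionally target one when booting:
for zone in nova.availability_zones.list():
    print(zone.zoneName)

nova.servers.create(name="my-analysis-vm",
                    image=nova.glance.find_image("ubuntu-14.04"),   # placeholder image name
                    flavor=nova.flavors.find(name="m1.small"),      # placeholder flavor
                    availability_zone="site-a")                     # placeholder zone name
```

The cells machinery itself, the parent and child cell configuration with separate message queues and databases per site, lives entirely on the operator side, which is exactly the point made in the answer: users see one cloud.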