Hello, good afternoon, everyone. As Andrew already introduced, this is about EcoCloud, the EcoScience Research Data Cloud. It's basically a big project, not just technical development.

There is the Department of the Environment and Energy, who are developing the Essential Environmental Measures, which we want to make available; there is a whole lot of work streams around species trade data, making those available, getting access to the daily spatial weather grids, and other existing data streams. Then there's a whole work bundle around standardised modelling and analysis capability, which came out of BCCVL, basically: to have a robust set of well-defined and reliable models. The most technical side for us is to develop a platform which gives you access to compute and research data and makes it easy to access your data: log in and start working, without caring where the data comes from or what is otherwise set up underneath, as far as possible.

Another big part is training and skills development. This will be part of the ecoEd and EcoPathways programs, which will develop a whole lot of training and course material, run workshops and similar things, and also engage with industry. And the last stream is about trusted data. I'm not too familiar with that bit; my work is more on the technical side. But it's about knowing which data is reliable, where it comes from, who is using it, and which data can be used for proper decision-making.

A bit of history. We released BCCVL about three to four years ago, and it's been very well received: it gives easy access to data and modelling and reduced the technical barrier to accessing and using data. What we have found, though, is that there is increasing demand for customising the work and having an interactive workflow to work with models and data. Another big use case is that users want to work with sensitive data, which can't be released publicly anywhere. Many more users are coming on board who want to use their own data, which they have compiled or retrieved from somewhere and which is not publicly accessible. Another big thing is that we are trying to increase the interoperability of our systems. We took all of this together, and all the lessons we learned through BCCVL, and are trying to put it into the EcoCloud platform.

The biggest challenge in the ecosciences is that it's a very diverse community, with a very diverse set of data that our users want to use and that we are going to integrate: species data, climate data, marine data, all described in different ways, often in terms that don't make sense to an early-career researcher coming from an ecology background. Another big thing is portability: if you develop something and share your code, it should be able to run anywhere, not just within the EcoCloud platform. As I said, we want to provide the ability to work with sensitive data, so security is a big thing here. And data discovery, data access, and data usage are still very big challenges: while work on data description and data portals has moved forward very well over the last couple of years, actually accessing the data and using it is still hard. Finally, we want to provide some way to work with large data sets that are impossible to download or that require an HPC environment, and some easy way to access them and integrate them into projects.
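To make the large-data point concrete, here is a minimal sketch of one common pattern for working with a gridded data set without downloading it: lazy access over OPeNDAP with xarray, so only the slice you actually use is transferred. This is illustrative only, not part of EcoCloud; the server URL, variable name, and dimension names are all placeholders.

```python
# Sketch: lazy access to a large remote weather grid via OPeNDAP,
# so only the requested subset is transferred over the network.
# The URL, variable name, and dimension names are hypothetical.
import xarray as xr

URL = "https://example-thredds.org/thredds/dodsC/daily_weather_grids.nc"  # placeholder

ds = xr.open_dataset(URL)  # lazy: opens metadata only, no bulk download

# Pull just one variable over a small region and time window.
subset = ds["max_temperature"].sel(
    lat=slice(-29.0, -28.0),
    lon=slice(152.0, 153.0),
    time=slice("2017-01-01", "2017-01-31"),
)
print(subset.mean().values)  # computation triggers transfer of the subset only
```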
And as our users often use their own private data, we need to integrate cloud storage: Nextcloud, AARNet CloudStor, Dropbox, and similar things.

The users we are targeting are researchers, academics, undergraduates, postgraduates, all the way up to high-end users who know how to code and use HPC environments but still want to share their work. Big data users will be included as well. And of course, to help with collaboration, there should be opportunities to collaborate with proper software engineers as well: to optimise your code, help with coding, or even produce new third-party products on top of EcoCloud. The opportunities are, of course, in research and science: the ability to publish your code and the models you develop, along with the data you used; various workshops such as Software Carpentry and Data Carpentry; providing resources to run curriculums; and all of that for the research community.

On the technical side, there are three components being developed. The EcoCloud Drive: easy, cloud-managed online storage for code, scripts, and small data sets that will always be there, so even if everything crashes, the next time you come in your last piece of work is still available. The EcoCloud Explorer, which we hope will help with data discovery and access; it will mostly be based on the CSIRO Knowledge Network, and our work there will be to help with using the actual data, for instance by providing code snippets. And of course, there is the compute side of it. At the moment we have opted to provide web-based notebooks, where Jupyter (Python) and RStudio are already available, and also access to virtual desktop tools, at the moment provided by CoESRA, which is a TERN project.

Our big architecture is, roughly: a user comes in, and there will be a dashboard from which you can access everything. There is a separate, centralised user management service, so all services are independent; they don't know about the users themselves but use this user management service to decide access to various things. From there you can publish scripts, clone scripts from other users, find other scripts and notebooks, and browse through examples of how to use various data stores or certain processing. You can explore data and get the data in in an easy way. You can start a new project, which essentially gives you a compute environment: pick whatever you want, R, Python, a virtual desktop, and use the tools available there. These environments are usually also customisable, so you can install your own software as well.

As for our tech stack, it's a microservice architecture, fully OpenID Connect and OAuth2 enabled, so all APIs that are offered and available will also be accessible through third-party systems (a minimal token-validation sketch follows below). Our development itself is mostly in Python and JavaScript. It's been designed to be horizontally scalable, because that's just easier; vertical scalability is still there, but there are sometimes challenges with resource allocation. And the whole system also allows the work to go multi-node and multi-cloud.

The way we have built it up: at the bottom there is, of course, OpenStack, provided by the Nectar Cloud, with all the services we are using. The grey boxes, Sahara and Trove among others, we are currently not using, but we are certainly looking into how we could make them available to our users.
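To make the centralised user management concrete, here is the promised minimal sketch of how one of the independent services could validate an OAuth2 bearer token issued by the OpenID Connect provider. The issuer URL, JWKS location, audience, and the choice of the PyJWT library are all assumptions for illustration, not EcoCloud's actual code.

```python
# Sketch: an independent microservice validating an OIDC-issued access token.
# The service never stores users; it only checks tokens against the central
# provider's published signing keys. Issuer and audience are placeholders.
import jwt  # PyJWT

ISSUER = "https://auth.ecocloud.example.org"   # hypothetical OIDC provider
JWKS_URL = f"{ISSUER}/.well-known/jwks.json"   # assumed JWKS location
jwks_client = jwt.PyJWKClient(JWKS_URL)

def identify_user(bearer_token: str) -> str:
    """Return the user id from a validated token; raises on invalid tokens."""
    signing_key = jwks_client.get_signing_key_from_jwt(bearer_token)
    claims = jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience="ecocloud-api",  # hypothetical audience value
        issuer=ISSUER,
    )
    return claims["sub"]
```

A third-party system would then call any of the public APIs the same way a first-party client does, sending the token in an Authorization: Bearer header.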
On top of that, the whole OpenStack infrastructure is managed by Kubernetes, which takes care of orchestration, security, and all sorts of monitoring, and via Kubernetes we can autoscale everything. Kubernetes itself orchestrates everything that runs within EcoCloud: our Jupyter notebooks, and the web processing services plus the web UI, which are still being developed and which provide APIs (the first sketch below shows the kind of per-user environment launch this enables). The CoESRA Virtual Desktop sits a bit outside of this, but will seamlessly integrate with EcoCloud.

And everything we are deploying here, running within EcoCloud, also lets you access external services. At the moment we are providing tight integration with the Knowledge Network and various data services like data.gov.au (second sketch below), with cloud storage as mentioned before, your Google Drive, your Dropbox (third sketch below), and with external web processing services, which would be of high interest for us, just to offload data transfer and compute. And that's pretty much it for me. Thank you very much.
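The first sketch referenced above: how a dashboard backend might ask Kubernetes to start a per-user notebook environment. A minimal sketch using the official Kubernetes Python client; the namespace, image, and resource figures are assumptions, not EcoCloud's actual configuration.

```python
# Sketch: launching a per-user Jupyter notebook pod via Kubernetes.
# Namespace, image, and resource requests are hypothetical placeholders.
from kubernetes import client, config

def launch_notebook(user: str) -> None:
    config.load_incluster_config()  # service itself runs inside the cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name=f"notebook-{user}",
            labels={"app": "notebook", "user": user},
        ),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="jupyter",
                    image="jupyter/datascience-notebook",  # users can add software
                    ports=[client.V1ContainerPort(container_port=8888)],
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "1", "memory": "4Gi"},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="ecocloud-users", body=pod)
```

Because each environment is just a pod, Kubernetes can monitor, reschedule, and autoscale it like any other workload, which is what makes the horizontal scaling described earlier straightforward.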
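The second sketch: the kind of code snippet the Explorer could hand a user for data access. data.gov.au is backed by CKAN, so a catalogue search is a single HTTP call; the endpoint path and query terms here are placeholders, and this is not the Explorer's actual output.

```python
# Sketch: searching data.gov.au's CKAN catalogue and listing resource URLs.
# Endpoint path and query are assumptions for illustration.
import requests

resp = requests.get(
    "https://data.gov.au/api/3/action/package_search",
    params={"q": "vegetation cover", "rows": 5},  # placeholder query
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    print(dataset["title"])
    for resource in dataset["resources"]:
        print("  ", resource.get("format"), resource["url"])
```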
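The third sketch: pulling a user's own file out of integrated cloud storage. CloudStor and Nextcloud both expose WebDAV, so fetching a private file is one authenticated GET; the endpoint URL and credential handling are assumptions for illustration, not EcoCloud's actual integration code.

```python
# Sketch: fetching a private file from an ownCloud/Nextcloud-style store
# over WebDAV. Endpoint and credentials are hypothetical placeholders.
import requests

WEBDAV_BASE = "https://cloudstor.aarnet.edu.au/plus/remote.php/webdav"  # assumed

def fetch_private_file(path: str, user: str, app_password: str) -> bytes:
    resp = requests.get(
        f"{WEBDAV_BASE}/{path}",
        auth=(user, app_password),  # an app token, not the main password
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content

# e.g. data = fetch_private_file("projects/occurrences.csv", "jane", "app-token")
```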