I haven't used Google Meet for this before; I feel like I've been in Zoom nonstop for about a year and a half. Anyway, I thought I'd give a quick intro to something we've been working on for a while now, at least if you include procurement time, though we're really just getting into some of the interesting parts now. We've been partnering with StackHPC, amongst other vendors, but StackHPC specifically on the OpenStack side of things.

So we're building a new cloud-native platform for NeSI. NeSI is the New Zealand eScience Infrastructure, where I work now, for those who haven't heard me talk about it before. It's a collaboration in New Zealand amongst a couple of the research-intensive universities and the Crown Research Institutes, which are basically the government science organizations. There's a handful of those: a couple are direct collaborators with NeSI, a bunch of others are subscribers, and we've got some new partnerships from that part of the sector too, which I'll talk about shortly.

I'm recycling slides from a different talk here, so I don't know how well this hangs together. But basically this is an OpenStack private cloud with high-performance tendencies. It's Kayobe-deployed, and of course we're working with Stig and his crew on that side of things. Once the cloud itself is up and running, we're planning to support a bunch of different use cases, and one of those is the Slurm appliance-style tenancy. Beyond that, we've got a bunch of our own internal use cases: our present supercomputing and HPC infrastructure doesn't really support any multi-tenancy, and that has been frustrating our development in a few areas, in particular data services and other things where we want to be using cloud-native tech like Kubernetes, and standard CI/CD-based development pipelines that don't fit very well into a more monolithic HPC environment. We're all aware of these challenges and motivations towards cloud, but we also want to explore what the next stage of our capacity HPC might look like and whether a cloud-native approach to that is suitable, continuing the theme that others have looked at within the SIG and that I was working on at Monash.

From a New Zealand context perspective, we've got one organization already, the University of Auckland, one of the NeSI partners, that is part of the Nectar Research Cloud that came out of Australia, and there are potentially one or two others that have been interested over time in getting involved in that community cloud as well. NeSI is, at this point, not planning to go into that area of service, a VMs-for-all model where researchers consume infrastructure-as-a-service directly. We're looking more at managed services, virtual labs, platforms, that sort of thing, where there's value-add we can provide. We're hoping this platform and the general architecture around the technology will address some equity issues that smaller groups and institutions in the sector have, and help specifically with national collaborations. And we're going for a bespoke architecture, in particular with some direct integration with our local NREN, REANNZ, so that we can do direct campus integrations from this environment. This layer-cake diagram of the architecture gives a flavor of that.
In the green box there's the main set of services and so on that users might be able to consume. So say, for example, the user is a NeSI developer, or a DevOps research software engineer working for the Antarctic Science Platform or something like that, one of our national collaborations going on at the moment. They'd be able to consume things like Magnum from the cloud in a typical on-demand manner, using capacity that NeSI has already invested in and purchased through our procurement (there's a rough sketch of what that might look like at the end of this section). NeSI has models around the entitlements for resources: we have a merit scheme, we allow subscription, and these sorts of things as well. But we also have a model for direct hardware tenancies, expanding the environment to have dedicated infrastructure and doing that deeper integration. In some cases that's not necessarily OpenStack services; there will be some level of network integration that's specific to a certain tenant or tenants in the environment.

One of those that we've been working on very intensively over the past few months is a new partnership with AgResearch. AgResearch is one of the CRIs in the sector here; as the name suggests, it's an agricultural and pastoral, agri-food and agri-tech science organization. Essentially, they've got a new science plan that's quite technology-intensive, and their present internal data and compute infrastructure is not up to scratch: it won't scale, it's creaking. They were about to go out and purchase a solution, maybe at the start of this year, from HPE, with some other commercial technology providers within that partnership as well. At the time, the government had just released a review into the sector and called for greater collaboration around shared assets, in particular infrastructure-intensive ones. So they came to us. We were just working on this platform, and I think we were just finishing up our procurement activities around the initial build, so we started to build a picture of what this was going to be, and asked whether we could integrate their environment and do something that was broader than just what AgResearch needed. We then spent a bit of time reworking some elements of the architecture that HPE had initially proposed to them, to integrate it into this new private cloud environment.

This component diagram shows how that works. Their needs are very data-intensive, of course, so you'll notice there's a bunch of stuff outside of the FlexiHPC boxes. There's DMF (HPE's Data Management Framework) with a bunch of storage hanging directly off it, including a zero-watt tape library, and there's a third copy of all data in a separate Ceph cluster at another site. In the Flexi environment they'll have a bunch of bare-metal nodes, some hypervisor capacity, and some extra all-flash Ceph. They also scale out our network and have their own direct WAN connectivity through REANNZ. So they'll have, for example, a GPFS primary file system with protocol nodes within this environment facing clients in Palmerston North and Lincoln around the country, letting them have their nice CIFS shares on their Windows desktops, and letting users log in directly to the login nodes and so on in the Slurm cluster on FlexiHPC.
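As promised above, here's a minimal sketch of what that on-demand Magnum consumption could look like from a tenant's point of view, using the Python openstacksdk. To be clear, this is an illustration under assumptions rather than anything from the talk: the cloud entry "flexihpc", the template "k8s-default", the keypair and the cluster name are all hypothetical placeholders.

```python
# Minimal sketch: requesting an on-demand Kubernetes cluster from
# OpenStack Magnum via openstacksdk. All names below (cloud entry,
# template, keypair, cluster name) are hypothetical placeholders.
import openstack

# Credentials come from a clouds.yaml entry or OS_* environment variables.
conn = openstack.connect(cloud="flexihpc")

# Look up a cluster template the cloud operators have published for tenants.
template = conn.container_infrastructure_management.find_cluster_template(
    "k8s-default"
)

# Ask Magnum to stand up a small cluster; the cloud does the rest.
cluster = conn.container_infrastructure_management.create_cluster(
    name="asp-dev-cluster",           # e.g. a dev cluster for a collaboration
    cluster_template_id=template.id,
    master_count=1,
    node_count=3,
    keypair="my-keypair",
)
print("Requested cluster:", cluster.id)
```

Once Magnum reports the cluster as healthy, the tenant can fetch a kubeconfig and point their usual CI/CD tooling at it, which is exactly the kind of workflow that doesn't fit well into a monolithic HPC environment.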
And that's probably about all that's interesting to talk about here at the moment without starting to get into excruciating technical detail. So I might just stop there and see if there are any questions. Hi.