Thank you, everyone, for coming to this projects exchange. This is the first of the platform sessions. My name's Kerri Levitt. If you haven't already met me, I am the Platforms Program Manager at the ARDC. We also have Andrew Treloar here, who's the Director of Platforms and Software. To start, I would like to acknowledge the traditional owners of the lands that I live and work on, the Kaurna people, and acknowledge their deep spiritual connection to these lands. I pay my respects to their Elders past, present and emerging, and also to the Elders of the lands where you are.

Okay, so just very quickly, I would like to give an overview of the Platforms program objectives. What we're trying to do is enable the development of e-research platforms that are transformative, and we would like more Australian researchers in more disciplines to have access to these transformative platforms. We are really looking for platforms to be sustainable, in that they exist long after these projects are finished. And as part of this, we want to bring together a community of developers, managers and operators of these platforms to enable peer-to-peer learning and to support each other in best practices. That is one of the reasons we are here today: so that you can all meet each other, you can learn about the projects, and we can identify some of the common challenges across the projects, so that we at the ARDC can see how we can best support you.

So we'd like to go straight into it. Our first presenter is Helen Thompson. Actually, first of all, I'd like to ask everyone to mute if you are not currently speaking. And because we've only got three minutes per presenter, I am going to use my very rough bell if you go just over three minutes. So I'll ask Helen to start.

Thanks, Kerri. Good afternoon. I'm joining you from Wadawurrung Country, and I'd like to thank the ARDC for this opportunity to share some information about the Agricultural Research Federation (AgReFed) platform. During 2020, the strategic directions for AgReFed were informed by a series of conversations with agriculture researchers and stakeholders. These conversations provided a range of insights into the current experiences of researchers and confirmed that many had limited data management skills and no access to platforms, workflows and analysis tools. The conversations also assisted in identifying what AgReFed, in partnership with others, could deliver that would achieve overall improvement, given the significant challenges that confront the estimated 80% of agriculture researchers who currently perform their data management and analysis on desktop computers. Next slide, please.

So the AgReFed platform will address these factors that are limiting agricultural science by enabling a transformation in the way that researchers collect, describe and disseminate their research findings. This will be achieved through enhanced search and discovery of distributed agricultural data sets, through access to and adoption of tooling to help researchers describe and store their data, and also through access to cloud computing and the creation and sharing of workflows and models. The AgReFed platform will facilitate data reuse and cross-discipline collaborations. It will generate new research insights and support practical applications and on-ground decision-making. Next slide, thanks, Kerri.

So a number of use cases will guide the expansion of the AgReFed platform architecture.
The technical team will combine the capability of the University of Sydney Informatics Hub (SIH) and Federation University's Centre for eResearch and Digital Innovation. We'll partner with national science and data facilities and adopt and adapt platform tooling, including through RDA, EcoCommons, FAMES and COPPO. The first use case will focus on FAIR agricultural trials data. The second use case will address the challenge of finding and accessing key foundational climate, soils and other data sets. And the third use case will likely focus on the integration of information on soil resources with other measures of natural capital. And that's just the background for today, Kerri.

Great, thanks very much, Helen. If anyone's got questions, we'll leave those to the end. And now I'd like to ask Kim Picard.

Yeah, good morning. Oh, no, I guess we're afternoon now, sorry. So I'm here to talk about the GMRT-GEBCO project. We know that authoritative and standardized bathymetric data is critical for a range of applications, among them the ocean and coastal modeling community. And this is the kind of map you get access to when there is bathymetric data behind it. You can go next, Kerri.

But what that community of users is facing is an incredible amount of time spent collating, processing, reformatting, merging, gridding and so on, which converts into dollars and, in the end, frustration. So what we're going to try to address here — yeah, you can go to the next one, Kerri, thanks — we're going to look at in three deliverables. We're going to adopt a platform that already exists in the United States. Deliverable one is to assemble a subset of datasets — if you click again — which will provide extra data for the Bass Strait area. It will also inform some new guidelines around how we deal with different types of datasets, and it will leverage off the GEBCO program we already have. The second deliverable, because we really want to address the ocean modeling community, is a user needs analysis, which will describe and capture the requirements that we need to put into the platform. And as I mentioned, the platform we will adopt is called the Global Multi-Resolution Topography synthesis. We want to enable users to produce the grids they want, the way they want to do it, so it needs a fair bit of metadata behind it. That will then feed directly into the marine and coastal models that they can run later on. And obviously the impact of this work will be to reduce the time and money this takes, and also to make sure that the data they're using is more accurate, so that the derived products are better, with downstream effects for the decision-making that we know these models feed into.

So the last part is the technology. As I mentioned, we're leveraging off the Global Multi-Resolution Topography platform, its workflow and the synthesis. We're going to take that stack — most of it will sit in AWS — and we're going to try to Dockerize it, or otherwise make it easy to deploy in the cloud. And most importantly, by listening to the users, the grid composers, we're going to add content and functionality so it's really user-controlled for what they need.
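As an illustration of what that user-controlled gridding might look like from a researcher's side, here is a minimal sketch of requesting a custom grid over a region through a web API and saving the result. The endpoint URL and parameter names are hypothetical placeholders for illustration only, not the project's or GMRT's actual interface.

```python
import requests

# Illustrative only: the endpoint and parameter names below are hypothetical,
# not the actual GMRT or project API. The idea is that a user asks for a
# custom grid over a region (e.g. Bass Strait) and gets back a ready-to-use file.
ENDPOINT = "https://example.org/gridserver"  # hypothetical grid service URL

params = {
    "west": 143.0, "east": 150.0,    # longitude bounds (degrees)
    "south": -41.5, "north": -37.5,  # latitude bounds (degrees)
    "resolution": "high",            # let the service pick the best available source data
    "format": "geotiff",             # an OGC-friendly raster format
}

response = requests.get(ENDPOINT, params=params, timeout=300)
response.raise_for_status()

with open("bass_strait_bathymetry.tif", "wb") as f:
    f.write(response.content)
print("Saved custom bathymetry grid:", len(response.content), "bytes")
```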
Of course there'll be some APIs for user customization. Just quickly on data formats: we'll leverage off the GEBCO and GMRT data standards that already exist. We're going to look at OGC-compliant formats, which we're already adopting, and we're going to expand the metadata, leveraging some of the technology and the learning that the Open Data Cube has evolved. So that's where we're at. It's a year-and-a-half project, so we're just on the way now. Thank you.

Thanks, Kim. And now we have Rhodri Davies.

Okay, can you hear me? Yeah? All right. So I'm going to talk about the geodynamic adjoint optimization platform, G-ADOPT. There are a lot of big words there, but hopefully by the end of these three slides you'll have a rough idea of what we're doing. If you could move to the next slide, Kerri, thank you.

So as we saw from the previous talk, we can map the topography of our planet at unprecedented detail, and we can now image the interior through a number of techniques, but we have little knowledge of our planet's past structure and flow history. This is a fundamental limitation, because it limits our ability to understand processes that depend on those interactions between the surface and the interior — for example, plate tectonics, which you'll all be familiar with, but also things like long-term sea level change and its effect on the environment, and the environments underpinning mineralization. Now, in the past — and actually at present — most of our understanding of the Earth's evolution comes from forward models, where people run a model from some time in the past to try to reproduce the present day. The fundamental shortcoming of these is that they don't exploit the available observations that we have of our planet at the present day. So the challenge that G-ADOPT tackles head on is to fuse these observational data sets to reconstruct the evolution of our planet.

So what we're going to provide is essentially the research software infrastructure to allow this. It's based upon adjoint schemes, which really allow us to integrate observational data with dynamics, physics and chemistry. Over the last 20 or 30 years we've known that this is the way to go, but the development of these schemes is notoriously difficult. So the challenge, which is on the next slide, is really to move from these idealized forward models to data-driven simulations that rigorously account for these observational constraints. We're going to leverage two state-of-the-art software libraries, Firedrake and dolfin-adjoint. Firedrake is a high-level finite element library that automatically generates code for a given set of equations — so it takes away those years of development you would otherwise need — and dolfin-adjoint essentially provides the adjoint of those equations for free. So G-ADOPT is essentially focused on the adoption of these techniques, their enhancement, their validation and the support for using them within the Australian research community. The other challenge, which is equally difficult, is to fuse observational constraints from many disciplines to reconstruct that evolution. We've seen an explosion over recent years in the observational data that constrains Earth's evolution, but we've never had a framework that allows us to combine it with physics and chemistry in a self-consistent manner.
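To make the Firedrake and dolfin-adjoint workflow described above a little more concrete, here is a minimal sketch for a toy Poisson problem — not G-ADOPT's mantle-flow equations — showing how the weak form is stated once, Firedrake generates the solver, and a gradient of an objective with respect to a model input then comes via the automatically recorded adjoint. The problem, objective and control field are assumptions for illustration, written against a recent Firedrake install.

```python
# Toy sketch of the Firedrake + dolfin-adjoint (pyadjoint) pattern: write the
# weak form, solve, then differentiate an objective with respect to an input.
from firedrake import *
from firedrake.adjoint import *

continue_annotation()  # record operations so the adjoint can be derived automatically

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

f = Function(V)          # source term: the "control" we will differentiate against
f.assign(1.0)
u = Function(V)          # unknown field
v = TestFunction(V)

# Weak form of -div(grad(u)) = f with homogeneous Dirichlet boundaries
F = inner(grad(u), grad(v)) * dx - f * v * dx
bc = DirichletBC(V, 0.0, "on_boundary")
solve(F == 0, u, bcs=bc)

# A misfit-style objective and its gradient, obtained via the adjoint "for free"
J = assemble(u * u * dx)
dJdf = ReducedFunctional(J, Control(f)).derivative()
```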
So again, G-ADOPT will facilitate that integrated approach to understanding Earth's evolution. If we jump to the next slide: the technologies, as I mentioned, split into three parts. There's the modeling platform, where we'll essentially leverage Firedrake and dolfin-adjoint, which I mentioned previously, plus PETSc and ROL. PETSc is a suite of libraries for the solution of these matrix equations, and ROL allows us to rapidly optimize these problems. So that will be the first strand. The second strand will be data integration, where we'll take observational data sets from a range of fields and provide the tools for integrating them with the models. Another core aspect of the project is building a broad community of scientists, computational engineers, data scientists and mathematicians within Australia, and that's really going to be facilitated by our partnerships with AuScope and the NCI in particular. And ultimately, whilst this has all been focused on the Earth sciences, the tools are transferable, and the final stage of the project will really be taking the technical developments we have and porting them across to other disciplines. So that's it from me.

Thank you very much, Rhodri. Okay, now for Paul Roe. Paul Roe, sorry. Paul, we can't hear you. There we go — rookie mistake.

Thanks, Kerri, and thanks, Andrew, and thanks to the ARDC for supporting our project. So this project is about open ecoacoustics. Next slide, please.

So Australia is currently in a biodiversity crisis. There's a desperate need for large-scale monitoring, and most biodiversity measurement is predominantly manual. Acoustic monitoring is revolutionizing monitoring: a lot of people are going out and putting out sensors, and you can see some of the sensors in the bottom corner of the slide there. The sensors capture long-duration acoustic recordings which form a direct and sort of permanent record of the environment. It's analogous to GIS, but for fauna rather than vegetation. The big problem is that there's no open platform for ecoacoustic data, so data and analyses are sort of homeless, and users are really desperate to have some way to keep their data and to share it. Next slide, please.

So we already have a solution, but it was very much an in-house solution, and what we're looking to do is open it to everyone so that other people can use and set up the system for their own needs. We want to aggregate and share data — our data and other people's data — and analyses and tools. We want to be able to interoperate with other services like TERN and the ALA, and of course to support FAIR data. A big part of that involves supporting and developing standards, and community and training around that. The impact will be to really transform environmental monitoring, to make it a big data science, and this will contribute towards a national ecosystem observatory capability, which is what's in the roadmap. Next slide, please.

So in terms of what we're going to do, we've got a list of technologies there. Essentially what we're trying to do is improve the existing technologies that we're using, to make the system easier to maintain, sustainable and accessible, to have FAIR data, and to be able to produce reusable services, both in terms of microservices but also in terms of web parts.
So if, for example, someone wants to develop a citizen science website which involves ecoacoustic data, they can just bring in web parts that do the annotation and analysis. We want to interoperate, so we need to be able to talk to the ALA and TERN and things like that. And there's a need for standards: there aren't currently a lot of standards around ecoacoustics, so we need to develop them, building on existing standards and the expertise of other people — the ARDC and the ALA and other places have got that. I'll just stop there. Thank you.

Thanks, Paul. Okay, and now we've got Rod Connolly.

Hi, thanks, Kerri. This project is about using AI to fundamentally improve the monitoring of fauna underwater — so that's Australia's oceans, rivers and estuaries. We have a bunch of partners; I'm from Griffith University. Just go back a slide — I was just going to say something about the partners there for people to note. And before we go to the next slide, I was going to get you to think about the fact that as you walk around the CBD of most Australian cities, you know now that you're being watched by at least one camera, sometimes many cameras at the same time. Now — thanks, Kerri, to the next slide — that's actually happening underwater as well. And this is a good thing, generally, because the monitoring underwater of fauna, and of the consequences of all sorts of changes, hasn't really been going very well. Cameras are now cheap, they're safe — safer than divers, for example — and they're better connected, so there are now a lot of cameras underwater. That's the good news. The downside is that for each hour of video footage that is taken, manual extraction of the relevant data — the invasive species or indicator species that are of interest — takes a long time, and it's totally untenable. So the monitoring is not really being done after all, even though the cameras would allow it. Next slide, please.

So we have a solution, and that is deep learning computer vision. We can already identify and count animals of interest, so monitoring will be cheaper and faster — and, it turns out, more accurate, because the computer doesn't have days where it's feeling sleepy or is hungover from the night before. So the science will become better, and the monitoring will become better, because at the moment it's very spotty in space and time, and it will be much better. How are we going to do that? Next slide.

I have some details on the left which I don't have time to address today, but if we look at the right-hand side of the panel: we'll have more streamlined ways of annotating the imagery in videos, so that anyone can do it for the topic of their interest. The same goes for the training and evaluation of the AI models — currently you need a software engineer to sit alongside you and mentor you, and we're going to make that available to all researchers through training packages and so forth. And then, likewise, we'll make it streamlined for the inferences and predictions to be turned into actionable data for whatever purpose people desire. Thanks, Kerri.

Thanks, Rod.
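To give a feel for the inference step Rod describes — running a trained detector over video frames and turning the detections into counts — here is a rough sketch using an off-the-shelf torchvision detector as a stand-in. The pretrained COCO model and the frame filename are assumptions for illustration; the real platform would load models trained on annotated underwater imagery for the species of interest.

```python
# Rough sketch: run a detection model over a single video frame and count
# confident detections per class. The pretrained COCO weights are a stand-in,
# not the project's trained underwater models; "frame_0001.jpg" is hypothetical.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = convert_image_dtype(read_image("frame_0001.jpg"), torch.float)

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep confident detections only and count them per class label
keep = detections["scores"] > 0.8
counts = torch.bincount(detections["labels"][keep])
for label, n in enumerate(counts.tolist()):
    if n:
        print(f"class {label}: {n} detections in this frame")
```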
Okay, now we've got Aaron Dodd.

Good afternoon, everyone. I'm joining you today from Dja Dja Wurrung Country, and I'm going to talk to you about the Biosecurity Commons project on behalf of the University of Melbourne, where I'm based, and also my colleagues that you can see up there at the top of the slide. Next slide, please.

So the problem that we're facing in the biosecurity sector is that the biosecurity system in Australia is facing somewhat of an existential crisis. Between now and 2030, passenger and import volumes will increase by 70%, and the estimate is that, as a result, residual risk — that being the language used in the sector — will double. What that actually means in plain language is that the damage caused by the things we're trying to keep out will double. And the reason it increases at a faster rate than import volumes is that we can't increase the efficiency of the system — the interventions that we deploy to keep things on the other side of the border — at the same rate at which the volume is increasing. What that means is that progressively worse things slip through, and that's what we're showing on this slide here.

If we go to the next slide — you would have noticed on the previous slide that all of those things were sorted, and we were able to keep out the high-risk things and only let through the moderate, low and very low risk things. The challenge that presents is that we actually need to be able to identify what the high-risk things are, and we need to be able to target our interventions in the areas of greatest risk. This is simply a risk-based approach to regulation. And the challenge we have in the biosecurity sector at the moment, both in research and in practice, is that I liken it to us using the Yellow Pages to find things. It's a simple, straightforward process: you look up the Yellow Pages, you do a bit of a comparison of the different competing options, and you form a qualitative judgment. What we need to do, if we are to keep pace with these increases in volume, is actually start to move towards data-driven approaches. And currently, all of the different ways in which we do those data-driven approaches are done by individual experts independently. They're not joined up, and they can't easily be developed into an end-to-end workflow in the way that a decision-maker might want to use those tools.

So what we're going to do is take EcoCommons, which is an existing ARDC-funded platform, and extend it. If we move to the next slide, please: this is EcoCommons' current high-level architecture. We're able to pick up most of that high-level architecture and add to it with additional microservices. On the previous slide you would have seen we have a series of different independent analyses that need to be able to be done. What we can do is factor those up as independent microservices, add them into EcoCommons' existing microservice toolbox, and really take advantage of and leverage the existing work that platform has done. The key thing for us is that we're able to reuse lots of the tools that are already in EcoCommons, and the tools that we're adding into EcoCommons will ultimately translate back into other environmental questions.
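As a sketch of the "each analysis becomes an independent microservice" pattern Aaron describes, here is a minimal, hypothetical service that a portal or workflow engine could call. The endpoint, inputs and toy scoring rule are illustrative assumptions only, not Biosecurity Commons or EcoCommons code.

```python
# Sketch of wrapping a single risk analysis as a small, self-contained web
# service. Service name, fields and the scoring rule are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="pathway-risk")  # hypothetical microservice name


class PathwayRisk(BaseModel):
    pest: str
    annual_import_volume: int         # consignments per year on this pathway
    interception_rate: float          # fraction of consignments found carrying the pest
    establishment_probability: float  # chance an arrival establishes


@app.post("/risk-score")
def risk_score(req: PathwayRisk) -> dict:
    # Toy expected-arrivals calculation standing in for a real risk model
    expected_arrivals = req.annual_import_volume * req.interception_rate
    score = expected_arrivals * req.establishment_probability
    return {"pest": req.pest, "expected_establishments_per_year": score}
```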
Conservation and biosecurity are kind of two sides of the same coin: one's about driving things towards extinction and one's trying to drive them away from it. So we see benefit both for us in the biosecurity sense, and also for EcoCommons in terms of their conservation work. Thank you.

Thanks very much, Aaron. Oh — I have no idea why that happened; I must have clicked on something on your slide. Now we have Ivan Hanigan.

Thanks. I work on a range of environmental hazards, but I'm based at the Centre for Air Pollution, Energy and Health Research, and that means air pollution is really front and centre in our thinking. This project is about improving the scientific workflow system that sits behind how we do environmental health impact assessments, and air pollution is just one example. Thanks, Kerri.

So the problem is that environmental hazards are bad for us, and some of them — some air pollution, for example — are avoidable. So we can avoid these health impacts, but there is decision-making that goes into deciding what to fix. I could have shown you a picture of a smoky exhaust pipe; it doesn't take much imagination to think of things we could fix, but there are costs associated. So what happens when deciding which environmental hazards will be managed to improve public health is that you have to calculate the burden of disease attributable to exposure to that environmental problem, and what would happen in a counterfactual world where someone had removed that problem, and then calculate the cost of that and see whether it was worth it. In health impact assessments — this air pollution example that I'm using, say — we can work through a protracted set of data inputs, model inputs, statistical estimations, economic arguments, a whole range of calculations, and estimate exactly those quantities that we want to feed into the decision-making process. The example here is the flowchart from the Global Burden of Disease study, which is very famous, in the Lancet. I don't want you to read it. I've put the devil there because clearly the devil is in the detail, and I've put the sausage there because in some respects this is like the analogy of the sausage factory: you don't really want to know what's going on inside, but you do want to know that good ingredients went in and that the product that came out is worth eating. But in general there are health effects, there are exposures, there are theories about risk, there are estimations of populations and there are estimations about outcomes. So that's the problem.

We have solutions. On the next slide, here's one solution: if you're in America, you can use this BenMAP software from the US EPA, which has that whole backend worked out. This map is showing disease rates, some proportion of which are known to be caused by air pollution, and it has options to change the air pollution and change your effect estimates. It's a great tool, but in practice it has big limitations. They boil down to: it's not a living system, it's not flexible enough to do other environmental hazards, and it's not updatable. Some of the things we want are early warning systems, because of emerging infectious diseases, and it's also not linked with that streaming data. So what we need is an improved scientific workflow system to run the backend that would feed into this kind of health impact assessment tool. Next slide, please, Kerri.
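To make the attributable-burden arithmetic Ivan sketches a little more concrete, here is a toy worked example of the standard attributable-fraction calculation behind such assessments. All numbers (population, baseline rate, concentrations, relative risk) are illustrative assumptions, not figures from the project or the Global Burden of Disease study.

```python
# Toy worked example of the counterfactual calculation: given a
# concentration-response relationship, how much of the observed disease burden
# is attributable to the exposure above a cleaner counterfactual level?
import math

population = 5_000_000          # people exposed (illustrative)
baseline_deaths_per_100k = 600  # baseline mortality rate (illustrative)
pm25_current = 12.0             # annual-mean PM2.5 (ug/m3)
pm25_counterfactual = 7.0       # e.g. a policy target scenario
beta = math.log(1.06) / 10.0    # log-linear slope: relative risk of 1.06 per 10 ug/m3

# Relative risk for the exposure above the counterfactual level
rr = math.exp(beta * (pm25_current - pm25_counterfactual))
attributable_fraction = 1.0 - 1.0 / rr

baseline_deaths = population * baseline_deaths_per_100k / 100_000
attributable_deaths = attributable_fraction * baseline_deaths
print(f"RR = {rr:.3f}, attributable fraction = {attributable_fraction:.3%}")
print(f"Deaths attributable to the extra exposure: {attributable_deaths:.0f} per year")
```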
So how we will build it is to adopt and adapt something that's already going on in state and territory governments, because the environmental agencies are required to manage these things. In New South Wales they have developed something using Airflow, which is a scientific workflow system, and we will be working with that team. Here's how they describe it, in a slide they've allowed me to use. I'll just point out that the Airflow engines on the right-hand side pull down meteorological data and satellite data, do ingestion and integration, and allow all of these workflows to be run in a highly automated and orchestrated way. I think I'll leave it there, but it's quite exciting for those of us in the environmental health community to be able to think we might be informing evidence-based policies soon. Thanks, everyone.

And now we have Navneet Dhand.

Yeah, hi, everyone. Can you hear me?

Yes, we can. Thank you.

Thanks. This ARDC project builds on what we did in the VetCompass project, where we developed a system that imports data from hundreds of veterinary practices in Australia and then provides aggregated data to researchers throughout Australia. However, there are some problems with the system. The first is that, although some of the data we are collecting from veterinary practices is already categorized — like sex and breed — a vast amount of data is in the form of free-text clinical notes. Currently, researchers have to go through each record manually to classify, for example, whether the animal was vaccinated or not, or whether it was deceased or not, which is obviously not efficient. The second problem is that clinics currently capture pathology data quite inconsistently. Some clinics have embedded pathology reports within the electronic patient records, but most clinics add only the summary final outcome or result to the patient record, which leaves the detailed reports in a separate file that the system is currently unable to access. The third problem is that, as I said, we work with researchers throughout Australia — it is a consortium of researchers from all the veterinary schools in Australia — and we provide data to those researchers to analyze. However, researchers conduct their analyses using their own methodological pipelines, which reduces the comparability and reproducibility of the results and makes it difficult to compare results across different studies.

The solution we are providing through this new project is to transform the existing system and address most of these problems. First, we will pre-process the free-text data using natural language processing. By doing that, we will assign codes to diseases and diagnoses, which will enable us to provide much better structured data to researchers and to perform more sophisticated search functions than are currently available. Second, we will identify clinically relevant questions and then develop pipelines of analysis for those questions. By doing that, we will be able to provide all the researchers with access to a single, unified data analysis portal — we're calling it a virtual lab — that will streamline data cleaning and analysis, but also improve the reproducibility and comparability of our findings.
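To illustrate the kind of free-text coding step Navneet describes — turning clinical notes into structured flags so researchers don't have to read every record — here is a deliberately simple rule-based sketch. The keyword patterns and code labels are made-up stand-ins for the project's actual NLP models and coding scheme.

```python
# Minimal sketch: assign structured codes to a free-text clinical note.
# The rules and code names below are illustrative stand-ins only.
import re

CODE_RULES = {
    "VACCINATED": re.compile(r"\b(vaccinat\w*|booster)\b", re.IGNORECASE),
    "DECEASED": re.compile(r"\b(deceased|euthanas\w*|passed away)\b", re.IGNORECASE),
    "DIABETES": re.compile(r"\bdiabet\w*\b", re.IGNORECASE),
}


def code_clinical_note(note: str) -> list[str]:
    """Return the list of codes whose patterns appear in a free-text note."""
    return [code for code, pattern in CODE_RULES.items() if pattern.search(note)]


record = "C3 vaccination given today; owner reports polyuria, suspect diabetes mellitus."
print(code_clinical_note(record))  # ['VACCINATED', 'DIABETES']
```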
The third issue is the pathology reports. The new system will ingest data from pathology PDF reports and then use VetCompass's unique patient identifiers to link the pathology report for an animal with that animal's electronic patient record, providing a rich source of data for investigators to interrogate. As for how we are doing it, Ryan is going to talk to that slide.

So rather than a traditional architecture diagram, we want to discuss what the data flow will actually look like. VetCompass is not an entity unto itself: it partners with many clinics. So the question is, how do we extract data from those clinical sites to build this common virtual lab? For any site there are three parts. One is the pathology data, which we're looking at extracting from both the site itself and the pathology provider, which has a much more structured backend, as opposed to a lot of free text requiring OCR. We'll have a number of drop locations for each pathology provider or clinical site, and then a series of tools to parse and map the different data structures to a standard ontology and to clean the data before uploading to a staging database. Similarly for the EMR, we'll have a drop location and a number of tools for parsing and cleaning. Lastly, we're looking to leverage other services where it makes sense. This includes the Australian Imaging Service from last year's platforms round, Oliver and OMERO, to be able to reference clinical imaging, microscopy imaging and omics data from the central data set. From there we'll have a gold, quality-controlled database, which forms the foundation for the virtual lab and the analysis environments, where users — PhD students et cetera — will be able to export a subset of that data. And for the duration of this project we'll have four exemplar labs, looking at animal longevity, body weight curves, the prevalence of different disorders, and pharmacotherapy data pulled from all these disparate data sources across Australia. Thank you.

Thanks, Ryan. Thanks, Navneet. And last, I think we have Chris Pettit.

Thanks, Kerri. So our platform is the Australian Housing Data Analytics Platform. I think these slides are not the slides that I can see online.

Okay, I'm not sure if we've got a different version or they haven't been updated. Do you want to share your screen?

What I'll do is I'll stop sharing, and maybe if I get out of sharing and then re-share... Can you see mine, or do I need to...? How do I share in this?

I'm just wondering, if I reload it, it may actually update if you've been working on them. There is a share button down at the bottom, Chris.

Oh, that's okay — I think I might have the updated ones now. So there are real-time slides being put together in the cloud. Is that better?

That looks like it, yeah.

So the project — thanks, Kerri — is UNSW-led, but with a number of universities involved and other partners I'll talk about. We're looking at the Australian Housing Data Analytics Platform. Next slide, Kerri.
And so the problem that we're tackling — the real-world problem — is that there are all sorts of problems facing our cities to do with housing supply, housing demand and housing affordability, and a need for evidence-based approaches where researchers can access the best data on housing, including real-time data, to provide evidence to government — state governments, councils and the federal government — about what's happening in our cities and what it means for social housing, supply and access. And there's a major issue in accessing a lot of good-quality housing data. There's a lot of aggregate-level data available from the ABS, the Australian Institute of Health and Welfare and other agencies, but it's hard to work out what's happening on the ground, what's happening in new suburbs. In Marsden Park in Western Sydney, for example, 1,000 new houses went in last year, but if you look at the ABS data, it's five years old. So we really need that real-time data coming in, and that's an opportunity the platform will be tackling.

So we don't have a nationally integrated housing platform available for researchers. We have some platforms that have housing data, and one of our key partners, AURIN, has a number of housing data sets there, so we're very pleased to be partnering with AURIN in working through how we take the lessons from AURIN into this national housing data platform. We've got other partners: AHURI, the Australian Housing and Urban Research Institute, which brings in a number of universities — they fund a lot of research into urban research and housing issues, so how can the platform support those research projects? There's government support: the National Housing Finance and Investment Corporation, NHFIC, which was set up under Scott Morrison's government, is tasked with understanding that housing supply and demand equation, so we've got good support from the federal government. Also the ABS, the Australian Institute of Health and Welfare, and FrontierSI — previously the CRC for Spatial Information — are helping on the project, and the Commonwealth Bank are also contributing. There are other partners, but we'll keep going. Let's carry on.

So one of the solutions we need to come up with is a governance structure, because one of the tensions when we look at housing and city-related research is how we work across government and industry. So we're looking at a governance structure to unlock some of those data assets so we can feed them into the platform. We'll need a housing data model, which doesn't currently exist in Australia; the OGC have developed a land administration data model, so we're looking at that and at how we structure this housing data platform so it is interoperable and based on existing standards. We're leveraging existing platforms and open-source technology like GeoNode, which underpins CityData and the AURIN software stack — we will be taking those existing infrastructures and rebuilding them for this federated data platform. Complementing that is an initiative coming out of the Turing Institute in the UK called Colouring Cities, which essentially grew out of Colouring London. We'll be setting up Colouring Sydney, Colouring Melbourne, Colouring Adelaide and so forth, redeploying that Colouring Cities open-source platform so we can enable communities of practice, led by various universities across the country, to feed in other housing attribute data, whether it's building age, building materials and so forth.
There are a number of housing decision support tools that have been developed by various research groups over the years; these will be refreshed and rebuilt using microservices. And importantly, there's training and outreach: taking these new data analytics and tools out through the AHURI conferences, so we can run workshops towards the end of the project, training up the researchers and those in government and industry to actually use the tools. If we go to the next slide, Kerri.

So this is our high-level architecture, which actually shows a lot more of the partners as our user groups. We've got our peer user groups — the housing researchers through the AHURI network of universities, and of course this will be open to other universities to use too. Then we've got planners and policymakers in government, related to those housing agencies. And we also have a lot of industry groups, like the Planning Institute of Australia and the Urban Development Institute of Australia; we'll have an industry advisory group that we hope to tap into for more data. What's inside the dotted red lines is the infrastructure that we will be building or refreshing. So there's a series of decision support tools you'll see there, including a housing affordability tool; there's a 'what if' future-growth city modeling tool; and there's a tool for calculating property value increases around new train stations and other infrastructure that goes into cities — we'll have those tools rebuilt into this platform. Then there's an initial list of priority data sets that the project has come up with. We'll be revisiting this through our data working group, looking at which data priorities will give the best return on the investment to make available on this platform, so there's already some thinking there around those data assets. And it's about having the interoperability tools so the housing platform can talk to NationalMap, and to the transport cloud that was funded through the ARDC in the previous round — making sure we can test interoperability between this platform and other platforms that others might have presented today, and platforms that already exist. That's, I guess, our first go at the high-level architecture. You'll see the land administration data model in there as a common data storage model, and we've got translation modules feeding into, I guess, a portal — the harmonization of various data products and translation modules. I'll leave it there. Thanks, Kerri.

Thanks, Chris. That was fantastic. Thank you so much to everyone who presented. That's a fantastic overview of the projects, and they're all really exciting projects.