Okay, recording is happening, great. Okay, well, let's begin then, Irina. We have 30 minutes for both of your presentations. Can you see the screen? Yeah, okay, good. So this presentation, as I said already, is about the Location Integration project, or Location Index, and it involves a number of government organizations: the ABS, the Department of Environment and Energy, the Department of Agriculture, Geoscience Australia and CSIRO. This project is also part of a bigger project, the Data Integration Partnership for Australia. So it's quite a big project. It's two years in duration, and we are now halfway through the second year. So to start with, it's just... oops, can we get to the video? Can we get the sound? Irina, we're not actually hearing the audio. I'll try to pause it. So let's try now and let me know if you can hear it. We're not really getting the audio coming through. Okay, no, okay. So we'll skip the video itself, that's okay, and it will be included in the presentation as a link so people can look at it later. Okay, thank you. Yeah, so basically the purpose of Location Index is to integrate different types of geography and try to bring together spatial and non-spatial data, and connect data on society, economy and environment layers. What we're trying to do is create a system which supports decisions based on fit-for-purpose data from authoritative sources, using the best technology, available on demand from the device of choice, because the whole infrastructure is available on the web. So our challenge is how to join multiple geographies and observations together.
They are not structured in the same way, and we're probably all familiar with those. We have our raster data, we have our vector data (these are just examples of different data types), and we have a lot of different tabular data as well. Sometimes it has no coordinates itself, so the geography is represented by a description, "Tambora, New South Wales"; it could be an area, it could be represented in latitude and longitude, or it could be collected and integrated in some other way. So the question is how to bring them together, and that's what we are trying to do.

So we have a number of challenges. Challenge number one is data integration. What we want is for the data to be accessible following the FAIR principles, not just delivered and downloaded, so people can access it in situ and integrate it with their own datasets on the web. So we want it to be findable: it needs to be registered or indexed in a searchable source, and each of the data objects needs to be uniquely identified and assigned business identifiers. It also needs to be self-describing through metadata. It needs to be accessible: it needs to follow standardized protocols which are open, free and universally usable, and we also need to be aware of authentication and authorization procedures. The data needs to be interoperable, so it needs to follow some agreed format, open preferably, and use the same language and an agreed set of vocabularies. And it needs to be reusable: machine-readable, with clear licence and security constraints and provenance, following agreed community standards. This is the link to the FAIR principles for people who want to learn more.

So what are we trying to achieve with our goal? Semantic integration: clean data, data available on the semantic web. So what is the power of linked data? We can discover it through the web.
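The FAIR requirements above can be sketched as a minimal, self-describing record. Everything here is a hypothetical stand-in (the namespace, field names and licence), not the actual LocI schema:

```python
# A minimal sketch of a FAIR-style data object record.
# The namespace, local IDs and field names are invented for illustration.

def make_record(local_id, title, keywords, licence):
    """Build a self-describing record with a globally unique identifier."""
    base = "https://example.org/dataset/"   # hypothetical persistent namespace
    return {
        "id": base + local_id,              # Findable: unique, resolvable URI
        "title": title,                     # self-describing metadata
        "keywords": keywords,               # Interoperable: agreed vocabulary terms
        "licence": licence,                 # Reusable: explicit licence upfront
        "format": "application/ld+json",    # Accessible: open, standard format
    }

record = make_record("surface-water-001", "Surface water observations",
                     ["water", "hydrology"], "CC-BY-4.0")
print(record["id"])  # https://example.org/dataset/surface-water-001
```

The point is that every object carries its own resolvable identifier plus enough metadata for a machine to decide whether and how it can reuse the data.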
It enables machine-to-machine communication, so data integration and search can be achieved quite quickly and at quite a large scale. It's also an extendable system by default, using semantic web capabilities, and because of that we can extend usage of this data beyond our traditional client base and use cases. And it enables consistent data mining of multiple spatial and non-spatial datasets without GIS software for opening the data.

The other challenge for us is innovative technologies, and the technologies we selected are linked data and discrete global grid system (DGGS) technologies. This is actually a globally unique project; no one has tried to integrate data at that scale between multiple agencies. So there was a lot of testing of the capabilities themselves. We need to build those capabilities based on our testing and implement new tools for semantic integration of data. We also try to reuse our existing tools and infrastructure where possible; for example, Research Vocabulary Australia was selected as a central place where we can all share our vocabularies. The system is implemented in the cloud, alongside private repositories, so how to communicate between those is a big question. And secure access and integration is a huge requirement, particularly from the ABS with information about personal circumstances, so we really need to be careful.

So what we're trying to do is build an extendable system which is ready for reuse by other people: a system which is highly scalable, which is secure, and which uses the cloud tools available to us. The system is flexible, so it can be deployed by different users and custodians from the same repeatable code; we use the GitHub code management system and an infrastructure-as-code approach for this. And it's an easy-to-administer system: the code is deployed as a workflow through Bitbucket pipelines, so basically the whole system can be picked up and deployed by our users.
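A toy illustration of the machine-to-machine querying that linked data enables, using a tiny in-memory triple store in place of a real RDF store and SPARQL endpoint. All URIs and predicates here are made up, not LocI's actual vocabulary:

```python
# Relationships are captured once as subject-predicate-object triples;
# any client can then traverse them without GIS software.
triples = [
    ("ex:parcel42", "ex:withinRegion", "ex:regionNSW"),
    ("ex:gauge7",   "ex:withinRegion", "ex:regionNSW"),
    ("ex:gauge7",   "ex:observes",     "ex:surfaceWater"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Which objects sit within region NSW? A pure pattern match over the index.
members = [s for s, p, o in query(predicate="ex:withinRegion",
                                  obj="ex:regionNSW")]
print(members)  # ['ex:parcel42', 'ex:gauge7']
```

In the real system the same pattern would be a SPARQL query against a web endpoint, which is what makes the integration scale across agencies.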
I just wanted to say a couple of words about the DGGS technology, because it's not necessarily well known. This technology allows us to integrate data regardless of its projection and data format. The technology is maths-based rather than GIS-based: it creates an indexing system of equal-area tessellations around the globe, and every cell at a larger scale can be subdivided multiple times. So it's quite easy to apply mathematical and statistical algorithms rather than our traditional GIS queries, and I'll talk about it a bit later with an example.

Our challenge number three is social architecture. What are we trying to do? We're trying to improve user experience and efficiency for human-related processes, and also institutional culture. Social architecture is quite a big issue because every organization is doing things in a different way; there is no centralized method for coordinating activities between multiple parties. So what we're trying to achieve is to build a system which is user-centric and also to improve overall governance for data. We need to be aware of security and privacy, and understand the ethics and the potentially constraining factors. We would like to improve data integration workflows between our partner organizations. We would like to build a system which helps us ensure data currency, rather than downloads where we don't know whether the data is current or not. We would like to support multiple and quite complex user requirements and use cases. And yes, as I said, support a user-centric approach. Basically, what we're trying to do here is move from this spaghetti way of managing data, where everyone downloads everyone else's data and we don't know how it's used, to a more streamlined approach where you bring data to Location Index, index it, and capture the relationships.
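The hierarchical indexing idea behind DGGS can be sketched with a planar quadtree. Real DGGS implementations tessellate the globe into equal-area cells; this toy version works on the unit square only, to show how cell IDs make spatial questions into string operations:

```python
# Toy DGGS-style index: each refinement level splits a cell into four
# children, so a cell ID is a digit string. This is a planar sketch of
# the idea; real DGGS cells are equal-area tessellations of the globe.

def cell_id(x, y, levels):
    """Index a point in the unit square by repeated 2x2 subdivision."""
    digits = []
    for _ in range(levels):
        x, y = x * 2, y * 2
        qx, qy = int(x), int(y)         # which quadrant at this level
        digits.append(str(qy * 2 + qx))
        x, y = x - qx, y - qy           # recentre into the child cell
    return "".join(digits)

# Two nearby points share a long cell-ID prefix, so "is A near B?" and
# "aggregate everything inside this cell" become prefix comparisons,
# not geometric intersections.
a = cell_id(0.20, 0.60, 6)
b = cell_id(0.21, 0.61, 6)
print(a, b)
```

This is why statistical aggregation over cells is cheap: data from any source, raster or vector, lands in the same addressable grid.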
Then multiple analysts can access the same index and apply it to different methods of usage. So how do we bring it all together? At the moment we're building a set of ontologies: one centralized ontology for the Location Index itself and individual ontologies for each of the datasets. We're also building individual registers and landing pages, so people can see what their particular objects look like and see minimum metadata about them. But it's also a system which supports multiple alternative representations, suitable for machine readability of the data. We are pre-calculating indexes of linked datasets, so we know which object is connected to which other object, and in what way. And that allows us to query across multiple datasets. This example brings together information from place names, from the digital elevation model, and from the DGGS index as well.

This is an example of our pilot for Digital Earth Australia (DEA). It's quite a large compilation of satellite imagery and also some pre-built products. However, the satellite imagery just gives you a range of values, and it needs to be calibrated and assigned different attributes which are suitable for machine or human understanding and further mining and analysis. It was previously very difficult for them to bring together attribution, in this example surface water attribution, with information from the satellite imagery. It would take months and months, and was impossible in some cases at the national scale. We were able to bring together the observations, or measurements, from the surface water dataset together with DEA in two days, not months; it literally took a couple of days. It also allowed us to integrate raster and vector data, and we didn't need to create complex schemas or download them into a GIS. It was done very quickly.
And because it's a repeatable process, it allows us to bring consistent answers to a range of questions. It's repeatable, and it allows us very easily to connect big data with little data. In this particular case it's been done through a common DGGS index: these two columns here. So basically what we did is assign DGGS attributes to the satellite imagery cells and also to the objects from the surface water dataset, and we were able to connect them together.

This is another example where we used the same methodology, again matching Digital Earth Australia satellite data with land parcels. In this case we are connecting DEA and cadastral information, and the question was which land parcels are in which irrigated areas. It was again quite a quick exercise for us. Now the DEA teams are quite interested in continuing this collaboration, and at the moment we are calculating this integration for the whole of Australia. These examples were done just as preliminary tests of the capability itself.

So, as I said, Location Index is a two-year project. The first year was mostly about learning capabilities and technologies: we created and tested new infrastructure, created the linked-data index, built capability for both linked data and DGGS, and implemented a number of APIs. This year we are trying to progress operational synchronization for linked-data infrastructures, which is quite challenging. We have already released the governance framework and the social architecture documents. We continue developing APIs and our tools, and we are building a demonstrator at the moment. We are also developing user guides for spatial analysis of data using both semantic web and DGGS capabilities, which will be available shortly. And we are extending the linked-data index and DGGS further. The initial set of data we were testing included statistical boundaries and G-NAF, the addressing dataset.
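The raster/vector join through a common DGGS index can be sketched like this. Both sides carry the same cell IDs, so the join is a plain key lookup instead of a geometric intersection; the cell IDs, values and feature names below are invented for illustration:

```python
# Satellite pixels and water features both indexed by DGGS cell ID
# (hypothetical IDs and values, standing in for the DEA pilot data).
dea_pixels = {                    # DGGS cell -> observed water index value
    "R7852": 0.91, "R7853": 0.12, "R7854": 0.88,
}
surface_water = {                 # vector feature -> DGGS cells it covers
    "Lake Hume": ["R7852", "R7854"],
    "Dry Gully": ["R7853"],
}

# Attribute each water feature with the mean satellite value over its
# cells: a dictionary join, no GIS intersection required.
attributed = {
    name: sum(dea_pixels[c] for c in cells) / len(cells)
    for name, cells in surface_water.items()
}
print(attributed)
```

Because both datasets were indexed once up front, the same join can be re-run consistently whenever either side updates, which is what makes the process repeatable at national scale.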
We also looked at the place names and Geofabric datasets, and we're extending and testing our surface water data for that purpose. So far it's proving to be quite successful.

So what are the benefits? We are unlocking data potential: we're enabling machine-to-machine data integration, analysis and mining, and we're removing the need for cross-agency data transfer. At the moment a lot of time is taken by this, so it's a huge improvement in efficiency, and it gets more current information to consumers. We are also accelerating innovation: all the developed protocols, tools, APIs, etc. will be available not just to the partners but to all interested parties, and their data assets will be available through the Internet of Things. So people we don't even know of can access our data very easily through the endpoints and services. We're aiming for seamless data integration across multiple datasets and systems without copying them across. And also maximizing collaboration: one of the big achievements from last year, and all partners indicated this, was collaboration. Improving relationships, because multiple sectors understand each other and each other's capabilities, and build new common use cases. Common workflows and governance are quite important aspects of this as well.

So that is the first presentation. Any questions so far, or shall we move to the second one? I've got a quick question: if we want to use this system, is there any training approach? Not yet. But is it on the roadmap? Probably, eventually, because for the moment we're still trying to develop the new tools, procedures, guidelines, et cetera. The guidelines will be released first; those would be used as part of training, but we would like to test them with our partner organizations first. Any other questions from anyone? I'll just jump in here. Saru is doing some of the development work on the data aspects.
If there's any interest in previewing some of that functionality and providing feedback, we'd be interested in engaging anyone who would like to do that. Thanks. Thanks, Jonathan. And is there anybody else internationally who's doing anything similar? No. No, no. That's fantastic. Sorry, go on. Yes. Hi, it's Robin Tottenham from the Department of Agriculture. I just wanted to ask if you can elaborate a little bit more: there was a point where you said that there wasn't a need for cross-agency data transfer. I was just wondering, how does that actually work? Okay. So what you do is build this common index, which allows you to record relationships between data in one agency and data in a different agency. And through APIs and data services, only the relevant subset of data gets connected and returned. So the data stays open at the agency, in that instance? Yes. There are a number of prerequisites for the data: each object in the dataset needs to be assigned a unique identifier, and the dataset needs to be open and available on the web. There could be some authentication and authorization processes applied as well; it depends on the data and the system. And then basically this index is used as a sort of orchestration mechanism for bringing data together. Okay. Thanks, Irina. I also just had another quick question. Do you think LocI will actually change the way we collect information in the long term? Part of this is about the ability to integrate different data types, and I'm just wondering, is this an infrastructure to support a whole different way that we collect different types of information? Or do you think this will actually change the way we collect information? I think it changes the way we use information, that's for sure. It's also how we manage information; probably not necessarily the collection side of it. For collection, probably my second presentation would be more suitable.
But definitely how we use it, making it more FAIR. But I think it will actually change some of that collection too, because you won't have to be as constrained from the start, saying "we must collect this data in these units." You might actually collect it in one area and be able to transfer it across, so we'll get to watch this space. Thank you. Can I ask Jonathan, do you want to add anything to this? No, I think it was a good overview of the LocI project. We're starting to push some products out in the next few months, so I expect more will be available for people to see and interact with in the near term. Yes, the Department of Agriculture is involved with Location Index, and we're working with them, and Jonathan's group in particular, on a particular use case. Hi, it's Kiran here. You mentioned that one of the key challenges was social architecture. Could you elaborate a little bit more on that? Okay. So the social architecture, as I said, covers a number of areas we're trying to look at. One in particular, well, I would say there are several focuses, but central governance is a big discussion question for us: how can we govern this data, considering it's a common architecture and there are many partners and many parts coming from different organizations? We need to ensure that all these connected parts work together, so we have a common understanding of how we update data, how we bring data together, how it's accessed, et cetera. So there are multiple questions for the governance group, and it needs to be agreed between multiple partners. At the moment we basically have a big gap in this space; this discussion is just starting. We probably need to talk to multiple parties, starting with the Australian National Data Commissioner and across all the partners, but basically it's a bit of a question mark at the moment. Then there are the other bits and pieces of social architecture.
When we had the review of the project, most partners said that Location Index helped them to improve the quality of their data, because there are certain requirements for data connection and for how we present data on the web, so the quality of the data was assessed and improved. It's bringing changes to the organizational culture itself: how they think about data, and how they manage the data. Yeah, so those things. And I think something that will be part of that conversation is, when data is aggregated, how do you then cite the derived product? There are multiple questions, and we've barely touched on this, but everyone is aware that it's an extremely important factor, equal to the data itself and the technologies. Thanks.

The second one, presentation number two. At the moment I am the chair of the ANZLIC ICSM Metadata Working Group. Metadata itself has multiple challenges. What we face at the moment is a change in technology: we all know about cloud computing and hybrid clouds, and we know there are certain requirements for machine availability of data, through advances in things like machine learning and artificial intelligence. There are also different expectations from people. At Geoscience Australia we see this in the questions we get from individuals: they want to find information that's useful to them online, very quickly, without calling or contacting someone, and make their own decision about whether or not they want to use that information. The standards environment itself is very complex: we have ISO, we have OGC, we have W3C, we have community-based standards. Understanding their connections, and the standards themselves, requires a lot of skill, and we don't really have a lot of people who are skilled in this area; sometimes the resources are not available in organizations to maintain those skills. And we also have the common constraints of budget, time and rules. In many cases metadata can help with resolving some of those.
So what is metadata for an organization, a business and its users? What we want to see is content-rich FAIR metadata. Then it can be used as a promotion and communication tool within organizations, but also for risk mitigation and resource management. It helps to promote your organization, because users can easily discover its outputs. It can improve efficiency: metadata can support self-service, so users understand what they're dealing with and make their own choices about it without taking resources from the organization itself. It reduces cost, because you can find information and then simply reuse and share it. It improves machine-to-machine discoverability and integration. And it also minimizes business risk and liability: when you have information about things like legal or security constraints and lineage, you're basically protecting yourself by declaring upfront what people can or cannot do with the information. And it reduces cost through improved resource management overall.

So what is the Metadata Working Group, and who are we? ANZLIC endorsed the establishment of this group, and ICSM actually established it in November 2017. The first meeting was held in June 2018, so it took a little while to get progressing. The idea was that we would coordinate the implementation of global metadata standards across Australia and New Zealand. At the moment it consists of government agencies at federal and jurisdictional levels, and research and academia organizations. The initial number of organizations involved was 15; now it's up to 37, and at the moment we have about 100 people on the mailing list. We also run the Technical Metadata Working Group, which is a subcommittee of the Metadata Working Group and currently meets fortnightly. It's quite an enthusiastic and persistent group of people, progressing a lot of different outcomes.
It was established as a forum for communication and engagement with spatial communities and interest groups. We have Australian representatives at ISO, OGC and W3C as members, which helps us integrate our development with those bodies and get feedback from them. We advise on best practices for metadata and associated tools, and develop and publish metadata best practices, relevant vocabularies, crosswalks, communication materials, et cetera. This is the website where you can access information about the Metadata Working Group. It also has information from previous meetings and the presentations given during those meetings, as well as the outputs of the group and its documents. You can contact me if you would like to join the group.

I've put in several examples from our projects. We developed a roadmap which helps us align our activities with the overall decisions of the working group. We are developing crosswalks between different organizational catalogues and different metadata standards: at the moment we have about five organizations with crosswalks between their catalogue entries, these organizations using ISO 19115-1. We also have mappings to RIF-CS, which is the ARDC standard, and also to DCAT and CCAM. That's basically an example of what we're trying to do. We also developed a best practices user guide: at the last meeting, which we held at the end of October, we endorsed the user guide, so it's going to be available on the website shortly. At the moment we are working on a number of other user guides, investigating requirements for things like metadata for imagery and metadata for digital data preservation. But the current focus is metadata for services: how to describe services, and what the important elements are.

Vocabularies. We are also trying to publish vocabularies, and we're using Research Vocabulary Australia for that purpose. Those vocabularies are then available in multiple formats, from XML to RDF.
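A toy crosswalk in the spirit of the mappings described above, translating a few ISO 19115-1-style element names into DCAT terms. The specific mappings shown are illustrative only; the working group's published crosswalks are the authoritative source:

```python
# Illustrative element-name crosswalk: ISO 19115-1-style keys -> DCAT terms.
# Real crosswalks also handle nesting, cardinality and vocabulary values.
CROSSWALK = {
    "title":          "dct:title",
    "abstract":       "dct:description",
    "keyword":        "dcat:keyword",
    "resourceFormat": "dct:format",
}

def to_dcat(iso_record):
    """Translate an ISO-style record into DCAT terms, keeping unmapped keys."""
    return {CROSSWALK.get(k, k): v for k, v in iso_record.items()}

rec = to_dcat({"title": "Surface Hydrology",
               "abstract": "National surface water lines"})
print(rec)
```

Keeping unmapped keys visible, rather than silently dropping them, is one common design choice in crosswalks: it makes gaps in the mapping easy to audit.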
So they can be used by multiple use cases and mechanisms. Also, on the usefulness of those vocabularies: people are always looking for vocabularies, for example how to classify roads, or how to classify water streams. This gives people at least centralized access to those vocabularies, and they're open for use. It can help with collaboration, in terms of building a common vocabulary system, and maybe form the basis for some private vocabulary systems. We also provide advice on metadata for different types of compliance, for example Digital Continuity 2020 or GDA2020: we're trying to explain to people what elements of metadata should be used to be compliant with those regulations.

Irina? Yes. Oh, sorry, just wondering how much is left in the presentation. That's it, that's the last slide. We're right on time. So everyone, be in touch with Irina and we can distribute questions and answers along the way. So we'll roll over. Looking at the time, Melanie, while we're getting the presentation up and running, we might actually defer item number four to next time and take it offline, just so we don't run over time. This one will be a little bit shorter.

So we've called it "Elvis has left the building", because it actually went to the cloud. And we're here today, rather than being at GA, because our whole infrastructure has gone down. So that's a good example of why we don't build things in-house anymore; we go to the cloud. Probably three, three and a half years ago, we used to have a system for the elevation data called the NEDF portal. It was terrible. It had buttons that never actually did anything. It was written on a Microsoft server, and every time Microsoft updated it, it would cost us $15,000 to get someone to change the code. So just before Christmas, three and a bit years ago, it finally fell over, and we said, no, we're not fixing it. So Shane Crossman and myself were in a meeting.
We said, we're just not going to do that. We're going to reinvent the wheel and see if we can do this better. So we had to set out how we wanted to do it better. One, we actually wanted it to work, which at the time was quite critical, because we had something that didn't. It had to be technology agnostic: we didn't want to be stuck on a Microsoft server anymore, we wanted to be able to move. So we went to the cloud. The cloud for us means that, yes, it works on AWS, but if tomorrow AWS put up their prices, we could pick it up and move it to Azure or any other cloud provider. We didn't want to be tied to an infrastructure or a software solution to a problem. It was big data, we're talking terabytes of data, so it had to be fast, it had to deliver really quickly, and it had to do it over the web. I had two staff at that point in time just handling queries and putting stuff on hard drives; we couldn't do that anymore. And it would have been really nice if it stopped the complaints we were getting, because we were getting a lot; that was one of our main focuses. And it had to clip and ship: we didn't want to do what we'd currently been doing, which was give someone a terabyte's worth of data when they only wanted two gigs of actual data from that dataset. Probably the biggest change is that we had to make it simple for the user. My experience of working in government is that we love making things simple for ourselves, so we have to do less work. We tried not to do that: thinking about how we actually get data to the user may mean more work for us, but we wanted to focus on that. It had to cost less. We had to keep a linkage to our users, because in the past we'd made things CC-BY and available, but if you can't tell who's using it, it's hard to validate why you're doing it. And the last one: we gave ourselves a month and a half to do it, with Christmas in the middle.
So we built this infrastructure, trying to be quick. We really wanted to model it on a normal business model: there's a shop, there's a warehouse, there's a factory, and you deliver. We don't really care about the shop that much. A lot of people say, "oh no, another portal"; if they're worrying about a portal, that's the wrong thing to be worried about. My analogy is that you can buy your Tim Tams from Coles or you can buy them from Woolies. Do you really care which shop you got them from? Not really. You care which warehouse and which factory they came out of. So that was our goal: if we have an area where we can keep the data, such as the warehouse, it doesn't matter how many portals access it, because everyone's accessing the right data. We used the factory, which is FME, to actually clip the data, send it to the warehouse, and deliver it to people. So we cut things down from about three days at its quickest to probably two to three minutes to get the same data. Currently, two and a half to three years later, we're sitting at nearly 7,000 orders a month, though I haven't updated these stats. We've got over 40 terabytes of data sitting there, well, 40 terabytes delivered each month. We've got 60-plus million dollars' worth of data sitting on a server somewhere in AWS, and it doesn't cost us a lot: about $70,000 a year for storage and delivery. Before, we had to pay $200,000 a year basically just for the disk that was running it, so it's going to take a long time before we even reach what the old system was costing us, and that system didn't even work. We've got a lot of discrete users, because we deliver everything through an email, and we've now probably got 70 terabytes' worth of data sitting in one location. So how should we use that? What should we do? That's the question we're starting to ask.
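The clip-and-ship step described above can be sketched as a simple tile-selection pass: rather than shipping a whole terabyte-scale dataset, pick only the tiles intersecting the user's area of interest. The tile names, extents and the 2 km grid below are made-up stand-ins for the real warehouse layout:

```python
# Minimal clip-and-ship sketch: select only the warehouse tiles that
# intersect the user's area of interest (AOI). Extents are hypothetical.

def intersects(a, b):
    """Axis-aligned bounding-box overlap test: boxes are (minx, miny, maxx, maxy)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

tiles = {                                # tile name -> extent in metres
    "tile_00_00": (0, 0, 2000, 2000),
    "tile_02_00": (2000, 0, 4000, 2000),
    "tile_00_02": (0, 2000, 2000, 4000),
}

def clip_order(aoi):
    """Return just the tiles a user's order actually needs."""
    return sorted(name for name, ext in tiles.items() if intersects(aoi, ext))

print(clip_order((1500, 500, 2500, 1500)))  # ['tile_00_00', 'tile_02_00']
```

In the real pipeline the selected tiles would then be clipped precisely to the AOI and staged for download, but the win is the same: the user gets two gigabytes, not a terabyte.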
But also, we thought we knew who our users were, and when we actually looked into it, we didn't. So this gets more into the users of Elvis; I'll do a quick demo of how to use it shortly. We actually surveyed them about a year and a bit ago: who's using it? I've got a lot of stats, but only two in this presentation. People are using it weekly and monthly, significantly, so it's part of how people work now. But this is probably the biggest finding: we identified where those orders were coming from, and our biggest user group is engineering. That's significant, because an engineer wants the data to make a design or a decision on, but they want the data as it was on the day they made it. So they want to download it. They don't want a service, because a service doesn't allow them to go to court and argue that this is the data they made the decision on. So we actually haven't had a great deal of people saying, "oh, it would be great if we had a service of the elevation data," because that's not serving the need they have. We've since worked this into the order process, where you put in which industry you think you come from. This is last month's: you can see engineers again. The fourth one, construction, is something that a year and a half ago we hardly had any of, but now it's starting to take off. So we're getting some business analytics about this and how useful it is. I'll go through this really quickly, because I want to get to the demo. So this is the data we've got. It doesn't look like a lot, but there is data covering the whole of the country. New South Wales has been very proactive about capturing and making all of this available, so it's quite significant. Victoria, not so much, but hopefully that's changing. Western Australia still runs a model of charging for data.
So they haven't got a lot of data on there, but we're starting to build the pressure, and the case that making data available is much more beneficial for your jurisdiction or your state. So I'll quickly give you an idea. This is going off and checking what data is available for the area I just selected; let me try to zoom in. I purposely chose the border. We've got some New South Wales Planning and Industry data; this is actually photogrammetry. If I hover over it, you can see the coverage of that data. New South Wales Spatial Services: I've got one-metre and two-metre products; you can see all the tiny little tiles, they do it in two-kilometre-square grids; point clouds; Geoscience Australia data. I must have missed the border. So it basically goes and finds all the data in the system that is available, and gives you the option of how you can access it. So if I choose one, it's quick: I'm interested in that one, I want to download it. Put your email in, tell it you're not a robot, because we're the only site that has actually been hit with an attack, just by someone repeatedly submitting tiny little orders. The good thing is it didn't actually break it, it just slowed it down. So that's it. Anyway, we'll do that: put your email in, and it'll send you an email to download it from Amazon. Thanks so much, Ben. So if anyone has any questions... I think people might be interested in the system architecture and use cases, so they can understand how it might overlap with what they're doing or planning to do. And I'd also be interested to hear, because this was pre-LocI, how did you handle consistency across the various providers, et cetera? Yeah, it's never been a technical problem; it's always been a relationship problem.
So it's about going out and actually talking to the states and territories about how they can be involved. They don't get charged for the delivery of their data; that's their value proposition, and it's to the point where they are saving millions of dollars because they don't have to build the infrastructure themselves. Yeah, we're over time. So, Melanie, just very quickly to reply about Location Index: if we have a presentation about place names, that probably connects to Location Index as well. That's really good. Yeah. Great, let's do that. Super. Okay, apologies to people; we didn't want to go over time and we are over time. Thanks so much to everyone for participating. We will defer extending our teams until the next meeting. Would anyone like to say anything before we disappear? So, Melanie, the next meeting date? Nothing's set in stone, so let's have a little chat about that and come up with one. That'd be good. Currently in the presentation list we've got things down for the 21st, which is next week, probably a bit too soon; it was the FSDF and the 2026 agenda. But we'll have to confirm that by email. Yeah, let's do that. And please, everyone, we'd really like to hear from you. We have some documents in place for upcoming presentation ideas, and for ideas or resources that you'd like to see come out of this, so please be in touch. And we'll be in touch about the upcoming presentations, et cetera. Thank you. Great. Thanks so much. Thank you. Thanks. And please contact us if you've got any requests, information, requirements. Bye. Great. Thank you, everyone. Bye.