Hello, everyone. Can you hear me? Can everyone hear me? So, the next session is about Geoscience Australia. Geoscience Australia was looking to push the boundaries and create meaningful digital change within the organization with this pilot project, which aimed to create a new standardized digital platform for their web properties. The presenter for this session is Stuart Ronan. He's the CTO at Salsa Digital. He's been involved in the Geoscience Australia project from a solution architecture perspective and has been working with the Geoscience team since its early beginnings. He's an advocate for open source, Jamstack, and the static web. Over to you, Stu, all the best. Great. Thanks, Sushi. Cool. So, I'll assume everyone can see the slides, and I'll hear a whisper in my ear if not. So, yep. Perfect intro. Thanks, Sushi. We are talking about an ambitious digital experience platform, specifically Geoscience Australia and the journey that they went on to create this entirely decoupled static web hosting platform. So, let's get into it. Hi, I'm Stuart Ronan, CTO at Salsa Digital. We were supposed to be joined by our co-presenter, Kristin, today. Unfortunately, she hurt her back earlier this week, so I've taken it off her plate. She may be watching in the background; if you're there, hi, Kristin. Hope you're feeling better soon. A lot of the work that I'm presenting is Kristin's; the slides are Kristin's. Really, I'm just a glorified cheerleader at this point, here to promote all of the great work that Kristin and the team did. Okay. So, what we're talking about today. First of all, who was involved? It was a very collaborative project; multiple organizations came together to get this one out the door. Why re-platform? So, what were the driving issues and challenges that the organization was facing? How it was put together: the tools and processes, how the Drupal back end was built, and how it all came together.
The wins and challenges, what the future of the platform might look like, which ties into some of the conversation that you may have seen in the keynote by Lee, and then into questions and the hallway track after that. Okay. So, the organizations that made the magic happen. There were a few, beginning with Today Design and their excellent user-centric research and design that built the design system that fed into this project; Geoscience Australia, the organization that actually embarked on this journey; Salsa Digital, the service provider delivering on the project; and GovCMS. So, we'll touch on who each of these organizations are in a bit more detail. Geoscience Australia, or GA as they're known, is Australia's preeminent public sector geoscience organization, the nation's trusted advisor on the geology and geography of Australia. They apply science and technology to describe and understand the earth for the benefit of Australia. So, they're doing some incredibly cool stuff with incredibly rich scientific data sets, and they have amazing technical products that sit on top of those data sets and allow them to be surfaced and better understood by data scientists and citizens. As I mentioned, Today Design were integral to the project, working with Geoscience Australia to better understand their needs and create a cohesive design system that met their current-day requirements but was also flexible enough to cover the requirements of all of the programs and bodies of work within the organization into the future. I can't say enough good things about the incredible work that Today Design did and are doing. And Salsa Digital. So, we're an open-source company made up of digital engineers from across the globe with a primary focus on government, primarily to assist governments to become more open, connected and consolidated. And Digital Earth Australia.
So, they are the first pilot site launched on the new platform and a program within Geoscience Australia, and they've got some awesome satellite imagery and incredibly rich scientific data that is surfaced through the new site on the platform. So, why did we do this? Let's go through the rationale and the goals for this new platform. GA needed an efficient, cost-effective, secure and resilient digital experience platform that allows users to discover, identify and consume Australian earth data. So, it was all about trying to create a cohesive set of standardized platforms, tools, content creation and presentation layers to assist in making very rich scientific data more discoverable and easier to understand. Some of the challenges that they were facing are probably familiar to many; large organizations obviously face similar challenges when it comes to fragmentation. Fragmentation across technologies is largely driven by the fact that you have many programs, many groups, many people who use different technologies, different CMSs and JavaScript frameworks, all the way through to processes and everything else that comes with that. So, they used Squiz, various versions of Drupal, various different JavaScript frameworks, mapping solutions and so on. And these solutions were hosted both internally and externally, so ongoing maintenance was just a mishmash of various applications in various different places. So, off the back of that, there were a few obvious goals: to reduce or remove duplication at both a technical and content level; to promote content sharing, so creation of content in central places and reuse of that across various web properties and even beyond the web; a great editorial user experience; and to meet and exceed compliance requirements. Easy setup was a big one.
So, specifically, to measure the time to create new sites or onboard new web properties in minutes rather than hours or days; to unify the tech solutions involved in the platform; to create a consistent user experience, not only from a public-interface perspective but from a backend content-authoring perspective; and to have accessibility built in from the outset. And beyond that, at the platform level, there are some goals, and there's some overlap here. So, behind that, we need to make a platform that's secure, robust, resilient and scalable to the organization's growing needs. And while we're talking about creating these standard models and this standard, consistent way of working, we don't want it to be rigid. We need interoperability, configurability and a modular design that allows the platform to continue to grow and evolve with the needs of the organization. It needs to be easy to maintain, so central maintenance that can benefit everyone. And again, all of this is built with accessibility in mind. So, a visual representation of what that means. It's all very obvious: before, fragmented; after, consolidated. We've got all of these individual sites, all of these bespoke user experiences, content models, CMS technologies and JavaScript frameworks, and we're moving these into a consolidated platform. Okay, so how did we do this? Well, yes, hint: it's more than Drupal. There's a lot that went into this. So, the design system. It started with the design system by Today Design. They worked on creating a really rich and modular design system at a granular, component, you know, atomic level, with a bigger-picture vision in mind that all of these components could be brought together to create cohesive web experiences. So, there's a very rich tapestry of 40-odd different design components that, when merged together, create a great web experience. So, we always had to keep that eye on the prize and the end goal of all of these components working well together.
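As a loose illustration of that composition idea (the component names and props here are invented for this sketch, not the actual design system), granular components merging into one page might look like:

```typescript
// Sketch: granular design components composing into a page.
// Component names and props are hypothetical, not the real design system.
type DesignComponent = (props: Record<string, string>) => string;

// Each component renders one small, self-contained piece of UI.
const registry: Record<string, DesignComponent> = {
  Hero: (p) => `<header><h1>${p.title}</h1></header>`,
  TextBlock: (p) => `<section><p>${p.body}</p></section>`,
  Card: (p) => `<article><h2>${p.heading}</h2></article>`,
};

// A page is just an ordered list of component references; merging them
// in order produces the cohesive experience.
function renderPage(
  parts: { component: string; props: Record<string, string> }[]
): string {
  return parts
    .map(({ component, props }) => registry[component](props))
    .join("\n");
}
```

Adding a new design component is then just another registry entry, which is what keeps a 40-odd component system manageable.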
Behind the scenes, the tech stack can be broken up into three buckets. On the back end, we're talking about Drupal and content stores. So, Drupal running on GovCMS, both the distribution and the GovCMS hosting platform, using JSON API to expose content. So, this is a decoupled platform. In the middleware, we've got GitHub Actions, which is our CI layer, responsible for the build and deploy process. So, we use Nuxt.js to compile a static artifact, the entire static website, and we push that to the front end, the static edge. The static edge is provided by QuantCDN, which is a content delivery network for static websites. We have an integration with Algolia for the search, and again, as an end user you're interfacing with that compiled output and Vue.js components. Okay, so we'll dig into some of these technologies in a bit more detail, starting with our friends, GovCMS. GovCMS has been around for seven years now, 347 sites, which is probably already out of date; they seem to be growing at a pretty rapid rate. And GovCMS were great in all of the conversations around what we were hoping to do with this project, because at the end of the day, we wanted to show that you can build decoupled platforms on GovCMS without the need to go too far beyond what the core distribution and the core offering provide. So, thanks to GovCMS for coming to the party. And on the Drupal architecture side of things, we'll start at the bottom right: 160 modules, most of which is the Drupal distribution. So, as I mentioned, we didn't need to add a whole lot to the distribution to build this solution, largely because the API-first initiative allowed Drupal, from D8 onwards, to start moving towards these decoupled architectures. So, yeah, it really is a lot of core and minimal contrib. On the paragraphs side of things, in the content schema, 21 paragraph types and, as I mentioned before, something in the order of 40-odd design components.
So, those map broadly to these paragraph types. What we did, though, is make the paragraph types more configurable. So, instead of having a very verbose, you know, massive list of paragraphs, we've got a slimmer set, and they can be configured and tailored to actually render and map to the different front-end components. The vocabularies, 13 of them, are largely split between back-end administrative content tagging, enabling content to become searchable through facets and filters, and some system vocabs, which we'll touch on. Roles: out-of-the-box GovCMS. Menus: per-site menus. Media types, text formats and editors: not used heavily, largely because of the decoupled nature. We're using very structured data and using JSON API to expose it; we don't want, you know, rich WYSIWYG HTML content, we want it to be as structured as possible. And one content type to rule them all. So, we'll talk about that content type a little bit. What we have is a multi-site Drupal installation where sites are managed through a sites vocabulary. So, you create sites in the vocab, you configure them with all of the things that a site needs to exist, such as what its menu is, what the header and footer configurations are, what the analytics plugin identifiers are, and that's all just a taxonomy term. To create a new site: create a new term, configure it, and away you go. On the content side of things, it's equally simple. There's a single content type with a page-type subcomponent which allows grouping of content. Because of the nature of the design system, you can basically embed those design components in any order, which allows for a very simple and intuitive way of just attaching paragraphs and putting them on a page, so we don't have a lot of complex content schemas in the CMS. At the end of the day, what we get out of Drupal is a pretty rich content back end. So, as I mentioned before, that simple site setup is key.
The fact that you can just create a new site by adding and configuring a taxonomy term means that you really can start creating new sites and new content very, very rapidly. There's an inheritance model with a site text fallback, which effectively means that sites that aren't configured with their own content can inherit from their parent. Component reusability is a cool feature which allows you to have a library of components. So, for instance, if you create one of these paragraph components, you can actually embed and reuse it throughout the site. There is a content cloning feature. Reference content, in this case, refers to the fact that content can have a different contextual view as a referenced piece of content. So, if you embed it in a card, it can have a more contextualized representation of that content. The customizable search stuff we'll touch on a little later, but it's a content-managed search solution. As I mentioned, we use Algolia on the front end to render the search, but you can actually create an embedded search component directly as content in the CMS. What that means is you can basically say: I want to inject a search widget here, I want to filter it by these page types, I want to display these facets based on these taxonomies, and there's zero development effort required. It's all just additional content. There is rich content tagging for search, which enables a lot of that functionality. And, as I mentioned earlier, content sharing between sites. So, when you create content, you can tie it to a primary site and share it with others. On the left, you can see an example of these page-type templates. So, while I said that you can create a single piece of content and put any component in any order, what GA are doing, and what the other web properties are doing, is creating standard page-type templates which can be reused as the base. So, for instance, they've got a product template here.
They use this as the basis for all of their products. So, it just comes with placeholder values that need replacing, to keep things consistent. Along a similar line, there are master component templates. So, I mentioned before that we have this component reusability function; the same applies to those paragraphs. At the paragraph level, you have a library of these templates that you can use as a kind of bootstrap starting point. And then we get into some of the front-end Vue components. So, going back to that design system, they're all converted to Vue components with their own sets of configurability and properties that end up representing the final design. And these all live in a Storybook. So, we use Storybook as the living design-system documentation, and all of those design components are captured in there. They're all maintained in the same code base. So, when you update or add any functionality, create new components or update the properties or configuration associated with them, the Storybook is just rebuilt, and it's very easy for developers to come in and see exactly how to interface with those at a granular level. So, we'll look at what the process actually looks like from back end to front end. As mentioned, Drupal content editing is quite standard. You just create nodes, attach routes, use paragraphs, put them in menus. So, the content-authoring experience is very, very traditional, standard Drupal. Site configuration is a taxonomy list, configurable through that taxonomy term. The middleware is responsible for pulling the content via JSON API. It maps that content and does some data transformation to convert it to those Vue.js design components, and then it compiles a fully static site and deploys it to QuantCDN. And on the front end, all users just interface with that static content, the static artifact, through the Quant content delivery network.
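To make that "pulling content via JSON API" step a bit more concrete, here's a minimal sketch of how a consumer might assemble a JSON:API request URL. The base URL, bundle and field names are assumptions for illustration, not the project's actual code:

```typescript
// Sketch: assembling a Drupal JSON:API collection URL with includes and
// filters. The bundle and field names here are hypothetical.
function buildJsonApiUrl(
  base: string,
  bundle: string,
  opts: { include?: string[]; filters?: Record<string, string> } = {}
): string {
  const parts: string[] = [];
  if (opts.include && opts.include.length) {
    // Pull referenced entities (e.g. attached paragraphs) in one response.
    parts.push(`include=${opts.include.join(",")}`);
  }
  for (const [field, value] of Object.entries(opts.filters ?? {})) {
    // Simple key=value filters, e.g. restricting to one site term.
    parts.push(`filter[${field}]=${encodeURIComponent(value)}`);
  }
  const query = parts.length ? `?${parts.join("&")}` : "";
  return `${base}/jsonapi/node/${bundle}${query}`;
}
```

A middleware like the one described here would page through such collections, then hand each node to the Vue mapping and static-compile steps.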
And the way it works is that when content is edited in the back end, it just compiles the changed artifacts and then pushes those. So, they're always up to date for end users and visitors to the site. A little bit more about Nuxt. So, Nuxt is a static site generator. It can be configured in varying ways, one of which is a complete static target, which will create HTML, JavaScript and CSS that you can just push to static hosting solutions. It can also be used as a server-side-rendering reverse proxy. It's a very configurable tool. We've heard a little bit about Druxt.js before. So, Druxt.js is an awesome project; if you're interested in decoupled Drupal in general, it may help to bootstrap you on a similar journey. It was assessed, and we didn't use it for this project simply because we had some requirements that we needed to tackle ourselves. So, that's Nuxt. Next, we're talking about why go static. A lot of this is probably pretty obvious, but by moving to a completely static public serve, it gives you a lot of options with your CMS in the back end. You can imagine that by having Drupal not serving any content, we don't really have an attack surface on our CMS. We can move it behind basic auth. We can take Drupal offline and put it behind private networks. Or we can even just turn it off when it's not in use, just take those servers away and not allow public access. Next is that it's faster. So, if you're hitting Drupal and rendering content, you're talking about Twig and databases and all of the various layers and subsystems that are required to generate and deliver that page. In this model, the static artifact is only rebuilt when content changes, so you're getting the fastest possible result. Static also costs less. So, off the back of that, if you aren't using your CMS as much, if you're scaling back your web presence, then you can see cost savings. And it obviously is leaner and lighter, which reduces the energy requirements and reduces environmental impact.
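For reference, the fully static target in Nuxt 2 boils down to a small amount of configuration. This is a generic sketch of that setup, not the project's actual nuxt.config:

```typescript
// Sketch: a minimal Nuxt 2 configuration for full static generation.
// With this, `nuxt generate` emits plain HTML/CSS/JS that any static
// host or CDN can serve; no Node server is needed in production.
// (In a real project this object would be the nuxt.config default export.)
const nuxtConfig = {
  // "static" prerenders every route at build time; the alternative
  // "server" target runs Nuxt as a server-side-rendering reverse proxy.
  target: "static",
  generate: {
    fallback: "404.html", // a static page served for unknown routes
    crawler: true,        // follow in-page links to discover routes
  },
};
```

The same codebase can switch between static and server-rendered modes by changing `target`, which is part of what makes Nuxt so configurable.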
So, a little bit more about Quant. Quant is a content delivery network, global in nature with 60-odd regions, engineered for the static and Jamstack paradigm. It has full support for static site generators like Nuxt, like we're seeing today. But it also has an integrated Drupal module. So, if you're interested in taking a Drupal site and making it static, you can enable that module; it will create a static version and push it directly to Quant. And any content change in Drupal will automatically create a new static version and push it into Quant; it tracks content changes as they happen. We've got that integrated search solution through a partnership with Algolia. Scheduled releases are cool; it's basically a to-the-second content release globally. So, we push the content ahead of time, and then you can schedule it for 11:59:59, and everyone around the world will see that content change at that exact moment in time. Nobody else is doing anything similar. Alongside that, there are traditional CDN controls. So, if you want to use it as a traditional CDN, you can. There's also edge content editing. So, in the case where Drupal isn't available, for instance in a DR failure scenario, you can still edit content through WYSIWYG or code editors. All of the content and media items are tracked with infinite revisions, which is interesting when talking about government and the Archives Act. You can basically go back and see point-in-time snapshots of what any page or content looked like at any point in the past. And with that comes the ability to roll back to historic versions if you need to. All right. So, we'll look at the search solution a little bit more. Search is baked into every site, which means that when you create a new site, it will just work. You get site search.
And the reason this is possible is because we are using standard and consistent content schemas, which means that you get a really rich and nice search solution out of the box without any development effort. Similarly, we talked about search widgets and components managed as content; in examples like this, you basically say: I want to embed a search widget, I'm going to restrict it to these content types or page types, I'm going to enable these facets, and away you go. You've got search. Cool. So, we'll talk about some of the wins and the challenges that we faced along the way. The first win was the DEA site launch, which was awesome. So, as far as pilot sites go, it's a pretty awesome end result, so I encourage you to go and check it out at the end of this talk. Some of those post-launch results: we've talked about the build process and build times. You can see that it's hovering around three minutes thirty for that DEA site. We've got plenty of additional ideas and proofs of concept to bring this down to a more consistent one-minute kind of build time. And the way we can do that is through iterative builds, so we're just building the small iteration. And Quant already has full support for comparing MD5s of the built artifacts against what's actually being served at the static edge, so it only pushes content that has actually changed. Yeah, the SSL report and the security scores are looking pretty good, as is the Lighthouse score. And because this is all coming from one standard platform, this is all centrally managed by the platform administrators, and it means that adding a new site will immediately give you good results without having to spend additional time redeveloping or rebuilding anything; it's built in from the outset. Some of the challenges. So, yeah, it was ambitious. This particular setup, no one had done. No one had built something decoupled using GovCMS and all the various tooling that we used.
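The iterative-build idea mentioned above, comparing MD5s so that only changed artifacts are pushed to the static edge, can be sketched roughly like this. The manifest shape is an assumption for illustration, not Quant's actual API:

```typescript
import { createHash } from "crypto";

// Sketch: push only changed build artifacts to the static edge by
// comparing MD5s against what is already served. The manifest shape
// (path -> MD5) is hypothetical, not Quant's actual API.
function md5(content: string): string {
  return createHash("md5").update(content).digest("hex");
}

function changedFiles(
  build: Record<string, string>,  // path -> newly built file content
  served: Record<string, string>  // path -> MD5 currently on the edge
): string[] {
  return Object.entries(build)
    .filter(([path, content]) => served[path] !== md5(content))
    .map(([path]) => path);
}
```

With a diff like this, a ten-thousand-page rebuild where one node changed only ships a handful of files, which is how build-and-deploy time comes down towards that one-minute target.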
So, there was definitely some teething pain along the way, but we had the right team with the right skills, and a fairly tight timeline and budget; it's a pilot project after all. In the back end, there was some component complexity. As I mentioned, we did try to create paragraphs that were configurable, which just added a little bit of extra complexity to those components. And naming things, funnily enough, was one of the harder things, largely because of the various people, the various teams, and all of the systems involved; the design system and the front end all had to have similar names to limit the confusion. In the middleware, we built that from scratch; as I mentioned, we didn't use Druxt. We had to handle a whole bunch of edge cases: you know, what happens if you enter weird character encoding in the CMS? What does the build pipeline do with that? So, definitely a lot of hardening went into that process. And build times, as I mentioned; that's always a challenge when you start to talk about static site generation, especially when you're talking about thousands and tens of thousands of pages. In the front end, there were quite a few parties involved. We had Geoscience Australia, but then the programs within Geoscience, and making sure that all of the components and the actual final representation would meet their needs meant a few different parties to communicate with. There were many components, and the Storybook evolved along the way, so standard project challenges that you'd face. In terms of wins, though, it was a pretty great outcome. These are the same goals that we had in those earlier slides; the ones with asterisks are the ones that we are working on improving in future dev cycles. The same goes for the project ones. So, on the whole, we met all of the original goals. And a nice quote from Alan, a digital experience manager: Geoscience Australia now has an innovative and sustainable solution for our future platform needs.
So, that was a really nice quote from Alan. Okay, so the future of the platform: what can we do to make it even more awesome? Step one, obviously, launch more sites. So, EFTF, Exploring for the Future, and Community Safety are already underway and will be launching soon, with hopefully more to follow. The platform itself, we are hoping to extend beyond the current web properties to these other data-rich mapping and scientific applications that can benefit from standardization, consistency, and content and data sharing through an API-first lens. And then beyond GA; at the end of the day, this is a very generic solution. What we built isn't really tailored entirely to GA. What I would like to see, off the back of some of the conversations that have come out of the keynote, is starter kits and sharing what we are doing here. Because at the end of the day, the more people who are going on this journey and sharing in the same sets of tooling, like Druxt.js, the better off we'll all be. So, if there's any interest in what GA are doing, I'm certain that they'll be willing to share what they've got, and the more people that contribute and share what we're doing, the better, ultimately. Looking forward to D10. So, Olivero: when we talk about what we're actually using Drupal for, it's purely the content authoring experience, so anything that improves the content admin side is obviously going to improve this platform considerably. The same goes for CKEditor and the extensions to the API-first initiative. As Lee mentioned, what's happening with exposing menu items and some of the other improvements via JSON API, we'll be able to piggyback on those improvements for sure. And PHP 8, obviously, is faster than 7, which helps with a snappier admin interface. I'm also interested to see where we go with decoupled components and front ends that directly consume some of that content. Cool. So, we're into questions. The first question is about open source: is there anywhere we can see the code?
So, yeah, as I said, I would love to see a starter kit or something that we can base off this, so that we can have broader community input. After this, we'll have a chat with the fine folks at Geoscience Australia and see what we can do to progress that, but at the moment the answer is no. We'd love to see it, though. Webforms: yeah, that's a good one. So, QuantCDN has built-in support for webforms. What it can do is effectively capture any of the POST submission data and store it in a separate secure enclave. So, it's kind of built in. There aren't actually any webforms on that DEA site, it wasn't in their requirements, but the platform can handle it when the time comes. Maintenance and support plans for the middleware: yeah, so, obviously, this being a bespoke and custom solution, at the moment there is an ongoing support contract in place to manage and maintain it and keep regular patching cycles and all the rest going. One of the benefits, though, of having this decoupled solution is that Drupal actually doesn't have any public access. So, when it comes to patching cycles and managing highly critical patches, the exposure risk isn't there on the CMS side. The middleware is all contained within the CI pipeline as well. So, yeah, there's not really a lot of exposure risk, but there is an ongoing maintenance burden associated with that. Does the new system present geoscience data? No, but, as we talked about with the future of the platform, surfacing some of that scientific data is absolutely on the roadmap and on the agenda, to try and create more standard and consistent ways to bring that scientific data to the websites. So, yeah, definitely on the roadmap. How does staging work when a new site is created? Can it live behind server restrictions? Yes. So, at the moment, they've got different environments.
So, at the moment, they can create content in a separate environment, which is hooked up to a different static serve that has basic auth in front of it. So, it's not ideal. Again, on the roadmap are preview environments. So, when they're creating draft content, it will push that draft content through to a preview environment. What that needs is some of those initiatives in Drupal core, largely revision capabilities through JSON API. So, we look forward to tackling that one when the time comes. Did the project fit into the original time frame and budget? Yes, mostly. So, it did go over a little bit, but, you know, by a week or so; it wasn't too drastic. Having said that, we did have to cut some of the original features down to try and meet the budget. It was quite an ambitious project. I'm supposed to press the tick on that one. Cool. I think that might be it for questions. All right. Well, thanks, everyone, for joining. If you have any follow-up questions or anything else you'd like to discuss, then feel free to reach out. I'll just flip to the contact slide. So, I'm on the Drupal Slack or Twitter or various other means. So, if you want to reach out, then feel free, feel free at any point.