Thanks all for coming. What we're going to do is run through what Lagoon is, what Lagoon's been doing, and what we'll be doing next. Then we'll cover a little bit of how amazee.io has changed over the three years we've been in the Australian market, and we'll finish with a quick Q&A. So hopefully some of this is new information, and hopefully some of it can start interesting conversations about what's happening and where it's going next.

So obviously Lagoon is the developer-friendly, cloud native application delivery platform for Kubernetes. It's in use extensively in Australia. We've got a number of large platforms; GovCMS and SDP in Victoria are the biggest at the moment, but we'll talk about a couple of other ones later on. It's built specifically for the web and specifically for web developers. There are a number of application delivery platforms out there that help people do more generic tasks; we've super focused on people building websites. What Lagoon does, basically, is provide all the tooling you need to take a site from local to production without having to worry about configuring Kubernetes, setting up your pod scaling, doing all of your log configuration, setting up backups, and all the other myriad stuff that comes with running a site at scale in production. The goal is that Lagoon knows how to make your site run in the most optimal way: we'll take your code out of your Git repo, we'll put it on the web, and everything is going to be hunky dory and work perfectly, first time, every time.

Lagoon was open sourced by amazee.io back in August 2017, so Lagoon is almost as old as amazee.io. Over that time there have been a number of changes and improvements. We're firm believers in continuous development, so every time we do engagements with customers and clients we're continually building new features, and we do a lot of sponsored development for customers who have particularly large site installs.

We refer to Lagoon as a platform as a service. In the basic stack, Lagoon sits above Kubernetes, which sits above your cloud infrastructure, so AWS or Google Cloud; we don't really mind, as long as it's a conformant Kubernetes distribution (or OpenShift 3, and we'll talk about that a tiny bit). Lagoon sits on top of Kubernetes, the customer's data and application sits on top of Lagoon, and Lagoon abstracts away that Kubernetes layer and the underlying hosting layer from the application developers.

It's compatible with any of the open source frameworks. So Drupal 7, 8 and 9, and we could probably run Drupal 6 if someone really wanted to. I think you might find it hard to get a response out of us if you emailed asking for Drupal 6 hosting, but we'd probably give it a red hot go. The same goes for all the other frameworks out there: PHP-based ones, Python, Ruby, Node.js, and a whole host of other things. Very much deploy anything, anywhere.

We've worked quite hard on Lagoon and got it to a really good point. As we say, it's running GovCMS, it's running SDP. About this time last year, we started development on Lagoon 2. The idea was that we were very constrained, because Lagoon only ran on OpenShift, and only on OpenShift 3. And in a number of the conversations we had with clients, customers and agencies, people said: it's brilliant, but I don't want OpenShift. I want Amazon EKS, or I want Google Cloud, or I want to run it on my Raspberry Pi Kubernetes server.
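To give a flavor of what that abstraction looks like in practice: a Lagoon project mostly just carries a small config file alongside its docker-compose file, instead of you writing Kubernetes manifests, autoscaling rules or backup config by hand. Here's a minimal sketch; the file names are real Lagoon conventions, but the domain and cronjob values are illustrative, so treat this as indicative rather than copy-paste:

```yaml
# .lagoon.yml - minimal illustrative sketch of a Lagoon project config.
# Lagoon reads this (plus your docker-compose.yml) and generates the
# Kubernetes resources, routes and scheduled jobs for you.
docker-compose-yaml: docker-compose.yml

environments:
  main:
    routes:
      - nginx:
          - www.example.com        # hypothetical production domain
    cronjobs:
      - name: drush cron
        schedule: "*/15 * * * *"   # every 15 minutes
        command: drush cron
        service: cli
```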
So we sat down in early 2020, the pre-pandemic times, if we remember those at all, and worked out what we had to do to bring Lagoon to Kubernetes. About this time last year, we released the first element of that: being able to run client sites on Kubernetes. So we've got a number of Kubernetes clusters around the world, and the projects that Sean and Tom were talking about were migrating a number of sites from our old OpenShift clusters across to our new Kubernetes clusters.

Once we'd achieved that, obviously, the itch was well and truly there to scratch. The next step was to be able to run Lagoon itself on Kubernetes. And as we were doing that, it was a really good opportunity to look at our architecture again. We'd previously deployed Lagoon as a bit of a monolithic application (well, a monolithic cloud native containerized application, but it deployed all in one blob into a region, and then we could make it deploy out to other areas). We really wanted to ask: what does Lagoon do? How does it work? How does it fit together? Is there a better way of doing this? Can we logically compartmentalize the bits of Lagoon that handle the smarts from the bits of Lagoon that handle the deploys?

Going fully native on Kubernetes has meant embracing Helm charts, operators, controllers, the works. There's been some incredible work there. A call-out to Ben and Scott particularly, because Helmifying Lagoon and getting it easily deployable has been a huge job, and the work on the communication between the two elements of Lagoon, the central core and the remote, was a really impressive piece of proper microservice-style architecture. It's really improved our security posture and opened up a number of opportunities.

We've spent a lot of time and effort fixing things like logging. We had a logging solution, and it was getting huge; we were running Elasticsearch in OpenShift clusters, and Elasticsearch pods in clusters are not pretty to observe. So we did a lot of experimentation, and we basically moved to an Open Distro based logging solution that runs in its own cluster. Then we set up Fluent Bit forwarders to collect logs from customer clusters and send them off to the different logging infrastructures. At the same time, one of the recurring requests from our customers was to have their logs in their own system, be it Sumo Logic, Datadog or Splunk, and we've been able to set up the logging to divert out to those as well. And just having the build of a site happen closer to where the site's being deployed makes it far more resilient: you're not talking across the internet all the way back to Switzerland, or all the way back to the US, at every step of a build.

We're working on a number of new features as well. One common theme you'll see with us is that we do a lot of work with agencies that have lots of sites, and we think we've got a really good offering there. Lagoon is a really good tool for people who manage multiple sites, and a lot of the way we develop features is targeted towards users who run multi-site portfolios. Whether it's 10, 20, 50, 100 or 500 sites, we think we've got a really good solution, so we try to develop towards that.

Token architecture slide.
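As a rough illustration of that forwarding pattern (not our actual config; the host name and paths here are hypothetical), a Fluent Bit forwarder in a customer cluster might look something like this, using Fluent Bit's YAML config format:

```yaml
# Illustrative Fluent Bit forwarder: tails container logs in the
# customer cluster and forwards them to a central logging cluster.
# A second output could instead point at a customer's own Sumo Logic,
# Datadog or Splunk endpoint.
service:
  flush: 5
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log
      tag: kube.*
  outputs:
    - name: forward
      match: "kube.*"
      host: logging.example.com   # hypothetical central cluster endpoint
      port: 24224
```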
Simplified: a developer pushes code into their Git repo. Lagoon sees that that's happened, and because Lagoon Core knows which remote is responsible for which project, it tells that Lagoon Remote: hey, you've got a job. The remote then starts the whole Kubernetes build process, and that all happens in the remote Kubernetes cluster. It goes ahead, does the job, and occasionally sends a message back to Lagoon Core to say, hey, it's still going, still happening. When that build finally finishes, Lagoon Core notifies the developer: hey, your site's up and available on the web, go check it out. So we're really trying to separate out those two elements. Lagoon Core is the architect of the build and Lagoon Remote is the builder: Core tells the builder what to build, the builder goes ahead and builds it, and it comes back to the architect when it's done.

So yeah, obviously it's more cloud native, using Helm charts, controllers and operators, and it's a much better way of doing it than the previous ways we had. In a normal amazee.io engagement, amazee.io will manage the Lagoon Core. We've then got the option to host customer sites on a shared cloud instance (we've got a number of those around the world), and we'll manage those Kubernetes remotes for you. Or you can have a remote cluster in your own AWS or Azure account, again connected back to our cloud, to our core. There are a whole lot more options now, and we can do this because we've taken those architectural steps to decouple the two parts more than we had before.

A number of our customers like to manage their own infrastructure, and that's fine. Because of the way we've built Lagoon 2, it doesn't require the secret passing that Lagoon 1 did; it relies on RabbitMQ messages, so our Lagoon Core doesn't need super-privileged access to your Lagoon Remote. It just makes it a little bit safer that way. And if you've got a conformant Kubernetes cloud that meets all the requirements, then yeah, technically we could probably run on-prem. It would be painful, and we'd have to do quite a lot of investigation to work it out, but technically that's possible.

One of the other big things about this is that we can have multiple remotes running on a core, as in the olden days, but we can also have multiple cores on a single remote. That allows us to do better testing: our core can be deploying sites into the US or Australia, and we can also have a test infrastructure that deploys projects to the same remotes, albeit namespaced separately. That really means we can do genuine end-to-end testing on real infrastructure, and that is going to make our lives an awful lot easier when it comes to rapid, iterative development.

These remotes are getting more and more powerful every week. They can be deployed with their own logging, and we've written a really neat database-as-a-service operator, so they can connect up to AWS RDS, Azure SQL or Google Cloud SQL and provision databases in those remote clusters. The remote cluster effectively becomes pretty self-sufficient; all that happens is it's told what to do by the core. This means your Drupal sites, your Solr, your Redis, all those elements, all run in the cluster that's geographically closest to your customers.
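Under the hood, that remote side is controller-driven: core publishes a build request over RabbitMQ, and a controller in the remote cluster picks it up and runs it as a Kubernetes-native resource it can watch and report on. Purely as an illustrative sketch of the shape of that idea (the group, version and field names here are assumptions; the real definitions live in Lagoon's remote controller):

```yaml
# Hypothetical sketch only: a build request as the remote controller
# might materialize it inside the customer cluster. The controller runs
# the build and streams status messages back to Lagoon Core via RabbitMQ.
apiVersion: crd.lagoon.sh/v1beta1   # assumed group/version
kind: LagoonBuild
metadata:
  name: lagoon-build-7f3k9
  namespace: my-project-main        # one namespace per project environment
spec:
  project:
    name: my-project
  branch:
    name: main
```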
Some of the management happens out of the central core, but having the two split a little bit more is a real opportunity for us to improve our security posture and a lot of the performance of the way builds happen. To make this work we've changed a fair bit, though Lagoon itself hasn't really changed that much; most of the changes we've enacted at this stage have been under the hood, in the way it operates, to give us that freedom.

Obviously, Lagoon is now installable via Helm charts, if you're brave and slightly foolhardy and want to give it a go. We'll be working extensively on documentation in the coming months on how to build that and get it up and running, and we're already working with two or three people who are building and deploying their own Lagoons, their own Lagoon cores and their own Lagoon remotes, to make sure their processes are smooth and nice and neat and tidy.

The base images that people build their PHP and Python sites off have been split out from the main Lagoon repository, so that we can do more frequent updates on them, keep PHP versions in check, and keep iterating and getting new developments into those images more quickly.

We're also building a fairly formidable set of examples. One of our main criticisms was that we were very Drupal-focused, and I know this audience will say, well, is that a problem? It's not a problem, because Drupal's awesome, but we're getting requests from people who want to host WordPress, Laravel, Node, CKAN, all these things. So we're building more and more examples: template repositories where we can really say, this is how we would recommend building an application that works on Lagoon. The idea is that people then take those, extend them and customize them to their needs, but they're really optimized for running on our infrastructure. We then use those examples for testing too: when we build our base images, the examples are used to test them, and the examples themselves are built and tested against our base images, so we're keeping it all quite tight and keeping that development quite close together. A sketch of what those templates look like follows below.

For those that follow us closely, we've been working on integrating with Lando, because if you're a Drupal provider and you don't have Lando support, are you really a Drupal provider these days? It seems to be properly ubiquitous now as the local dev solution, so we've been working really hard on that. One of the things we're really proud of is the level of collaboration there. I joked earlier about developing, but I actually built some of the integration between Lando and Lagoon myself, and I think it's given me a really good understanding of how the integration works. That's going to help bring Lagoon to more people's local dev, and it's going to really reduce the switching cost, which is something we've had a little bit of criticism about in the past.

In the last few months we've also really been moving towards being design-driven, but also research-driven. For those that know me from previous lives, I was the tech lead at GovCMS, so I've gone through the last year or so at amazee.io with my voice-of-the-customer hat on a little bit: well, when I was a platform owner, I wanted to see this, so we'll do that.
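To make that template-repository idea a bit more concrete: the examples pair the application code with a docker-compose file whose Lagoon labels tell the platform what each service is, built on our published base images. A rough sketch (the dockerfile paths and image tag are illustrative):

```yaml
# docker-compose.yml excerpt, illustrative of how the example templates
# wire services to Lagoon: the lagoon.type label tells Lagoon what kind
# of Kubernetes service to generate for each container.
services:
  cli:
    build:
      dockerfile: lagoon/cli.dockerfile     # hypothetical path
    labels:
      lagoon.type: cli-persistent
  nginx:
    build:
      dockerfile: lagoon/nginx.dockerfile   # hypothetical path
    labels:
      lagoon.type: nginx-php-persistent
  mariadb:
    image: uselagoon/mariadb-10.5-drupal:latest   # tag is an assumption
    labels:
      lagoon.type: mariadb
```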
There's only so long that'll last, or that people will put up with me, so we've decided to go and really start doing some deep and extensive research into how best to serve the next generation of customers and the next generation of users. We'll be putting a lot of calls out to our communities over the next few months for people to participate in studies, give feedback, and tell us openly what you think. We know that most of our communities are very keen to come forward and tell us exactly what they think, so that should be exciting, but you'll see calls out from us. We really want to foster that spirit of collaboration and get our users involved in building the tools they use, because that's the only way you can truly represent the way your customers want your product to work.

What are we working on next? Well, all sorts of stuff. Keep watching this space; we're doing some really cool stuff and we've got a few really good features in the pipeline. We've got to get better at communicating what we're working on, and that's something we'll be doing over the next few weeks: really giving people an idea of what's up next and what the next things are that we're looking at doing. Over the last few months most of our focus has been on building Lagoon 2 to a point where it's basically feature-equivalent with Lagoon 1, but as we start to speed off and build some really cool new stuff, keep an eye out for what we're doing.

You'll also see a bit more of Lagoon as a standalone open source project over the next few months. Not that we want to distance it from amazee.io, but we really want to emphasize that Lagoon is an open source tool, and now that we have a number of partners who are building and deploying it themselves, there has to be a little bit of logical separation between Lagoon, the product, and amazee.io, the company. We've set up some new channels; uselagoon is the standard handle, and the documentation, Twitter and so on run under that. There'll be a little bit more to come. We're still amazee.io, I should still make that clear, but it's about giving Lagoon a bit more personality and a bit more opportunity to differentiate between what amazee.io's installation of Lagoon looks like and what a vanilla Lagoon install looks like. We'll also be restarting our community outreach, office-hours-type stuff, in the coming weeks, and I'm more than happy to talk to people.

At this point, I can probably hand over to Tom. Do you want to share your screen, or do you want me to click for you? I'm happy to share, if you want. Cool, I'll stop. Toby, can you all see that? Sure. Cool. All right, yeah, off we go.

So the next part of this talk, we'll actually just talk you through amazee.io. amazee.io is basically the company behind Lagoon; it's what allows us to work on and contribute to a lot of this open source tooling. First of all, it's a fully remote company. It was originally started in Zurich, Switzerland, and like Toby said, the actual hosting product was open sourced in August of 2017. Basically, our philosophy is that we want everything to be as open source as possible; whatever we can open up, we will.

So, a little bit about us. At amazee.io, we provide our hosting via the Lagoon platform. It really gives you that automation, a very quick and easy-to-use system for application delivery. And it's super flexible.
You can really go to town and use whatever technologies and configurations you need, which makes it really powerful for developers. Fully open: we love open source, and contributions coming back in from the community as well. We take security very seriously, particularly with some of our more important customers; we simply have to be secure and have good processes all over the place. So, yeah, our main mission really is about hosting anything, anywhere in the world, and this is what Lagoon and Kubernetes essentially allow us to do. So, through to Sean.

Yeah, just keep clicking. So this is a rough map of where we are in the world. Maybe half an hour before this meetup I had a few more pins, because the old map was a little bit out of date. This is still probably not 100% either, but at least it gives you a rough idea: we're all over the place. And that really helps us keep that kind of global presence, so if there's an issue with your site at two o'clock in the morning, the person addressing it is going to be in business hours, which is quite handy. In Australia and New Zealand we're in Auckland, Wellington, Paraparaumu, Perth, Melbourne and Canberra. I think that's it, but that's the contingent in the region.

And the partner program. This is obviously not a very well fleshed-out slide, but it's just to let you know that we are going to be making something more official soon; this has been worked on for the last few months. Our head of partnerships, Brian, will be keen to reach out to our existing partners and formalize those relationships. The idea being there'll be certain tiers of partnership that get you access to discounts and things like that. So, watch this space, and if this interests you, we can put you in touch with Brian.

And this next one actually just came up today, from one of the sites we look after, a moderately busy site. This is a New Relic graph, for those not familiar with it, which shows you the requests coming in and how quickly the site is responding. I put this graph up to show you how the platform scales, and this is not anything you had to do manually; the platform just took care of it. If you go one more click, Tom, you see there's an increase in throughput there: it went from 100-and-something to 1,140 requests per minute, which is a fairly significant swing for a web server. The next thing you notice, at the top there, is that the web transaction time actually stayed fairly flat during this, so the end users never actually knew the site was getting smashed at that particular point in time. And if you go to the next slide, this is New Relic as well, a breakdown by pod, or container. You can actually see that as the load spike hits, additional containers are spun up to help manage that load. You see a few more colorful lines show up, then eventually they start to drift off, and at the end you're back to three pods, which is the base for this particular site.

So I guess this is just an example of how you can scale within the cluster while paying exactly no more dollars than you're already paying. It just takes care of itself. Sites are free to flex within the bounds of the existing cluster, and the cluster itself can also flex as the sites flex.
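That pod-level behavior is standard Kubernetes horizontal pod autoscaling, which the platform configures so you don't have to. Purely as an illustration of the mechanism (these numbers are examples, not our actual defaults), it amounts to something like this:

```yaml
# Illustrative HorizontalPodAutoscaler: keeps a baseline of three nginx
# pods and adds more while average CPU climbs, then scales back down as
# the spike passes. Values are examples, not Lagoon's real defaults.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 3      # the "base" of three pods for this site
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```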
So there are really two degrees of scaling that you just have at your disposal, neither of which requires a person to look at it or actually address it. Anyway, I thought that was a nice little example from today. And it's definitely helpful, especially with a lot of sites on a single cluster: things can just spike up and down, and there's no reason to worry about increasing core infrastructure on a per-site basis. That's just pretty cool. So this is all mainly Kubernetes, but obviously driven by Lagoon.

On this side of the world, we've got some pretty notable clients in the region, from Solstice Digital to GovCMS. We're also hosting the SDP platform for the State of Victoria. District CMS, as we heard earlier, is a new project from the Doghouse team, so we're also working with Doghouse, Victoria University and Communica. Some really, really cool clients that we're working with locally, and obviously others we work with all around the world.

And just a quick overview of our hosting offerings, because it's not just a single offering that we provide. Lagoon basically sits across everything we do; that's the common piece, and every site we host runs via Lagoon. But we do have two main options for customers. On the one side, we've got the dedicated cluster, which basically means you have your own cluster, either provisioned by us or into your own account with a cloud provider, as we talked about. We manage it all completely: we run the upgrades and the monitoring, 24/7, but it's all inside your own infrastructure. The other side is our cloud, which is effectively us managing the same kind of clusters, but with your sites hosted in our shared clusters. Those are distributed at the moment across the UK, Finland, Switzerland, Germany, the US and Australia, and that's growing as we grow. So those are the two options.

On top of that, we provide the extra services around hosting and consulting, so helping out with development, builds and customizations. We have a full-service help desk, which runs 24/5 around the world; because we've got teams in Australia, Europe and the US, we basically follow the sun, so at any time of day there's usually someone online to help you.

And one great thing about Lagoon and what we provide is that we don't lock you into pretty much anything. That flexibility really extends through to how you want to develop. We can provide recommendations, but you may have your own preferences, so we really open that up to the customer, let them decide what they want to choose, and let them pick and choose a lot of the tooling. And if it's not on this list, I'm pretty sure it wouldn't be too hard to add support for anything else as well.

We also really pride ourselves on the support processes we provide to customers. A lot of it is chat-based, so we're able to inject a chat channel into your team, which really makes us just like an extension of your team, able to provide help when you need it. We use a system called Halp (HALP), which is really great: you basically throw a comment into a Slack channel, tag it with an emoji, and that escalates it as a ticket to our board, and then we can jump in and help. We've got people ready to go whenever you need them.
We also have the ticketing system, if that's your preference, as well as an emergency phone number you can always call if any of those systems are down and you need to reach us immediately. And we use Zoom; being a remote company, we're in and out of Zoom all day, so often we'll have catch-ups with customers as well, and sometimes that's easier for explaining problems when providing support.

And I guess something people may not be aware of is that we're partnered with a CDN provider now, and we're in the process of moving all sites behind it, just so that amazee.io gets more flexibility when it comes to, for instance, moving your site between clusters while keeping that as seamless as possible. The CDN in general is also something we use to increase performance for the sites that require it, and we also have a managed WAF offering that we can plug into this. Russian end users? Yeah, watch out for those.

And the neat thing about the CDN and WAF is that they're fully managed. What I mean by that is we don't just set you up and then kick you down the road; we actually look after it with you. If you've got a problem, you raise the same Slack message with the same emoji, the same SLA applies, and you get that same single throat to choke. We support you end to end, top to bottom, no matter what. And as a result, we can inject best practices, and these will evolve over time without you having to raise issues or concerns. A good example is serve-stale; there's a little sketch of this just below. If the origin is ever down, say you're doing a deployment in the middle of the day, or you've put the site in maintenance mode, and you don't want to serve a 503 to end users, well, the CDN will just take care of that for you and present the last known good 200. We really want you to deploy when you want to deploy, how you want to deploy, and having that CDN gives us a few more tools in the toolbox to deal with that.

Collaboration with customers: we're always keen to chat with our customers and work out how we can better suit their needs. We've got technical account managers, such as myself, who work with the larger customers with more exacting requirements on how things are supposed to work, but who also see themselves as helping to instill best practices, keeping sites running as optimally as possible, ensuring scaling is configured, and asking: can we help make this cluster smaller, or save money some other way? That's always kind of front of mind.

Lagoon contributions: Toby's already mentioned the GitHub repository. It's all open, and there are 400-odd issues in there already, but if you've got a new issue and you can't find an existing one, then by all means, we welcome contributions in any form. And as Toby also mentioned, we do support paid development of features. So if you need something to be written, let us know. Sometimes we'll co-fund it by offering discounted rates; sometimes other customers want it too, so it's like a pool of money that can be shared around, and you may not necessarily need to front the entire cost. Or you can just wait for it to be slotted into the roadmap and hope it gets done. Maybe ask Toby really nicely. I take Uber Eats vouchers. There you go.
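Coming back to serve-stale for a second: under the hood this is the standard HTTP stale-if-error behavior from RFC 5861. Purely as an illustration (the numbers are made up, and our managed CDN configures this for you), an origin response like this tells a cache it may keep serving the last known good copy for up to a day if the origin starts erroring:

```
HTTP/1.1 200 OK
Cache-Control: public, max-age=300, stale-if-error=86400
```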
And, yeah, we're always looking to make integrations with third-party systems easier where we can. For instance, that New Relic integration I just showed you is basically a couple of environment variables, and that's it. You don't actually need to know what version of the New Relic agent you need to install, or the fact that you need to delay the startup of the containers a little bit, otherwise it goes a little bit haywire. All that stuff you don't need to know; we make it simple, we make it easy, and we hide it under the hood. (There's a little sketch of this at the very end.)

And then, on top of that, we also provide those professional services. So you can engage us for any sort of custom hosting solution, or even big projects you may need to do: migrations, anything, all the way up to and including some of the application layer now, because we've recently got some new team members, our developer experience engineers, who can help all the way up to that application layer if required.

And that wraps it up, I think. So, yeah, I'd be keen to talk if you're interested in any more information. We're always happy to give demos; there's a link on our website where you can request one, so we can take you through what Lagoon is and how it all works. We're always happy to have a chat in the amazee.io Rocket.Chat too; you can sign up, and there are channels in there where you can just ping us. We're always in there. And as Toby mentioned, there's going to be the Lagoon office hours, which will be announced shortly, probably via our standard social channels. And, yeah, you can reach us on Twitter at amazee.io.
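On that New Relic point, here's roughly what "a couple of environment variables" means. These two names are the New Relic agents' standard configuration variables; exactly where you set them in Lagoon (project variables, or your docker-compose file) is something to confirm in the Lagoon docs:

```yaml
# Illustrative: the two standard New Relic agent variables. With these
# present in the runtime environment, the agent can license itself and
# name the application; Lagoon handles the agent install and the
# startup sequencing for you.
environment:
  NEW_RELIC_LICENSE_KEY: "<your-license-key>"
  NEW_RELIC_APP_NAME: "my-site-production"   # hypothetical app name
```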