having an in-depth look at some of the issues that have happened this year, particularly around the bushfires at the start of the year and also around the COVID crisis and how that's impacted engagement and managing projects in that environment. So without further ado, I'll hand over to Ian and get started on a topic I think most people will have quite a bit of interest in.

Thanks Jeff for the introduction. Yeah, I think crisis project management is something that we've all probably learned a little bit about this year. For those that don't know me, I'm Ian Laslet. I'm a director in EY's digital and emerging technology practice, and I was formerly managing director of Adelphi Digital, where we did a lot of work in the Drupal space, and in the broader web projects space, with a huge range of organisations. Today I want to focus a little bit on some interesting case studies and anecdotes, and some advice around what to do and how to manage projects through a crisis.

As a starting point, at any given time we support a huge range of clients, and this is just a quick snapshot of some of the ones we've worked with this year. During the bushfire crisis we were supporting both the ACT government and the emergency services, and we're currently supporting Emergency Management Victoria. During coronavirus we were supporting Smartraveller; we built and managed that site. We also look after a whole range of Department of Home Affairs sites, and ACT Health, who we've been working with on their coronavirus sites and the general public health information over this period. And also things like TransLink, transport in Brisbane, and things like that. There's a huge depth of knowledge and experience that we've got from these kinds of clients and I thought I'd share some of that with you today.

For those that live in Canberra, these red and yellow emergency warnings might bring back some horrible memories from earlier in the year. During the fire crisis there were a couple of examples I wanted to talk through of what happened and how to manage during a crisis. The first red emergency warning you see there is for a fire that occurred on the 22nd of January this year over at Beard and Oaks Estate, which is just over near the airport here in Canberra. What was really interesting about this one was that the fire itself happened really fast, but it was also a day when AWS was having a huge amount of issues with their instances, which caused a lot of outages around the country for some very big and notable organisations. So there were some dramas around how you manage your infrastructure, hosting and support while an emergency situation is going on, and some lessons learned from that one. The second one I wanted to talk about was the Orroral Valley fire, one of the really, really big fires, which started about a week after that and burned for many, many weeks. Those two fires, and the context of how you manage during that period, are the background for the talk today.

There are a few themes I want to talk through. Preparation is obviously key: it's really important to understand how to prepare. Then there's what to do during a crisis: how to manage your technical teams, your users, stakeholder engagement, everything that's going on. And then, what happens when your crisis continues?
I mean, this year the coronavirus crisis has effectively gone on for the whole year. What does that mean for your teams? What does that mean for your users? And I'll sum up with some lessons learned arising out of all that as well.

So let's just focus on theme one for a second, which is around preparation, and I've got a few points on each particular area to focus on. I always start, and I think it's very important to start, with knowing your users. For all these emergency sites, and for any site or anything that you manage, you need to know your load and traffic profile. You need to know in detail what the performance and load of your site is expected to be. Then triple it. Some of these sites that normally get a few hundred users a day were in the tens or hundreds of thousands of users a day. On the day the travel restrictions were announced and Australians were advised not to go overseas, something like two million people visited the Smartraveller site from all around the world, far beyond the expectations of what the site had originally been scaled and designed for. Yet we were able to deal with those situations and scale to meet that demand.

One really interesting thing to note is that we do load testing and performance testing on all the sites we build, but something that isn't always immediately apparent is the user journeys that people will take during a crisis. If you're, for instance, running one of those fire sites, people don't want to go to the homepage. They'll just go and hit the fire warnings page and refresh it constantly. We actually expected that the whole population of Canberra would hit the emergency services site over that period. What we didn't expect was that every single one of them would go on and refresh that page every 10 seconds just to see if there was an update. So when you're doing your load and performance testing, you need to understand what's really going to happen in the crisis and plan for that, and it's far beyond what you expect.

The other thing to think about, in advance, is your support model. Are you going to fully staff during a crisis? Are you going to have people there in the middle of the night making sure everything's up, or not? Are you going to have people on call? In the technical space it's much easier to run a kind of on-call support centre, but that may not be sufficient during a crisis. That's one thing we've worked closely with some of the emergency services on: how do you scale up a team that, during the rest of the year, might have a couple of people running support during the week, when all of a sudden during the crisis you need 20, 30, 40 people potentially available to publish content, manage issues, get back to people, and all kinds of stuff. You need to have that planned and ready in advance.

The main aim from a user point of view is to reduce risk, so you want to get information out that reduces people's risk during the crisis. Get them information that helps them in their area. Internally, I do want to talk a little bit about the process and how you run and manage your people.
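To give a sense of what that crisis traffic profile can look like in a load test, here's a minimal sketch using Locust, assuming a hypothetical /alerts warnings page; the host, path and numbers are illustrative, not the real sites we ran.

```python
# crisis_load.py - minimal sketch of the crisis traffic profile described above:
# users skip the homepage and repeatedly refresh the warnings page.
from locust import HttpUser, task, constant

class CrisisRefresher(HttpUser):
    # Roughly the "refresh every 10 seconds" behaviour seen during the fires
    wait_time = constant(10)

    @task
    def poll_warnings_page(self):
        # Hypothetical warnings path; send no-cache so requests reach the edge
        self.client.get("/alerts", headers={"Cache-Control": "no-cache"})
```

You'd run this with something like "locust -f crisis_load.py --host https://example.gov.au --users 50000 --spawn-rate 500 --headless" to model a large share of the population polling the same page at once, rather than the usual spread of journeys across the whole site.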
You need to know how you're going to deal with things, communicate and publish during the crisis, and you need to have tested those processes and procedures. A lot of this can be alleviated by preparing in advance: if you're going to be publishing content, you can prepare pro formas, have your content ready to go, and all you do is populate it with the information that's current at that time. One of the big sticking points around all these emergency sites is who approves content and how it actually gets out there. There's almost a hesitancy in some places to publish and release information, but users want that information out there as quickly as possible. So you need to have decided in advance how you're going to manage and release that content and get things out there.

One really interesting thing as well: if you've designed your architecture in a high-availability way, how do you get content out and publish it as quickly as possible? If you've put a lot of caching in front of your site, if you've put a CDN in there, if you've got web application firewalls, if you've got caching in your database, if you've got everything through there, it may actually take minutes or longer to get content live. In an emergency situation that may not be appropriate, so you need to have thought that all through and really understood how you're going to publish and get content live as quickly as possible.

Leading on from that, you need to design your system and architecture to support that. In advance, test your disaster recovery, test your failover, make sure you've done all that. Back to the Beard fire and the initial stages of the AWS outage around that: we had multiple availability zones in AWS set up for the site we were running at the time. But when the whole provider starts going down, what do you do at that point? Do you have a disaster recovery plan in place for that? How do you set that up? Have you tested it in advance? One thing that became immediately apparent is that you can't rely on just one data centre provider for an emergency situation. You really need multiple data centres. And Drupal itself is not particularly great at dealing with that. It's not really set up in a way where you can have different data centres with different masters publishing content or anything like that. It's actually a really hard technical problem to work out how you do that. So if, say, your AWS instance is primary and then you've got some Azure and something else in there as well, directing traffic to those things is easy enough, but how you then switch where you're publishing content is a real challenge that Drupal hasn't yet solved overall.

The other thing to think about in your preparation is what to do with your social and video content. It's not just your website, although that was the primary source of truth for most of these things. People are publishing things constantly on Twitter, constantly on Facebook. You're getting a lot of feedback through those channels as well. You need to know where those channels are, how you're going to manage the content that comes through them, and you need to set up, test and monitor everything around them as well. So those are a lot of the things you can do in preparation for a crisis, and then you can go from there.
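As an illustration of getting content live quickly through those caching layers, here's a minimal sketch assuming the CDN happens to be CloudFront: an emergency publish step that invalidates the cached warning pages rather than waiting for TTLs to expire. The distribution ID and paths are placeholders, not the real configuration.

```python
# Hedged sketch: purge cached emergency pages from a CloudFront distribution
# immediately after publishing, so edits don't sit behind stale cache TTLs.
import time
import boto3

cloudfront = boto3.client("cloudfront")

def purge_warning_pages(distribution_id, paths):
    """Invalidate the given paths and return the invalidation ID."""
    response = cloudfront.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            # CallerReference must be unique per invalidation request
            "CallerReference": "emergency-publish-%d" % int(time.time()),
        },
    )
    return response["Invalidation"]["Id"]

# Example (placeholder values): purge_warning_pages("E2EXAMPLE123", ["/alerts*", "/"])
```

The same idea applies to whatever other cache tiers sit in front of the site, for example a Varnish ban or Drupal cache-tag invalidation wired into the publish workflow, so an emergency update doesn't take minutes to reach users.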
So what actually happens during a crisis? This was less than 24 hours after the Orroral Valley fire first broke out, and significant areas were under emergency warning. There was a huge, almost panicked feeling in the Canberra community at the time, because the fire was effectively about 10 kilometres away, down at the bottom of Tuggeranong, and very, very close to homes. But what does that actually mean from a technical support and project management point of view during that period? What were we doing to help with the crisis, and what could you be doing to help if you were running in a similar situation?

Once again, it goes back to the users. At that point, what users really want is timely local information. They don't want to be sitting around waiting to find out what's happening around them. They also don't want information that's not relevant to them. You see a lot of situations where people are publishing national news articles that might be talking about fires all around the country. That kind of stuff becomes irrelevant to people at that point. What they really want is real local information about what's going on in their suburb. Are they in danger? Do they need to evacuate? Do they need to do something in their own area? And I know we haven't done a great job at this, and I don't think many organisations have yet: how do you personalise and localise your content in that way? How do you get it out to users at a real suburb-by-suburb level to tell them what they need to do in their situation? That's a challenge for the project management and digital people around here, and pushing content out at a localised level is something I think we'll really see from the emergency services and other support services in the next little while.

One thing that also happens a lot during a crisis is that you get a huge amount of feedback. We were getting feedback through all the social channels. You get lots and lots of people with strange technology issues: you know, "I'm running IE version 6 and my map won't load, does that mean I have to evacuate?" You need to be ready to deal with that and jump on it. The other thing that happens is that there's pressure to change things. In that situation where there's an issue with an old browser, do you try and push out a fix for it? My strong advice is no. You've got a site that might be serving hundreds of thousands of people at that time. What you don't want to be doing is releasing new code or trying to fix a particular bug that's affecting a tiny proportion of the users. But there is a lot of pressure: executives will be calling up saying, oh, there are people having this drama, can we fix it? The answer is, well, we can fix it, but we shouldn't be fixing it at that moment if it's not affecting the huge base of users out there.

During the crisis, gather the team and really own what's going on. I've got some points here about working in a shared space. The emergency services all run this way, where they gather people together in a physical location, and the ACT emergency services did exactly that. It actually worked great.
We got out there with our support guys, the infrastructure support guys, the publishing team, and everyone was co-located there during the fire crisis. It was actually faster to solve issues there in the room than it would have been over Teams or setting up a call or whatever else. So I know everyone loves virtual communication and we're all here today speaking virtually, but it was super important to be in the same room with each other and jump in there. If you do need people to dial in, the other tip I've got is to use a consistent calling mechanism. Everyone wants to use something different. Everyone says, oh, I'll just set it up on my Teams, or I'll run my Zoom meeting, and then you get dramas. Just set one up at the start of the crisis and say, this is what we're using, everyone just use that.

And don't forget the emotional strain. During a crisis, some people don't react well. Some people get very worried about their own property or their own families. Obviously it's important to let people, if they can, not have to deal with the crisis situation in the work sense as well. If they need to go and do something with their family, you need to be able to deal with that too.

Develop a second team. Make sure that you've got two groups of people working on things, and develop lots of collaborative ways for them to work that aren't impacted by the actual crisis that's going on. With the Beard fire, where the AWS instances were going down, we stood up a second team that went off and developed a microsite within a few hours that was ready to be launched, and we were in the process of switching the DNS over to that site, hosted with a separate provider, during that period (a rough sketch of that kind of DNS switch is included below). The really crucial thing is that that team wasn't dealing with the core issues. They were completely removed from that and had their own priorities and things to deal with. That's a great way to deal with crises and issues. If you've got lots of them, then set up lots of different teams and get lots of people out there doing that stuff.

Keep scaling your tech. It's really nice these days the way you can scale your infrastructure. At one point during the worst of the fire season, we'd gone from a couple of production instances up to 20 or 30 large instances in AWS, and we were able to scale and grow that capacity as required, as well as the databases and things like that. As I said before, multi-zone and multi-cloud providers are super important. Keep monitoring your technology over that whole period. We had guys sitting there on the infrastructure watching what was going on in the databases and things like that.

A crisis can last for a long, long time. The Orroral Valley fire lasted for weeks. This is just a few days later; it had gone back to an advice-level warning and stayed there for quite a long time, but there were still a lot of people interested in what was going on. What becomes even more important then is, once again, timely local content for users. If it's not impacting them, they're not really interested, but you need to keep serving that content out and maintain connectivity with those users. There were also a lot of questions around cross-border jurisdictional things at that time. The users didn't care about that. They just wanted to know the information that was current for them. Make sure you keep that in mind: if you're in government and dealing with cross-border things, the users do not care who they're talking to. They just want all the information relevant to them.
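Going back to the DNS cut-over to the standby microsite mentioned above, here's a minimal sketch of what that switch can look like, assuming the public DNS zone happens to sit in Route 53; the zone ID, record name and target address are placeholders rather than the real setup.

```python
# Hedged sketch: repoint the public DNS record at a standby microsite hosted
# with a separate provider, with a short TTL so the change takes effect quickly.
import boto3

route53 = boto3.client("route53")

def switch_to_microsite(zone_id, record_name, standby_ip):
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": "Emergency cut-over to standby microsite",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": 60,  # keep low so the cut-over propagates fast
                    "ResourceRecords": [{"Value": standby_ip}],
                },
            }],
        },
    )

# Example (placeholder values):
# switch_to_microsite("Z123EXAMPLE", "alerts.example.gov.au.", "203.0.113.10")
```

In practice you'd want the record TTL lowered well before the crisis, since a long TTL set beforehand can delay the switch no matter how quickly you change the record.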
Keep monitoring your team. Make sure that everyone's on board. You have to bring people in and rotate your team through a crisis. It's very difficult to have everyone working through days and days of a crisis situation; you want lots of people you can draw on to deal with it. Crisis fatigue becomes a real issue. I think we've seen that during coronavirus this year, with Victoria running something like 100 press conferences in a row. I can't imagine the stress those guys were under during that period to continually do that. It becomes a real issue over an extended crisis: how do you keep people focused on the job? I think rotation, motivation and incentives are great ways to do that. Make sure you're planning your major announcements and things as much as possible.

Keep your second team going. During a crisis over an extended period, you can have lots of good work being done that isn't impacted by the crisis itself. While the emergency services stuff was going on, we actually developed a whole different way of publishing and managing content through a dark site process, where instead of having a normal site you have a site focused purely on the crisis. We were developing that in parallel while the main crisis was going on, and testing and managing everything in parallel too.

Nearly there, but you can automate a lot of this stuff as well. You can automate your performance testing and your testing generally; there are a lot of tools out there, and some other really interesting talks today, about how you might manage that. One big gotcha: infrastructure costs can become huge. If you're in a crisis and you've scaled up your servers to meet a particular demand, and you're the budgeting person in the public service, your infrastructure costs can go from $5,000 to tens of thousands of dollars a day very, very quickly. That becomes a massive cost for everyone. Make sure you keep testing your DR processes, because things change over time. And last but not least, go back and revisit it all. We had so much feedback during some of these crises, and it actually took us a long time to go back, gather all the feedback and work through it. We've spent the last few months improving things for this fire season, so things should be much smoother this year. That's about it. Thanks, guys. I'm happy to take any questions.

Yeah, thanks for that, Ian. I'm just checking and there are no questions showing up, but if anyone wants to put one in there now, they can, or catch up with Ian a bit later throughout the day. I guess I was just going to say that was a good point about that sort of fatigue, because a lot of people here in Canberra had visions of 2003, when big fires hit the city, when this happened again. Obviously people working on the project whose own homes might have been threatened by these as well; that's a real challenge to balance. And the other thing about co-locating and how important that was, that was a pretty important point I took from that.

Yeah, definitely. Co-location was super helpful, I think, and I'd really recommend that as a way to get people together and work through these issues. Even though it's harder in COVID times, I think you can be COVID safe and get that done.

All right. Well, that's about all for now. I think we're on to the next session shortly. So thanks very much again, Ian, for that.
That was a really interesting and relevant talk, given all that's been going on. So thanks very much. Thanks, guys.