I'm the senior director of IT at VictorOps, and I'm here to talk about crisis communication: our journey with it, and some of the things we've learned over our past few years in business. It matters to the whole company because it's how we keep our brand promise, which is that we're going to make your life better, not worse, and that we're not going to let you down in a crisis. We promote open communication internally, and that's how we want it with our customers too.

Now, this hasn't always been a core competency for us. In our alpha and beta phases, we leaned on informal channels of communication, and they really didn't scale. As we came out of beta, we knew we needed to step up our game, so we made a conscious effort to do so. It's a work in progress, and it always will be. We learn something and adjust the process after almost every event, and the whole company takes an interest because everybody has something to contribute. Even when we feel like everything went great during an event, we take time to reinforce what went well.

Crisis communication is scary. You're talking publicly about a failure, often before you know what caused it. You could publish information that later turns out to be inaccurate and look stupid, or your competitors or detractors might use your candor against you. But the alternative is worse. Information has a way of becoming public, especially when customers have been impacted. If you're seen as dragging your feet or trying to cover something up, you're going to destroy your users' trust, and once you lose control of the message, you're not going to get it back. Your users will reward honesty and candor with loyalty. Every organization experiences failures, but not every organization has the courage to own its failures and communicate with users about them.
Customers care about that, and they prefer it to a company that covers something up. It takes buy-in from all parts of the business: if you do it without management and the business, the messaging is going to be poorly crafted, and if the technical team isn't involved, it's going to be inaccurate and your reputation could suffer.

You need to agree on when and how often alerts go out. Communicate as soon as possible, but you might need to contain something before you make your issue public. Update on a regular cadence so that users aren't left wondering, even if you don't have something new to report.

Agree on what gets communicated. Not every detail should be shared, but users need enough information to make intelligent decisions of their own, and you will like those decisions a lot better if you are participating in the dialogue.

Agree on who is managing the message. You need to speak with a single, consistent voice; if you've got different information going out on different channels, that's going to be confusing, and again, you're going to break down trust with your users.

Agree on where the message is being shared. It needs to be easy for your users to find when they're looking for status. We use StatusPage, and you all should too, but Twitter, email, and telephone are important tools for us as well.

At VictorOps, we have a cross-functional crisis communication team that gets involved in incidents in real time. Members of the team come from both the business and technical disciplines, and the team has an on-call rotation like all of our critical platform teams. Members of the team attend training and get certification: they learn how to identify types of crises and handle messaging for each kind, how to plan in advance for handling crises, and how to identify key stakeholders and communicate with them.
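The cadence and single-voice ideas above can be sketched in code. This is a minimal illustration, not VictorOps' actual tooling: `update_due` is a hypothetical helper that decides when the next public update is owed, and `build_incident_update` assembles the JSON body that StatusPage's REST incidents API expects (posting it with an API key is left as a comment), so the outgoing message is built in one place and stays in one consistent voice.

```python
from datetime import datetime, timedelta

# Assumed cadence agreed on with the business: post an update at least
# this often, even when there is nothing new to report.
UPDATE_CADENCE = timedelta(minutes=15)

def update_due(last_update: datetime, now: datetime,
               cadence: timedelta = UPDATE_CADENCE) -> bool:
    """True when enough time has passed that users expect another update."""
    return now - last_update >= cadence

def build_incident_update(name: str, status: str, body: str) -> dict:
    """Build a StatusPage-style incident payload.

    StatusPage incident updates use a fixed set of lifecycle statuses;
    validating here keeps every channel telling the same story.
    """
    allowed = {"investigating", "identified", "monitoring", "resolved"}
    if status not in allowed:
        raise ValueError(f"unknown incident status: {status!r}")
    return {"incident": {"name": name, "status": status, "body": body}}

# The payload would then be POSTed to
#   https://api.statuspage.io/v1/pages/{page_id}/incidents
# with your StatusPage API key in the Authorization header.
```

Keeping the status vocabulary and message construction in a single helper is one way to enforce the "single consistent voice" rule mechanically rather than by convention.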
We aim to manage crisis communication without disrupting critical work, and the way to do that is to have somebody act as a liaison between the first responders and the crisis communication team. Keep conversations about communication separate from the firefighting: use a different Slack channel, HipChat room, or whatever you're using, and have plans and runbooks ready to go for how to get the message out. The effort can actually help with troubleshooting by ensuring everyone has the same understanding of what's going on; taking a breath to answer a question can sometimes give everybody a little bit of perspective or help them spot a detail they missed. So don't be afraid to ask questions.

It's important to retrospect on the process. Focus on: was the communication accurate and timely, and did we make the situation better or worse for our users? Is there anything we can automate, better document, or stop doing? Another great tool is the cross-functional incident postmortem. Having business stakeholders there can help raise questions from an end-user perspective, and the better the crisis communication team understands the platform as a whole, the fewer questions they'll need to ask during the next incident.

We use a ChatOps toolset for managing both incidents and crisis communication: Slack, integrated with VictorOps, facilitates conversation, with different channels for technical troubleshooting and for messaging. We also integrate with StatusPage.io for quick updates to component status.

Information about your failures will become public; it's a question of who controls when and how that information gets out. It can be you, or it can be somebody else. Having a plan makes crisis communication less of a burden on the firefighters, and doing it right will build trust with your users. Thanks a lot.