All right, welcome, everybody. My name is John Laban. I'm CEO and co-founder of OpsLevel, and I'm going to be talking to you about leveling up your software standards with Backstage. Software standards, like production readiness standards, often come out of mistakes and failures, out of screwing up. That might be in the form of downtime or incidents, and really learning from those incidents: figuring out what you did wrong and making sure it doesn't happen again.

Before we start, a quick show of hands: how many of you work at a company that has an incident management process? When something breaks, you have processes and tools that you use to respond? OK, tons of you. Perfect. Awesome.

A little background about myself, because I do have some history in this area. Going back 15 or 20 years, which wasn't that long ago, I was carrying a pager at Amazon, going on call for the services and systems I was building. So I was a software developer, but I was both shipping and operating all of my software. Then I went on to join a tiny three-person company called PagerDuty. I was their first employee, I was there for many years, and we grew fast, helping other companies with their incident management processes. Now I'm the founder of OpsLevel, where we work to help companies prevent incidents before they occur.

So I was asking about your incident response process, and I know you folks have one. Here's what an incident response cycle typically looks like. It's a feedback loop. It starts when an incident is first detected. Next, a team assembles: they get together, they stop the bleeding, they fix the issue, and they do whatever remediation needs to happen. They might use a bunch of different tools for that, and there are lots of good incident management tools out there. But what I've often seen in industry is that alongside a tool like that, something like an IDP, Backstage for instance, is super useful in parallel. Backstage is usually thought of as helping you build and ship high-quality software, but it's also used very frequently when you're running an incident: when you're trying to find the operational dashboards, links, tech docs, and everything else that's needed to operate a service, it's a great place to find all of that.
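To make that concrete, here is a minimal sketch of how a responder or an incident tool might pull a component's operational links out of the Backstage catalog's REST API during an incident. The base URL, component name, and auth token are placeholder assumptions, not values from the talk.

```typescript
// Hypothetical sketch: look up a component's operational links (dashboards,
// runbooks, etc.) via the Backstage catalog REST API during an incident.
// Assumes Node 18+ (global fetch); URL, token, and names are placeholders.
const BACKSTAGE_URL = "https://backstage.internal.example.com";

interface EntityLink {
  url: string;
  title?: string;
}

async function fetchOperationalLinks(component: string): Promise<EntityLink[]> {
  const res = await fetch(
    `${BACKSTAGE_URL}/api/catalog/entities/by-name/component/default/${component}`,
    { headers: { Authorization: `Bearer ${process.env.BACKSTAGE_TOKEN}` } },
  );
  if (!res.ok) throw new Error(`Catalog lookup failed: ${res.status}`);
  const entity = await res.json();
  // metadata.links is where catalog-info.yaml surfaces a service's
  // dashboards, runbooks, and other operational URLs.
  return entity.metadata?.links ?? [];
}

// Example: print every dashboard/runbook link for the affected service.
fetchOperationalLinks("payments-service").then((links) => {
  for (const link of links) console.log(`${link.title ?? "link"}: ${link.url}`);
});
```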
Finally, after all of that, sometimes a few days later, the team responsible for the root cause of the incident gets together and runs a post mortem of some sort. That's where the learning happens. The team figures out what the root cause was. Maybe it was a buggy library dependency, or a misconfigured client causing a thundering herd, or who knows what. They identify the root cause and try to prevent it from ever happening again: they fix the issue in the service that broke, and they might fix it in the rest of the services that team owns.

But that's often where the learning ends. It happens, it gets applied, but it stops at the scope of that one team. How many people here have seen this happen? A team experiences an issue, they run their incident response process, they fix it, they learn from it, they get better. But that learning doesn't permeate everywhere, and a few weeks or months later, another team has an incident with the exact same root cause. Has anybody seen this happen at their company? It's extremely painful when it does.

What's needed is a larger feedback loop, one that covers the whole engineering organization. When a team has an incident and learns from it as part of their incident management process, another team, often a platform team, an SRE team, or a similar internal team, should take those learnings and establish whatever standards are applicable across the rest of the engineering organization. Over time they drive those standards across the organization, upgrading other services along the way, and as the standards get adopted they help prevent that same issue, and that same outage, from occurring again.

Backstage today provides really good visibility into your software: your architecture, your software catalog, tech docs, that sort of thing. It also has really great self-service capabilities that improve the developer experience, like scaffolding and self-service actions. But another major pillar, and one that I'd argue is really critical, is standards.

OpsLevel has created a Backstage plugin that lets you measure and drive the improvement of standards across your organization. Our standards plugin lets you codify those standards: you build a rubric across a number of dimensions, reliability, security, scalability, quality, lots of dimensions like that, and you build scorecards and run checks against all of your services and components in Backstage. Those checks can be used to validate configs or package versions, let's say in Git, or to validate things across any of your other tools (see the sketch below for the flavor of such a check). There's also lots of reporting, so you can see how you're doing across the dimensions in your rubric. And it doesn't stop at measuring: you can actually drive changes and improvements to your systems through upgrade campaigns.

So thank you, I hope that was a good use of your five minutes. If you want to learn more about our Backstage plugin, I have a QR code up here, and we also have a booth outside where we're happy to answer any questions you might have.
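As a rough illustration of the check mentioned above: a minimal sketch of the kind of automated standards check a scorecard might run, verifying that a service's package.json meets a minimum version floor for a dependency that previously caused an incident. This is a hypothetical example, not OpsLevel's actual plugin API; the repo URL, dependency name, and version floor are all made up.

```typescript
// Toy illustration of a standards check, not OpsLevel's actual API:
// verify a service's package.json declares at least a minimum major
// version of a dependency that previously caused an incident.
interface CheckResult {
  passed: boolean;
  message: string;
}

// Hypothetical raw URL; in practice this would come from the catalog entry.
const PACKAGE_JSON_URL =
  "https://raw.githubusercontent.com/example-org/payments-service/main/package.json";

async function checkMinimumDependencyVersion(
  dependency: string,
  minMajor: number,
): Promise<CheckResult> {
  const res = await fetch(PACKAGE_JSON_URL);
  if (!res.ok) return { passed: false, message: "package.json not found" };
  const pkg = await res.json();
  const spec: string | undefined = pkg.dependencies?.[dependency];
  if (!spec) return { passed: false, message: `${dependency} not declared` };
  // Crude parse: strip range prefixes like ^ and ~, then read the major version.
  const major = parseInt(spec.replace(/^[^\d]*/, ""), 10);
  return major >= minMajor
    ? { passed: true, message: `${dependency}@${spec} meets the v${minMajor} floor` }
    : { passed: false, message: `${dependency}@${spec} is below v${minMajor}` };
}

// Example: fail the reliability scorecard if the HTTP client is older than v3.
checkMinimumDependencyVersion("some-http-client", 3).then((r) =>
  console.log(r.passed ? "PASS" : "FAIL", "-", r.message),
);
```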