OK, thank you very much, and thanks for the kind introduction. I'm David Tuite, founder of Roadie. We do batteries-included Backstage with scorecards and zero maintenance.

The developer portal journey starts for me in 2015. I joined Workday as an engineer and wrote the first line of code on a developer portal that we used internally to enable deployment onto some internal platforms we had. I became a product manager there, and eventually that project became successful enough that I was interested in starting a company. I quit Workday in 2020, during the first lockdowns of the pandemic, and started one. Backstage happened to be open sourced around the same time, and it seemed like a good fit, so I put the two things together. We've so far helped more than 300 companies get to production on Backstage, and we're currently focused on making Backstage easier to adopt and enabling people to measure the maturity of the software their teams are building, with scorecards and various other features.

OK. Six months ago I wanted to learn about best practices for adopting Backstage, and I decided to go talk to some self-hosted adopters. There are obviously people who have been in this community for three years at this point; Backstage is three years old. There are definitely lessons out there that we can learn and then share with a wider audience, like this. So I did the simplest thing you'd expect: I went to the ADOPTERS.md file in the repository, started at the top, reached out to everybody, and tried to set up some calls. I got a pretty good response, sat down with people, and asked them some basic questions, like: how do you populate your catalog? Have you tried to use the Scaffolder? I compiled all of the answers together, and the result is the presentation you see here.

There are a couple of potential sources of error in this presentation that I want to flag.
One, I didn't necessarily ask the exact same questions in every interview. These weren't surveys; I was just trying to have conversations with people and let them talk a little. So I've had to fudge the data somewhat when presenting it in this format. Also, the people I was speaking to probably have some biases. If you've spent six months setting up the Backstage Scaffolder and trying to make it a success inside your company, and somebody comes along and asks whether it's a success, you're probably incentivized to say yes. Keep that in mind. And I'm probably biased in the way I interpret the answers; there was some interpretation on my part, and that might show up in the data too.

I also have some quotes. I've anonymized them and given some context on the size of the companies and the amount of experience they have, but I haven't put logos on them, because that would require going through the legal and marketing departments of a bunch of companies and I just didn't have time. So you'll have to believe me that they're real quotes. Some of the people who gave them are in the room and can verify later on.

In terms of demographics and who I talked to: most of the companies were fewer than 500 engineers in size. The smallest was 80, a good few were in the 100-to-250-engineer range, and there were some 500-engineer companies in there too. Two companies were 500 to 1,500 engineers, and five were 2,000 or larger, so pretty big enterprises. There was quite a bit of experience in the group; I was selecting for people who had a lot of experience with Backstage. I did 26 or 27 interviews in total and discarded some with people who were really only getting started.
The point of this was to learn how to adopt Backstage, not how to deploy it, so I tried to focus on people who had actually been working with it for quite a while. A lot of them had two years or more of experience; some of the earliest adopters were among the people I talked to.

OK, let's look at the quantitative stuff for a moment. I think most of the value in this exercise was in the qualitative data, but there's some quantitative data we can look at too and see what it might teach us.

First, source of catalog population. For this I asked a basic question: how do you populate your catalog? Twelve of the 20 companies were relying on catalog-info YAML files, which I'm sure we all know and love. Four companies happened to have an existing catalog that they were able to connect to, so they had a bit of an advantage getting started: they just connected Backstage to it and, voila, they had a beautiful catalog. Some companies invested in custom processors, and I'm going to bucket a lot of things together when I say "custom processors". What I really mean is connecting to some sort of source and scraping the catalog out of it, without relying on engineers to create the YAML files themselves. For example, in one case a company had built a custom processor for GitHub which would read the repositories, look for a CircleCI config file, and use that to figure out the CircleCI annotation, things like that.

The Scaffolder was broadly successful across the whole cohort. Out of 20 companies, nine hadn't tried to use it, for various reasons, but everybody who did try it reported success. Not a single company tried the Scaffolder and failed to get any value out of it.

TechDocs was more of a mixed bag. Six companies reported that they had success with TechDocs.
Six companies reported that they had tried to use TechDocs and failed to drive significant adoption. Six hadn't really tried yet, and two companies I forgot to ask about TechDocs, which is why they show up as unknown. We'll get into the details of why this might be a bit later; this is just the data up front.

OK, maturity metrics. What I mean by maturity metrics is measuring some aspects of the quality of the software that teams are developing. It could be security, reliability, operability, compliance; rolling all these things up, I call that maturity. Interestingly, almost half of the 20 companies I talked to had invested their own engineering time into building some sort of maturity-measurement features into Backstage. They dedicated actual engineering hours to this. There are a couple of different ways they do it, which I'll talk about later, but I think it's interesting that teams are actually investing in this area. Even among the half of the group who hadn't built anything, half were thinking about it and planning to invest in this area too.

Plugins were again a mixed bag, like TechDocs. Eight companies reported trying to drive adoption of plugins inside their company and failing. Six were too early to say. Four reported succeeding, and they talked about it in different ways. Half of them basically said: yes, we put in open source plugins, they seem to add value, and people go to the catalog to use them. Two had succeeded with plugins but felt they'd really only added most of the value with custom plugins, built for workflows that existed only inside their company.

All right. So that's the data, the pieces I was able to pick out. But what have I actually learned from the interviews? Let's talk about some of that.
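Before getting into the lessons, and since catalog-info YAML files come up throughout this data: here is a minimal sketch of what one looks like. The component name, description, and annotation values are illustrative, but the apiVersion, kind, and spec fields are the standard Backstage catalog descriptor format.

```yaml
# catalog-info.yaml, committed to the root of a repository.
# Backstage reads this file to register the component in the catalog.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service          # illustrative name
  description: Handles payment processing
  annotations:
    github.com/project-slug: example-org/payments-service  # illustrative
spec:
  type: service
  lifecycle: production
  owner: team-payments            # illustrative
```

Plugins and maturity checks key off the metadata in files like this, which is why catalog completeness and richness come up again and again below.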
First, templates can be a really easy and early win, and they're probably a good place to start with Backstage. There are a couple of reasons for this. One: you can just write a template, put it into Backstage, send somebody a link, they run it, and they get value straight away. There's no cold-start problem. This is in contrast to something like the catalog, which is a bit like Facebook: there are network effects, and the more stuff that's in the catalog, the more reason people feel they have to put stuff into it, so it takes time to build up. With templates, you can get started straight away.

Second, there's an easy-to-calculate ROI for Scaffolder templates. If you take a job that used to take two months and make it 15 minutes, multiply that by the number of times the job is run each year, and multiply by something close to what an engineer costs, you can report upward: "we saved X hundred thousand dollars with the Scaffolder." That's valuable when you're trying to adopt a new technology, because you want an early win, something to report as a success, so you can continue to invest more into other features of Backstage.

Not all companies had equal success with the Scaffolder. The ones who reported the most success did two things. One, they automated frequently run tasks. If you imagine a job that's only run once a year, the actual steps have probably changed by the time you get around to running it again, so there's not much value in automating it with the Scaffolder. Templates for things that happen very frequently tended to stay up to date more easily, and those worked better. And obviously more arduous tasks are also better to automate; they cost more every time somebody follows all the steps by hand, so they're a good starting point.
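To make the template discussion concrete, here is a sketch of what a Scaffolder template looks like. The organization name, skeleton path, and parameter are illustrative assumptions; the three actions used (fetch:template, publish:github, catalog:register) are standard Scaffolder built-ins.

```yaml
# Sketch of a Scaffolder template (Template kind, v1beta3 syntax).
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: node-service              # illustrative
  title: Create a Node.js service
spec:
  owner: platform-team            # illustrative
  type: service
  parameters:
    - title: Service details
      required: [name]
      properties:
        name:
          type: string
          description: Unique name for the new service
  steps:
    - id: fetch
      name: Fetch skeleton
      action: fetch:template
      input:
        url: ./skeleton           # illustrative skeleton location
        values:
          name: ${{ parameters.name }}
    - id: publish
      name: Publish to GitHub
      action: publish:github
      input:
        repoUrl: github.com?owner=example-org&repo=${{ parameters.name }}
    - id: register
      name: Register in the catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml
```

The ROI framing above applies directly: every run of a template like this replaces a manual repo-creation checklist, and the final step lands the new component in the catalog automatically.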
With the Scaffolder, I think it's also important to encourage contribution. Some companies who come to me for sales calls and things like that want to lock down the Scaffolder and reserve it for a few anointed people, architects or platform engineers, who are allowed to write the templates while everyone else just runs them. The companies who seemed to have the most success with Backstage allowed anybody to contribute. And they had built ways to show which Scaffolder templates were trusted, right in the UI. So you might come along and see "certified" labels on certain templates, because those had been blessed by the platform engineers responsible for the overall reliability and consistency of the templates. We've copied that idea into Roadie; we support it out of the box.

Here are some quotes about the Scaffolder. I won't read them all, but I've picked out some points and bolded them. "The Scaffolder is the killer feature of Backstage," I heard from somebody. "We've reduced some tasks from a month to a few minutes." We have customers, Carbi, who were up on this stage last year saying exactly the same thing: they took a two-month task and made it 15 minutes with the Scaffolder. The ROI on that is obvious. And Lunar have publicly said that they've locked down GitHub because they've been so successful with the Scaffolder; there's no other way to create microservices there.

The second big thing to think about is making catalog population easy. It's clear in the data that this is important. The catalog matters because you can't get full value from Backstage unless your catalog is correct, rich, and complete. If you think about measuring software maturity, for example: you can't measure the maturity of something that's not in Backstage.
And so this is a really important prerequisite. It also turns out to be really difficult. Of the 12 companies who tried to create a rich and complete catalog with catalog-info YAML files, only two seemed to be successful, and one of those is a special case I'll talk about in a second. The others seemed to reach about 25% catalog completeness after a significant amount of effort and didn't get any further.

There are a couple of things you can do to help. Twilio (whom I did actually interview, though I only saw this recently and thought it was interesting) report catalog completeness numbers inside Backstage in a very public way, which is helping them drive adoption. They have a feature called catalog health, and it reports numbers like what percentage of components have a Snyk annotation or a PagerDuty annotation, that kind of thing. It seems to be really helping them. They actually spoke about this at the Autodesk Developer Productivity Conference, which happened recently, and it's recorded on YouTube if you want to check it out.

I think that points to the second point here: if you want to have success with the Backstage catalog, you do need some kind of top-down support. You need a carrot and a stick. Backstage can be great, but you will need some sort of forcing function, whether that's public metrics tied to a company goal everybody's working towards, or even locking down deployment of new services to things that are in the catalog. Something like that will help drive success with catalog-info YAML files.

I mentioned a special case a second ago, and it really comes down to this. You might be thinking: well, it's easy to create these files, we'll just open an automated pull request against all of our repositories and put a catalog-info file in each one.
We'll explain the benefits of merging it, and that's how we'll drive completion of our catalog. Now, I've personally seen mixed results with this, and there are a couple of different obstacles. It depends on your culture. I've seen cases where companies tried this path and some factions inside the organization just opted out: we're not going to merge that, we don't want to be involved, we don't see what's in it for us. Other people will merge it without actually reading it, and you end up with a catalog where the ownership attached to components means nothing. So I think the best thing to do here is to start small and take your time.

You also need to consider day-two operations if you take this path. Say you have some success with the automated pull requests and end up with 800 components in the catalog, and then you decide you want to use the Snyk plugin. Now you need to annotate 800 components with their Snyk service ID. That's not easy either; you're going to be waiting a long time for those pull requests to get merged.

Monorepos are the special case where this actually seems to work. I did speak to a company who had a gigantic monorepo with 2,000 components in it, and they had basically scripted the creation of catalog-info YAML files for every directory in the monorepo. They got one person close to the code to review and merge the pull request, and bingo, they had a populated catalog. If that's your setup, this approach may work. Otherwise, you're going to have to start slow and take your time. Get a group of early adopters. Get them into the catalog. Make their metadata rich. Install a bunch of plugins that work really well. Use those to evangelize Backstage to the rest of your organization. And you just can't skip that evangelization step. You're going to have to meet with a lot of people.
You're going to have to put people on the ground and encourage people to take part.

The most successful way to populate a catalog is by integrating with an existing service catalog, which is maybe not that helpful if you don't have one, or with custom processors. Seven companies tried one of these methods, and all seven succeeded: a 100% success rate. There are a couple of different things people did. I mentioned earlier that one company built a custom processor for GitHub which scraped the repos. They had a fairly consistent setup which allowed this to work, and they were very upfront about that: they didn't think it would work for every company, but it worked for them. Lunar, again, built a Kubernetes operator which sits in a production cluster, scrapes software from there, and bootstraps the catalog from that. In that case, all engineers need to do is claim ownership of things. Casper has talked publicly about that before. It's an interesting case that I think deserves more attention.

So in summary, I think YAML is a long and hard road, and you need to be careful about setting off down it. Existing registries and automated discovery are key for catalog completeness, correctness, and richness, and I'd love to see more focus from the community in this area. We're doing some things in Roadie around decorating entities; Twilio actually independently did the same thing. You can annotate components in Roadie without editing the YAML: no pull requests needed, you just set annotations in the UI, and we see success with that. We're also looking at ways to bootstrap catalogs from integrations with Argo CD, Kubernetes, and other places where your production software is deployed.

OK, TechDocs. TechDocs worked in about half the companies that tried it.
The main difference between the companies where it worked and those where it didn't was whether a technical documentation tool that people were already happy with existed. Confluence doesn't count, but there are tools out there that people are genuinely happy to use, and where one of those is in place you will struggle, or at least the people I interviewed struggled, to get adoption for TechDocs. It also needs a bit of a culture shift in some organizations. TechDocs is a docs-like-code approach, and you do lose some things with it: it's a bit more difficult to edit documentation, you have to open a pull request, you need to know how to do that, you need to be semi-technical. There's a trade-off there, and some organizations just didn't want to make it, or couldn't make it happen, and so they struggled for that reason. Organizations who did succeed with TechDocs also seemed to invest in reducing the friction of using it. For example, they would create a Scaffolder template which let people easily add a docs directory and an MkDocs config file to their repositories. That's something we're looking at too.

API specs can also be quite successful. The best example of this is Zalando, who have publicly said that they have 43% daily active usage of Backstage, and one of the key use cases is engineers looking up API specs. Now, the thing is, your API specs are not going to magically appear in Backstage just because you've deployed it. You're going to have to invest in collecting those specs, putting them in Backstage, making them versioned, making them searchable. These things don't necessarily come out of the box. Zalando have built all of that: they have a pipeline for introspecting their services, producing API specs, and pushing them into Backstage, where they're versioned and searchable. We're doing similar things in Roadie.
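As a reference point for the docs-like-code setup just described: TechDocs builds documentation with MkDocs, so enabling it for a component comes down to a couple of small files. This is a minimal sketch; the site name and nav entries are illustrative.

```yaml
# mkdocs.yml at the repository root. TechDocs renders the markdown
# under docs/ using this config.
site_name: payments-service   # illustrative name
nav:
  - Home: index.md
plugins:
  - techdocs-core
```

The component's catalog-info.yaml then needs the annotation `backstage.io/techdocs-ref: dir:.` so TechDocs knows the documentation source lives alongside the code. A Scaffolder template that drops these pieces into a repository is exactly the kind of friction-reducing investment mentioned above.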
Plugins, then, as I mentioned, are quite important but a bit divisive in terms of the success people have had. The thing about plugins, and two people I interviewed said this explicitly, is that you need to be intentional about which plugins you use. Both of these companies got really excited at the start and threw in all the open source plugins they thought they might use. The plugins confused people, they were kind of broken, the catalog wasn't ready for them, and they ended up being taken back out again. So it's worth thinking first from a product standpoint: what are we actually trying to do? What use cases are we trying to solve for engineers? And then install or build plugins around that.

Custom plugins that you build yourself seem to add more value than the open source ones. The open source ones are, naturally, quite generic. They do basic functions for a lot of SaaS tools, like the GitHub Actions plugin, where you can see your CI and retrigger jobs. Those didn't seem to drive much usage, because engineers still prefer to go to GitHub Actions itself, which they're used to using every day and which has more functionality anyway. Plugins succeed best when they enable engineers to do something they can't do otherwise, or something with really high friction. There are probably a lot of these in your company, because you'll have legacy or custom workflows which need to be executed fairly regularly and carry a bit of friction. Those are the best things to put into Backstage.

And then, again, empower other teams. The companies with the most custom plugins seemed to have a model where they would consult with other teams, but they wouldn't build plugins for them.
They would empower those teams with Scaffolder templates to create new custom plugins more easily, and they would consult and embed with them to transfer the skills, but they weren't building plugins on those teams' behalf, because the teams have the domain expertise to do it best.

Lastly, I want to talk about software maturity metrics. Like I said, half the companies in the interviews I did had built their own software maturity metrics into Backstage. There are a couple of different ways they do it. Two companies had built a custom plugin which asks questions of the owner of the software directly inside Backstage. You fill out fields like: does your software hold customer data? What data store does it use? That kind of thing. Once a month or once a quarter some manager would fill it out, the answers would be recorded, and the software would be certified and represented inside Backstage in some way. That was one way to do it.

The second way is more automated. Some companies had built visualizations into Backstage where you could see aggregate metrics of the versions of dependencies being used, answering questions like: what percentage of our catalog is using a certain version of a certain library? That was popular; a couple of companies had done it. The interesting thing about this is that you can get a lot of value out of it without really high daily active usage of Backstage. It takes time to build up daily active usage and make Backstage a place people go regularly. In the meantime, you can be getting a lot of value on the compliance and governance side by investing in this area. It also shows a decent ROI, and the people who bless projects need to care about it. But like I said, it does require completeness, correctness, and richness in your catalog.
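To make the automated flavor of maturity measurement concrete, here is a small standalone sketch of a check computed over catalog entities. The entity shape mirrors Backstage's catalog model, but this is a self-contained illustration, not the real plugin API; the annotation key is the one used by the Backstage PagerDuty plugin, and the fixture data is made up.

```typescript
// Simplified sketch of one automated maturity check over catalog entities.
// Not the real @backstage/catalog-model types; just the fields we need.
type Entity = {
  metadata: { name: string; annotations?: Record<string, string> };
  spec?: { lifecycle?: string };
};

type CheckResult = { name: string; passRate: number; failing: string[] };

// Check: "production software must have a PagerDuty integration".
function pagerDutyCheck(entities: Entity[]): CheckResult {
  const prod = entities.filter((e) => e.spec?.lifecycle === 'production');
  const failing = prod
    .filter((e) => !e.metadata.annotations?.['pagerduty.com/integration-key'])
    .map((e) => e.metadata.name);
  const passRate =
    prod.length === 0
      ? 100
      : Math.round(((prod.length - failing.length) / prod.length) * 100);
  return { name: 'production-has-pagerduty', passRate, failing };
}

// Made-up fixture to show the shape of the output.
const sample: Entity[] = [
  {
    metadata: {
      name: 'billing',
      annotations: { 'pagerduty.com/integration-key': 'abc123' },
    },
    spec: { lifecycle: 'production' },
  },
  { metadata: { name: 'payments' }, spec: { lifecycle: 'production' } },
  { metadata: { name: 'prototype' }, spec: { lifecycle: 'experimental' } },
];

console.log(pagerDutyCheck(sample));
// -> { name: 'production-has-pagerduty', passRate: 50, failing: ['payments'] }
```

The passRate number is exactly the kind of metric a scorecard or a "catalog health" page surfaces, and the failing list only means something if the components are in the catalog with the right annotations, which is the richness point again.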
You can't apply automated checks to software that's not in the catalog, and you need high richness so that you can map PagerDuty to the Backstage catalog if you want to create a check like "production software must be in PagerDuty".

All right, in summary, with three minutes left. From this research, and from a lot of sales calls I've done, I see two paths that work with Backstage, or two paths that people want to take. They map to the create, manage, explore nomenclature we've seen before, where improving discoverability is explore and increasing software maturity is create and manage. You can do both if you want to invest in both, but you need to do some work regardless: you're not going to just deploy Backstage and get really high value from either of these paths on day one.

If you want to improve discoverability, the thing to do is automate catalog collection. It's always automated catalog collection. You don't have to start there, but you do have to automate it eventually, as far as I can see. Look at TechDocs, look at API specs, make those things available; they will drive the kind of daily active usage that discoverability needs. Then invest in custom plugins. Work with teams to find workflows where more automation or easier ways of doing things are needed, and empower them to create plugins that go into Backstage. It's going to require patience; it's just going to take time. Even companies who've been working at it for two years sometimes still feel like they're early in this journey, but it can result in high daily active usage. We see this from Spotify, from Zalando, and from others who have been willing to put the work in.

The second path is to increase software maturity: equally valuable, just slightly different. You start with automating catalog collection.
You invest in Scaffolder templates, because by giving people the tools they need to create new software with the organization's best practices baked in, you have a good starting point for improving the standardization, and hence the maturity, of that software. Then you likely need to build your own tools, or purchase one of the tools like Roadie's Tech Insights or Spotify's Soundcheck, to measure software maturity with scorecards and other techniques across the catalog. This path won't see high daily active usage, which I think is kind of a weird thing to say about Backstage, because we talk about it empowering developers and people expect developers to wake up and spend their day in Backstage. You won't necessarily achieve that with this path, but it's valuable all the same: improving the maturity, compliance, security, and reliability of your software is highly valuable.

That's it for me. If you want to give me feedback, you can scan this QR code, apparently, and it'll let you yell at me. You can also email me at david@roadie.io. I'm happy to talk to people about Backstage; it's what I do. Thank you very much.