A lot of money coming in, a lot of money going out, a lot of initiatives. And the thing about employees at a corporation is that they really like getting paid, right? And wherever you have budgets, you're going to have budget cuts. The reason that metrics for your corporate open source involvement are important is this: if you're sitting at a table where someone presents data on how the diversity initiative is affecting the workplace, how the UX design is improving traction on the site or payments or workflows, and how the marketing campaign is driving up business, and then you get up to present the open source program with no numbers to back it up, you're a good candidate to get cut. It's a role in which you have to justify your existence, or you're going to get cut and have to go find another job.

The thing about numbers is that we use metrics as a kind of conversational macro. Go all the way up to the executive level: they have a lot of things to keep track of, and if they ask you, "How's our program doing?" and you say, "I'd like to put half an hour on your schedule to talk about it," they don't care. There's no time. Just give me the number. You're going to get pushed to the numbers a lot.

So before we dig into how to define metrics that matter for your project, we're going to cover some of the most common misunderstandings and missteps in open source measurement, starting with community. It's crucial to be able to differentiate between a vanity metric and a health metric, because there's a world of difference between the two. Vanity metrics skim the surface, as you might imagine from the name, while health metrics get at the core functioning of the community. The concept of vanity metrics is nothing new; I bet a lot of you have heard of it, and I've seen articles on it dating back ten years. But most of the metrics I see folks using are still skin-deep ones.

There are two main reasons why, even though we know better, we tend to use vanity metrics. The first is that they're so much easier to gather: just about any platform's built-in analytics or reports are mostly vanity metrics. And platforms do that because of the second reason: vanity metrics make us feel good. If you're tracking the number of Twitter followers your account gains over time, then unless you're somehow losing followers, it's going to be a nice upward trend no matter what. That's exciting, and it's okay to want your project to succeed, because vanity metrics do have this alluring simplicity. But that simplicity is exactly what makes them ineffectual.

To give a better sense of what I mean, here are a few vanity metrics I see frequently cited in the open source community: how many stars your repo gets on GitHub, how many subscribers you have on a mailing list, how many members are joining a chat room, or how many page views your docs happen to get. What each of these has in common is shallowness: it probably looks good in a chart, but it doesn't tell you much about the people in that community. What motivates them? What inspires them? For that, you're going to need deeper health metrics. And the patterns we see in the corporate world are pretty similar.
The thing is, if it's easy to count, it's really easy to discount. It's also really easy to abuse. So stars on a repo, forks on a repo, watchers on a repo: they don't mean a whole lot. And especially in the seat where I sit, I'm not looking at one repo or a few repos for one project. I've got ten pages of GitHub repos to look at. Ten pages of repos sounds like it might be very impressive, but a lot of pages doesn't make a book good. It doesn't make your program good. It doesn't make your projects good.

About the numbers that you provide: numbers are facts, and they don't tell lies. But you should be prepared for the people you give those numbers to to tell lies on their behalf. Anyone you give a number to is going to attempt to use that number to tell their own story. So you need to choose the numbers you provide very wisely.

This next one is a little strange, and if you haven't worked inside a large corporation or a large corporate open source program, it's going to sound foreign: creating an open source project and adopting it widely inside the company does not make it a successful open source project. If you use it everywhere but nobody from the outside is collaborating with you or contributing to it, it might be a very successful project, and a great solution for what you have, but it's not a successful open source project. A related way that companies fall into this trap is that they'll develop something and just do a code drop, and that's not open source either. You're not giving any community the opportunity to come and provide feedback, and you're not pulling in external collaborators.

Metrics are rules. And you should tell people inside the company what those rules are, because if you're using these rules to judge performance, or to judge how a program is doing, and you don't tell people what the rules are, you're playing unfair. But once you've created this system of rules, you need to expect that people are going to attempt to game that system. That's doubly true when there's money on the line in the form of compensation, bonuses, and performance evaluations. And it's ten times as true when those people are engineers. We love exploring the boundaries of rules; we love figuring out how to game a system. I don't think this is what they meant when they said 10x.

So now that we're fairly clear on what not to do, let's dig into how to determine some good metrics. When it comes to open source community, there is no one-size-fits-all measurement plan. I know, it's a bummer, but that's why we're here at this talk. You'll need to choose the right metrics for your particular project, and your metrics shouldn't simply be whatever a platform decides you should be seeing, which is to say vanity metrics in built-in reports. Before you even look at the built-in analytics dashboard or reports from a community platform, sit down and write down what information would be most useful for deciding how to iterate on your current strategies.

For example, Google Analytics makes it super easy to measure your page views. It even gives you pages and pages of granular information on those page views, so you feel like you're going very deep. But ultimately, it's a vanity metric. You know that a particular tutorial or blog post was popular, and that's awesome.
But who exactly is reading it, and why? Page views don't really help you make informed decisions about where to go next. So instead of page views, track unique visitors and returning visitors. Put people first. And try to gather data specifically on the kind of community members who end up being core contributors and contributing back in positive ways, which we'll talk about in a bit. Because if you can figure out how and why those super-contributors joined, you can reverse engineer the process, pull in even more of them, and continue to grow the community.

This brings me to the value of qualitative research, by which I mean actually talking to people in your community, maybe even face to face. No analytics tool can replace directly chatting with people. You can find out what motivates them and what they get out of participating in the open source project, and as a bonus, when people are giving you all this feedback and you're talking to them, they feel more connected to the project and more heard. Everyone likes to truly be listened to. I think there's sometimes an aversion to qualitative data in tech because it feels like this wibbly, soft thing that's hard to define, but there are always ways to quantify qualitative data. With a large enough sample set, which you hopefully should be getting through your research, you can find patterns and track the commonalities among those qualitative observations.

In collecting your data, both qualitative and quantitative, I've found it really helpful to differentiate three levels of engagement: passive, active, and champion. Passive, as you might imagine, means something like lurking in a chat room, or maybe reading docs without commenting on or improving them. Active translates to someone clicking a link, liking an update, giving a plus one. And champion status requires responding to a conversation, submitting a bug fix, or sharing an update with your own personal network. It's more invested.

It's fairly intuitive, I bet, that you should be tracking the actives and the champions. What might not be obvious is that passives are incredibly important to track as well. Folks might be logging in to your chat room every day, listening, engaged, and really interested, but maybe they're just not chatty. Maybe they're introverted. Maybe English isn't their first language and most of the conversations there happen in English. Or maybe they're just new to the project, dipping their toes in and trying to figure out the culture. Either way, for whatever reason, it's crucial to track the passives, because in tracking them you get this great opportunity to nudge them into becoming active or even champion users. From this perspective, lurking doesn't have to be a bad thing.

And as an overarching principle, don't forget to track the non-technical, non-code contributions to community, because open source community encompasses far more than just code, as a lot of talks at this conference have been saying, which is fantastic. Questions on Stack Overflow, blog posts, talks, community meetups, tutorials: these are all things that you'll want to be tracking.
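To make the three levels concrete, here's a minimal sketch of the passive/active/champion bucketing just described, with non-code actions counted too. The event names and thresholds are invented for illustration; swap in whatever your own platforms actually export.

```python
from collections import defaultdict

# Invented action names: replace with whatever your platforms export.
CHAMPION_ACTIONS = {"pr_merged", "bug_fix", "answered_question", "shared_update"}
ACTIVE_ACTIONS = {"comment", "reaction", "link_click"}
PASSIVE_ACTIONS = {"login", "doc_view", "joined_channel"}

def bucket_members(events):
    """events: iterable of (username, action) pairs from your exports."""
    actions_by_user = defaultdict(set)
    for user, action in events:
        actions_by_user[user].add(action)

    levels = {}
    for user, actions in actions_by_user.items():
        if actions & CHAMPION_ACTIONS:
            levels[user] = "champion"
        elif actions & ACTIVE_ACTIONS:
            levels[user] = "active"
        elif actions & PASSIVE_ACTIONS:
            levels[user] = "passive"  # lurkers count too: track them
    return levels

events = [("ada", "doc_view"), ("ada", "comment"),
          ("sam", "login"), ("lin", "bug_fix")]
print(bucket_members(events))
# {'ada': 'active', 'sam': 'passive', 'lin': 'champion'}
```

Once members are bucketed, counting movement between buckets over time gives you exactly the "nudge passives into actives" signal described above.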
When you're choosing how to measure community growth, it can be really tempting to prize rapid quantitative growth as the main thing to track. It seems pretty logical: a community that's tripled in size in a month must be three times as healthy as a community that grew maybe 5%, right? So we decide to track the pure growth metric and try to pull in as many people as we possibly can, irrespective of who those people are. Now, I have nothing against growth hacking in other areas like sales or marketing; I think it can be a great tactic there. But you can't growth hack a community, because you can't growth hack people. The metrics we choose have to reflect more than just size.

One way to make sure you don't fall into the growth-hacking trap is to formulate your metrics using your community guidelines. Obviously, if your guidelines are something like "be nice to each other, smiley face," this won't help you very much. But if your guidelines are more fleshed out, as is key to any healthy community, you can use them as a guiding ethos. For example, the slide up there is a section from the Stellar.org community guidelines that describes expected behavior. Instead of "you can't do this and you can't do that," it's what we're actually looking for. The first sentence says: participate in an authentic and active way, and in doing so, you contribute to the health and longevity of this community. So right off the bat, I can construct metrics to capture that kind of expected, ideal behavior. Similarly, the open source citizenship section of our community guidelines says: if you see someone making an extra effort to ensure that our community welcomes all participants and encourages them to contribute, we want to know. This tells us precisely what we're looking for in community leaders, and it tells us where to look, places like the onboarding process especially. Modeling your metrics on your community guidelines steers you toward growing a healthy community sustainably.

Okay, back to the corporate world again, and I want to remind everyone where we are. We've got overlords, they have agendas, there's money on the table, and they want something. So I'm going to give you the most important tool for digging down into the metrics you want in this scenario, the most important tool I think you'll ever get your hands on, and that tool is "why." Because when someone comes to you and asks you for a number, they don't want the number, they want an answer. They've already figured out what they think the answer is, and they just want the number to confirm it. So when you get faced with a question that just asks for hard numbers, dig into the question they're actually asking, because if you don't know what that question is, you're not going to recognize the answer when it comes along. Really spend some time trying to understand what it is they're trying to do.

You also want to know the answers to these questions before someone asks. Nothing is more alarming than having an executive grab you in the hallway and ask you for some metric you don't have, or a question you can't answer, so that you have to dance around the problem. And you don't want to guess, because again, as soon as you give a number, that number is a fact, and that number will be used to tell a story.

So my suggestion is that you take some lessons from XD, from experience design. When you're starting to put together what you're going to measure in your role, create some personas. Create a persona for that Vice President of Engineering and really think about who they are: "My compensation largely comes from bonuses tied to how the company is doing."
"I have these considerations." Or a persona for your director of engineering, who is going to have some very specific questions. Once you have these personas, take another lesson from Scrum or other agile methodologies and create user stories. "As the Vice President of Engineering, I want to know whether our participation in open source is helping our employees collaborate, so I can decide whether to continue funding it." As soon as you've got a story like that, you're starting to find the metrics you want to use. "As director of open source, I want to know which of my projects are slow to respond to pull requests, so I know which engineers need help." When you take this approach to figuring out the questions and the answers you're looking for, then you can start to dig into some good metrics.

You're going to get a lot of pressure to give a number. "Give me the number." But one number is never going to tell a complete story, especially when you're trying to look at a project's health. So take one metric and compare it against a few others, and try to boil that down into a picture you can draw for someone in 30 seconds. You might take something like how many external contributors are coming into a project, how quickly it's turning around pull requests, and how many active forks are out there, and now you've got a better idea of how a project is doing. And time is a free dimension in this context: if you start tracking these things over time, you can tell, hey, our external contributions are trending down, it looks like we're losing adoption, and so on.

The other thing you can do is use some bad metrics, some of those vanity metrics. I know that's not what we said to do earlier, but we're going to do something sneaky. We're going to play a deeper game, an anti-anti-pattern. Let's suppose you've got a problem where your engineers, when they submit pull requests against other projects inside the company, are submitting 10,000-line pull requests that never get merged, and what you really want is to help them learn to drive down their commit size. What you could do is take some of those engineers, some of the worst offenders, find an external open source project that you use, find some things there that need to be done, whether bugs or features, and send them off to work on that for a month or so. Because if you submit a 10,000-line pull request to an open source project, it's going to get kicked back to you, and you're going to have to learn how to break it down, and how to take that feedback. You put a little prize at the end, something that incents them, whether it's time or money or whatever it is they value, and what are they going to do? When there's an incentive, they're going to do everything they can to break those commits down into smaller and smaller pieces. They think they've gamed your game: they took your very simple rules, beat them, and won their Apple Watch. And at the end of the day, they've learned to break their commits down into smaller sizes, which is what you wanted in the first place. So playing a different game with bad metrics can be very useful.
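Before we move on, here's a rough sketch of the earlier point about pairing metrics and tracking them over time, against the public GitHub API. The org and repo are placeholders, GH_TOKEN is an assumed environment variable, the EMPLOYEES set stands in for however you actually identify internal accounts, and a real version would paginate past the first page of results.

```python
import os
from collections import defaultdict
from datetime import datetime

import requests

# Hypothetical: however you identify internal accounts.
EMPLOYEES = {"alice-at-corp", "bob-at-corp"}

def monthly_pr_health(owner, repo):
    headers = {"Authorization": f"token {os.environ['GH_TOKEN']}"}
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls"
    # First page only; a real version would follow the Link header.
    prs = requests.get(url, headers=headers,
                       params={"state": "closed", "per_page": 100}).json()

    stats = defaultdict(lambda: {"external": set(), "merge_days": []})
    for pr in prs:
        if not pr.get("merged_at"):
            continue  # closed without merging
        opened = datetime.fromisoformat(pr["created_at"].rstrip("Z"))
        merged = datetime.fromisoformat(pr["merged_at"].rstrip("Z"))
        month = opened.strftime("%Y-%m")
        if pr["user"]["login"] not in EMPLOYEES:
            stats[month]["external"].add(pr["user"]["login"])
        stats[month]["merge_days"].append((merged - opened).days)

    for month in sorted(stats):
        days = sorted(stats[month]["merge_days"])
        print(month, "external contributors:", len(stats[month]["external"]),
              "| median days to merge:", days[len(days) // 2])

monthly_pr_health("your-org", "your-repo")  # placeholder names
```

Run over time, those two numbers side by side start to tell the adoption story that one number on its own cannot.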
Something else to keep in mind when you're looking at metrics for a project is that you really have to account for project maturity. Given two projects that are about the same size, if one is mature, something that has been pretty well worked out and is pretty stable, there's not going to be a lot of action on that repository. For example, if we decided we had an idea that was better than ReactJS, and we put it out there and the JS community really latches on to it, it's going to see a lot more activity than something like Ember, which is mature, has been around for a while, and is very well understood. Factoring in project maturity will give you a better picture than a hard number alone.

So we've talked a lot about the theory of these things. To illustrate these concepts of choosing good metrics for your project in practice, I'm going to open up the hood on the ways we measure community at Stellar.org as a kind of case study. In particular, I want to zoom in on one platform we use, so we can get into detailed specifics: Slack. But first, I've got to give a big shout-out to Stellar.org's educator, Vanessa Gennarelli, who set up this tracking framework for us. Working with her has absolutely informed my understanding of measuring community. So thank you, Vanessa.

Show of hands, how many folks are familiar with Slack? Okay, a decent number. For those who don't know, Slack is a proprietary chat client that can be asynchronous, synchronous, or both. It was originally built for internal workplace communication, kind of an email killer, although that didn't actually work out, but it started being repurposed as an external-facing public chat room for different communities. When we heard about that, we thought: we use it successfully internally, everyone loves it, what if we had a public Slack for Stellar? About half the team was completely opposed to the idea, because at the time we were using IRC, as a lot of open source projects do, and to be totally candid, it was a bit of a ghost town in there. There were definitely days when I didn't see any activity in the chat room at all. So the thinking was, why open another chat room if you're already not seeing a lot of action?

Nevertheless, we tried it as an experiment, and we set out a couple of hypotheses and goals for the channel. We hypothesized that it would drive deeper engagement than any other platform we had at the time, and that it would facilitate peer-to-peer interaction in a way that an email list or Twitter cannot. After collecting data over the past eight months, I think it's safe to say that Slack is one of our best channels. The community grew to be this vibrant, active place, with average unique-user growth of 12% month over month. But if you were listening earlier, you'll know that a growth rate alone is not enough to tell us the community is healthy. So we pair it with other metrics, like the level of peer-to-peer interaction happening, which is quantifiable by the number of direct private messages users send each other. And we pair it with some qualitative observations, which I'll get to in a second.
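A number like that 12% month over month, by the way, needs nothing fancier than a spreadsheet-style roll-up. Here's a minimal sketch, assuming you've been copying weekly unique-member counts into a CSV; the file name and layout are invented, and we'll get to where those weekly numbers come from in a moment.

```python
import csv

def month_over_month(path="slack_weekly.csv"):
    # Rows look like: 2016-03-07,412  (week start date, unique members).
    # Keep the last weekly figure seen for each month.
    last_in_month = {}
    with open(path) as f:
        for row in csv.reader(f):
            if len(row) == 2:
                last_in_month[row[0][:7]] = int(row[1])

    months = sorted(last_in_month.items())
    for (prev_month, prev), (month, cur) in zip(months, months[1:]):
        print(f"{month}: {cur} members ({(cur - prev) / prev:+.1%} vs {prev_month})")

month_over_month()
```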
So how did we gather that data, given that companies often make it hard to find anything that isn't a vanity metric? Sadly, the stats available on Slack's free plan are limited, and you're probably going to want to stay free: at $5 a month per user, an 800-user community gets expensive fast. Having these limitations on gathering data is one of the trade-offs and drawbacks of any proprietary system, but we've figured out a couple of workarounds to glean the data nonetheless.

For starters, Slack's free plan does give you weekly reports on unique member growth, on the number of direct messages sent between people, which you normally wouldn't see at all, of course, because they're private, and on the number of users who go inactive in a given time period. They send this data in an email every week, but the data expires. So if you want to track it, you need to save it immediately to whatever spreadsheet you're using, or at least save the emails in a safe folder somewhere.

Another hacky workaround we use is setting up an IRC mirror for the Slack channel. On free Slack plans, message history is truncated at 10,000 messages, and after that, anything prior is just gone. As you might imagine, this presents some problems for finding resources you were talking about, and for newcomers trying to catch up on conversations. So we set up an IRC mirror, and now BotBot archives everything: we keep the conversations themselves, and we have data on them. Plus, folks who don't like using proprietary chat clients like Slack can participate via their favorite open source IRC client, and that's worked out really nicely.

We've also made qualitative observations about the conversations and learning happening in Slack, and we've found it to be the number one place that developers go to connect with each other on projects, work through bugs, and talk through design and implementation. So even if user growth were not that high, that quality of conversation would influence my assessment of Slack's health.

Now let's look at a couple of ways you can use metrics in a corporate environment. Suppose your corporation's hypothesis is: employees who are able to engage in projects they're passionate about are going to be happier. Most large companies run something like a pulse survey, or some variant of the same thing, where every year you get one page or several pages of questions about how you feel about working at the company. "On a scale of 1 to 10, how likely are you to recommend working at this company to a friend or co-worker?" Have you seen that question? A couple of people, okay. It doesn't have to be that complicated. You can run smaller versions of surveys, and in fact I encourage running smaller surveys more frequently, versus one huge annual survey and basing everything off of that. The overlords might have a different opinion. But if your hypothesis is that employees who engage in open source work they're passionate about are going to be happier, you can go right down to A/B testing within the company. Take a subset of engineers who look like they're giving about the same kind of feedback, let some of them engage in projects they're passionate about, and the next year see whether things have improved for them. It's a longer kind of measurement and thinking than you would typically get to do.
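The analysis side of that experiment can start out embarrassingly simple: compare the survey scores for the cohort that got open source time against a matched control group. A toy sketch, with fabricated numbers purely for illustration:

```python
from statistics import mean

# Fabricated pulse-survey scores (1-10), illustration only.
open_source_cohort = [8, 7, 9, 8, 6, 9, 7]  # engineers who got project time
control_group = [6, 7, 6, 5, 7, 6, 6]       # matched engineers who did not

delta = mean(open_source_cohort) - mean(control_group)
print(f"average lift: {delta:+.2f} points on a 10-point scale")
```

A real analysis would want larger samples and a significance test, but even this simple delta answers "did it move?" at a glance.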
Another hypothesis your company might have is that participating in an open source project will improve mentorship inside the company. So say you have a code review process where the feedback going to engineers isn't good, or a lot of it is happening outside the pull request, which means other engineers can't refer to it later for learning, or there are other problems with mentorship that you want to improve. Again, you take some engineers, you send them off on a tour to fix bugs that matter in an existing open source project, they get mentorship from that project, and then you look at their iterations on pull requests when they come back.

One of the metrics we use for that is how many back-and-forth rounds of conversation happen before a pull request is merged. If there's one, "hey, somebody review this," "looks good," there's probably no real code review happening. If there are five or six iterations, you can say there's probably some good feedback going on. And if there are 10, 20, 30 iterations, that's something you'll have to look at manually: maybe it's a really complicated pull request, or maybe the feedback being given isn't useful. If you compare those iterations on pull requests before and after an engineer's participation in open source, you can find out whether it's having the effect you're looking for.

So now that we have all of this amazing data, what do we do with it? To circle back to the beginning of the talk and why health metrics matter: we need this data to make decisions. Once you have the data, go back to the hypotheses and goals you posited before and see how it measures up. The most important question you can ask and answer is: where are the gaps between my goals and where we are presently, and how can we iterate on our strategies to close that gap? So set your goals, hypothesize, try out a strategy, analyze the data, confirm or disprove the hypothesis, and then iterate again and again. Repeat forever.

The same thing is basically true with metrics in the corporate environment, except that you really want to know there's a problem before you get asked about it, or before you get told about it. If you're tracking project health across a number of projects and you see that on one project somebody is no longer responding to issues and pull requests, and the time-to-close is going up, you know you need to go investigate. Maybe someone went on vacation and didn't leave a person to take their place. Maybe it's something the company isn't using anymore. But you can take those numbers and issue your own course corrections, either by finding someone to stand in or by getting involved in the project yourself, before it becomes an issue for someone higher up.

And now that you have these numbers: if the people above you are going to use your numbers to tell their story, you use your numbers to tell your story. Use them to fight for your budget, or to fight to increase your budget or your head count. Show "this is what we were able to do with what we have, and this is what we think we could do with some more people."
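Coming back to the review-depth metric from a couple of minutes ago: here's a rough sketch of counting those rounds, using review comments on merged pull requests as a proxy for back-and-forth. The repo names and thresholds are illustrative, GH_TOKEN is an assumed environment variable, and a real version would paginate.

```python
import os

import requests

def review_depth(owner, repo):
    headers = {"Authorization": f"token {os.environ['GH_TOKEN']}"}
    base = f"https://api.github.com/repos/{owner}/{repo}"
    prs = requests.get(f"{base}/pulls", headers=headers,
                       params={"state": "closed", "per_page": 30}).json()
    for pr in prs:
        if not pr.get("merged_at"):
            continue  # only merged PRs tell us about review-to-merge depth
        comments = requests.get(f"{base}/pulls/{pr['number']}/comments",
                                headers=headers).json()
        rounds = len(comments)
        if rounds <= 1:
            note = "rubber stamp? likely no real review"
        elif rounds <= 6:
            note = "looks like healthy back-and-forth"
        else:
            note = "inspect manually: complex PR or unhelpful feedback"
        print(f"#{pr['number']}: {rounds} review comments, {note}")

review_depth("your-org", "your-repo")  # placeholder names
```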
It's really important to keep in mind that if your metrics show your community is unhealthy, or not as healthy as you thought it would be, that is by no means a failure. You've just learned that a particular strategy doesn't work for your community, and it's an opportunity to try a different strategy and test again. So don't shy away from deep health metrics for fear they might reveal something unflattering, or make you feel like you're failing, because it's far more important to get an accurate pulse on the community than to look successful 100% of the time. And whoever you're showing these metrics to, if you're always saying "oh yeah, A-plus, always up and to the right, everything's perfect," I think they're going to get suspicious anyway.

Not every open source project deserves to succeed just because it's open source. And if you're working in a company where you have engineers spinning up their very first open source project, maybe letting it fail is the right thing to do. To talk a little bit about the vetting process we use: if someone comes to me and says, "hey, I have this thing that I think is going to make a great open source project," the first question I ask is, "who have you talked to outside the company?" Because if you haven't talked to anybody outside the company, you don't really know. Engineers tend to think the things they create are amazing, because they created them; otherwise, why would they have? And everybody else is surely going to see how amazing this is and how it's better than what's already out there.

If they haven't talked to anybody outside the company, a good place to send them is off to write a blog post about it. Your company's policy may differ, but just clearing permission for them to do that is really useful. You might have to mentor them through the process of writing a good technical blog post. You might not be able to share code, but you can say, "hey, at this company we have this particular problem, here's the solution we created, and this is what we think about it." Now, if you get a whole bunch of hits on that blog post, so what? That's a vanity metric; it doesn't really mean anything. If you start to get comments on the blog post where people are engaged enough to have a conversation about it, okay, now that's something useful. But when somebody from Netflix calls and says, "hey, I'd like to see the code on this, I think we might be able to use it, let's collaborate," okay: one external contributor from another company like that, I think, is a reasonable signal to say, let's try this as an open source project and see where it goes.

And if you write the blog post and put it out there and get crickets? If you go to a meetup and talk about it and nobody gives you a business card at the end, nobody cared enough to come up and talk to you afterwards, maybe the project isn't interesting. Or, the other option: let them put it out there, watch it fail, and ask, "okay, what did you learn, and how do we go forward from here?"

So we hope you've learned a couple of techniques, ideas, and frameworks for measuring your project's health. If you do end up implementing any of these strategies, or anything else, we would definitely love to hear about it. Our Twitter handles are up there. And I know that it's Sunday, the last day, and the afternoon, so thank you so much for coming out and listening. We'll take questions if anybody has them; we've got a tiny bit of bonus content we could talk about if nobody does. I'll hand the microphone down here. What would be a good place to get, like you said, advice or feedback on a potential open source project? And what was your name? Ryan. Thank you, Ryan.
The question was: what's a good place to get feedback about a potential open source project? My recommendation is to start with local meetups focused either on the problem space or on the technology you're working on; they're always looking for people to come out and talk. Going to conferences and throwing in a lightning talk about your project to see if anybody comes up afterward is another good option. And again, write a blog post, get it out there, and get some people to spread the news about it. Because if you're talking about it and nobody cares, and you're blogging about it and nobody cares, something's wrong. It might be that you're not crafting your message very well, or it could just be that the tech isn't interesting to people. Thank you for your question. Anybody else?

Okay, I'll repeat it, really good one. Let's say that you do a test run with a new open source project and it falls a little flat; the community is not very into it. Does that hurt the community's health? Does it hurt the perception of how good your software or your company is? In my opinion, absolutely not. For one thing, if it's really that unpopular, people maybe didn't even see it; maybe a hundred people saw it and went "eh." For another thing, my favorite thing that companies do on blogs is talk about their mistakes and what they learned from them. I would love to see a blog post called "We Failed," and here's why, and here's what we learned. It makes the company feel more human, and people instantly feel more connected when someone is really candid about their mistakes. So I actually think that, handled correctly, it can be a really good thing. Handled incorrectly would be something like shutting it down, not saying why, and pretending it never happened. But handled right, I think you should go for it.

To echo what she just said: we do not blog enough about our failures. I think blog posts about projects that didn't succeed, and what we learned as a result, are the single most useful thing that companies could be doing and aren't. To poke at your question a little bit, was your concern that it would negatively affect the company's community or the project's community? I'm sorry? Okay. Yeah. Thank you.

Any other questions? Yes, sir. Drawing our content from which? Academia, oh. So the question was whether we're drawing content from academia for these subjects. I didn't; I drew my content from my own personal experience. Partly, though, as I mentioned before, Vanessa Gennarelli works on our team as an educator. She is from MIT, and she was part of the team that worked on Scratch, if anyone's familiar; she's big into education and learning design. So the theories I've been imbued with for the past year and a half absolutely have groundings in academic theory. I wouldn't say it's word for word, by the book, but it's definitely influenced. Awesome. Excellent. The comment was that Purdue is doing research in this area. Thank you for the feedback; I'll definitely check it out. Thank you.

We have one more slide we can talk about very briefly. No live demos, but we're fine on time. Sorry, I was checking; I forgot to start our timer. We'll talk a little bit about tools.
We wanted to do a big section on tools, but the reality is there's not a whole lot out there for good metrics solutions for open source or for community. These are on the slides, and the slides will be up online. I actually put these in reverse order. So MetricsGrimoire and VizGrimoire are put together by a company called Bitergia, and both are fully open source projects. You can take them, install them, and run them for your own project. They pull in data from a number of different source code repositories, and off of mailing lists, and give you some interesting charts on how your project is doing over time: bugs opened and closed, time to close, committers, and some demographic information about all of that. Just recently, I believe only a few weeks ago, they released something called Cauldron, which you'll find from Bitergia. I haven't had a chance to talk to them to see exactly what it is, but it looks to me like a version of that MetricsGrimoire and VizGrimoire solution running as a service, specifically GitHub-focused. It doesn't look at any other source code repos, and I don't believe it does any analysis of mailing lists, but you can sign up, point it at an organization and a repository, and it will give you some interesting information. I think they're still dialing it in; I expect they'll probably talk about it at FOSDEM this year.

The one in the middle there is an open source dashboard released by Amazon that is more focused toward someone who sits in my role, someone who runs an open source office at a large company. The needs in that role are very different from the needs of someone maintaining a single project. If you're trying to get a sense of the health of something like Express or Node, there are a few repositories you need to look at, and that's about it. If you're in a company that might have three or four different GitHub organizations because you've bought companies, and several pages of repositories to look at, and you're trying to see how everything is doing, that's really hard, and there are no tools out there. Well, there were no tools until last year. Facebook had one that they had built, but it was proprietary. PayPal released theirs as open source as a way to try to force the conversation, and Amazon, in response, released their open source dashboard, which is great, because it's got a lot more features than mine does. It will tell you things like the average time to close in a repository, but it will also check things like whether users in my corporation who have public GitHub profiles have two-factor authentication enabled, because if they don't, I have to go bother them.

You want to talk a little bit about the other couple of things? Sure, yeah. So, good old spreadsheets. I know it probably seems intuitive, or not really a tool, but it's the thing I use the most for tracking community and having it all in one place. Usually these data sources are really siloed, and the things that do pull them all together are incredibly expensive. I don't know if anyone's heard of Lithium, the software, not the element, but Lithium is fantastic if you have $50,000 to spend on the setup alone. It is not for most folks. But for cobbling together something that doesn't look especially pretty, yet does have all of your data and analysis in one place, spreadsheets work.
You can automate your imports there. It's simple and it's time-tested, so I super recommend spreadsheets. And if anyone's curious about how to set yours up, I'm happy to talk you through it.

I'd also like to extol the value of spreadsheets, especially when you're creating metrics for the first time. I have an engineering background, and my instinct as an engineer is that if I want to learn something, I'll spend four hours coding something that I could probably put in a spreadsheet in 15 minutes. The advantage of putting it in a spreadsheet in 15 minutes is that I can immediately give that number to somebody and ask, "is this what you want?" Because if they say no, I've saved four hours. So even for things you know you could write a tool to do, resist the engineer's instinct to write or find a tool. Sit down, spend the hour or whatever it takes to pull the numbers together by hand, plug them into a spreadsheet, and see what comes out. If it's useful, great; if it's not, you've saved yourself a bunch of time.

The last one is Lithium, which I mentioned. I really doubt anyone here would be particularly interested, but just in case you have all the money in the world to spend, I highly recommend it. Thank you, everyone. Thanks, everybody, for coming out.