Welcome. So it's a little bit after 2:15, so we're going to go ahead and get started with today's presentation, which will follow pretty closely the outline we had on the program: assuring quality on Drupal.org. There'll be a little bit of history about testing. I'm going to talk for, I hope, not more than 10 minutes, and then we're going to split into a couple of groups, and I'll tell you about that. Most of this should be conversation on some very specific topics. To get started: what we did way back when was work on Behat tests for Drupal.org. If you've never seen what Behat does, it allows you to describe functionality in relatively stylized but plain language that project managers and product owners can read, and then turn those descriptions into executable tests, either using a real browser or a headless browser. That's the basic concept. The reason we started this project at all is that Sam and Howard and I, along with a lot of other people, worked on the Git migration, and a big part of what we spent hours and hours doing was making sure, every time we integrated code, that things were still working. We did all of that manually. It was pretty much me and Sam, and then opening it up to other people to repeat the work we'd done, and neither of us ever wanted to do that again in our lives. So this was a way to allow us to focus on other functionality. This testing project has a fairly long history, and we've had a lot of people who've been really supportive of the process. We started it, long after the Git migration, in April of 2012 as a volunteer project with just some of us from the Git team who wanted to do the testing. We actually received a donation of six months of developer time from Capgemini for three developers. They were available to the community to participate in something, and they landed with us, so we did a lot of learning in the process about Behat tests.
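To make that concrete, a Behat feature file looks roughly like the hypothetical example below (this is an illustration, not one of the actual Drupal.org features). The Gherkin keywords (Feature, Scenario, Given/When/Then) are the stylized part; everything else is plain language a product owner can read:

```gherkin
Feature: User login
  In order to participate in the community
  As a registered user
  I need to be able to log in to the site

  Scenario: Logging in with valid credentials
    Given I am on the homepage
    When I follow "Log in"
    And I fill in "Username" with "testuser"
    And I fill in "Password" with "correct-password"
    And I press "Log in"
    Then I should see "testuser" in the "header" region
```

Each step matches a step definition (many come bundled with Behat's Mink and Drupal Extension integrations) that drives a real or headless browser to perform the action and check the result.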
We wrote features and automated them for the Drupal 6 version, and one of the things we learned, which we're going to talk about more, is that actually no one person had any idea what all of the functionality of Drupal.org was. More than that, nobody knew who had permission to change that functionality in the process of moving from Drupal 6 to Drupal 7. So it was a question of having four of us, really, and a few other people look at Drupal 6 and describe what it did, whether that was right or not, and then get that automated; and then, as the Drupal 7 site became more available, apply those tests to Drupal 7 and, where they didn't work anymore, update them. So we did that, and we worked on the Drupal 6 tests until about September of 2012. Then Drupal 7 was announced to be imminent, so we took all of our effort and put it into porting the tests to Drupal 7. We completely abandoned working on any of the Drupal 6 testing in January and only lightly worked on the Drupal 7 tests. In the last couple of months, funded by the Drupal Association, I've been making sure that those tests are reliable, robust, and functional in order to support the D7 upgrade, among other QA tasks for the upgrade. These are some of the people who worked on it: Jonathan Hedstrom, who's the author of the Drupal Extension, which came out of this project and is an extension to the testing tools themselves; folks from Capgemini; and folks from a company called QED42 who are helping now. These are all people who've been involved in the effort, and it's currently back to being a volunteer-driven effort. So that's kind of the history of the testing. As for the scope of the tests: we were able to describe and automate 139 features with 781 scenarios and over 4,000 steps that click through the site. And they're quite reliable.
I'd say they're about 95% reliable. There are some where environment issues make them not as reliable as we'd like, and there's room for improvement, but we have a ton of tests that let us know that the upgrade is succeeding and that things are working as expected. If you're ever interested in poking around, there was a lot of learning that went on in this process: you can take a look at the URL here for bddtest.drupal.org, and they'll post the slides. What you'll get if you go into that folder is all of the output, and you can just see what it does. Since it's in English, as long as you're an English speaker you can tell what each test does without actually having to read the code; you'd be able to walk through and say, yeah, that's what that feature is supposed to do. But the thing that is really important to us is that behavior-driven development isn't really even about automated testing. We use it synonymously with automated tests, but the tests are only a byproduct of a much larger philosophy. The real goal of behavior-driven development, at the end of the day, is to build software that provides value, and the tests are a byproduct of that. What happens is you go through a process that philosophically says: the whole team should have a ubiquitous language for talking about what your website does; every feature on your website should provide a measurable, identifiable value; and you should take every opportunity to reduce your ignorance over the course of a project. Those are the three main pillars of behavior-driven development: the idea that our greatest constraint is what we don't know. And in some cases I think that's true. I do sometimes wonder with drupal.org whether it's what we don't know, or whether it's something a little bit more systemic than that.
So in the process of trying to maintain these tests and support the upgrade, we've identified three process and/or technical areas that make it really difficult to do quality work. They take a lot more effort on the part of staff and volunteers than they really should. The first is that we really don't have clear priorities regarding the business value of features on the site. It is very difficult to know that, and it's something we're going to talk about in the smaller groups today. The second is more technical, but there's process involved in getting there: we do not have a functionally complete replica of the production environment anywhere, not for developing, not for integrating, not for staging work. For Drupal 6 this is definitely true; we'll have a fairly close environment for Drupal 7 when it launches. That's a problem in terms of doing QA: you can't run your tests against a version of the site and see that it works. And then finally, and this one's really important to me as well, we have only murky visibility into where work is at any time in the system. There's a thing, there's some code, somebody wants that code to get deployed someday. How close is it to getting there? Where is it? Which issue queue is it in? It's tagged, but we get into this situation where we don't have a strong enough process to know that something is struggling, or that it's ready and waiting for review. The system that we use for core to move things through that process doesn't really work for the Drupal.org process of getting things out there. So these are the three areas we're going to talk about today. I had mentioned in the write-up that we would do some analysis of past and present processes, and I did a lot of talking to people and looking at things in the past; I think these three URLs are fine to review on your own time.
What I took away from that is that the things that happened in the past are not necessarily informative, in the sense that they were people doing the very best they could with limited resources. We do know that we want to improve that process, and I think focusing on these three areas moving forward is the most constructive use we can make of this hour. But if you're interested in how it has been, there are links here that will let you know. Our areas of strength presently are that we actually do have a governance structure moving into place. We have that nearly complete staging environment for Drupal 7, and it's huge in terms of being able to say we feel confident about doing an upgrade to Drupal 7; I don't know where we would be if we were where we were three years ago. And we have a more clearly articulated decision-making process and empowerment for both our current work and our future process. So at this point in time, Neil gets the double-wide picture, because Neil's the one who is really empowered to make a lot of decisions that, even a year ago, no one felt like they could clearly make. He's also a member of the Drupal.org software working group, along with Angie Byron, Tatiana, me, and Kim Pepper. The five of us are going to be stepping into a role with a charter, and we have some specific duties; at the bottom I'll give you a link to the full charter if you want to check it out. And those duties fit these areas where we really need improvement. One of them is team leadership: the working group is to create and remove teams for each major area of the website, and those teams have the authority to make software and feature decisions within their scope. So those teams will be empowered in their areas. Defining and appointing leadership roles (technical lead, product owner) is part of that, and that's about having the decision-making process clarified. One of the things we just recently went through was the ideation process.
We did a very short, quick kind of temperature check, and we got a very clear reading that there are some pain points right now with Drupal.org. The idea is to do a good job, maybe a better job, of understanding the needs of the target audiences. So far the process has been: open an issue, and a bunch of people post a bunch of opinions. We did it this time much the way we did it almost two years ago; we don't have a way of taking that information and acting on it in a regular manner. So that's another thing to talk about today. And then finally, there's development and maintenance, which is a responsibility of the working group: things like identifying what we need. I'm pretty sure that a community the size of Drupal's, and a site the size of Drupal.org, should have a complete staging environment. We're feeling pretty comfortable that that would be a step forward, and we want to make progress in that area as well. So these are all things within the charter, and we are trying out the process: we're empowered to do this, so let's get moving on doing it. The full charter is visible there, and you can get it from the slides when they're posted as well, if you're interested; and there are many other working groups having conversations during this conference. So the critical areas of improvement are ideation (how do we collect and prioritize website features for the appropriate audiences?), environments (where do we develop and evaluate code?), and visibility (how do we track work in the system?). Those are the three things that I want to turn over to people to talk about in groups, because we won't get through all three of them as a single group today. One of the things I will do a really quick run-through of, and we can choose whether we want to spend time with it or not, is that we need to do some review of work, and we need to talk about the stages it moves through once we've decided that there's a good idea out there; deciding that is not what this process addresses.
But once we say, hey, here's work, it needs to be done, it needs to be implemented and moved through QA to deployment, we need some checks. This is where those teams, and the team leadership, come in. I've been trying to explain super clearly that in the past, we have expected volunteers to hunt down people to provide them with feedback. What I feel like we need to switch to is a supportive model, one that is meant to enable and promote motion, not to throttle and not to bottleneck. At the end point of that, right before you deploy, if it's not performant, it's not going on the site; it can't. But in the early stages, getting the right feedback at the right time can help people save time and shape their projects. So these reviews are the checks that ultimately any change to Drupal.org has to pass. We need to know that the UI and the UX are reasonably consistent. We need to know that the code is maintainable, which at an early stage, when you don't even have code, may be difficult to do. But if someone looks at the plan and says, oh, you know, if you're going to use that module, it hasn't been maintained in a long time, you're probably not going to want to do that, it can save them having to recode later. So it's a supportive check, not a "prove to me that all your plans are going to be secure," right? Social maintenance: are the features being introduced going to require someone to monitor spam, to make sure content is flowing? We need to know about that and think about it when we consider something for deployment. Test plans, which do not have to be Behat tests initially, in my opinion (and this is all a proposal), but which we really do need: if somebody is proposing a feature, they need to tell us how we'll know it's working, since we've never seen it before. So that's a component you would need to think about at deployment time. And then performance, security, and coordination with other site or feature maintainers as needed.
So having these checks, and getting people in place to really make this happen, is one of the things we'll be working on, as well as moving things through a process where there is an implementation plan. Again, this is separate from, but connected to, the "this is a good idea" decision, so that you have some idea of what we're going to do and people can talk about it. It's a lot like a request-for-comment period, where you've given people enough to bat the ideas around at an implementation level: the idea has been vetted, but not how you're going to actually implement it. Then the work would move visibly, by setting an issue or a field on a content type to "under development"; it would be developed in the modules and places where it belongs, with a tracking issue for getting things out there; it would be reviewed for deployment, integrated with any other code for deployment, go to stage, and get deployed. And obviously it would iterate in any of these places where there are issues with things like performance. So what we're planning, as the software working group and people who do a lot of work on Drupal.org, is to try this process out with three of the ideas that came out of that recent ideation process: bat them around intellectually, work them through those stages, and see what doesn't make sense. I'm feeling like we have a pretty good handle on it, but if this looks like "not possible," "didn't think this through," "I can already tell you there's going to be a problem," I would love to have that conversation with people during this time. I know we'll adjust it before we say "this is the process." So that's knowing where work is in the system, which is really important to predicting when work is going to get deployed and be available to users. Hopefully you feel like we're getting some concrete progress on that. What I'd like to do is take any clarifying questions that you have and then split into two (or, it's up to you, three) groups.
So the conversation will go like this. Sam and Howard, who are right here, you guys can, yeah. So Sam and Howard, along with Neil and Rudy and a bunch of people who are prepared to take a very concrete step, will talk about environments. The other folks can come with me, and we can talk about process: how do we surface good ideas? How do we get more voices involved? Or we can talk about that other process: how do we make work visible in the pipeline? We're going to do that for 30 minutes. Yes. There will be, yeah. Please ensure that you have a notekeeper and a presenter to summarize the outcome of your conversation in each of those two groups. So if the environment people want to go on this side, anybody who wants to talk process can come over here with me, and we can talk about it. These are the questions I'm hoping to focus us on. Environment people: the very first next step toward that staging server I dream of is actually knowing what components of production need to be on a staging server, and identifying them all, in order to turn that into a work plan. And if you knock that out in the first five minutes, there are just a couple of follow-up pieces: once you have the list of what's on production, is all of it necessary for stage, or what's different between stage and prod? And then going backward to the development environment, trying to get that complete replica. Anybody who's interested in talking about process: we're going to talk about how we can best engage stakeholders to assess their needs, the kind of frequency and timeline that might be involved, and then we can circle back around to how we build review teams to ensure that we're getting feedback to people at the right point in the development process. If we get question number one, the components of the production environment, answered, I think we're going to make a huge step forward. I'm really excited about doing that.
So go ahead and join them if you want to talk about environments; if you want to talk about process, come over here and chat with me, and I'll keep track of the time for about half an hour.