So, we'd like to start by thanking everybody for coming to our talk today: "Why Contributing to OpenStack Makes You a Better Developer". My name is Stuart McLaren, and I work for Hewlett Packard Enterprise. I've been a core Glance contributor since the Essex release. Hi, I'm Alistair Coles, I'm also with Hewlett Packard Enterprise, and I'm a core reviewer on the Swift project. So, for around 10 years I worked exclusively inside the corporate firewall. I worked on internal projects, internal proprietary closed source projects. Typically I worked with a relatively small number of co-workers, people who maybe had a similar background and similar experience to my own. And that felt fairly comfortable, and it was a fairly familiar experience. Ah, this keeps happening to me; I suggest we just run with it. We'll take that out, I thought taking out the batteries would do it. So, as I was saying, typically I'd work with a fairly small number of co-workers, and it was a fairly familiar and comfortable environment. So, it felt a little bit like being in your hometown, which is why I have that picture there; that's where I'm from. And then about five years ago I started to work on OpenStack. That felt very different. There were a lot more people, for starters, and it could feel a little bit intimidating at times. There were strange new customs to get used to, and the culture was very different as well. So, I took this photo when I was at the OpenStack summit in Hong Kong, and while there are a few Irish bars in Hong Kong, it definitely didn't feel like home. It didn't feel familiar. It felt a little bit like getting started with OpenStack. My experience is very similar to Stuart's. Prior to working on OpenStack, most of the projects I'd been involved with were a lot smaller, and usually involved teams that were co-located. OpenStack was the first time that I had contributed to an open source project. 
So, getting involved in something as large as OpenStack, with its processes, and ecosystem, and huge community, actually felt very challenging, both technically and culturally, but ultimately was very rewarding. So, Stuart and I talked about our experiences, and felt that it would be good to share them with other people, and let you know how we feel we have benefited from being involved in OpenStack, but also how we think that you and your organisations will also benefit in similar ways. So, we have learnt a lot during the time that we've been involved with OpenStack. Some of that has been specific skills. Some of it has been just general good practice. But the big idea, the thing that we really want to communicate to you, is that the things we have learnt are actually transferable skills. Working on OpenStack hasn't just made us better OpenStack developers; we feel it's made us better developers elsewhere, and that we can take what we've learnt here and actually apply it in other roles, downstream, on internal projects. So, what we hope to achieve today in this session, first of all, is to encourage other developers who may be getting involved, or thinking about getting involved, to go for it, and go and do some work upstream. Because yes, OpenStack is going to benefit from your contribution, but we think that you will too, beyond just the time you spend working on OpenStack. But it might be that there are also some managers in the audience, and we also hope that we can encourage you to consider making a greater investment of your developers' time in upstream contributions, even if it's just a fraction of their time. Because we think that ultimately that means that your organisation and your products will benefit in ways that may not be immediately obvious. So, there are lots of reasons why we choose to make that investment upstream. For me it was because my employer wanted to explore adding a new feature to the Swift project. 
For others it might be that there's a bug that you need fixed, and you want to avoid carrying a patch downstream. Those are really good reasons to get involved, and hopefully we see the return on that investment as features and bug fixes come downstream and improve our products. But there are some other benefits too that we hope to open your eyes to. We don't just get our bugs fixed. Along the way, our developers also get some free training and some mentoring from other engineers. So, I guess the questions we want to raise are: if you're a developer, do you want to write fewer bugs and become a more effective coder? If you're a manager, do you want your development team to produce higher quality code, to learn to collaborate effectively across cultures and geographies? Do you want your team to be as capable as your competitors' teams? Because if you do, then we think that OpenStack can actually help you with that. Okay, so what are we going to cover? We're going to spend a small amount of time looking at some high-level feedback from the community, and then we're going to focus on three areas that we've decided to take a closer look at: code review, feature design, and testing. So, the first thing we did in preparing for our talk was to reach out to the community and ask them for some feedback. First, we wanted to make sure that they agreed with us that contributing upstream could be beneficial. And luckily they did, so we didn't have to cancel the talk. But we also wanted to find out exactly how they felt it benefited them. So, let's take a quick look at some of their responses. "Working on OpenStack, you encounter different perspectives that you can't find in smaller ecosystems." Well, it's certainly true that OpenStack has grown and become quite a large ecosystem, when you think of all the various projects working across different areas, trying to solve different problems across networking, compute, and storage. 
So there's an incredible amount of technical breadth, and whatever you're interested in, there's an opportunity to find out about the technologies that interest you. And because there are quite a lot of people contributing, that means there's technical depth too. So whatever your area of expertise happens to be, chances are that you'll encounter people that know even more than you do. And as you start to work with them, you'll find yourself becoming even more of an expert. "Working on OpenStack helps you understand what's going on in the full lifecycle process, besides writing better code." So I thought this was quite an insightful comment. Sometimes, especially maybe in large companies, people's roles can be quite siloed. So you might have a development team and a separate dedicated support team. And you probably have product management deciding which features should be targeted for the next release. OpenStack is a little bit different. Teams are largely self-sufficient, which means that they're responsible for pretty much all their lifecycle processes. So as a contributor, you can pick the stage of the lifecycle that interests you and help out there. If you're interested in something like stable branch management, you can help out with that. Or in my case, I worked on security for a couple of releases. "OpenStack is way more diverse than people's own organisation." So one of the things that I found surprising about the feedback we got was that some people chose to emphasise non-technical things, like here with cultural diversity, which we actually got a couple of times. And it seems that contributors enjoy the challenge of working and communicating across cultural boundaries. So while in one culture, having a vigorous and heated debate about something might be seen as healthy, in another culture it might just be seen as bad manners. 
So if you pay attention while you're contributing upstream, you can become a little more adept at communicating with people from all around the world. And as software becomes more and more an international activity, that's maybe not a bad skill to have. And finally, confidence. Again, this was a non-technical piece of feedback. And I think it is the case that as you become more experienced with OpenStack, you do grow in confidence. And this can help you in terms of getting your voice heard in what can sometimes be quite a noisy environment. Thanks, Stuart. So one area that we got a lot of feedback on was code review. Code review is at the heart of all OpenStack code development. In fact, not just code development, but also documentation and process development. So upstream contributors spend a lot of time involved in code review, and it's actually the context in which most interaction between contributors occurs. Now, for me, my experience of review prior to OpenStack was, let's say, informal. We had really good processes in place for version control, issue tracking, and continuous integration testing, but when it came to code review, it was a little bit more haphazard. Maybe we'd be checking in branches, checking out branches, looking at diffs, and giving some verbal feedback to our colleagues. So I have found the experience of getting involved in a more formal code review process quite illuminating. I actually shudder as I stand here and describe to you how it used to be in my experience, because I can no longer imagine engaging in code development without having documented, auditable code reviews supported by online collaboration tools, which is what we use in OpenStack. So just in case you're not familiar with it: every change that's proposed to any OpenStack project will undergo review. Typically, that requires two core reviewers to approve the change before it's merged into any repository. So core reviewers leave votes, "+2" votes. 
Anybody can leave a review and is welcome to leave reviews, but it must be core reviewers that actually approve merging a change into a repository. Those reviewers are experts in their domain. Each project within OpenStack will have a distinct set of core reviewers that have expertise in that project. This is all facilitated using a tool called Gerrit. Here's an example of a review that has been proposed via Gerrit to the Swift project that I work on. At the top left there you'll see a commit message that describes the change, and at the bottom are links to all the files that are being changed, or rather the proposed changes to those files. Then, in the top right, are the names of people that have chosen to leave reviews and to vote on this particular proposal. I'll actually just pause there and refer back to something that Stuart mentioned about the diversity of the OpenStack community. I think I'm right in saying that there are actually people from four different continents, and quite diverse cultures, who have got involved in this change by leaving reviews. So for many people, myself included, the prospect of publishing my change for public scrutiny by unknown other people was actually quite scary at first. In fact, I spent days checking and rechecking the first patches that I proposed to Swift, in fear that I would offend the gurus of Swift and be forever tainted in their opinion. What I've actually come to learn is that it's not a process of judgement. Yes, the reviewer's job is to prevent bad code being merged into the repository, but the way we achieve that is by encouraging good contribution, not by suppressing all newcomers' contributions. So in this case, the author Christian, who by the way is a very smart guy, now has five other smart people working with him on this particular change: the reviewers. And they've joined together in a collaborative process; not a judgemental process, a collaborative process. 
And the result of that is going to be a better outcome. So let's just step through a few of the benefits of that collaboration during review. The first outcome is going to be feedback on process. This is particularly helpful, maybe, for newcomers to the community. Before even looking at your code, a reviewer will check that your proposal has a good commit message, one that describes what you're changing and why you're changing it. And actually there are some other nice bits of process that we check for around commit messages. If we just go back to the example I showed you: just below the bottom of the commit message there, there are three lines that start with Closes-Bug. These are tags that authors can put in their commit message that the Gerrit tool will use to automatically generate hyperlinks to other resources. In this case, it generates links to the bug reports that this patch is fixing. So this first step of review will be checking, and perhaps educating the author in, these processes and good practices. Actually, this is an example of where something that we've seen happening upstream in OpenStack has influenced and benefited us in our own internal projects. We now require that every patch, every change set made in our own internal code development, has a tag that will be automatically linked to a ticket number in our internal issue tracking tool. Okay, so perhaps the most important piece of feedback is on the code itself, and importantly: does it work or does it not work? Again, I emphasise this isn't about judgement. The reviewer is going to be working with the author, perhaps to help them understand why maybe their code doesn't work, or why perhaps it has a subtle interaction with another part of the system that the author has yet to become familiar with. But assuming the code works, another aspect of the feedback will be what we might call quality feedback, or comments on the style. 
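To make that concrete, here's a sketch of the shape of a commit message carrying those tags. The bug number and Change-Id shown are just placeholders for illustration, not references to real bugs or changes:

```
Fix handling of expired entries in the cache lookup

Describe what the change does and, more importantly, why it is
needed. Reviewers read this before they read any code.

Closes-Bug: #1234567
Change-Id: I0000000000000000000000000000000000000000
```

When the change is pushed to Gerrit, the Closes-Bug line becomes a hyperlink to the bug report, and the bug is automatically marked as fixed when the change merges.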
Now, for me, OpenStack was actually the first time I'd worked in the Python programming language. So I have benefited enormously, particularly in my early days, from this kind of feedback during review. It's often the kind of gems that you'd otherwise pay good money for in advanced programming classes, or it's stuff that just isn't written down anywhere; it's specific knowledge that you get from reviewers. And I'm kind of repeating myself, but it's not judgemental, it's collaborative. So the author, in this case, has had their brain focused on fixing three bugs. The reviewers have the luxury of stepping back and seeing that maybe the code could be more compact, or maybe there's a slightly better approach to an interface. Maybe there's some pattern that's been used elsewhere in the code base that could be applied again in this patch. So the two working together, the author and the reviewers, in collaboration, produce a better patch. And then finally, reviews are going to focus on tests. I'm going to talk more about testing later, but suffice to say that if you start to work upstream in OpenStack, you're going to learn about writing tests. Reviewers will be reverting your changes to check that your new tests fire and fail and reveal regressions. They're going to be checking that you haven't taken out tests that were already in the code base. And this is a really important part of the review process too. And one more thing we would say: a lot of people have learnt an awful lot from their involvement upstream. So here's a comment we got from somebody in the community: "Pre-OpenStack, I felt bad if a code review of my stuff turned up a bunch of suggestions. Now I expect revisions and often learn something. And I like that." That's a really constructive experience that this person has had from working upstream. And actually, I echo that sentiment; I'm no longer so fearful of proposing my own changes. 
In fact, I actually welcome the collaboration and feedback and input from my fellow contributors. Obviously, I still strive to write the best code I can, but I think I now understand that my patch is just my best starting point, and with the reviewers we will improve it and add something into the repository that is much better. So coming back to our earlier comment about us realising that things we've learnt in OpenStack have benefited us elsewhere: in the case of review, I think it's that I've kind of internalised this culture. First of all, that reviewing is far more collaborative and less judgemental. But actually, I think I've also taken on board what we've dubbed "downstream guilt". So now, if I'm working on a patch for an internal project, I approach it in the same way that I've learnt to do for an upstream project, as if it was going to be reviewed with the same rigour as an upstream change. I'm not going to allow myself to cut corners just because this is something I'm doing internally. I think I've also learnt to make my patches more reviewer-friendly. So that means things like, first of all, making sure the code I write is understandable to other people, but also that I've broken my changes down into more manageable chunks: just fix a single issue with a single patch. And finally, I have learnt to take the constructive criticism that comes through review with more tolerance, and not allow myself to be crushed by it. And actually to learn a bit of patience too, because what you learn is to really focus on the non-negotiable issues that you're trying to tackle in your patch, and to accept that there are some things that can be traded off. There are some battles that just aren't worth fighting, particularly when it comes to coding style, for example. Stuart? Okay, thanks Alistair. 
So I would say that code review is something that's stayed fairly consistent, certainly since I've been working on OpenStack, whereas feature design is something that's maybe evolved a little bit more. So around the Juno timeframe, most teams adopted a new way of doing feature design: they adopted specs, feature specifications. Specs are really nothing more than a written description of a feature that you want to go and implement, and they're in reStructuredText format. But you don't need to worry about that too much, because the OpenStack infrastructure, which tries to automate all the things, will check the format for you, so you can concentrate on the content. You and the reviewers can concentrate on the content. And for people that like bike shedding, it's a great place too. So most teams keep their own particular spec template. This is basically an empty spec that's got several sections that need to be filled out. Typically it will contain a section for you to put in a brief description of the feature that you want to implement. You'll also be prompted to try and describe some of the alternative ways that the same problem could be attacked. And that's useful, because it helps you think not just about the first approach that comes to mind, but all the alternative ways that you could solve the same problem. And it also prompts you for some of the corner cases or edge cases that you may not obviously think about: things like impacts on the database model, or maybe changes to the REST API, that kind of thing. The templates are really helpful. They're helpful for somebody who is putting up their first spec, so they know pretty much what's expected of them. And they're helpful for the team, because as they put the template together they can ensure that everything that they need is presented. 
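As a rough illustration, a spec template is a reStructuredText skeleton along these lines. The exact section names vary from project to project, so treat this as a sketch rather than any particular team's actual template:

```rst
==========================
 Title of your feature
==========================

Problem description
===================

A brief description of the problem you are trying to solve.

Proposed change
===============

What you plan to implement, and why you chose this approach.

Alternatives
------------

Other ways the same problem could be attacked, and their trade-offs.

Data model impact
-----------------

Any changes to the database model.

REST API impact
---------------

Any changes or additions to the REST API.
```

The author fills in each section, and reviewers comment on the text section by section, just as they would on lines of code.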
So finished specs are published to specs.openstack.org, and that's a good resource for seeing what teams are currently working on. Here's an example spec. This is one that was put up by the Glance project; it's basically about a new way to do image sharing. And here you see the same spec while it was under construction. This is actually a pretty nice example of OpenStack collaboration. It uses the same tooling, the same Git and Gerrit tooling, as a code change would, so you don't have to go and learn a whole bunch of new tools. And this spec had reviews from 15 different reviewers. It went through 23 iterations, and in total there were about 150 different comments. So a spec acts not only as the final document, but also as the forum where people who are interested in this particular feature can go and collaborate. So why are specs a good thing, and why can they maybe make you a little bit better a developer? Well, most importantly, they help eliminate design flaws. And they do that because they separate the design phase from the implementation phase. So you basically have a time and a place where you can think about the high-level issues, the high-level view. And by forcing people to think of the different alternative ways the feature could be done, it also means that you consciously think through the advantages and the disadvantages of each approach. So that helps prevent sort of shoot-first, aim-later behaviours. And it's also more efficient, because it's a lot easier to rewrite a couple of paragraphs of text than to re-implement a whole bunch of code. Upstream, it helps a PTL with planning, because they can designate which specs they'd like to see implemented for the next release. Finally, specs act as a record for future maintainers. 
So if somebody comes along in a couple of years, it can be difficult to grapple with the code to understand what people were thinking at the time, but they can go back and look at the spec, and the comments that people left on the spec, and really get a feel for what was going through people's minds when they were designing this particular thing. So here's another quote from the community; many in the audience might recognise this one: "Specs are great for proving to people that they haven't thought their solution through", with a nice smiley face on it. So that's it exactly. And a little bit like code review, we've adopted specs occasionally for our internal projects. Not for everything, but we do find them useful, particularly if we have something that maybe impacts a few different teams, maybe ones that are distributed across geographies. They can be a good way to reach a rough consensus. Right, so we've talked about design process and code review. The last topic we wanted to talk about is testing. Again, it's an area where we feel that we've benefited a lot from our upstream involvement, and the feedback we've had has confirmed that. A lot of energy is expended in OpenStack on testing. And the basic reason for this is that our code has bugs in it. That's not because we're bad at writing code, it's simply because we are writing code. So we need to embrace that truth and accept that the way we will find those bugs and avoid regressions is to add tests and use tests in our coding. Now, hopefully we all have good testing strategies in our downstream development processes. I think what's really impressed me while I've been working in OpenStack upstream has been the degree to which testing is both expected and enforced. The expectation comes, if you like, through a culture, or I'd describe it as being a culture. There's just this culture of expectation of extensive testing when you're working upstream. 
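To give a flavour of what "tests accompany the change" means in practice, here's a minimal Python sketch. The `quorum_size` function and its values are purely illustrative, made up for this example, not actual Swift code; the point is simply that the tests land in the same patch as the code and pin down its intended behaviour:

```python
import unittest


def quorum_size(replica_count):
    # The minimal number of successful responses needed before a
    # write is considered durable: a majority of the replicas.
    return replica_count // 2 + 1


class TestQuorumSize(unittest.TestCase):
    # Written alongside the code, not as an afterthought. If a later
    # patch changes quorum_size's behaviour, these tests fire and fail,
    # revealing the regression at review time.

    def test_odd_replica_counts(self):
        self.assertEqual(quorum_size(3), 2)
        self.assertEqual(quorum_size(5), 3)

    def test_even_replica_counts(self):
        # A majority of an even count still needs more than half.
        self.assertEqual(quorum_size(2), 2)
        self.assertEqual(quorum_size(4), 3)
```

Running `python -m unittest` picks these tests up automatically, which is essentially what the upstream gate does for every proposed change.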
And in fact, in the Swift project, we have actually codified that expectation. We have a document which describes guidelines for reviewers on the Swift project, and in there we actually state what we're expecting to see. So, as you see there, basically any change that introduces a behavioural change to the project should be accompanied by tests. Not followed by tests, but accompanied by tests at the moment that they are merged into the repository. And in fact, we do actually have a ratio of approximately twice as many lines of test code in the Swift repository as lines of real code. So it's a really strong culture around expecting tests. I mentioned this before when we talked about review, but reviewers will be really rigorous about looking for tests and insisting on tests. And the reason for that is because, first of all, those reviewers are likely to also be maintaining the code, perhaps after the original author has moved on; but also because if your code is buggy, then you don't just break your product, you're going to break our product too. Here's some feedback we got from other members of the community: "Since working on OpenStack, I write tests at the same time that I write code, not as an afterthought. And I write code with testability in mind. This has changed the structure of my code, which generally is now more modular and reusable than before." So I think those quotes reflect really positive behavioural changes in the developers themselves, which I hope they will go and apply elsewhere, even when they're not working on OpenStack and not working upstream. But it's not just the culture that I've been impressed by in OpenStack; it's also the automated enforcement of tests. So we have this test enforcement and automation framework. It's got a name: it's Zuul. And basically Zuul's job is to run multiple test jobs against every single change that's proposed to every project within OpenStack. 
And the results of those tests that Zuul automatically runs are published on that Gerrit review page you saw earlier. Alongside comments from human reviewers will be the results of all of these automated tests. So in the case of Swift, here's an example of the output that you'd see. We actually have 14 different test jobs. Now, each of those test jobs is running hundreds of individual checks and tests. We have style checking. We have unit tests; in fact, unit tests that run on multiple versions of the Python language. We have functional tests, which verify the expected behaviour of the Swift API, and we actually run those functional tests against four different configurations of Swift being deployed. And then we also have integration tests, where we check that Swift has remained compatible with other OpenStack services, even with the proposed change. And then, just at the bottom of those results there, I've highlighted some in the red square. What you're seeing there is a really interesting feature of Zuul, which is that third parties can also plug in their own test infrastructure. Zuul will notify third party test infrastructure of every new proposed change, and that third party test system can then run its own bespoke test jobs and publish its results back to the Gerrit review. In the case of Swift, we just have one example of that running in the community, but for other projects this is even more important. Cinder is a good example, where there are multiple storage back ends for Cinder provided by different vendors. And each of those vendors can, if they choose, be notified of every proposed change to the Cinder code base, run checks against their own test systems, and publish their vote back on whether they consider that change to have broken their particular product or not. Again, I think the quotes, you know, sum things up really well. 
So the outcome, the benefit, of all of this testing is: "I spend less time and find fewer problems at integration time. Generally the unit tests expose bugs at an earlier stage." And I think we know that that's a good thing, and we think we know that's going to save us all money. So just to summarise, how will your experience of OpenStack testing benefit you as a developer? Well, you'll probably start to write more tests. You'll probably find developers start to actually demand a similar automated test infrastructure in-house to what they experience upstream. And this leads to greater efficiency and improved code quality. One really interesting piece of feedback we got was that this leads to more predictable release schedules, and that's something that makes our product managers very happy. Two other things I would mention that may be less obvious. One is that tests actually help you to maintain the code. Tests act as a supplement to documentation; quite often a test is a really good starting place to understand the intent of a piece of code. And that becomes more important when authors move on. And then finally, if you have embraced and taken on board this culture of testing, and you have your own downstream automated test infrastructure, then you'll find that your downstream code is going to be in much better shape when the time comes for you to contribute it back upstream. Thank you, Alistair. So, just to sum up, we hope that we've given you at least a little bit of a flavour as to why spending some time working upstream can help either you or your team to work efficiently, collaborate effectively, test comprehensively, make friends and influence people, and hopefully write better code too. And finally, we'd like to say just a quick word of thanks to everybody that helped us with the presentation by providing feedback. So, that's it. If anybody has any questions, we'll happily take them at this point. 
Can we ask you to use the microphone, just to help with the recording? Thank you. You described developing internal projects. Sometimes a small change is proposed, and we've experienced, you know, that there's a huge overhead needed to merge it into master: all of that, a description of the change, and sometimes even tests are not necessary, because, you know, the previous tests were not right and they were fixed. So, from your experience, where do you draw the line? Maybe you can describe the core parts of the process that must be implemented when developing internal projects. So, I don't know if I touched on it, but I mean, we have benefited from our experience with using the upstream infrastructure. In fact, we've actually pretty much taken all of the upstream infrastructure and moved it in-house. So, when we're developing our internal projects, we're using all the exact same tooling, all the exact same CI, as we're using upstream. So, I think you're right that it can add a little bit of overhead if you're just trying to submit one change, but I wonder if that's outweighed by some of the advantages, in terms of your code always being release-ready. That's one of the things that Alistair mentioned as well, in terms of being able to hit your release schedules. So, I don't know how you would go about having a system whereby small changes would maybe only pass through a subset of tests; I'm not sure. I think I would just support what Stuart said, and maybe we didn't emphasise it enough: the benefit of coming upstream and seeing the infrastructure and the tools that are available, and then applying them internally, also means that our developers have already worked upstream with that tooling. So, we're not actually having to work in two different contexts; it's the same context, at least in terms of tooling, whether we're working internally or externally. And I think that actually improves the efficiency. If I understood your question, you were asking how you deal with that overhead when you have a tiny change. 
I think that partly answers it: because we're familiar with the process, the overhead is diminished. Obviously, there's a time when you're ramping up and learning this stuff, and maybe you feel more of the overhead then, but in the long term I think it pays back. And I think some areas of this haven't been entirely worked out yet. So, when I was talking about feature design, I would say that specs are really good when you've got a pretty big new feature that requires a whole bunch of people to go through it. But there seems to be a little bit of a grey area where, if somebody has something that's not quite a bug but not quite a really big feature, specs can maybe be a little bit of overkill. So, Glance has experimented with sort of mini-specs, or "specs-lite". The community continues to evolve in its approaches to some of these things, and we're always open to suggestions. I think we're probably out of time, but please, if you have questions, come and grab us afterwards and we'll be happy to chat some more. And thank you again for coming along.