Collaboration is one of our values at GitLab and a key to success in the modern business environment. Increased collaboration is at the heart of many digital transformation and DevOps adoption initiatives. One way to increase collaboration is by adopting open-source best practices and encouraging shared problem-solving between internal teams. This approach is known as Inner Source. The next few talks will show how GitLab customers are leveraging it. Starting us off is Peter Tiggs. Peter's talk is full of great insights into how to build a foundation for your inner-source program, based on his experience doing just that at Intel. From an emphasis on discoverability to managing contributions and even sharing your backlog among teams, no matter where you are on your inner-source journey, you'll find valuable takeaways in this talk. Enjoy.

Hello there. Welcome to GitLab Commit. Let me introduce myself. My name is Peter Tiggs, and I'm a principal engineer in Intel's client group. In my last 20 years at Intel, I've focused on a variety of software projects, from driver development to my current focus area of DevOps. Today, I want to talk to you a little bit about how we're moving into a stage of inner sourcing: transitioning from every single software team doing its own thing with its own source code and delivering binaries within the enterprise, to an environment with open-source-like practices, where source code moves freely throughout the organization.

Let's go back in time a couple of years. At that time, Intel was still delivering software in binary form to the upstream teams within Intel. This worked okay until we needed to adapt to a changing environment. We needed to increase the number of client SKUs we had available and delivered to. At the same time, the OS vendors, Microsoft and the various Linux vendors, were increasing their SKUs.
Our validation matrix grew multiplicatively, going from a couple of platforms every year to 20 times that on a regular cadence, with hardware and software we had to mix and match during validation. Our old practice of individual software teams doing whatever they wanted within their own CI/CD systems just didn't work. It didn't allow us to quickly adapt to change, and it didn't allow our platform integration and validation engineers to quickly debug and address issues found during validation.

We introduced a project we called One Windows Recipe, in which we looked at modernizing how we delivered software across Intel. We focused on a few areas: we improved and standardized the CI/CD processes, we made sure we had consistent portals for teams to deliver their software into, and, most relevant to inner sourcing, we made sure the source code was available to all of the engineers who needed access to it, in a shared-source fashion. This led us to a common delivery mechanism that reduced the complexity of how we delivered our software into our platforms for validation and integration. And the shared-source model was the beginning of our steps into inner sourcing.

We got to the point where having access to source code became normal. This is important because previously it was very difficult: there were long turnaround times on debug, and you needed to go upstream to the driver team to figure out how to fix an issue. Now a system integrator could look at the source code, identify a problem, and either have the driver team fix it quickly without a lot of debug or triage, or fix it themselves and submit a patch. We'll get to that in a little bit. So as we modernized our software development practices, we started to grow. One of the things we did with this shared-source mechanism was bring all of the source code into a common location.
In this case we started with a GitLab group. All of the source code for our drivers, primarily our Windows drivers, was put into this single location, which allowed some level of browsability. And we were able to relate the binaries that were delivered or compiled for our platform to the specific source code, or even the Git SHAs, that generated those binaries.

But this was just the start. Even though everybody had access to source code in the same location, we still had teams that wanted to isolate their source code and not share it with other teams. So the next step in this modernization process was making sure we had the right set of roles and responsibilities. We created role-based access for the software, with the goal that all software engineers who were Intel employees could read and fork any of the source code under this common shared-source location. This still didn't quite get us to the point of open source, but it gave us read permission. Each team still controlled its own permissions for creating pull requests or for submitting to a branch, but we'll get to how we've evolved beyond that, at least in some cases.

With this source code all coming together and this role-based access granted, we started with the software for a single platform, which was about a hundred different source code repos. That has since grown over the last couple of years to over 7,000 source code repos in our inner-source repository, and it continues to grow. Now, as you might imagine, this is a lot of source code repositories. If we want to get to the point where we can utilize this source code and find out what's useful to reuse in other parts of the company, we need an ability to find it, discover it, and catalog it.
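As an aside, group-wide read access like the role described here maps naturally onto GitLab's membership levels, where the Reporter role grants read and fork but not push. The sketch below is a hedged illustration, not Intel's actual setup: the instance URL, group ID, token, and helper names are all placeholders.

```python
import json
from urllib import request

# Placeholder instance URL; a real value would come from your environment.
GITLAB_API = "https://gitlab.example.com/api/v4"

def reporter_membership(user_id):
    # access_level 20 is GitLab's Reporter role: read and fork, no push.
    return {"user_id": user_id, "access_level": 20}

def grant_read_access(group_id, user_id, token):
    """POST /groups/:id/members -- add a user as Reporter on the group."""
    payload = json.dumps(reporter_membership(user_id)).encode()
    req = request.Request(
        f"{GITLAB_API}/groups/{group_id}/members",
        data=payload,
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)  # network call; not exercised here
```

In practice you would run something like `grant_read_access(1234, new_hire_id, token)` from an onboarding job, so every engineer lands with read permission across the shared-source group by default.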
So what we did, as part of a program we called One Source, was establish a governed taxonomy of where source code lives within the directory structure of the source code repositories. We wanted to make sure, first, that it wasn't based on any team names or anything else that would be invalidated by a reorganization. We didn't want it named after a project or a set of requirements either; the focus was on what the source code is trying to do. So we created a top-level hierarchy with terms like drivers, applications, and firmware, and then a second tier to that hierarchy to get more precise. We set this as a governed set of terms, and each piece of software that came into our shared-source repository had to classify itself within those terms. Beyond that, certain pockets of software were allowed to define their own subterms underneath, managed at a team level.

As we built out this taxonomy, we wanted to make it as obvious as possible where software landed. As you can imagine, many things are hard to classify that way, so we made the ultimate arbiter of where a particular software component should live the team that originally created that Git repository. And as I said before, we wanted to make sure that everybody had access to the source code regardless of where they are in the organization. So on top of that taxonomy, we built a web portal that allowed us not only to run basic searches for source code based on tags and the directory structure, but also to browse through the set of terms and subterms. Here you can see the little web portal we've created: it shows the top-level terms, applications, containers, etc., and the second-tier terms for our drivers, graphics driver, GPU and IoT drivers, etc.
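The governed two-tier taxonomy described above can be sketched as a small catalog. The top-level terms and subterms below echo the ones named in the talk; the function names and data shapes are purely illustrative.

```python
# Governed terms: top-level categories map to their allowed subterms.
# Teams with an allowance for their own subterms would extend these sets.
TAXONOMY = {
    "drivers": {"graphics-driver", "gpu-driver", "iot-drivers"},
    "applications": set(),
    "containers": set(),
    "firmware": set(),
}

def classify(repo, term, subterm=None):
    """Validate a repo's classification against the governed terms."""
    if term not in TAXONOMY:
        raise ValueError("%r is not a governed top-level term" % term)
    if subterm is not None and subterm not in TAXONOMY[term]:
        raise ValueError("%r is not a governed subterm of %r" % (subterm, term))
    return {"repo": repo, "term": term, "subterm": subterm}

def search(catalog, term):
    """Basic search, like the portal's browse-by-term view."""
    return [entry["repo"] for entry in catalog if entry["term"] == term]
```

The point of the `ValueError`s is the governance: a repo simply cannot enter the catalog under a term that isn't in the agreed vocabulary, which is what keeps 7,000-plus repos browsable.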
And this allowed teams to very quickly go in and find particular source code they might be interested in or want to use. Now, we've had challenges really bringing this catalog to the forefront because, one, we're still growing it out. We have a sort of catch-22: we need to bring people in and give them access to the catalog, and we need to bring source code in and have it in the catalog, all while we're still delivering our software projects, platforms, and products. So you may have something sitting out there that's useful, but we haven't yet put it into the inner-source mechanism for us to discover. Regardless, this is our first step toward actually sharing and rediscovering software that could be reusable.

Once software is discoverable, we've been able to take a few of these projects and make them truly inner source, with contributors from around the organization making them happen. One of the key things we found important is to set up the expectations for how to contribute to these projects from the get-go. So one of our key recommendations for building out an inner-sourcing environment is to establish your contribution guide. A CONTRIBUTING markdown file is the common way of putting that in your repository, and setting expectations up front is really key to getting good contributions. Make sure you know what you're going to expect your contributors to deliver, because we're pivoting away from traditional enterprise software, where the only contributors to the source code are the folks on your team. We want the project to be available to folks across the organization, and they may not share your team's culture. So be up front about what your acceptance criteria are. What is the required pass rate for your unit tests?
Are contributors expected to increase the pass rate, or to make sure coverage stays the same? Be clear on what your style guides are and how to make sure that source code coming in from contributors will look like part of your project. Make sure any other acceptance criteria you may have, say security checks or other things, are documented up front in the contributing guide, so it's very clear to somebody outside of your immediate team what to do when they're delivering software.

We had a project related to this software modernization that we called the abstract build interface, and what this project did was give us a set of software that allowed us to standardize our DevSecOps pipeline. Many teams needed to do multiple things with different DevSecOps tools, specifically things like Snyk or BDBA, the Black Duck Binary Analysis tool, and there were multiple ways teams were integrating those into their CI/CD pipelines. The abstract build interface gave teams a standard mechanism. But our single team within the client group wasn't able to keep up with all the demands for new tools from the various other teams across the organization that were using the abstract build interface. Snyk, for instance, was a tool our team did not use. By inner sourcing this project, we allowed our platform security team to create and introduce new security scanner tools into this DevSecOps pipeline library very quickly, without requiring our team to intervene. That was a key advantage of setting up inner sourcing, and one of the things that made it work was that we were very up front with this other team about our expectations going into the contributing guide.
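One way to keep a contributing guide honest is to encode its acceptance criteria as an executable gate, so contributors and CI run the identical checks. This is a hypothetical sketch: the metric names and thresholds are illustrative, not Intel's actual criteria.

```python
# Acceptance criteria from the (hypothetical) contributing guide,
# expressed as minimum values for measured metrics.
CRITERIA = {
    "unit_test_pass_rate": 1.00,  # all unit tests must pass
    "line_coverage": 0.80,        # coverage must not drop below 80%
}

def gate(metrics):
    """Return the names of any criteria a contribution fails to meet.

    An empty list means the contribution clears the documented bar;
    a missing metric counts as a failure rather than a pass.
    """
    return [
        name for name, minimum in CRITERIA.items()
        if metrics.get(name, 0.0) < minimum
    ]
```

A contributor can run the same `gate()` locally before opening a merge request, which is exactly the "no surprises" experience the contributing guide is meant to create.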
So the other key thing, which I think is really important and is what I deal with on a very regular basis, is making sure your continuous integration solution is in place, consistent, and giving you useful information. That way, when contributors submit a patch to your project, regardless of where they are in the enterprise, because they may not be part of your direct team, they can see the results of that build and test run and get feedback immediately. And you, as the owner or maintainer of a particular software project, can see the results of that patch and decide whether to accept it. In many cases this lets us quickly streamline patches from across the organization or across the enterprise into the solution, which we couldn't do if we were just our own team working in isolation.

One of our other projects was a Python library, and a Flask web service built on top of it, that allowed us to integrate and submit the various ingredients for platform integration and validation. This was another project where we had started doing some level of inner sourcing. In this case, we had the server team adopt it and do the initial development, because the client team didn't have enough bandwidth to work on the project. By having this other team start it up, we were able to combine what the client team needed and what the server team needed into a single solution that the server team worked on. And with the CI/CD pipeline now established, anybody who needs to make adjustments to the Flask service or the underlying Python library can quickly put in a patch and submit it through the pipeline. So we definitely want to make it very easy to submit. We want to match our contribution guides, so that the checks in place in our continuous integration process are the same checks expected in the contribution guide.
And we want to make your CI/CD pipeline very, very visible. If somebody is going to consume your project, they should be able to go see the results of those scans and tests, so that they understand, regardless of whether a submission comes from your team or from another team, that they can consume and use your software. Contributors need to be able to see what their patch is doing and how it affects your project, and maintainers can see how contributions come in.

So let's talk a little bit about the role of a maintainer. The maintainer is a key role in our practice for making inner sourcing work. The maintainer needs to enforce architectural consistency. When you shift away from an enterprise software team, where you've got maybe a team lead setting expectations and only a group of five to nine people working on a project, you can have really tight architectural consistency. But as soon as you start bringing in contributions from everywhere else in the enterprise, you need to make sure the maintainer is well versed in the architectural consistency that's expected and can give that feedback to contributors as they go. The maintainer needs to take ownership of the quality of the project, and this can't be reinforced enough: the maintainer, or maintainers, are the most important factor in making a software project with contributions from around the organization successful. One of the hardest things they will have to do is say no when it's appropriate: this contribution can't come in at this time, or this contribution isn't yet at the right quality level. Because as we get broader and broader with these inner-source concepts, with other teams depending on the software you're delivering and patches coming in from other people, you need to be able to hold that quality level.
And finally, the maintainer of course has the role of accepting and managing merge requests. Whoever is in the maintainer role needs enough bandwidth to really study those merge requests and CI/CD results, so they can accept or reject patches as they see fit. So overall: have very clear acceptance criteria, and the maintainer role is key.

All right, one last thing I want to talk about a little bit is how this fits into your agile projects, whether you're doing Kanban or Scrum, or have expanded out to SAFe or some other scaled agile or Scrum-of-Scrums situation. How do you fit this into your schedule? Maybe you're even doing waterfall. What do you do about your backlog, your program managers, and the schedule commits you have within your enterprise? It's easy to say, hey, this is an open-source or inner-source project and therefore schedule doesn't apply, but that really doesn't hold water in the enterprise. So there are a couple of approaches we've tried, and we're still learning about what we want to do.

The first approach we've tried is to actually make your backlog available to your contributors, and maybe put bounties on or highlight items your contributors could pick up that you're not going to get to in the near future. This is what we did with the abstract build interface project: we knew there was a series of scan tools we wanted to integrate, and we knew we weren't going to get to all of them as fast as certain teams wanted. So we made that known, and the teams that did need them were able to pick them up and integrate them.
The other thing I think is important in this area, so you don't have multiple people taking different approaches to the same problem, is to also make it very clear in your backlog what the primary team delivering this inner-source project is doing. This will prevent anybody from redoing something you're already working on in a different way. And when you have multiple contributors who may be trying to contribute to the same thing, you will probably want to make those other contributions known as soon as possible, so people aren't wasting their time.

One of the other approaches we've taken is to actually integrate the contributors into the backlog: make it known what they're working on, and make them part of a virtual scrum team. Give them some allocation of capacity within your agile planning mechanisms, so that you know about, and have some expectations around, contributions coming from these external teams. This works in some cases, and it worked in our case, where we had very clear agreements at a management and planning level that certain teams were going to deliver certain contributions into our source code. In other cases it may not work, and we're still trying to find the right balance.

The last case, which is the ideal case, is just making the source code open and making it very clear how and when somebody can contribute, much like the current open-source community, where contributions can come in at any time and you absorb them as your capacity allows.
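A minimal sketch of the "bounty" idea from the first approach: surface backlog items that are explicitly open to outside contributors and not already claimed by the owning team. The `help-wanted` label and the item shape are assumptions for illustration.

```python
def open_bounties(backlog):
    """Return backlog items advertised to contributors.

    An item qualifies when it carries the (assumed) help-wanted label
    and nobody on the owning team has claimed it yet.
    """
    return [
        item for item in backlog
        if "help-wanted" in item.get("labels", ())
        and item.get("assignee") is None
    ]
```

Publishing a view like this next to the contributing guide tells other teams exactly which work is up for grabs, while everything assigned to the owning team stays visibly off limits.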
But I would love to hear, in some of these conversations or on Twitter, about ideas you have for integrating inner sourcing into a backlog, because this has certainly been a learning area and something we continue to explore as we walk this journey to inner sourcing at Intel.

So, to quickly recap: I talked a little bit about how our modernization effort at Intel a couple of years ago got us the advantages of shared source code, and what that brings to the environment. I talked about some of the challenges we've had going from 100 shared source repositories to now 7,000-plus, and how we find the source code we can contribute to, learn from, and utilize. I talked about the importance of maintainers and of setting correct contribution and acceptance criteria up front, so that everybody knows what they're dealing with, both when contributing to your source code and when receiving it. And I talked a little bit about how we're handling schedule in our scrum, backlog, and other agile practices, to make sure that contributions coming in from left field can be integrated and made valuable without derailing your sprints or wrecking your capacity planning.

So thank you again for listening. I'm looking forward to any conversations we may have as a result of this. I hope you had a chance to learn something.