My name is Jamie Plower. Evan and I are here from Fidelity Investments to talk about our software delivery platform, and we'll go into some of the details. So Evan, do you want to kick us off? Absolutely. Thank you so much for joining us. Quick disclaimer: this is just our experience at Fidelity. We are not endorsing anything; this is just here to tell you about the fun stuff and the opportunities we've had enhancing our software delivery platform. A quick summary on Fidelity, and I want to make this fast; my job is to lay the foundation and then pass it to Jamie for the fun stuff. Fidelity started in 1946, and each generation of computing has been a key inflection point for us. Things really took off in 2016 with our first application in the cloud, and today we have over 6,500 applications in the cloud. When we went through this transformation, our original software delivery platform was fragmented: individual departments had their own standards and automation, and nothing was consistent. So when we enhanced our software delivery platform, we wanted to redo it from the ground up, focusing on the user experience. We set key objectives, the first being developer focus: we switched to an inner-source model where all of our DevOps automation is driven by input from developers, and anyone in the enterprise can contribute to it. Our automation lets developers drive capabilities like self-service onboarding and release orchestration. From this, though, we knew we needed platform stability: when we give developers control, we're going to see more and more deployments, so that was the other big objective. We also recognized a lot of the pain developers had with security and compliance.
Working with the security, audit, and compliance teams, we built into the platform the ability to call their CLIs and APIs, giving developers feedback much faster. Instead of opening tickets and waiting weeks for a security officer to come back to them, developers now get answers quickly. We also wanted to deliver key data analytics to our business partners and to the developers themselves: having a central capability let us drive more data insight, surfacing DevOps metrics and developer-facing insights. The whole goal was to remove pain points. The first two you'll see are about the developer experience and how each BU was fragmented: copy-and-paste capabilities and no standard governance were a real pain point for a lot of teams. The other was security: developers would write code, then have to wait weeks for input and try to remember what that code even was. We also saw that maintaining software consistency and maintaining these capabilities was limiting for developers, as events like Log4j showed. So as we went through this transformation, we realized we could standardize and improve a lot compared to the legacy setup. You can see it was highly fragmented: multiple enterprise-level audits, friction for developers from the lack of consistency, and BU DevOps teams having to maintain and support these deployments as they scaled and as we moved to the cloud. It was costly, and they were unable to maintain all of these independent, fragmented capabilities. So I'm going to pass it to Jamie to talk about our newer platform. Great, thanks, Evan.
How we went about solving this problem: as Evan mentioned, we worked with our DevOps Council leadership and agreed on key standards that we wanted to bake in from the start. We focused on the key domains you can see there, broken down into five across source code, build, security, validation, and production, with logical access checks and so on. On the top you can see our key platforms. The goal was to shift left and not bake too much into the pipelines themselves: let's use the platforms for the power they have, so that as the developer in the middle goes through the journey of releasing code, the various checks and balances fall out naturally. And by simplifying our processes around the pipelines, we can bake in the standards we want to achieve. As an example of the checks we want to build: making sure the approval process is done correctly, and how artifacts are built and consumed. We want to build lineage right the way through, from concept to post-deployment, that we can trace for every single artifact built within Fidelity, make sure they're consistently built to our standards, and be able to update those standards as we go forward. The whole idea: move faster, move safer, and enable a great developer experience. What this diagram explains, on the left, is some of the key assets we work with across our DevOps toolchain. As I say, we're capturing key lineage points and data points across our SDLC process and linking them together, that steel threading. This is super important not only for security compliance; it also provides many benefits around specific metrics and KPIs that we can hand back to our engineering managers and development community.
So there's a carrot for adopting this process as well: teams get huge insights that until now would have been difficult to surface. How are we enabling this? We've been partnering with the CDEvents team; we work with Andrea and the team there. First of all, it's a fantastic project. Why is it important to us? What matters for us is having a standard language to link everything together. We have many different platforms and many different systems, and without a consistent way for them to talk to each other, nothing lines up; CDEvents allowed us to model consistency across our organizational groups, and that's been really powerful. We started off with CloudEvents and a lot of customization, but having a standard means that as we adapt our toolchain platforms in the future, we have a standard contract to work within. This diagram, borrowed from the CDEvents team, helps explain how, if we're dealing with multiple SCM systems, a specific event like a change-merged event can be tied to the specific artifact being published; then, using the SHAs and the various data points provided, we can link that, for example, to deployments. This lets us tie the lineage back to who committed the change, when it was deployed, where, and what systems it went to. It doesn't matter how many targets we have; to us it's just service-deployed and service-updated events. And as the specification matures, there are going to be SBOM linkages, incident events, and, as was mentioned today, test events. So it allows us to work within that overall ecosystem really well. This is just a quick example. Evan, do you want to talk to this a little bit? Absolutely. This is a template catalog on Jenkins.
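To make that linkage concrete, here is a minimal sketch in Python of how events like these can be tied together by SHA. The event shapes and type strings are simplified illustrations of CDEvents-style payloads, not the exact specification, and the field names are assumptions.

```python
# Illustrative only: simplified CDEvents-style payloads, keyed by commit SHA.
# Type strings and field names approximate the CDEvents vocabulary; they are
# not guaranteed to match the exact specification.

def link_lineage(events):
    """Group event types by SHA so a deployment traces back to its commit."""
    lineage = {}
    for event in events:
        sha = event["subject"]["content"].get("sha")
        if sha:
            lineage.setdefault(sha, []).append(event["type"])
    return lineage

events = [
    {"type": "change.merged",      "subject": {"content": {"sha": "abc123"}}},
    {"type": "artifact.published", "subject": {"content": {"sha": "abc123"}}},
    {"type": "service.deployed",   "subject": {"content": {"sha": "abc123"}}},
]

print(link_lineage(events)["abc123"])
# ['change.merged', 'artifact.published', 'service.deployed']
```

Because every event carries the same correlation key, it does not matter which SCM system or deployment target produced it; the lineage falls out of a simple grouping.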
It's immutable, and it's easy for developers to onboard: all they have to do is provide a properties YAML file. What you'll see is that at each stage, as we go through the pipeline, we're able to collect the properties passed into each function. You can see right here all the properties, from their Docker settings to everything else they want to do, and we're able to gather insight into each individual build. For the changes going through, we track the commit ID, and end to end we can see in the pipeline what was executed and what changes were tested. Yeah, and this is really powerful, because a lot of this is dynamic, and it provides us context on top of actually leveraging the platforms themselves. We have standard pipeline libraries, and now we're getting right down to the detail of specific commands: we can append dynamic data, for example the SHAs that have just been pushed for the artifacts, and pass them through to our backend. Stepping back again and summarizing where the platform lies versus where the pipeline lies: as I mentioned earlier, we want to meet developers where they are. Part of our legacy challenge was that consistency was a trust-based policy; we were hoping teams would buy in and use these tools. Now we can enforce it in a standard way across the pipeline, leveraging the webhooks and the various data coming out to give real-time contextual feedback. What's also important is that we can bake in, adapt, and evolve our policy as we go. Because it's centrally managed in the platform, these controls can be abstracted and injected in, so we have a way to evolve going forward.
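As a rough sketch of that onboarding contract: a developer supplies a properties file, and the templated pipeline validates it before running anything. The key names and rules here are hypothetical, invented for illustration, not Fidelity's actual schema.

```python
# Hypothetical sketch: validate a developer-supplied properties map before a
# templated pipeline consumes it. Key names are invented for this example.

REQUIRED = {"appName", "dockerImage", "deployTarget"}

def validate_properties(props):
    """Fail fast if the onboarding properties are incomplete."""
    missing = REQUIRED - props.keys()
    if missing:
        raise ValueError(f"missing required properties: {sorted(missing)}")
    return props

# What a team might declare in its properties YAML, parsed into a dict.
props = {
    "appName": "payments-api",
    "dockerImage": "registry.example.com/payments-api:1.0",
    "deployTarget": "dev",
}

print(validate_properties(props)["appName"])  # payments-api
```

Validating up front is what keeps the template itself immutable: the only thing teams change is the declarative input.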
One of the key net effects of this, which is really beneficial, is that it allows us to simplify our pipeline constructs. As we mentioned before, we have standard libraries; that's a challenge in any firm, especially one the size of Fidelity. What we've provided is standard templates, a composable unit we call segments, and fine-grained capabilities that bring it all together into a common marketplace of capabilities. As new tools come and go and technologies evolve, we can update these fine-grained capabilities and move forward from there. This slide talks to that a little more. Stepping back and looking at the overall platform, on the left-hand side you can see the core platform assets we work with; we can swap in new tools as they come along. Our catalogs provide off-the-shelf, immutable templates with prescribed golden paths that teams can just pick up and go with. Segments take the idea of a phase in your toolchain and allow our power users to describe their own workflow while still using a standard set of capabilities. And the Fidelity Pipeline Library is basically the LEGO blocks that provide all of this. As you can see, all this data gets pumped into our evidence store, which allows us to do a lot of really interesting things: not only collating specific data points that we can feed back into the pipeline in an operational context, but also letting our audit and compliance teams directly query this attestable data. Audits are now much more streamlined, the data is readily available to them, and it uncovers a lot of data points we've never seen before.
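That template/segment/capability layering can be sketched roughly as follows, with all the capability names invented for illustration: segments compose fine-grained capabilities into a phase, and templates compose segments, so swapping a tool only means updating one capability.

```python
# Illustrative layering: capabilities are the LEGO blocks, a segment composes
# capabilities into a phase, and a template composes segments into a golden
# path. All names here are hypothetical.

def clone(ctx):  ctx["steps"].append("clone");  return ctx
def build(ctx):  ctx["steps"].append("build");  return ctx
def scan(ctx):   ctx["steps"].append("scan");   return ctx
def deploy(ctx): ctx["steps"].append("deploy"); return ctx

def segment(*capabilities):
    """Run capabilities in order, threading a shared context through."""
    def run(ctx):
        for capability in capabilities:
            ctx = capability(ctx)
        return ctx
    return run

ci = segment(clone, build, scan)   # one toolchain phase
cd = segment(deploy)               # another phase
golden_path = segment(ci, cd)      # an off-the-shelf template

print(golden_path({"steps": []})["steps"])
# ['clone', 'build', 'scan', 'deploy']
```

Because a segment has the same call shape as a capability, templates compose freely, which is what makes the "common marketplace" model workable.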
And as I mentioned, the net effect is a lot of real-time correlated data that we can use for engineering managers, for insights, and so on, publishing out to dashboards, reporting, and general fitness checks for the applications themselves. The key thing for platform adoption is that the right way is the simple way; teams often don't even know this is happening around them, but they're constantly being kept up to date. So how are we achieving this technically? This gives you a high-level view of how we're approaching the problem. On the very left-hand side, as I mentioned, anything that can publish an event in Fidelity can effectively integrate with this system. Then we've created collectors, some of which are plugins; as mentioned, we've contributed the CDEvents plugin back to Jenkins, and it's available now for everyone to use. We've also got various collectors that, when events come from multiple SCM systems for example, let us provide a standard template around them, and we use CDEvents to tie this metamodel together in a consistent way. In the middle you can see the context data stores: these are our immutable ledgers. They capture the raw events and provide the raw evidence, and we can put APIs on top of them, or publish data out to other teams that may want insights from the various stores we're collecting. Then on the back end, we leverage hooks to clone a copy of the data and apply what is effectively a fan-out pattern, doing some interesting processing where we can target specific controls we're interested in and combine data from multiple datasets to answer those questions.
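A minimal sketch of that ledger-plus-fan-out shape, assuming in-memory stand-ins for the real event bus and data stores: raw events are appended untouched, and each downstream processor receives its own independent copy.

```python
# Simplified fan-out: raw events land in an append-only ledger first, then
# every registered processor gets its own copy. In-memory lists stand in for
# the real immutable ledgers and downstream stores.
import copy

ledger = []        # append-only raw-event store (immutable-ledger stand-in)
processors = []    # downstream consumers: evidence store, analytics, ...

def register(processor):
    processors.append(processor)

def ingest(event):
    ledger.append(copy.deepcopy(event))      # raw evidence, never mutated
    for processor in processors:
        processor(copy.deepcopy(event))      # fan out an isolated copy each

evidence, metrics = [], []
register(lambda e: evidence.append(e["id"]))
register(lambda e: metrics.append(e["type"]))

ingest({"id": "evt-1", "type": "artifact.published"})
print(len(ledger), evidence, metrics)
# 1 ['evt-1'] ['artifact.published']
```

The deep copies are the point of the pattern: no processor can corrupt the raw evidence, so the ledger stays trustworthy for audit while analytics evolves independently.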
And that's what we publish into our evidence store, which is a roll-up; I have an example query in a few minutes to describe that. The idea is that this is a real-time event in the pipeline, so we can query the evidence store and allow teams to have smart gates across their toolchain, or a production hard gate that enforces the whole list of controls we want. And because this is a nice fan-out pattern, we can also funnel that data into a more analytics-oriented data store and roll it up for other uses. As can be seen here, this is a snippet of a sample GraphQL API that we provide back to our development community. There are a few different access patterns, but this one demonstrates how, by passing in a SHA from a merge request, we can provide lineage right the way back from the SCM data to the pipeline data; in this case, the Sonar scan and the artifacts produced. As we mentioned in the earlier slides, this can be decorated with the security data, all the test data, any additional audit controls we want, container rehydrations, anything like that can be continually added into this bucket. The real benefit for the end user is that they have control over the data they're getting, and it's clearly very powerful from an audit standpoint that they can access the data in such a simple manner. One thing that isn't shown here sits behind our ledger table: the key piece driving this end-to-end capability is actually the CDEvents plugin. Without knowing the start and stop of the pipeline and its individual stages, we wouldn't be able to collect this data in a seamless way. Our pipeline is critical, and without the CDEvents plugin we wouldn't be able to create this ledger. Good point, yeah.
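Behind an API like that, the resolver is essentially a join across the stores, keyed by SHA. Here's a hedged sketch of that lookup, with the store layout and field names invented for illustration:

```python
# Hypothetical resolver logic behind a lineage query: join SCM, pipeline,
# scan, and artifact records by commit SHA. Store shapes and field names are
# assumptions, not the real schema.

scm       = {"abc123": {"mergeRequest": 42, "author": "jdoe"}}
pipelines = {"abc123": {"buildId": "build-901", "status": "SUCCESS"}}
scans     = {"abc123": {"tool": "sonar", "qualityGate": "PASSED"}}
artifacts = {"abc123": ["registry.example.com/payments-api:1.4.2"]}

def lineage_by_sha(sha):
    """Roll up everything known about one commit into a single view."""
    return {
        "scm": scm.get(sha),
        "pipeline": pipelines.get(sha),
        "scan": scans.get(sha),
        "artifacts": artifacts.get(sha, []),
    }

print(lineage_by_sha("abc123")["scan"]["qualityGate"])  # PASSED
```

Adding new evidence types (security data, test data, container rehydrations) is then just adding another store to the join, which is why the "bucket" can grow without breaking existing consumers.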
As we evolve forward, the use cases aren't just what we described there in the pipeline; there's the overall usage. We can get a complete metamodel of what's going on in the pipelines, whether it's the container rehydration teams are using or anomalies that are occurring, so there's a prescriptive area of data mining we can pursue, which we touch on in the next slide. As for the stakeholders who use this system: obviously the end engineers benefit. Central teams have full control over the templates they provide and offer out to various teams, so it's very flexible from that point of view. On the composable side, we currently have over 6,500 applications out there, in different flavors with different tools, so the segments model provides huge flexibility for teams to focus on handling their own needs, and if there's a gap, they can contribute into our process. All of this automatically pulls the data into the framework we're showing. To tie this all together, it allows us to make some really smart decisions. On the very left-hand side, we're summarizing that the core platforms and the pipelines themselves provide our data foundation. As we mentioned, anything that can produce an event can emit data points into this, and they're modeled in our ledgers, as we mentioned beforehand. The Data Trust Foundation is where we focus specifically on the ledgers and the evidence store; that's our source of truth. No one can come in and alter it, and it's the core foundation for any of the other data points we provide out to the wider organization. Beyond evidence and compliance reporting, one area we're focused on is data for smart analytics and insights, and this is an evolving space for us.
We're not fully there yet, but it's been very exciting to answer some critical problems that I'll talk about at the bottom. By funneling the data off on the back end, separate from the evidence store, we have the ability to pre-process it, roll up the various data points around a domain, or cross-collate data from, for example, SCM, test, or pre-production sources to answer complex questions that we can then act on from a pipeline point of view. To give some examples of how we can use this data: we have different tiers of application. Some are tier 0, mission-critical applications; some are less so and could be up to tier 5, for example. We want a similar approach to how we look at policy across them, but we let the contextual data drive decisions about the posture we apply to those controls. Like everyone, root cause analysis is one key area that can be a pain point. By pulling these different data threads together on the back end, and with input from our engineering managers and stakeholders alike, we can provide key data points about what change potentially broke something, and give them the lineage back to the actual commit involved. Again, the beauty of this is that we're providing the key data points so engineering teams can build solutions on top of them and work with our SRE teams, for example, enabling much finer-grained evidence across the wider enterprise, which in a large regulated firm like Fidelity has not traditionally been easy to achieve.
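One way to sketch that tier-aware posture, with the tiers and control names invented for the example: the gating mechanism is the same everywhere, but the set of controls a deployment must pass scales with application criticality.

```python
# Illustrative tier-aware gate: the same policy mechanism applies everywhere,
# but lower tier numbers (more critical apps) require more controls to pass.
# Tier thresholds and control names are invented for this example.

POLICY = {
    0: {"scan", "pen_test", "change_approval", "rollback_plan"},
    3: {"scan", "change_approval"},
    5: {"scan"},
}

def gate(tier, passed_controls):
    """Pick the strictest policy at or below the app's tier, then check it."""
    applicable = max(t for t in POLICY if t <= tier)
    return POLICY[applicable] <= set(passed_controls)

print(gate(0, ["scan", "pen_test", "change_approval", "rollback_plan"]))  # True
print(gate(4, ["scan", "change_approval"]))                               # True
print(gate(4, ["scan"]))                                                  # False
```

Because the policy table is data rather than pipeline code, evolving the posture for a tier is a central change that takes effect everywhere at once.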
And again, as Evan said, with CDEvents and that metamodel of the pipelines themselves, data engineers can have a lot of fun pulling data points so we can understand key things: the containers being pulled in, whether there are anomalies, whether someone is, for example, injecting bad code or following bad practice within the pipelines themselves. This provides insights into how the pipelines are being used and what we can do to remediate. More generally, it comes down to historical analysis of the data coming through: understanding build and scanning successes, and general metadata that's really important for understanding usage trends across the available toolchain. So that's a high-level overview of what we're doing, but you can see how data is king for us. It lets us answer questions from a security and audit standpoint, and it also provides roll-ups that let us tackle quite technical challenges, letting the tooling read the data in and make decisions based on it. That's the future of where we're going with this. In summary, that's pretty much where we are today, and I want to make sure there's plenty of time for questions. Is there anything else you want to add, Evan? No. Okay. So thank you, and I hope that was insightful. Maybe we have a question. Let me grab the mic. Absolutely. I want to make sure there's enough time at the end. You're going to take the questions? You can go around. Oh, now I have to answer questions. You took my job. Absolutely. So, a question: Fidelity is very much about proprietary value, right? You've got competitors and you compete with people. How did you ever persuade them to open source the CDEvents plugin? Why not keep it proprietary? Why not hide it from the rest of the world? It's impressive that you've open sourced it. What was it like to do that, and what was the experience of changing the company that way?
Well, in fairness, my boss, Jerry, and his boss, Joe, are big proponents of open source. And I think it was mentioned in the previous talk about providing a culture for developers to get involved. We're set up in a way at Fidelity that every Tuesday is a learning day, so there's basically a day per week where we can focus on projects like this. I personally had a need, and we used these as part of our toolchain. There was a gap out there, so we'd been involved with the events SIG for some time and got in touch with Andrea. As we came to understand this project, as I said, we had gone down a custom route with CloudEvents, and it just did not make sense that we were reinventing the wheel. The spec is maturing, but it allows us to express our domain problems, and it's really interesting to see how other enterprises are dealing with this. It's not like we're special; this is a common problem across many different enterprises, so we were encouraged, and we baked it into our actual work and delivered it pretty quickly. They're a fantastic organization to work with, with very clear contribution guidelines. We also find it really important because we learn how other firms and open source projects do things, so we can bring that back into Fidelity and improve our own processes; nothing's perfect, but we can keep evolving. I think we see open source as a clear way to decouple ourselves from some proprietary corporate systems. That's one of the really important points, and it's being pushed; this is just one example of our involvement across the whole ecosystem. I think we've made a large number of contributions to open source in the last quarter alone within our group. Hi. No, go for it; we've got loads of time. You talked about the difference between some capabilities at the platform level versus the pipeline level.
Can you explain a little bit what you mean? Could you come closer? How do you think about the pipeline as part of the platform, using the platform? Where do you draw the line, and how do you think about those two separate things? In the area we work in with the central platforms, we're focused predominantly on the developer journey. Up till now, a lot of these checks were, as I said, put into the pipeline itself. It just made sense to simplify that and focus on using the platforms for their core purpose. Apart from the functionality they provide, they give us an ecosystem that we can not only extract data signals from but also enforce and guard. The pipeline piece is, you could say, the sugar on top, because the platforms are consistent and central; they're non-negotiable. If someone wants to release code, they have to interface with and go through these platforms, and we can invoke scans from that, for example on artifact pushes: highly complex workflows that were previously prescribed in the pipelines themselves. That then lets us have a discussion with the development community about the core building blocks they require. Because really, they want to clone, they want to scan, they want to push; they want to pull down config, deploy, test, and push code out. And it also lets them simplify. It was amazing to see how common the challenges were, whether teams were dealing with database deployments or microservice deployments; they all had the same challenge, but they were just pushing that complexity into the pipeline. So it allows us to have much richer discussions about the core capabilities we want in the pipelines. We've built a dictionary around that, a domain model, whether it's APIs or data, so we can now describe, tool-agnostically, what we're trying to achieve, and codify it. And that enables the composable model with the segments we're building.
So that's been a bit of a game-changer, because developers are now involved in shaping the solution. We're trying to marry something simplified, which still has to do its core job, with pushing as much as possible to the platform, so the real hidden value is handled there. But the core thing is the data: the pipelines provide a richness of context that you may not necessarily get from the platforms themselves. And it's very clear: when engineers see what they can get from that, it's a no-brainer to adopt, so that's where we're getting a lot more traction. I hope that answers your question. The spine, for example, comes from the platforms themselves, but there are jobs that are simply better done in the pipelines. The point is, there's now a common marketplace that we use and share across all the business units, where originally capabilities were built duplicatively, with no governance and no insight. And that's what we were losing before. Sorry, you had a question? Oh, thank you. Can you hear me? Okay, cool. I had a question more about the adoption element. Fidelity clearly has a huge history behind it, and you mentioned the fragmentation of the ecosystem, which is very common in companies of this scale and history, right? So I wanted to ask: what scale of changes would teams need in order to opt into your pipeline? Because sometimes I hear people saying, well, we'd have to change our architecture to be able to use the nice stuff out of the box. And there's also a tug of war between delivering for products, delivering to customers, delivering value, versus improving delivery and the SDLC generally. No, it's a great question. I guess the question is, how did you persuade the engineering teams to get that work done? And the management, right? Absolutely. So, before I joined the central capability...
I came from one of those fragmented DevOps teams that supported a specific department. First off, we said: this is inner source, you have the ability to contribute. And we also wanted to meet teams where they were. We understood certain departments are more mature than others, so we worked with them to ask: okay, what is it going to take to adopt? We weren't going to say, by tomorrow you're done, we're shutting this off. We worked with them: what pains are you having? What can we solve now? And how can we help you adopt this in the future? Because sometimes they might say, hey, for Q3 I have five VPs pushing me, and we'll say, okay, we'll work with you; you have, say, a two-year time frame (I'm just making up a number), a several-year time frame. Let's work with you, and let's build in a lot of learning. We really tried to go to these specific departments and say: let us teach you, so that you can be engaged, so that you can get involved, and then plan out what it's going to take and do it in small steps. So it's really a balancing act between where we want to go and having 10 different departments, or 50, or however many. One area that's really helped us, and I think it's the crux of your question, is that you'll notice this was the next-generation platform. As we modernized our tools and consolidated, we traditionally had multiple orchestrators; with our next-generation orchestrators, this paved path was built in. Teams had to migrate off the old ones anyway, so the fact that the capabilities were pre-baked and, as I said, simplified meant they were rationalizing as this work went on; with the convergence to the new platform, the pathway was already there.
And then the fact that they were actually involved in the discussion: inner source has been really, really powerful, because by breaking the problem down into core capabilities and discussing it, teams don't naturally want to build everything themselves; they're keen to ship value. For them, the fact that there was something off the shelf was a lot more reliable, because more eyes means more quality. And then as we demonstrated some of the actual value you get from it, that was one of the key draws. As long as we met them halfway; you can't just come out of a cave and go "ta-da", you have to bring them in, and it takes a bit of communication. That's the role we play, but really the convergence to the new platform strategy was one of the key points. They weren't going to rebuild Rome; it was already there for them. So it's a combination of a few different factors, but that's been the key: engaging, and having that path there. And then, as I said, once they shift, they obviously want to adopt the new platform capabilities, given the legacy issues they may have had beforehand. For them, it was a win-win. Being involved in it is how we've grown, and we have some key partners we've built strong relationships with from the get-go, so they feel part of the model we're building. And the platforms themselves add the sustainability and security they're getting. Yep. So, you used the phrase "inner source". Could you illuminate that a little further? I know what open source means; tell us more about what inner source means and how you use it. So, for example, the whole pipeline strategy we mentioned, what we call the Fidelity Pipeline Library: we engaged our business-unit community, the 20,000 developers out there.
Rather than creating multiple ways to do scans or builds and so on, we created a framework that was very lightweight and very simple, on purpose, so that we weren't building some really complex tooling solution. The capabilities are a handful of lines of code, and we have some rules; if something is richer, we put a tool behind it. But the idea is that anyone in the firm can submit a pull request, and within an hour their change is available to the whole firm. This is how we broke down barriers. And again, you have to understand these are very smart individuals; we want to make sure they're brought into the process as well, so everyone gets the benefit. We also built in some key design decisions: for example, we use maps a lot so we can extend with optional parameters, and the whole point is that we don't have breaking changes and we bring the teams along with us. But by doing that, it's not something that's open to the wider world; potentially these projects could go out to open source, but they currently live within the Fidelity construct, hence why we call it inner source. On the note of inner sourcing: my experience was that we tried inner sourcing, but there's a PaaS team that maintains and owns that platform for the company, and it became a friction point because they felt uncomfortable getting all these pull requests with snowflake code and then having to maintain all of it. How did you solve that? Again, I think this domain applied really well to the model. Not all projects are successful inner-source projects, but this was a problem that affected everyone. I think it's about having key partners and trusted committers within your group, so that when a pull request goes up, people swarm on it straight away.
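The "extend via maps" design decision mentioned a moment ago can be sketched as follows; the parameter names are illustrative, not the library's real schema. Each capability takes a single params map merged over defaults, so adding a new optional key never breaks existing callers.

```python
# Sketch of the "extend via maps" convention: a capability accepts one params
# map, merged over defaults, so new optional keys are always non-breaking.
# Parameter names here are invented for illustration.

DEFAULTS = {"timeoutSeconds": 300, "failOnWarnings": False}

def run_scan(params=None):
    """Merge caller-supplied params over defaults; unknown keys pass through."""
    return {**DEFAULTS, **(params or {})}

print(run_scan()["timeoutSeconds"])                          # 300
print(run_scan({"failOnWarnings": True})["failOnWarnings"])  # True
```

The practical effect is that a contributor can add an optional knob via pull request without forcing every existing pipeline to change its call site.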
So a lot of discussion happens across time zones, and very quickly, if it's an interesting new point, you might have a long thread around it. But that discussion is really valuable, because you're solving it there and then, and it actually builds more confidence in the overall community, because they see the input and the good discussions happening. For smaller changes, they're self-organized, but our group took on the mantle; it has to be managed in some respect. Evan can attest to this: it's more the certification and validation process, making sure that stays really optimized. What we found beforehand was that fragmented central teams were just throwing code out and it was breaking, so the community was effectively testing it after the fact. This way it's a lot more robust, and it's a very clear model as to what teams are getting. We've automated the whole process, including the documentation, so teams can just see what's there and commit to it. It's worked very well for us, I'll be honest with you. We do have some robust discussions at times, but overall, the fact that everyone has a voice and it's a democratized process means the cream rises to the top. Like anything, we want the input, and that alone gives a lot more confidence to the end users. I'll add to that. One thing we do have at Fidelity, started just a few years ago, is the DevOps Council. The CIOs from across the different departments came together and agreed on certain standards, so we had the opportunity to approach this from both sides: upper management agreed, saying yes, this is a great opportunity, while we worked with the individual developers, who said, absolutely, we agree with this. Tackling it from both sides really helped drive adoption across the enterprise, and I think that was the other big key initiative. Yeah, the value is undeniable.
I think in just over 18 months we've built and scaled over 250 capabilities, and you can see even in the chatter that the community gets involved and works with it. We're using this framework to pull off the data for our own needs, and they're getting the functional benefit, because at the end of the day, teams don't want to build something that already exists, is robust, and solves the need; and if there's a gap, they can contribute it. Look, thank you so much. I know we're a little over time, but we really appreciate your time and effort, and we hope you enjoyed it. Thanks so much. Thank you all.