Hello, everyone. Welcome to this panel, where we'll be discussing how to influence developer productivity through tech products. We have an accomplished set of panelists — collectively, I think we have 80-plus years of experience in tech product. All women, so shout out to that. I'm sure many of you have delved into this topic and have opinions about it, so while we go through a set of pre-canned questions, feel free to chime in; it's really an informal discussion. I'll start with my introduction. My name is Neetu Chan. I'm an executive director at JPMorgan Chase, where I look after cloud network security products and provide them as a service — infrastructure as a service — to all lines of business at JPMC. My products being network, we are the first ones to go into the cloud in a cloud transformation journey and the last ones to come out whenever we exit. So you can imagine how crucial they are, and what a massive impact any improvement there has on productivity. Prior to this role, I started my career as an engineer and worked on EDA products and network security products. After about a decade, I decided to move into product management. Most of my products until then were B2C, B2D, or B2B products — business-facing or customer-facing products. It was when I moved from IBM to USAA that I actually moved into SDLC products. That's where I realized it was no longer about market share or revenue; it was about productivity, agility, and cost optimization, because for the first time I was managing internal-facing IT products, serving customers indirectly. That's when I started getting interested in this topic of chasing the holy grail of developer productivity to expedite your cloud transformation journey. I'll stop now — we'll dive deeper into this topic — and let the other panelists introduce themselves. Thank you.
Very happy to see you all. I'm Janani Rajendra, and I'm a principal product manager at Qwiet AI, a startup in the application security space. If any of you deal with static analysis tools or SCA open-source vulnerability tools, do check us out. We were formerly called ShiftLeft — I think that might be a much more familiar name that you'll recognize; we're just going through a rebranding, so that's Qwiet AI. I started out as a developer — that's where I spent most of my career — and for the past four years I've been in product management. One thing that's pretty exciting for me here, when we're talking about developer productivity: in my previous roles, we as an organization had adopted the OKR framework. I know, yet another framework — but it's all about implementing it; the devil is in the details. What really excited me about the OKR framework is that it was a good tool, especially for breaking silos and getting alignment and visibility across the organization. An organization can be anything with more than one person, and with more than one person, we need visibility and alignment — that's how I define it. So that's how I got started, and that's how Neetu and I got interested in this topic and wanted to discuss it. Interesting fact: we started our career journeys together at the same time, many years ago. We're not saying the number of years, right?

Hi everyone, my name is Samantha Carvalho. I'm a senior technical program manager in the Developer Experience Group at Roku. Like my esteemed colleagues, I have been a developer most of my career, with a recent foray into program management. I started off at Netflix, where there were lots of initiatives around Spinnaker and things like that, and it blended really well — being an engineer, you're trying to optimize your own stuff.
You want to push code faster, you want to deploy faster, you want to have tests running and passing, and you want to make that process easy. So it lends itself very much toward being productive — things you do for yourself and for your team as a whole. I'm very passionate about this space, and currently at Roku we're trying to move the needle on developer productivity. Some of the initiatives we've recently started include implementing more rigorous CI/CD and shift-left initiatives — different from Qwiet AI — which means more upstream versions of tests, or even post-merge test automation. So that's where we're trying to move the needle on developer productivity at Roku.

And I am Tracy Ragan. I am the CEO of a company called DeployHub. I'm pretty busy around the Linux Foundation: I'm on the CDF TOC, I served on the board of the Open Source Security Foundation, I helped start the Eclipse Foundation, and I helped start the CD Foundation. My background is also as a developer. I cut my teeth on Wall Street working on mainframe systems, and then something called OS/2 came out and I got really excited about it — until I found out I had to write my own compile JCL. I was like, what? You have to write your own compile JCL? You have to write your own processors to do deployment? How stupid is that? And I have been saying that for the last almost 30 years, to be quite honest. To me, developer productivity was sorted out on the mainframe. I discovered we had a lot of work to do on what they called open systems as soon as I started trying to deliver my OS/2 code, and to this day I'm still preaching it: why are you writing these scripts, people? Why are we doing that? I'm a big fan of CDEvents, which really speaks to this particular problem in the DevOps space. We have some folks here from the CDEvents team, including Andrea Frittoli.
So while you're here, please, please learn about CDEvents. And I'm also a community organizer for Ortelius, which gathers this kind of data.

Okay, I'm going to start the ball rolling with the first question. Based on your experience, how have you folks measured the impact of your products? And my follow-up question: given these metrics, how have you involved the developer community in collecting them, and how has it become a collaborative effort?

Okay, so I'll share my experience. At a previous employer, I was providing a platform as a service, which was basically meant to be used by the company's developers to push their code to the cloud. There, initially — and this was before the DORA metrics came out — we had intuitively divided our metrics into agility metrics (velocity, code-to-deploy, line support, et cetera), then security metrics, quality metrics, and reliability metrics. We basically had these metrics at a high level. Then, as we started using them, we realized some of these were output metrics, not really outcome metrics, so we started diving deeper. It was around the same time that the DORA metrics came out, and we adopted them more religiously. Over time, we improved our metrics, KPIs, et cetera, to focus more on outcomes. Lines of code might not matter — but what is the value of a deployment? What is the cycle time? These were all developer metrics. After the SDLC products, I also worked in InsureTech and MarTech, where we had different metrics, but they were also internal products driving productivity. In insurance it was claim adjustment cycle time in claims work, and the number of tasks and steps reduced through automation. In MarTech it was about measuring the breaking down of silos — literally: when sales has a lead, marketing is pursuing it, and whether they are actually converting it or not.
And whether customer retention can really retain them when there's a flag — these all become different metrics that we would track, depending on the use case. But I think the common theme was not getting sidetracked by whatever metrics we had, and instead finding the metrics we actually want to drive. Where do we want to move the needle? Focus there — on moving the needle, on OKRs and a North Star. Finding a North Star amid the noise of metrics became our focus. I think it works.

Okay, so I started out as a developer, and in my previous organization we used to have something called waterfall — I hope you all remember that, right? That's where I started my developer journey, and there developer productivity was all about LOC — lines of code — and the number of bugs you were fixing. That's how developer productivity was measured then. Then, like everyone else, we adopted agile, and it just switched to the number of story points you were doing. But eventually we evolved: we moved to a SaaS product, we adopted DevOps principles, and, as I mentioned before, we took on the OKR framework. That one really helped in measuring developer productivity the right way. As developers, we all take great pride in our work and we want to work on the right priorities, the ones that make an impact — we want to build something that everybody uses. That's where the OKR framework helped: all the stakeholders and the leaders are involved. It's not just the product managers defining it and handing it over to the developer team or the ops team — it's a collaborative effort, and we define very specific metrics, very specific key measures that we want to focus on for that quarter.
So for example, one specific quarter we just wanted to focus on increasing customer adoption from stage two to stage three in our onboarding funnel. That was the specific objective for that quarter, and we identified three key measures for it along with my development team. Then we had a dashboard set up — Slack, all the different integrations, wherever the developers are — so we had visibility. When a developer checks in a feature or checks in some code, they get to see the impact it's making on those key measures. At that time I was not a developer, but when I spoke to my development team, they really appreciated the visibility they were getting — from their work, from their code, through to the business impact it creates.

It's all about the data, right? Metrics, KPIs, OKRs — we're talking about data. So when you have an opportunity to build something, think about factoring in how you can collect your data. That's something we've started to instill in our process: when we come up with a feature guide, we say, oh, you need KPIs — how are you going to collect that data? We've incorporated it as part of our feature guide. Now you may be asking yourself: but then there's development time for the feature and development time for the data. Yes, that's true, it does increase the time — but hopefully you're thinking about building telemetry that doesn't apply to just one feature but can be used by multiple services: a single storage for logs, the ability to grab something, or even simpler things. The measurement should be machines collecting the data, crunching the data, and giving you the answers — not people poring over processes and scripts and things like that.
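As a rough illustration of this build-telemetry-in idea — a minimal sketch, where the event schema, the in-memory "stream", and the onboarding-stage names are all hypothetical, just to show one shared event format that many features can reuse:

```python
import time
import uuid

def emit_event(stream, event_type, **fields):
    """Append one telemetry event to a shared, service-agnostic stream.

    Every service writes the same shape, so a KPI becomes a query
    over one store rather than a one-off script per feature."""
    stream.append({
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "type": event_type,  # e.g. "deploy.finished", "feature.used"
        **fields,
    })

# Hypothetical events for the onboarding-funnel example:
events = []
emit_event(events, "onboarding.stage_changed", user="u1", stage=3)
emit_event(events, "onboarding.stage_changed", user="u2", stage=2)

# One key measure, computed by the machine rather than read off a script:
stage3_users = {e["user"] for e in events
                if e["type"] == "onboarding.stage_changed" and e["stage"] >= 3}
print(len(stage3_users))  # → 1
```

In a real system the list would be a log store or an event bus; the point is that the collection cost is paid once and shared across features.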
So with useful data, it's about how you collect it and what you collect. Sometimes, when we haven't had the opportunity to build telemetry into the tooling, we've resorted to surveys, or to working with focused user groups — just key people where we say, okay, this is what we want to collect, this is how we're thinking about collecting it, what are your thoughts, what do you think we could do to measure this process for a particular feature? That's how we've approached it.

I guess we've come a long way in collecting data and KPIs. Some of the first ones I can remember, of course, were how many lines of code. I can also remember a KPI of a rubber chicken sitting on my desk because I broke the build — and I'd tell them, well, I wouldn't have broken the build if you hadn't asked me to write so many lines of code. But that's how I was getting paid, so I would make my code really big. I really think Git has changed the way we track a lot of data, and we can actually leverage Git data even more than we already do. What Git did for all of us was centralize all this information in one place, and as we keep saying, data is the key to tracking those metrics. The Linux Foundation's LFX tools are interesting tools that sit on top of Git data — LFX Insights gives you some really good information. And the thing is, when you start using the data and measuring your metrics, developers start measuring their own. That is where we really need to get: where developers can self-serve. They start understanding how well they're performing against others, and I think Git has done that.
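This point about self-service Git data can be sketched in a few lines — an illustration only, not how LFX Insights actually works; the repo path and the 90-day window are arbitrary assumptions:

```python
import subprocess
from collections import Counter

def author_counts(log_output: str) -> Counter:
    """Tally commits per author from `git log --format=%ae` output."""
    return Counter(line.strip() for line in log_output.splitlines() if line.strip())

def commits_per_author(repo_path: str, since: str = "90 days ago") -> Counter:
    """Read author emails straight from a repo's history and tally them."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    return author_counts(out)

# A developer can self-serve their own numbers, e.g.:
#   commits_per_author(".").most_common(5)
print(author_counts("ann@x.io\nbob@x.io\nann@x.io\n"))
```

The same raw `git log` stream can feed commit counts, review turnaround, or any dashboard a team wants to put in front of itself.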
I mean, even today, when the CD Foundation does voting, it's based on how many commits you've done, how much progress you've made — are you a doer? That kind of data becomes critical. Some other KPIs we've looked at in the past that I think are getting a little long in the tooth are the DORA metrics. We talk about DORA metrics often, but DORA metrics may become kind of like lines of code, especially as we move into decoupled environments and microservices. SLSA is something we should maybe be looking at — SLSA is a practice, but potentially we could create some SLSA metrics so we can keep improving beyond where we came from. But I really believe the key is for developers to have that information right in front of them. I know from my own perspective, I want to know how well I'm performing, and I always want to do better — the more information I can get about myself, the more I improve myself. So I feel metrics need to be more self-service, and kind of in every developer's face.

Yes, I think all of us would agree — it's developer empowerment. It's not just coming top-down; it's developers feeling empowered to do that.

Okay, so we just discussed how we measure developer productivity. Let's get to the meat of the problem: what challenges have we faced in adopting these ways to measure developer productivity? Let me start off by saying — yes, we always hear the phrase that change is constant, right? But in an organization it's not easy to bring about change. There is a lot of resistance, especially to anything that touches developer tools or environments. So how did we go about it? One example, continuing the OKR story I've been telling:
The key thing we started with in the organization was to start small. We started with a small feature — a product that was just moving from POC to becoming a real product. We defined the OKR metrics I mentioned, and once we saw the value of it, that team and its developers became the champions. Other teams started to see the value through the champion team's shared experience, so it became the teams owning it, rather than forced, top-down adoption. I think that was the key for us in bringing in this new way of measuring productivity.

Sounds about right. I'll share more — if we double-click into OKRs, I'll share my experiences with this topic. As I already touched upon, there was this problem of output metrics versus outcome metrics. When we started driving the OKRs — what exactly is the objective, what are the key results — it became very clear that adoption was not being taken into account, especially with IT products. Most of the metrics were just: go deploy, provide that feature, release that feature. And most of the OKR key results were timelines about the delivery, instead of: why are you doing this? Why are you releasing it? What is the end result you want to achieve? So I started diving deeper into why that was, and it turned out a couple of things were preventing us from getting to the real objective and key results. One was cross-dependencies. Oftentimes when we released something, many other cross-dependencies had to come together before it ultimately reached customers' hands, customers adopted it, and we could actually see the needle move. These cross-dependencies made everybody nervous: I've done my part — what if they don't do theirs, and then I get dinged on the OKR?
The other was the culture of, I guess, resistance to failure. In CI/CD we're supposed to fail fast — ship and learn, ship and learn. But if it gets tied to the performance of the engineering team, the team is hesitant to put that on their OKR as a metric, because they feel they've done their best. First of all, adoption was external to them — there was another team driving cloud adoption that was responsible for it — and then there were other dependencies. So they didn't feel they should have to share the burden of failing on this OKR and having it impact their performance. It was an intricate game. And on top of that, as Samantha touched on, there is extra effort needed to build telemetry in, so your feature delivery gets delayed. In an agile world, where you want to say "I delivered X features over this time," telemetry often gets swept under the carpet and sacrificed.

I'd like to extend something that Janani said. Getting developers to change is hard, and we acknowledge that's a problem. They're used to doing things a certain way, and they say, okay, things are working just fine — why should I adopt this new process? So we started at the grassroots. We get a set of ambassadors for our new process change, get the buy-in there, prove a POC, work with them, get them to be advocates, show that the POC works, and then have those developers be the advocates. We call them ambassadors because they're going to speak for us. After we get that buy-in from the developer community, we go to higher-level management and say: this group of folks seems happy with it, and we're seeing it work — can we get your buy-in? Some of these people will be in the same orgs, so you now have two groups, and we've seen that process work better.
I mean, it's not a panacea for everything, but both groups end up gelling, and it's not one group saying, "this is how you do things — now do it." Both sets of groups are talking, and people feel they're part of the conversation. So that's a way we've seen success in effecting change in a positive way. And obviously there are going to be naysayers — you have to prove someone wrong somewhere down the pipeline. So when the product ships and you collect the data — as someone highlighted, you only see that change, that net positive effect, after it's released. You have to wait a while to get the data and then prove that, yes, this was successful: would you come on board for the next initiative? That being said, there are certain industries — finance, healthcare, even the chip industry — with different constraints on productivity. You have audit trails, security, compliance, which you normally don't have to think about in certain web-based applications. So depending on your industry, you have a different set of problems, a different set of challenges. It's not that they can't be solved; it's about thinking about them upfront, from a different angle.

And I think — I work in the open source world, I work with enterprises, and I have a small company, and when I look at those three cultures, open source innovates really fast. Why is that? I think it's because they're not afraid of failure. Failure is not a bad thing; failing slowly is the disaster. What we have to learn to do, and what the open source community does really well, is fail fast. It's almost immediate when they realize they've gone down the wrong road.
It doesn't take us long to figure it out. I think enterprise organizations should really embrace the idea of failing fast. And going back to our discussion about the champion team doing things faster — the reason you have a champion team is because they figured out how to fail fast. That champion team then teaches everybody, as mentors; mentoring is important too, to change the culture. But failure is so scary for some people. A lot of our open source contributors, when they first start, are afraid to make a pull request. They're afraid to check something in because they're afraid to fail. And we tell them: please, please, I challenge you to break it. Break it — we don't care; everything can be fixed. I believe that culture needs to be fostered in the enterprise world.

Yeah? You know, with OKRs and the like — in open source it's more Darwinian than anything, right? It has expressed itself in a much different way. Do you think there's going to be a moment where we get off the traditional tooling and onto something more representative of the open source model?

I believe we are in a state of transition here, and one of the things driving it is the move away from monolithic applications. Microservices are going to change a lot of things. In particular, we're going to be building Lego sets — you'll use different pieces and parts, not just from within your organization: you might be buying a set of microservices, or borrowing open source ones. I believe that once we stop thinking of ourselves as application silos, and instead have feature teams, we're going to think more like an open source community. And I believe our tooling will be disrupted pretty significantly because of that. Yeah, I feel it, right?
Yeah — don't get me started on object-oriented programming and the failures there, because we're doing it all over again. Microservices are basically object-oriented programming, except we don't have one big mass of build libraries to link together. But we're still consuming each other's products, and ownership of those microservices — who created them, what's in them — all of that becomes more important in the process, and it's more open-source-like. So the enterprise, I think, is starting to act more like open source.

Let's hope so. I'm seeing it. In my previous roles, as I said, I started out in waterfall, where we had releases every two years. You checked in a feature and forgot all about it; by the time the customer adopted it and opened a bug, it was like — what, did I write that piece of code two years ago? Who knows? But with the adoption of DevOps, I'm super excited. I'm very positive that the change is happening — maybe not as fast as we want, but it's definitely change. I was at one of my previous organizations for 14 years, so I've seen the dinosaur developers evolving, changing, and moving on to DevOps and adopting it. And you know, developers are amazing — we take a lot of pride in the work we do, so we want to see the value it's creating. I think we all have enough experience now to feel empowered and to challenge the old metrics and the old ways of doing things. That's been happening. I'm good.

Yeah — so what do you think are the barriers to adopting all these cool new tech products that claim to improve productivity? There are still barriers. I'll share my experience. One is upskilling: there are so many new things coming out that developers have to upskill, on top of their business-as-usual roles and responsibilities, and there's not enough time.
And then, as we already touched on, there's the fear of failure. But do you all want to take a dig at it?

Well, I think that when you have a flood of new tools hitting the market, it's really hard to understand which ones you're supposed to be using. And we're facing that right now, especially around security tooling. We're like the little kid who runs to the shiny new object, and then runs to the next shiny new object — but right now we have so many shiny new objects that it's extremely confusing. There's really no easy way to solve that problem; it requires a lot of time and consideration. A lot of the larger organizations I see have somebody trying to look at all the shiny new objects and decide which ones to add to their list of tools. I sometimes think that is stifling, because another shiny object comes in to replace it. But innovation can be hard to foster, and it goes back to failing fast. If our organizations would allow more tools into the process, and then you find out which ones float to the top, the new tooling coming in would have a better adoption process. And I think tool vendors like DeployHub, and open source projects like Ortelius, need to do a better job of providing customers an easy on-ramp for these tools — whether in a SaaS environment or something already set up to be easy to test out. POCs can take way too long. Seriously, how many people here did a POC of Jenkins? Okay — how many people have used Jenkins in your organization? Yeah, all of you. So that's my point: POCs aren't necessarily a good thing. Let a team go try it out, and if it really works well for them, let them tell the world about it. It's like our champion team, right? Let the champion team that found the tool and started using it spread the word. That's what I think.
Interestingly, at one of my previous employers, that was exactly my role. I was a strategic innovation director, and I got to play with cool new emerging technologies and figure out how to use them in the insurance world, in the claims space. I had a lot of fun in that job. The only problem was that a lot of my POCs never went to production, for lack of resources and so on.

So let's talk about the future. What's beyond developer productivity? What are the other industries, and what is the future of influencing productivity? I'll go ahead and share mine. As I mentioned, for a long time I did B2B and B2C products and then internal products. I started with SDLC products, which changed the developer world, but after that I went into the InsureTech and MarTech worlds, and I've been observing how traditionally manual-intensive industries are being transformed. For example, in insurance, you can now visit a house on Zillow with 3D point mapping and actually walk through the house — that's a great example. And you can already see how MarTech has become such a big thing: marketing knows everything about you, sometimes even more than it needs to know, even what you're thinking. To me, what's interesting is that there are now such powerful tech products — "know your customer" is a thing — and they are changing not only the developer world but also traditional industries ("legacy" is probably not the right word) that aren't used to using technology as much; they're becoming more and more tech-savvy.

In my experience working in the finance world — these industries are very process-driven, and they're meant to be: finance, healthcare, with privacy, security, and compliance.
So in dealing with developer productivity there, from my past experience, there's a finite set of problems: you look at the big picture and you try to solve as many of them as possible. You can't do it all, because there needs to be a human to approve things and you need those audit controls — but you look at the tangible things you can optimize. It could be automated review processes for external auditors, or deploy times, which we automated early on using CI/CD and even got the auditors to approve, because we showed them the process. So we look toward innovation. Yes, these industries are slower than others at absorbing it, but it's a matter of seeing what controls can be put in place, seeing the applicability, and tackling each one as a finite set of problems — without pretending you can push a button and everything will be fantastic. That's not real, not in industries with all these constraints. And going even further forward — beyond those industries — there's AI: ChatGPT, Bard. Those are cool new shiny tools, and they could be awesome for developer productivity, once you're not sharing your training data with the rest of the world — once organizations figure out how to get their own enterprise model. Think about it: someone mentioned in the keynote writing Doxygen docs, or templates for unit tests. How cool would it be if AI just started those for you and you had the basic ones? It doesn't eliminate a developer's job — someone still needs to look at those tests and fill things in — but it makes it easier, a stepping stone onto the right path. I've played with Copilot — haven't you played with Copilot yet? It's kind of cool, right? It's really fun.
And think about it: they query and pull all this data out of GitHub, out of all the open source projects, to generate that code. Imagine that for developer productivity — and also imagine it for people coming out of university. You could literally have a two-year programming certification, come in as a junior out of college, and use Copilot to do pretty well. So I think the future is going to have a lot of AI. We won't have to write every line of code, we're going to borrow microservices from each other, and it's going to be very different — it's going to be a big Lego set. That's what I really believe: we're going to be putting together our Legos.

Any questions from the audience? What has your own experience been with metrics and productivity? Work in progress — it is for all of us. It never stops, no matter where you are. Sure. Tracy, we'll have to do this again next year and see how our views have changed, right? One year later, and see where we are — whether we've moved the needle. Maybe a little. Thank you. I know we're standing between you and lunch — thank you for staying, and thanks a lot for coming. Good luck.