Well, my name is once again Jasmine James. I'm an engineering manager at Twitter. I lead the developer experience pillar within our engineering effectiveness organization. What that means is that I get to engage with internal developers to understand their challenges and pain points specific to documentation, internal tool support, and the local development environment. Over my career, I've worked at multiple companies where I've implemented new tooling and capabilities to solve very specific problems: everything from version control and artifact repositories to CI processes using tools like Jenkins. The whole point of doing that was to speed up the delivery of business value. But as we implemented specific capabilities, some of the problems remained the same. The interaction with the tools had not changed. Individuals had what they needed to get the job done, but that experience was not all that awesome. So it was essentially like we were playing a very long and annoying game of whack-a-mole. You solve a problem, but you still have the experience problems. What I realized is that until you considered the entire workflow experience, anything we did would have limited impact. So I changed my approach. Instead of focusing on those small issues, I focused on what really mattered: the entire development workflow and its impact on people. So today I'll be sharing some of the learnings that I've gathered as an end user supporting developers through tooling capabilities. I'll highlight some key methods that you can use to create a more holistic and people-centered developer experience. But before I tell you this story, I'd like to highlight that, although these scenarios might sound familiar or even feel like I'm talking about you, they are purely based on my personal observations. So with that, I'd like to introduce you to Janice. Here we have developer experience defined, as well as some of the projects referenced that Janice uses in her day-to-day life as a developer.
So as we saw, it's not going all that well. Janice is having issues finding the right direction in her new environment. At her old company, Janice was a lead who helped define the developer workflow by collaborating with the many infrastructure and platform teams that own those tools and services. So she definitely has an idea of how things could be better. Like Janice, over the past year, the one constant that we've all experienced is change. This may have manifested in many ways. Maybe you're working at a new location, perhaps you're working in a different role, or even at a different company. Or you could be like me and be doing all three of those things. There are also those of us who are working with the same tools in the same environments. But either way, I think we can all agree that the pain points within the developer workflow have become quite amplified given the level of change that we've experienced. So these issues that Janice is facing within her environment may have resonated with you. The main question that we want to answer today is: what can Janice's organization do to create a seamless experience for her to execute? I believe that the answer is creating a people-centered developer experience and taking a holistic approach to solving these problems. And there are four pillars that will help us get there. Before I highlight the pillars, I want to introduce you to the Developer Happiness Meter 1.0. As we work diligently with Janice's organization to improve her experience, we'll be able to see how it impacts her in real time. Cool, right? So let's dive right in. Here are those four pillars: discoverability, usability, capabilities and the ever-important stability. I love acronyms, so I was very pleased that these made one as I was putting this presentation together. These topics are probably not unfamiliar to you. Constance just talked about how we need to look at the tools that developers use as a product.
As products in general are developed, the user's experience, or UX, is considered greatly, because it could be the differentiating factor between people using your product or a competitor's. And a UX isn't necessarily a GUI. It could be a command line, the configuration of a tool, or even documentation. Since this concept is so important to external-facing products for customers to drive revenue, why not take the same approach to developer experience? So as I walk through these pillars, I'll be highlighting some methodologies you can use to get a better understanding of the current state within your environment, metrics you can track as you improve it, and core deliverables that I've seen with my own eyes improve this area. I'll also call out some CNCF projects that can contribute to improvements, but it's important to note that tools cannot be the only thing to consider here. First up, discoverability. So it looks like Janice is having trouble finding the right software and best practices within her environment. The main question about discoverability is: how can we get better insights into how she works and improve discoverability for her? The first thing we can do is get a better understanding by conducting screen recordings and user interviews, and even looking at search analytics. These are great things that will point us in the right direction and can give you useful data when solving for discoverability. The next thing we can do is track metrics. Metrics like onboarding time, customer satisfaction, or even how quickly folks are deploying things are great ways to measure improvements in discoverability. Some core improvements that I've seen work are single sourcing, which means having one point of reference for guidance. Templates and hubs are great ways to do this. This reduces the toil within the developer experience of discovering new and relevant information. Another thing that I've seen work is centralized support.
Although this is not self-service, it's a great way to have a one-stop shop for getting information within the environment. There's an amazing CNCF project that can help with this too, called Backstage. It provides discoverability of software within an organization without compromising autonomy, and it provides all of these capabilities that would improve Janice's experience. Backstage was contributed by the very active CNCF end user Spotify. I highly encourage you to read their end user journey report released last month and check out their project on GitHub. All right, so it looks like Janice is improving. She's no longer angry and frustrated about her environment. So let's take a look at usability. It looks like Janice is working on a YAML file. She was looking in her environment for a clear example of how she could deploy her machine learning model into the Kubernetes cluster. She couldn't find one. So she did what any developer would do: she got one from the internet. As she moved it into her environment, she unfortunately introduced a typo, and now she's trying to figure out why it won't deploy. When it comes to usability, it really comes down to being able to fulfill a goal or goals with effectiveness, efficiency, and satisfaction. So how can we make this better? The first thing that we can do to get a better understanding of usability is run usability testing. Usability testing means defining tasks and giving them to a participant to complete. There are two types of tasks, open-ended and close-ended. Open-ended tasks are flexible and designed with minimal explanation. These are good at identifying bottlenecks within your process or elements that confuse users. So a good open-ended task for Janice would be: hey, scale your deployment to three replicas. And Janice would go about doing that during the test. Close-ended tasks are very specific and goal-oriented, and they are based on the idea that there is only one correct answer.
These are great for testing specific elements, such as: use these steps in kubectl to create two replicas. Very specific, right? For usability, there are two main metrics that I'm identifying. The first is the success rate, which is the percentage of the time the user succeeds when trying to use a tool for a specific purpose. Ideally, this would be at 100%, but that's not the case for Janice right now. The next one is time-based efficiency. This is the average time it takes to complete the task. As far as core improvements to implement for usability, golden paths are a great way to reduce the barrier to entry. Investing in automation is an obvious one; this greatly improves the success rate. For error prevention, introducing things like linters is a great way to prevent users from failing to accomplish what they set out to do. Helm is an awesome tool to leverage here: it allows for reuse of a single Helm chart, which improves time-based efficiency, and automation, which greatly improves the success rate. All right, so Janice is out of the red, which is a good thing. We still have a ways to go, though, so let's take a look at capabilities. In this next scenario, Janice is trying to find capabilities specific to machine learning within her environment, but everything that she's seeing is targeted at a back-end developer. Not a good feeling. How can we improve this? The first thing we can do is journey mapping. Journey mapping is a process that is used frequently for external customers, and in the book Developer Relations: How to Build and Grow a Successful Developer Program by Caroline Lewko and James Parton, which I highly recommend, a developer journey map is defined as a visualization that identifies the path a developer follows and experiences.
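Going back to the two usability metrics above for a moment: both fall out of a usability test session almost for free. As a minimal sketch (the task, the attempt records, and the timings are all invented for illustration), a success rate and the average completion time for the replica-scaling task could be computed like this:

```python
# Hypothetical results from a usability test: one record per
# participant attempt at the close-ended "create two replicas" task.
attempts = [
    {"completed": True,  "seconds": 95},
    {"completed": True,  "seconds": 140},
    {"completed": False, "seconds": 300},  # participant gave up
    {"completed": True,  "seconds": 110},
]

# Success rate: percentage of attempts that reached the goal.
success_rate = 100 * sum(a["completed"] for a in attempts) / len(attempts)

# Time-based efficiency, in the sense used in this talk: the average
# time to complete the task, counting only successful attempts.
successful = [a["seconds"] for a in attempts if a["completed"]]
avg_completion_seconds = sum(successful) / len(successful)

print(f"success rate: {success_rate:.0f}%")                    # 75%
print(f"avg completion time: {avg_completion_seconds:.0f}s")   # 115s
```

Tracked across test rounds, these two numbers make it easy to show whether a golden path or a new linter actually moved the needle.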
The goal of the map is to move the developer from left to right as quickly as possible, and as you go about that, you can identify gaps within your offerings. To measure capabilities, customer satisfaction is the best signal. As far as core improvements, the obvious one would be to provide persona-mapped capabilities. Another improvement for getting real-time feedback from developers is making sure that they have the ability to tell you when you don't have an offering that satisfies their persona. For Janice's situation, Kubeflow would be a great thing to introduce into the environment; it would provide her the ability to deploy easily from a machine learning perspective. It's also important to note that some capabilities can be used across personas, which also helps Janice accomplish what she has to do. An example of this is Linkerd. Linkerd decouples services from having to know about the network and provides abstraction of the network code from the business logic, which means that Janice won't have to worry about the things that don't really matter to her business logic. All right, so we're finally in the green, which is great. The last area we're going to talk about is stability. We all have seen how important stability is for your product, and that doesn't just ring true for external customers. The same should be true for developers and the components within their environment. In this scenario, Janice is attempting to deploy, and it looks like the pods are not spinning up in her Kubernetes cluster. Unfortunately, there was an upgrade last night that did not go so well, so she's trying to figure out what's happening. Every time the build fails, her confidence in the tool decreases. So let's look at ways to further define what reliability looks like in Janice's environment. The first thing we can do is take a look at incident management data and postmortems, run surveys, and even conduct focus groups.
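Incident management data can also feed these assessments quantitatively. As a hypothetical sketch (the outage windows and reporting period are invented), both uptime and mean time between outages can be derived from nothing more than outage start and end timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical outage windows pulled from incident management data.
outages = [
    (datetime(2021, 4, 1, 9, 0),   datetime(2021, 4, 1, 9, 30)),
    (datetime(2021, 4, 12, 2, 0),  datetime(2021, 4, 12, 3, 0)),
    (datetime(2021, 4, 25, 14, 0), datetime(2021, 4, 25, 14, 15)),
]
period_start = datetime(2021, 4, 1)
period_end = datetime(2021, 5, 1)

# Uptime: fraction of the reporting period not spent in an outage.
period = period_end - period_start
downtime = sum((end - start for start, end in outages), timedelta())
uptime_pct = 100 * (1 - downtime / period)

# Mean time between outages: average gap from one recovery to the
# next failure.
gaps = [outages[i + 1][0] - outages[i][1] for i in range(len(outages) - 1)]
mtbo = sum(gaps, timedelta()) / len(gaps)

print(f"uptime: {uptime_pct:.2f}%")            # 99.76%
print(f"mean time between outages: {mtbo}")    # 12 days, 1:45:00
```

Watching these two numbers trend after each platform upgrade is one concrete way to tell whether developer-facing stability is actually improving.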
Focus groups are a great way to get qualitative responses that can be used to assess developer confidence in the workflow. This real-time engagement can also serve as a great starting point to probe deeper into specifics on what events lowered the developer's confidence in the tool. Metrics you can use to track stability are, as I'm sure you all have gathered, tool and capability uptime. Another interesting metric is mean time between outages. To improve stability within the environment, intentional postmortems are, I think, very key. How many of us have had incidents that resulted in action items that we know could fix the problem, but they never get prioritized and actually fixed? So prioritizing those things is very important. Centralized support, so that there's a channel to understand what's up and what's down, also improves developer confidence in tools. One CNCF project that you can use to improve stability is Litmus. It brings the ability to design, orchestrate, and analyze chaos in your cloud-native environment. When the infrastructure team in Janice's organization uses a tool like Litmus to validate Kubernetes upgrades and benchmark resilience, the developer experience improves for Janice by way of greater mean time between outages and more uptime, which means she can deploy when she needs to. All right, so we've done it. Janice is very happy with her developer workflow now. I couldn't end this talk without referencing a tweet. And the one thing I want to convey to you all today is: don't seek to solve problems in a silo. Connect and collaborate continuously to make your developer workflow as joyful as possible. The organization that Janice is a part of approached each of these challenges with an empathetic ear to gain understanding, established metrics to measure improvements, and implemented capabilities that have directly improved Janice's experience. External customers and developers have one big thing in common: they're all people.
So as you think holistically about improving the experience for developers within your organization, think about who you're solving for before the how. Before I end, I wanted to call out the book that I referenced. If you're going to solve this problem, definitely read this book, which came out this past September. And also, the one thing I love about this community is the ability for us to connect and share best practices. So don't hesitate to reach out via the two methods below. Thank you.