Hi, everyone. My name is Jasmine James, and I am an engineering manager at Twitter and a KubeCon + CloudNativeCon co-chair. I'm super excited to be chatting with you all today about methods to discover, understand, and improve developer workflows. This is a continuation of the talk that I gave at KubeCon North America a few months ago. I really wanted to take the time to dive deeper into some of the methods that I alluded to in my presentation on stage. So that's what we'll be doing today.

All right, so let's get started. During my last keynote, I talked about Janice, a machine learning engineer who had joined a new company and was facing challenges with the developer experience. We talked about some of the tenets that her organization could lean on to improve that experience for Janice. I wanted to dive deeper into how you go about deriving the information needed to make the right decisions to improve developer experience. So I'm going to walk through all of those tenets today and double-click on those methods, like I talked about. I still believe that the key to creating a great experience is engaging with the individuals who will use it, and you always have to take a holistic, people-centered approach to solving those problems. You have to always remember those connections to the people, and I'm going to talk about the pillars that you can use to do that.

To level-set: what is developer experience? It encompasses all of the interactions with the development workflow, tooling, and capabilities. During my talk in October, we covered the workflow of machine learning engineer Janice, as I mentioned, who used Kubernetes, TensorFlow, and Python within her developer workflow. We called out four main pain points. Completing tasks took too long for Janice. She was not able to find the right guidance initially. A lot of the functionality available within her environment was not targeted at machine learning engineers.
And the stability and reliability of tools left something to be desired. We walked through all of these challenges, and we set out to improve her experience by discovering more about her issues and then implementing changes, whether tooling or processes, that could make things better for Janice.

All right, so these are the four tenets of developer experience. They're really borrowed from user experience, so I feel like you can apply them to developer experience in very meaningful ways. They are discoverability, usability, capability, and stability. If you've seen my talks before, you know I love a good acronym. So we're going to use these four tenets to keep our ducks in a row.

So let's talk about discoverability first and foremost. For Janice, it was very difficult to find the right software and best practices, as I mentioned. She often relied on individuals to give her that guidance, which meant that she was interrupting their productivity. It was not always as simple as presenting options to Janice either, because without context or guardrails, she often had to do rework in order to make things work for the environment. So we answered the question of how we can gain better insights into the needs of developers to improve discoverability.

The first thing we had to do was create more understanding. We had to figure out how users went about finding information in order to solve for that. Some ways to do that were screen recordings, user interviews, and search analytics. We're going to talk more about those. We also talked about what metrics you can use to track improvements in discoverability. Those methods are listed here. My favorite one is customer satisfaction. You always have to ask your customers how they're feeling about finding the right tools at the right time. Lastly, we talked about some core improvements that you can implement to improve discoverability in your environment. Single sourcing and centralized support were a few that I mentioned.
If you want to hear more about this, you can always look back at my talk from KubeCon North America. But for now, we're going to talk more about some of those methods to create further understanding. Here are the three.

First, screen recordings. I believe that it's very important to record the entire developer workflow and their interactions, because it serves as a great starting point to actually see the full experience end to end. Many times, when individuals are facing issues within one part of it, you won't otherwise get the context of how they got to that point. It's also important to capture their local configuration. That local configuration is going to be key in understanding what contributing factors may have made that experience not so great for the customer.

The next thing you can do is leverage search analytics. Search analytics are great because you can understand exactly what a user was looking for and whether they found it within that search. You can identify the common artifacts that are searched across all of your developer personas to figure out which artifacts are causing the most pain points: who can find them, and if they can't find them, how we can make them more accessible.

Lastly, interviews. It's great to partner with those vocal customers who are telling you about their discoverability issues. These interviews are key because you can get qualitative information and also hear about their sentiment firsthand.

All right, moving on. The next thing we want to talk about is usability. Usability, in Janice's case, comes by way of this YAML file. She's trying to deploy her application, but she's having issues because there was no template she could use to figure out how to get it deployed successfully. Usability really comes down to being able to fulfill a certain goal or goals with effectiveness, efficiency, and satisfaction. All right, how do we define the current state of usability?
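Before we get into usability, here's a minimal sketch of the search-analytics mining I just described: surfacing the queries that returned no results, which point straight at discoverability gaps. This assumes a simple log of (query, result-count) pairs; the log format and the example queries are illustrative, not from any specific search tool.

```python
from collections import Counter

def zero_result_queries(search_log):
    """Given (query, result_count) events, return the most common
    queries that returned no results -- the discoverability gaps."""
    misses = Counter(q.lower() for q, n in search_log if n == 0)
    return misses.most_common()

# Invented example log: what a persona searched, and how many hits came back.
log = [
    ("tensorflow serving template", 0),
    ("python lint config", 4),
    ("tensorflow serving template", 0),
    ("model versioning", 0),
]
print(zero_result_queries(log))
```

The same idea extends to grouping queries by developer persona, so you can see which artifacts a specific group, like ML engineers, keeps failing to find.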
To create further understanding about how people are using things and whether they can use them effectively, you can implement usability testing. Usability testing gives tasks to a participant, and they are expected to complete them and give feedback based on their interaction. We'll talk a little bit more about that in a minute.

Some metrics that you can use to track usability are success rate: what is the percentage of success that a user is having when trying to use that tool for a specific purpose? Ideally it would be 100%, but in most cases, especially if there are issues, it won't be. In Janice's case, it definitely isn't. Time-based efficiency is another great tracking method here, because it measures the average time it takes a user to complete the task, and you want that to be as low as possible.

Some ways that you can improve this are golden paths. These reduce the barrier to entry significantly for most new users, especially those like Janice. You can also invest in automation and error prevention; providing linters is a great way to prevent users from failing to accomplish what they set out to do.

All right, so usability testing. I found this great workflow from mlops.org on a machine learning engineer workflow. The first thing that you have to do before you embark on usability testing is define that workflow. You have to know what it is from start to finish. Usability testing refers to evaluating a product or service by testing it with representative users. Typically during that test, users will try to complete a task while observers watch, listen, and take notes. The goal is to identify any usability problems, collect qualitative and quantitative data, and determine the participants' satisfaction with the product. So you're getting a lot of value out of this interaction with customers. How do we go about doing it? Defining the task is the next step. There are two types of tasks: open-ended tasks and close-ended tasks.
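Before we look at those two task types in detail, the metrics from a moment ago, success rate and time-based efficiency, can be sketched in a few lines. This is a minimal sketch, assuming each test attempt is recorded as a (succeeded, seconds) pair; the numbers are invented for illustration, and a failed attempt contributes zero to efficiency.

```python
def success_rate(results):
    """results: list of (succeeded: bool, seconds: float) per task attempt.
    Fraction of attempts that completed the task."""
    return sum(ok for ok, _ in results) / len(results)

def time_based_efficiency(results):
    """Average of (success / time) across attempts: goals per second.
    Higher is better; failed attempts count as zero goals."""
    return sum((1.0 if ok else 0.0) / t for ok, t in results) / len(results)

# Invented session data: three successes, one 5-minute failure.
attempts = [(True, 120.0), (True, 90.0), (False, 300.0), (True, 60.0)]
print(f"success rate: {success_rate(attempts):.0%}")
print(f"time-based efficiency: {time_based_efficiency(attempts):.4f} goals/sec")
```

Tracked over time, a rising success rate and efficiency after you ship a golden path or a linter is direct evidence the usability investment worked.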
Open-ended tasks are flexible in design, with minimal explanation. By leveraging these, you can identify usability bottlenecks or elements that confuse users as they interact with your product. An example of this would be: "Scale your deployment to three replicas." Janice could do that any way she thought she could. The next type is the close-ended task. Close-ended tasks are very specific and goal-oriented. They are based on the idea that there is one correct answer. Close-ended tasks are great for testing specific elements, such as: "Use these steps in kubectl to create three replicas." See the difference? Janice is given direct information as to what to do and how to do it.

Next up, we have capabilities. In this scenario, Janice is trying to find capabilities within the company's cloud-native environment that address the needs of ML engineers. There are many features for back-end engineers or even front-end engineers, but none available to optimize the ML engineer's workflow, such as model and experiment versioning. So how do we learn more about that environment?

The first thing that we can do, and I love this activity, is called journey mapping. Journey mapping is a process frequently used for external customers, but in the book Developer Relations: How to Build and Grow a Successful Developer Program by Caroline Lewko and James Parton, which I referenced in my talk last time, it's defined as a visualization that identifies the path a developer follows and what they experience along it. The goal of this map is to move the developer from left to right as quickly as possible. Surveys are another great way to learn more about the capabilities and where the gaps are. Here are some of the metrics that you can use to track this. NPS, which we talked about, is a great way to figure out what the gaps in capabilities are. And then a core improvement would be persona-mapped capabilities.
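Circling back to usability for a second: the template Janice was missing, the golden path behind that "three replicas" task, might look something like the manifest below. This is a generic sketch; the names, labels, and image are placeholders, not from Janice's actual environment.

```yaml
# Hypothetical golden-path template; all names and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-server
  labels:
    app: model-server
spec:
  replicas: 3            # the close-ended task: scale to three replicas
  selector:
    matchLabels:
      app: model-server
  template:
    metadata:
      labels:
        app: model-server
    spec:
      containers:
        - name: model-server
          image: registry.example.com/ml/model-server:latest
          ports:
            - containerPort: 8080
```

With a template like this checked in, the close-ended task reduces to editing `replicas` and re-applying with `kubectl apply -f deployment.yaml`, or running `kubectl scale deployment model-server --replicas=3` directly.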
As you learn about these things, introducing capabilities mapped to specific personas would definitely improve that experience for Janice specifically.

All right, so journey mapping. Let's talk about what that actually means. Journey mapping focuses on the experience of a single persona in a single scenario with a single goal. Once that scope and scenario are agreed upon, you then identify people who know and experience that journey regularly. The next thing you have to do is build a backstory. Why would that persona be on this journey in the first place? This can include the desired outcome of going on the journey. For example, Janice wants to deploy the model to staging for testing. The next thing that you'll need to do is map the feelings: what is the engineer thinking and feeling as they go step by step? So you're gonna plot that journey. If there are multiple channels of interaction for certain steps, it's important to capture these differences, especially if it's not widely known that the persona uses them. If there are multiple decisions that could be captured as part of the journey, you should also chart those out.

After you have that initial map, you can map the pain points. Look at that complete journey and capture the frustrations, errors, and bottlenecks, and where capabilities are not functioning as expected. It's important to dive deep into what becomes of these pain points. Does the customer just deal with it, find a workaround, or abandon the journey completely?

The next thing you can do is optional: sentiment lines. Not all pain points result in huge amounts of frustration or toil for developers, but by charting the sentiment line, you can see how a few pain points experienced right after one another could result in a rapid drop in sentiment, which means a change in expectations or frustration.
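A sentiment line like the one just described is easy to chart even without special tooling. Here's a minimal sketch, assuming each journey step is scored on a -5 to +5 sentiment scale; the steps, scores, and threshold are invented for illustration.

```python
def sentiment_drops(journey, threshold=-2):
    """journey: ordered (step, sentiment) pairs on a -5..+5 scale.
    Flag transitions where sentiment falls by `threshold` or more --
    the rapid drops that signal back-to-back pain points."""
    drops = []
    for (prev_step, prev), (step, cur) in zip(journey, journey[1:]):
        if cur - prev <= threshold:
            drops.append((prev_step, step, cur - prev))
    return drops

# Invented journey for a persona like Janice.
janice = [
    ("find docs", 1),
    ("write YAML by hand", -1),
    ("build fails", -4),      # two pain points in a row
    ("deploy succeeds", 2),
]
print(sentiment_drops(janice))
```

Plotting the raw scores gives you the sentiment line itself; the flagged transitions are the places to analyze first when you prioritize improvements.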
Lastly, you'll have to analyze this full journey and its expectations, and figure out what you wanna focus on first in improving that developer experience and offering the right capabilities for that user.

Surveys. I think we're all aware of these, and they're widely used within multiple environments. It's good to limit the number of surveys and also establish a regular cadence of checking in. If you have company-wide surveys that you can lean on, use those. That reduces the level of noise that engineers are hearing all the time.

All right, stability. Here we have Janice, who's experiencing errors as she tries to deploy. Every time the build fails, Janice is losing confidence in the tool. So we're gonna look at ways to improve this reliability experience for Janice. To find the current state of stability, you can look at incident management data, postmortems, and surveys, and participate in focus groups. We'll talk more about those in a second. Metrics that you can use to track stability and reliability within your environment are tool and capability uptime and mean time between outages. Key improvements: intentional postmortems are always key, and also centralized support, which improves the communication channel that Janice has to use in order to figure out what's going on within the environment.

All right, the two methods for understanding we're gonna dive deep on right now are postmortems and focus groups. I'm not gonna dive too deep on postmortems because they're pretty common within engineering environments, so I'm gonna focus on focus groups. So what is a focus group, and how does it differ from a regular group? The main difference is that a focus group has a specific discussion topic and a trained leader and facilitator. The group's composition is carefully planned to create a blameless and non-threatening environment so that people are free to talk openly about their experience. Why would you choose to have a focus group?
Because it offers depth and nuance that a survey cannot capture. You could have a customer-impacting incident where you can't derive the sentiment of a customer, and how it really impacted them, from a survey, but you would get those specifics within a focus group. You'll definitely get qualitative data from focus groups, but you have to make sure that you find a good leader to conduct them. They have to be very unbiased and offer psychological safety so that people will feel free to talk about their feelings and about what they experienced during the stability issues.

Define your goals. You definitely have to make sure you have clearly defined what you want to accomplish, why you're doing it, and what you hope to learn from the focus group. That should frame the conversation. Finding who should participate is also key. It should be a representative sample of the people whose opinions you're concerned about. This could be a group of customers who have always offered their opinions but maybe never done so in a focus group setting. Lastly, you'll need to analyze the data. As you ask questions, dive a little bit deeper and gather this information. Take it back, and it needs to result in some actionable items that you can implement to improve the experience for the customers. The one thing not to do here is gather all that information and not act upon it. You want to make sure that you define at least some number of clearly actionable objectives as a result of that focus group.

All right, so we have completed improving Janice's developer experience and dived deep into some of the ways that you can create better understanding to improve the developer workflow within your environment. I'm always looking to connect with you all. This has been an amazing experience, co-chairing and talking to you all about your developer experience and how you would like to improve it. I highly recommend this book that I referenced both in October and today.
It's a great way to make sure that you're building bridges with the developers within your community so that you can work towards improving their experience and moving the needle on actual value being delivered for the business. Please reach out on Twitter at Go Jasmine EE or on LinkedIn. I would love to connect and hear what you think about the methods that I talked about. Maybe there are some things that you do that I'm not aware of; I would love to hear about it. Thank you so much for listening today.