Hi everybody, and welcome to studying Kubernetes users with SIG Usability. Today we have Gaby, Tasha, Josie, and Carl, and we're going to quickly introduce ourselves and then dive into the content. Gaby, you want to go first? Sure. My name is Gaby Moreno. I'm a co-lead for SIG Usability and I'm also a product designer at IBM. Hi everybody, I'm Tasha Drew. I'm director of product incubation at VMware. And then Josie. Hi, my name is Josie. I'm a product designer at VMware. Hey everybody, I'm Carl. I'm a senior UX researcher at UserZoom. Cool. And together we've been leading a bunch of various efforts in SIG Usability. So some quick housekeeping just to kick things off. If you're wondering how to get involved in our upstream SIG, our chairs are myself, Gaby, and Himanshu. I've included links here for ways that you can quickly find interesting content. Our homepage is on GitHub, under SIG Usability, and that area has a little project plan, more information, and more links to documentation. We also have our landing page under Kubernetes SIGs. If you want to see what our latest meetings have been about, all of our public meetings are recorded, with links to the YouTube videos; that's the Google Docs link that you see right here. We also have a Slack channel that we're all in, in the Kubernetes Slack, so you can easily join it; it's just sig-usability. And finally, the most important starting point to get involved with our SIG is to join our mailing list. This is a Google group, so if you join it, you will see all of our calendar invitations. We have monthly meetings where we all get together and talk about our latest research. If anybody has a new topic, you're very welcome to add it to our meeting agenda, and then we can discuss it as time permits. All of these meetings are recorded. But the single easiest way to get started is just to join this Google group.
You'll get the calendar invitations, you'll start getting links to the YouTube recordings and comments about our agenda. So yeah, those are the quick, easy ways to get started with SIG Usability. And now over to Gaby to kick off the meat of our talk. Cool. Thanks, Tasha. I'm actually going to pass off to Carl to talk a little bit about our jobs to be done study. Carl will tell you what our jobs to be done study is all about, but this is the first project that SIG Usability took on. Last year, when we were at KubeCon Europe, we showed y'all where we were at the very beginnings of it. And so now we're going to present just a little bit about what jobs to be done theory is about and then where we are in the project. All right. Awesome. Thanks, Gaby. Yeah. So we're at about the middle of this process right now for the jobs to be done framework, which is a kind of research where we look at who our users are and what they're trying to get done. One quote that I think sums up the high-level theory very well is that people don't want to buy a quarter inch drill or drill bit; they want a quarter inch hole. They're looking for the outcome of the situation, not necessarily the tool that gets the job done. So to flesh that out a little bit more, the reason people are interacting with products and services is not necessarily because they want to interact with that product or service. It's because they're trying to get a specific job done with that product. So what jobs to be done is focused on is looking at what these jobs are and boiling them down to their most essential pieces. Even as the technology changes, people are still generally trying to get the same core job done, even though the names and underlying technologies for how they go about it might change.
So by looking at the job to be done rather than the technology, we get a really good picture of what users want, and it makes innovation a little bit more predictable and a little more scientific: we're trying to understand what exactly people are looking for rather than just hoping or guessing. Jobs to be done can be used in a lot of different ways, and it's a longer process. Like I said, we're in the middle, or maybe the beginning of the middle is the most accurate way to put it. First we start with defining our users, which was part of a survey that we launched some time ago, where we looked at people's general backgrounds, how they use Kubernetes, and some basic facts about their technical and organizational environments. And we used that to feed right into recruitment, because we actually need to talk to real users of Kubernetes to understand what they're trying to get done with it. That's a big challenge with user research, and Gaby's going to get into the details of that challenge a little bit down the road here. So we found all those users, and then the next piece is to uncover the needs of those users. We started with one of the most basic tools in a user researcher's toolkit: just interviewing users. Then we have a couple of ways, mental models and job mapping, to analyze all of that data. And when we have a sense of what the actual jobs are, we go into the next phase of prioritizing, to see which of these jobs are most critical for us to look at. And then finally we apply those learnings. So I'm going to walk through the details of these steps right now. Like I mentioned, the users were somewhat already defined; we're just looking for people that use Kubernetes. So we started by collecting some background information and contact information to get users to interview, to figure out what their core jobs are. The data collection process is very qualitative to start.
So we use semi-structured interviews. We're not reading off of a questionnaire; we have topics that we want to follow, but we really try to let the users lead us to what's most important to them. So we're not imposing what we think is important, but rather finding organically what's actually important to these users when they're using Kubernetes in their organization and their work, however they might be using it. And we look at not just the technical pieces, but also how Kubernetes fits into an organization and how it fits their personal goals. So it's everything beyond just the command line or visual interfaces they might use; it's everything that surrounds Kubernetes entirely. So we get all of these user interviews, and then we transcribe them for a qualitative coding process. We have a two-part process here. The mental model piece is not necessarily part of the traditional jobs to be done approach, as other user researchers might know, but we wanted to include it so we could really be sure that we're focusing on user behaviors and not just attitudes. When we go through these interviews in a first pass, what we're doing is extracting all the things that people actually do: not just what they say, but how they actually behave around Kubernetes, whether that's directly interacting with the interface, or dealing with other teams, or onboarding to Kubernetes. It could be anything. So we try to stick to that core tenet of user research, which is to focus on behaviors over attitudes. When we get all of these behaviors extracted, we'll end up with hundreds and hundreds of behaviors. This is an example version here from the author of this method: you get all the tasks down below. So this might be something like start a cluster or create a new user role, something like that.
And we sort of organize them together and stack them up to get a sense of bigger areas like security or onboarding or team communication, all the things that might go into using Kubernetes, and get a big map of everything that people do when they're using Kubernetes. When we have all of these behaviors, the next phase is to bring them into the specific framework that is jobs to be done, so we can map out what exactly the job is that people are trying to do when they're using Kubernetes. They're employing Kubernetes to get a certain job done, so what is that job? To do that, we have to create job statements and then map those job statements to job steps. I know that's a lot of jargon, but I'll show you what that looks like right now. Let's say, for example, we have a behavior in the mental model map like onboard developers that are new to the team. That's a good behavior, but to bring it into the jobs to be done framework, we bring it into this table view, which is something a little bit more actionable and structured. We always have a direction involved with the job, whether you're trying to minimize something or maximize something. I generally stick to minimizing, because it's always easier to move something closer to zero; if you're trying to maximize, it's harder to know exactly how high is high enough, so you have a better end point with minimize. So from onboard developers that are new to the team, we might change that into minimize the time to onboard developers that are new to Kubernetes. That creates a very nice, clean way to take what is often a really messy set of behaviors (I chose a really easy one, but there are some that are a lot more complicated) and be sure they fit into this jobs to be done framework, which we can then use to build the job map, which is the next part.
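The shape of a job statement described here, a direction plus an object plus a context, can be sketched as a tiny data model. This is purely an illustrative sketch; the `JobStatement` class and its field names are hypothetical, not part of any SIG Usability tooling:

```python
from dataclasses import dataclass

@dataclass
class JobStatement:
    """One jobs-to-be-done statement: a direction applied to an object in a context."""
    direction: str  # "minimize" is preferred: it's easier to move something toward zero
    obj: str        # the thing being minimized, e.g. a time or an effort
    context: str    # qualifier that scopes the job

    def text(self) -> str:
        return f"{self.direction} {self.obj} {self.context}"

# the example from the talk: the raw behavior "onboard developers that are
# new to the team", reframed as a job statement
stmt = JobStatement("minimize", "the time to onboard developers", "that are new to Kubernetes")
print(stmt.text())  # minimize the time to onboard developers that are new to Kubernetes
```

Structuring statements this way is what makes the later survey step mechanical: each statement is one row that can be rated for importance and satisfaction.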
So we'll have many dozens of these job statements, and then we put them into this typical map here, which we might adjust a little bit, covering everything that somebody does. Every job has these eight steps: defining the problem, locating what they need to do the job, preparing for the job, doing a final confirmation before they actually execute the job, then executing the job, monitoring how it went, modifying it if anything went wrong, and then concluding the job. We have some verbs here underneath each of these that show common phrases each step might start with. For example, prepare is often something like set up a proof of concept or organize documentation, and monitor has verify, track, and check, so it might be, you know, check pod health or something like that. You can imagine all the ways that the many tasks involved in Kubernetes will fall under this map, so we'll have a really good sense of the cyclical nature of all the jobs involved with using Kubernetes. So the next piece: we have all these jobs, and we're sure that some people have these jobs and that they're real. These are things that we actually learned from users; we're not just making them up, this is empirical data. But it's qualitative, so it's hard to say which of these are a problem for most people and which are really important to focus on. This next part is where we quantify it, and we do this by creating a survey that takes in each of those job statements, all those little pieces that go together, and we try to rank them. So imagine each one of these dots is a job statement, something like create a new cluster, or onboard a new developer to Kubernetes. Each of these job statements will be ranked on satisfaction and importance.
So we'll put the job statement in front of somebody and say, how satisfied are you with this at your job right now? And then, how important is this for your job right now? And what we get is this map of all the job statements. When we look at the lower right corner, we can see that's where there are really important tasks that are really unsatisfying to do right now. So that creates a high-level, empirical, theoretically backed sense of the jobs that might need to be targeted: the things people think are very important but aren't able to do very easily right now. Again, it brings innovation into a more scientific space than it typically might be. And that's how the whole process works. Of course, with research, half of it is doing the research; the second half is evangelizing that research and putting it into action. These findings will yield really interesting and useful jobs that are prioritized, so we can see what's really important and might need to be focused on. But as far as I'm aware, this has typically been used in a for-profit context and hasn't really been used in the open source space yet. So the way that we implement these findings, the way that people will use them, and the way that we can make them more shareable will all be brand new. There's a big question mark here, because this whole process is pretty fresh for the open source software space, so we'll be kind of writing the rules for this. And obviously, when you're communicating findings, only half of it is the researcher communicating; the other half is the audience. So it'll be a very collaborative process. I'm really excited to see how we share all these findings out and the way that we can make sure they get the most use that they can. So I'm definitely excited to report back on this when we have more findings to share, and on how the communication process goes as well. Cool. Thanks, Carl.
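The prioritization step described above, ranking job statements by importance and satisfaction to find the "important but unsatisfying" lower-right corner, is often reduced to a single number in the jobs-to-be-done literature via an opportunity score: importance plus the gap between importance and satisfaction. A minimal sketch under that assumption, with made-up job names and ratings on a 1-10 scale (these are illustrations, not findings from the study):

```python
def opportunity(importance: float, satisfaction: float) -> float:
    """Opportunity score: jobs that are important but unsatisfying
    (the lower right of the satisfaction/importance map) rank highest."""
    return importance + max(importance - satisfaction, 0.0)

# hypothetical averaged survey ratings: (importance, satisfaction) on a 1-10 scale
jobs = {
    "minimize the time to onboard developers new to Kubernetes": (9.0, 3.0),
    "minimize the time to check pod health": (7.5, 7.0),
    "minimize the effort to organize documentation": (5.0, 4.5),
}

# highest-opportunity jobs first
ranked = sorted(jobs, key=lambda name: opportunity(*jobs[name]), reverse=True)
for name in ranked:
    imp, sat = jobs[name]
    print(f"{opportunity(imp, sat):5.1f}  {name}")
```

Note that a job rated highly important but already well satisfied scores close to its raw importance, while an important, poorly satisfied job is boosted well above it, which matches the quadrant reading Carl describes.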
So I'm going to go ahead and talk a little bit about the tools and the research process that we've been following. Carl gave a great overview about what jobs to be done is, and so now I want to share with you what it looks like for an open source special interest group. At the beginning, there were very few of us, maybe one or two, working on this study. This was about a year ago. And the cool thing is that the team working on this has actually grown as contributors hear about it and want to help out. And so right now, this is where our contributors are; no one is really in the same time zone. So when we were thinking about some of the goals for our research process, obviously we're an open source community, and so being open about the process, and open about the opportunities no matter where you live, was really important. Also, for me, I'm a UX designer, and it's my first time contributing to open source. I have a lot of research background, and I work on a Kubernetes product, but for the people coming in: if you don't have experience with open source, that should be okay, and we'll support you through the process. Or maybe you don't know as much about Kubernetes; that should be okay as well, because if you have a really strong research background, we can use that for this project too. So that was some of what we wanted to consider. And then there's also the fact that some of us are able to contribute part time to this project, but some of us are only able to contribute 10% or 20% of our time, so we take that into consideration as well. So, like most research studies, we started with recruiting, because we wanted to talk to people that use Kubernetes for their jobs. And for this, the CNCF Twitter was able to help us out.
And, you know, we developed a screening survey to look for people working in companies that use containers and use Kubernetes on a day-to-day basis. Through that, we were able to build up our list of people that were willing to speak to us. We try to templatize everything, from our outreach email ("hey, we'd like to chat with you, you filled out our survey") on. We also tried to automate as much as possible, so we're using Calendly to allow participants to book time with us. Initially, we were only reaching out to North American people, and now we've added more slots so that we're able to reach out to European folks as well. One of the things this team also worked on was a data consent form, generated by a tool from a really cool group called the Research Ops Global group: basically a Slack community of really talented researchers that are really passionate about how to operationalize user research and design. They built this really great tool where, if you put in the goals of your research project, it will generate a form that tells participants how their data will be used and all that good stuff: that the session will be recorded, what happens to the recording, who can access it, and so on, so that they have informed consent. Once they review the form (and this is all through Calendly) and they sign up, we coordinate with one of our leads, someone that can interview them. And we also leave room for one to two shadowers, which allows people to sit in, take notes, and just learn from the process. Before we sent out emails, we had developed a standard jobs to be done discussion guide.
This helps document the list of questions that we want to go about asking, but practically speaking, during the interviews we're really interested in having an organic conversation with people about how they use Kubernetes and discovering with them what that looks like in their day-to-day jobs. So it's a great guideline. And we thank them with, you know, a free Kubernetes t-shirt; who doesn't like a Kubernetes t-shirt? And then this is getting to the part of the process where we are now. As we interview people, we're also transcribing and analyzing at the same time. We're using Tetra Insights for this. With Tetra Insights, if you upload a recording, it will transcribe it for you, which is really cool. All of the infrastructure jargon and Kubernetes terms that come up need some lightweight cleanup, but for the most part it does a pretty good job, and so we just go through and clean it up. And then, and this is where we are right now, we're in the process of analyzing some of the interviews and distilling what people say. If someone has an opinion, we try to dig deep into what behavior that opinion is reflecting. And Tetra Insights is awesome in that we're able to read through a transcript and just tag behaviors, make notes, and all that good stuff. So that's where we are right now, and like Carl said, we're about halfway-ish through the process. It's ongoing. And what I'm going to do now is pass it to Josie, and Josie will walk us through opportunities to get involved with the SIG. Thank you so much, Gaby. The usability SIG is composed of a lot of really wonderful people with a ton of industry experience from different companies. Gaby, next slide.
We meet virtually once a month, and you can get a calendar invite by joining the Google group. So come say hi and introduce yourself. If you're interested in getting involved, you can come share your ideas or submit a usability issue that you've seen or run into yourself. If you're interested specifically in the jobs to be done study, we have a large range of opportunities to get involved, depending on your comfort level and experience. Shadowing and taking notes is a really fast way to onboard. As Gaby mentioned, we'd love help cleaning up the auto-generated transcripts. The program does a great job most of the time, but sometimes it will translate Kubernetes into Goober Daddies, so any help we can get there would be really valuable. And if you feel really comfortable and have a lot of experience, you're welcome to help analyze and tag the interviews, or lead a user interview. I was really nervous when I started; K8s can be a pretty technical space, but it turned out to be a lot easier than I expected, thanks to the support of the other designers as well as the pre-work that was done by Gaby and Carl. Now, we know that Kubernetes is a pretty technical community, but the usability SIG is a bit different. We're mostly researchers, designers, and PMs, and it's not that we're not technical; it's just not a prerequisite to get involved. If you're here and you want to pitch your own ideas, you're more than welcome. In fact, the folks on this team are really, really nice, and they'll probably support you through the process. So a couple of examples of initiatives you can lead include design principles for Kubernetes. You can come up with your own research frameworks. You can invest time in coming up with user archetypes or personas or roles. You can even think of design-first issues for non-technical members. And if you're from another SIG, or you'd like to drive an initiative for another SIG, we're also here to provide support.
So for example, the contributor experience SIG has mentioned that they'd like support figuring out how to gather user feedback for beta features. If this is something you're interested in, we're more than happy to help you drive that. And that's it. Thank you so much for attending our talk. If you have any questions, feel free to drop them in the CNCF chat or come find us in Slack in the SIG usability channel.