Hi, UXD team. You ready? OK. Welcome, everybody. Today we're going to be talking about the expectations of open source user experiences and how the competitive landscape has influenced teams to be more proactive about user experience design. We're going to talk a bit about open source and open source UIs, as well as proactive and reactive design methodologies, and we're going to use Red Hat as a case study to show examples of each and talk about how our UXD team has changed over the last five to six years.

So my name is Serena Nichols. I'm the DevTools UX Lead at Red Hat. And I'm Colleen Hart. I'm the OpenShift UX Lead at Red Hat.

So before we get into talking about the expectations of open source UX, let's try to understand why open source is getting so popular in general. I'm sure this crowd would agree, but technology thrives in the open. It's a proven way to collaborate in the software space because people are sharing, being collaborative, and building on top of others' work. In a panel earlier this morning, I think Sarah did a great job of summarizing this: you're building something, but you can only reach a certain limit without help from others. So you keep building on others' work by providing feedback, or accepting feedback in some cases. It's a continual cycle where everyone's being more transparent, and the end product benefits from that. With people wanting more awareness, everyone's kind of pushed into this open source atmosphere where everything's in the open.

So with design in the open, how can we think about the user experience design process in open source? It's the same principle. It's about being transparent with your designs and your processes, and allowing others to contribute new designs, provide feedback on existing designs, or suggest ways to actually improve.
So maybe it's giving others in the community an avenue to provide comments or propose changes and lead that discussion on an open platform.

OK, so now we're going to talk about open source UIs. In the past, reactive design was perceived as doing just enough to make products more usable. If you look back to 2013 and before, enterprise software was really considered hard to use and cumbersome. Then in 2014 and 2015, companies like Salesforce and Google started investing a lot of money in user experience. They were starting to be referred to as the pioneers of enterprise UX, and they pushed the UX movement across enterprise UIs, where there was a lot more focus on usability. Today, companies are heavily investing in UX, and they're utilizing more proactive design methodologies. More investment in UX means a larger UX department and more resources, so they're able to go through that process of research, design, and collaboration before development and throughout development.

So now we're going to examine the progression of UX at Red Hat itself. If we look back at Red Hat in 2013, our UX team was centralized, with about 10 people by the end of the year. Here's an example of, I think, six of our interfaces. As you can see, they're very disparate user interfaces that felt super disconnected as a whole. They varied in behaviors, and each dev team spent time and energy developing their user interface separately. There was nothing in common between them.

Fast forward about three years, and our UX team had grown to about 50 people. Not only were we composed of interaction designers and visual designers, but we had also expanded to include both researchers and front-end developers. And our team really started to take off.
We had defined a set of common user experience standards, which we called PatternFly at that point, with implementations in both Angular and React. Our product designers and developers were able to utilize PatternFly to make those products look like a single product portfolio, and since they used common components, the products had some level of consistency between them. As you can see, this set of six products looks much more consistent and really feels like a single portfolio versus three or four years before.

Now, in 2019, Red Hat has clearly invested in user experience and usability. Our UXD team now has 106 people, and PatternFly has evolved to be our open source design system, now called PatternFly 4. It's been revamped to provide modular, accessible, and responsive components, and it's really driving our new UIs. Here you see a few screens of our newer products that are using PatternFly 4. The expectations of open source software continue to increase, so Red Hat is, like I mentioned, continuing to invest in UX and designers and to bring more proactive user experience design into our projects.

Yeah, so thinking about proactive design, this gets us into the topic of what proactive design is versus reactive design. Proactive design is when you get a lot of information upfront because you're going through an entire UX process of gathering information, doing research, and observing users. Because you're driving that process, you have more influence: you're working to funnel down to a design that affects the whole user experience and the whole flow of a user. On the reactive side, you have less influence because it's really driven by the product, by what already exists in the product. You're making some minor tweaks, but you're not driving based on that information gathering. It's "here's the problem, let's react."
So there's a little more concern and a scramble to make some minor improvements along the way.

Taking a deep dive into proactive to start: proactive is a holistic approach. Like I was saying, as a designer you have more influence over that entire flow, and you're solving a greater user challenge, something that users are stumbling over. Think about exploring with usability testing, or even just observing users. Sarah, one of our researchers, was talking today about how just observing and understanding what problems users are going through, by asking them questions or even just watching how they use a product, can be really helpful. Suggesting alternatives based on these observations takes time, and that's the downside of proactive design: you have to invest in it upfront. Serena talked about how the trend is moving in that direction, but it's definitely not easy. It takes time and planning, but the results are worth it.

Looking at some of the quotes we found that resonated with us on the proactive design side: proactive designs are informed by actual user needs, not by guesses about how people might use a product, and designers are empowered to promote what an ideal experience could be. In the proactive case, you're providing even more guidance because of the research you've done upfront. Without it, you're only fixing part of the problem, because you're fixing a problem based on something that already exists in a product, maybe caused by decisions that were made ahead of time, versus understanding users' needs and solving the problem upfront. So in a proactive design case, you're looking at an ideal flow, creating journey maps for user flows upfront, and dictating what that flow should be rather than backtracking.
The ideal process is probably some version of this; we've probably all seen it. It's really the design thinking process, or I'm sure there are various buzzwords for it, but it means taking into account some process where, on the UX side, you think about the discovery phase, which is really important on the proactive side: running usability tests, competitive analysis, and interviews, and really digging into the personas to understand who you're designing for, before you jump into creating journey maps and conceptual designs. Then you get into wireframing, prototyping, and some iteration there before implementation. And even after implementation, we'd like to be able to test, retest, and iterate a bit on the design. The proactive design process is what we think of as our ideal approach, if we have the time and the resources.

Okay, so now we're going to look at a case study for proactive design, which is OpenShift Topology. The topology view came out of wanting to visualize the application topology of an OpenShift project. We started this effort in late 2017, and as part of our discovery phase, at one of our customer conferences, one of our designers held an activity where they asked participants to roughly draw a diagram of their architecture on pieces of paper and explain and discuss how those parts interacted with each other. These are a few examples. We were trying to understand how people were working with OpenShift and what their architectures were like, so that we could figure out how to visualize those in our product. Then, as part of the create phase, mockups were created, and we did the typical sharing with internal and external stakeholders for feedback. This was a super iterative process, and we were lucky enough to have time on our side.
We had quite a lot of time to do different iterations and get feedback from customers. Based on some of the feedback that we got, you see a new set of designs was created: the relationships are different, what you're seeing visually is different, the information you're seeing is different. And this came to be, from a product perspective, what we really wanted to try to achieve.

Based on the mockups, we decided to build a proof of concept. Using the API support that existed in the product at the time, we had somebody develop a POC to see how much we could achieve. Some of the data wasn't available, so we had to cut some functionality, and the other thing we figured out was that some of the interactions we wanted weren't as feasible as we thought they were. So we went back through and did another iteration on the design, and then our team started to implement it early this year. This went through a usability test that we held at Red Hat Summit, our customer conference, in May this year, and we got additional feedback. Since then we've tweaked it a little bit, and what you're going to see here is a preview of our final product. Half of you are from the UX team, so you know that, but we're also running a usability test on the topology view in the cafeteria area if anybody's interested in doing it tomorrow or Saturday. This is going to be part of OpenShift 4.2.

This is a really good example of having a long time period, just about two years from start to finish, where we were able to do user research, competitive analysis, and multiple iterations of design, collaborating with development as well, and produce something in time for a release.
Yeah, and just looking at another use case: not quite the two-year lead time, but similarly still a proactive design use case for sure. We had maybe just under six months, at least three to six months, for this feature. In the same product, we were looking to create a service catalog, and it started with a bunch of research up front. We went through a number of interviews and conversations with stakeholders, looked at four or five competitors, and did some expert reviews of strengths and weaknesses, looking at what the products already out there were doing and what we could learn from the existing competition. That first phase was super important for the service catalog design.

Then, following that month or so in the research phase, we jumped into the concepts and started thinking about how we wanted to structure this thing and how a user might flow through the various pages or areas of the catalog. A couple of us were working on the early concepts, and then we came together and picked what we liked and didn't like from the various concepts, basically to create our ideal scenario, and moved forward from there. Still on the creation side, you'll notice it started to move toward higher-fidelity mockups. On the far left we're still in Balsamiq, lower fidelity, but as you get toward the bottom, we started working with the visual design team within our group to get high-fidelity designs ready for development.
As we passed it on and started working with the development team, we got some prototypes implemented that we could tweak and test at Red Hat Summit, get some initial user feedback based on those prototypes, and then, like we said in that process, go back and iterate on some of these designs to make improvements where we could, based on a couple of the things we noticed. One key thing: depending on their environment, not everyone would see those highlighted items at the top. That whole blue area might go away. So what could we do to make this catalog a little more interesting, maybe pop a bit more, and make the categories stand out more? Once we watched users interact with it, we realized, okay, we need to make some adjustments. Let's add some search capabilities. Let's add a tour so users know how to use this new product. So there was that revamp, and then it went through another round of implementation with the updated designs. Definitely another example where we followed that longer-lead-time proactive design cycle.

Okay, so now we're going to dive into reactive design. By a show of hands, how many of you think that reactive design is a bad thing? Okay, cool. So reactive design is not design-driven, right? It implies that UX is an afterthought, which is sometimes not what we as designers like to hear, because a lot of times we want to hear that development is going to listen to us. Here's a quote about reactive design: it's just what it sounds like, reacting to a problem in the moment. Oftentimes we're asked how we can fix something. "Help, users are complaining that these things are too hard, so what can we do?" Reactive design really results in fixing problems based on decisions that have already been made or implemented.
So is it a good thing to do reactive design? It definitely is, right? Sometimes people think of it as applying a Band-Aid or putting lipstick on a pig: adding visual improvements, syntax changes. UX is an afterthought, and it may not impact the end-to-end user experience, but it's still improving usability as a whole, and it's better than nothing. One of the questions we have here is: how many companies have enough designers to provide thoughtful designs for every feature they create? That doesn't happen. So in actuality, developers are going to have to develop things when they don't necessarily have a design, and then use reactive design as a tool to make things better.

So yeah, here's a positive use case for reactive design; this will be very familiar to some of you. We had a case pretty recently where a feature was upcoming in our product, and the development team had a cool idea to expose an API Explorer in the product. It was an interesting concept that we hadn't chatted about on the design side at all, and we got brought in after the initial POC was created. So on the UX side, we're thinking: okay, we're in the reactive state, we want to get this into an upcoming release, how can we make some improvements to start? But it's already created, and seemingly going into the product as is. One thing we did was take a step back and try to validate some of the assumptions and use cases this is solving. What's driving this requirement and need in the product? That helps us understand how we can improve it. Like I said, of course there are minor tweaks we can make, but we really wanted to make it a usable feature that would succeed in the product. So when development asks us for help on something already implemented, we can still do that gut check: what is the requirement? What is this feature fulfilling?
Looking at our reactive designs, I have a couple of examples, but a few of the things we looked at were: can we make it a bit more usable by adding some filter capabilities, some search capabilities, sorting, some basic things? You'll notice we kept the basic structure: there's a page somewhere in the navigation with a list view of things, and when you dive into one of the objects, you get more information about what metadata is available for that resource. So it was still a list view and a detail page, but we adjusted how some of that information shows up and how users flow through the pages. I think we got some improvement from these designs, just from simple things like adding tabs or adding some hierarchy to the page.

Here are a couple of other screens from the design side. We were thinking, maybe this isn't the V1 version of the feature, but maybe it's something we can implement moving forward and make even better by adding it throughout the product. This feature in particular allows you to explore various resource types in one exploration area, but what if we provided that help in context? So when you're looking at a specific resource, like a pod, you could access this exploration help in that context. Using a side panel was the recommendation in the screenshot on the left, partly because that was an existing design convention, or pattern, that we were already using in other places in the product. That's one of those things in a reactive design state: you're trying to make use of things that already exist. Do you have enough lead time to introduce completely new patterns? Probably not, but you could think about that for a forward-looking feature and, in the meantime, make use of the existing patterns you have to improve the feature.
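The list-view improvements described above (filter, search, sort) come down to a small amount of client-side logic. Here's a minimal sketch in Python, with a hypothetical `Resource` type standing in for the console's actual data model; it's meant only to illustrate the shape of the change, not the real implementation:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    kind: str   # e.g. "Pod", "Deployment" (hypothetical fields)
    name: str

def filter_and_sort(resources, search="", kind=None):
    """Case-insensitive substring search on name, an optional kind
    filter, and an alphabetical sort, returning a new list."""
    hits = [
        r for r in resources
        if search.lower() in r.name.lower()
        and (kind is None or r.kind == kind)
    ]
    return sorted(hits, key=lambda r: r.name.lower())

# Example: narrow a mixed resource list down to Pods, sorted by name
pods = filter_and_sort(
    [Resource("Pod", "frontend"), Resource("Deployment", "backend"),
     Resource("Pod", "api-gateway")],
    kind="Pod",
)
print([r.name for r in pods])  # ['api-gateway', 'frontend']
```

The point of the sketch is that these are incremental, low-risk additions: the list view and detail page stay as they are, and only the presentation of the list changes.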
So that's kind of what we love to do, creating even just some of the tabs and some of the side panel designs that you see on these pages. When we got to the updated implementation, just searching through GitHub for some of these screenshots, there were 17 or 18 pull requests related to this feature. So the key was definitely collaboration. We had a lot of back and forth between design and development, and there were a lot of these minor changes: let's implement sort; another PR, let's look at filtering. It was definitely a success in my mind because of that collaboration.

A couple of things came out of it. It built trust, for sure, with design and development working hand in hand. But we also both learned something from each other. On the design side, we learned by seeing the initial POC what was possible, what this feature could do for the product, what was available via the APIs, and what limitations we had today. It was much clearer to see; if you don't have anything upfront to work from, it can take a long time to gain that product knowledge and do the upfront research work. Sometimes two years, hopefully not always. That's why, in some cases, this can speed up the design process, and I think it did for us in creating this feature. So building trust, collaborating: it was a positive result. I would consider reactive design something that can work if both groups are on board.

Yeah, so I'm going to talk about a couple more examples of reactive design, not specific examples, just general ones. But I do want to reiterate what Colleen said, and what I said previously as well: in open source companies, collaboration is key, right? It's one of the biggest premises that we have, and I think that's what really makes reactive design work better.
Previously, before Red Hat, I worked at a company that was not open source, and the collaboration was not the same. So I think, at least here at Red Hat, reactive design is much more successful.

Another example where reactive design can be useful is if you discover a usability issue during beta testing, right, and it needs to be addressed before GA. In this case, you're up against a time crunch. You need to research what the problem is and determine how best to solve it, and you probably have to collaborate with your stakeholders, product management as well as developers, to identify how much you can fix based on the time that you have. This could include addressing the problem completely, or, if you can't, maybe you can still improve it by providing contextual help or adding some documentation that explains the problem or how to work around it. And then you can also plan for a better solution that can be implemented in the next release.

Another example where reactive design is productive is if a developer attaches a design to a work-in-progress PR, for example, and they're looking for somebody to collaborate with. In this case, you probably have more time, since the developer has asked for your help. You want to work with them to understand what the issue is and take advantage of your design system. Like Colleen said, look and see if there are any patterns or design conventions you can apply to the problem to fix it, or at least provide some consistency. And again, if there's not enough time to fix it the right way, suggest a stopgap, see if you can implement that, and follow up with a plan for a better solution in the next release.

So now we're just going to go over a few best practices, or lessons learned. Proactive design works well, and is a great tool to take advantage of, when you have the resources.
So if it's a new product, or a new feature of a product, and you have a considerable amount of time available to work on the design. When can it go wrong? When you don't have very much time to market. There are a couple of examples. One: if you just need to get something out in a release very quickly. Another example where proactive design doesn't work well is in open source, when we're using new technologies. We might be designing against an alpha version of something, and the APIs are going to change really quickly. In that case you're going to have to be reactive, because whatever design you produce is probably going to change when the software goes from alpha to beta and the APIs change. So there are definitely situations where proactive does not work well, especially when the time to market is limited.

Yeah, I think some of the lessons learned. When can reactive work well? A couple of things we just discussed in the last use case. For one, in open source, like Serena said: collaborative environments definitely help, so you can have that back-and-forth conversation, like with that Explorer feature I was talking about. I think you definitely need that in the reactive state. And then, when teams are agile and ready for features or pages to change quickly. You're making incremental changes: you can make some changes in version one and then iterate on them in the next release or the next cycle. As long as everyone's on board that, yeah, we're going to revisit this page maybe again and again, that's okay. That's part of the reactive side, making small improvements to get something out the door that's usable enough. If everyone's on the same page, reactive can work.

Reactive also helps improve product cohesiveness. We kind of joked about just sticking a Band-Aid on it, but it's actually important.
Look and feel does matter. Looking at some of those screenshots of our products from way back in 2013, the portfolio didn't look cohesive, and that's something we've been trying to focus on at Red Hat: making it look like one cohesive unit. So some minor tweaks you can make while being reactive could be visual changes, look-and-feel things. They're important, and they do make a difference in the end for the full user experience and user flows.

Similarly, the syntax: keeping it consistent, whether it's contextual help, field labels, or tooltips. These are simple things that can be reactive, like, let's try and make these changes before the release goes out the door. Those are things to look for where reactive can be okay. Do we want to think about content strategy only from a reactive standpoint? Definitely not, but it's a way to sneak in some of that work as a quote-unquote afterthought. We should do all of this on a proactive basis as well.

I was going to say, I think there's one other thing. We're coming at it from web applications, right? If you think about things like a SaaS offering or websites, I think reactive comes into play even more, because you have a shorter development cycle. But we're definitely looking at it from the point of view where we have at least a three- or six-month release cycle. When you can make more immediate changes, it's definitely easier, for sure.

So yeah, in summary, we've mentioned this a number of times, but the key to success with design is to be collaborative and iterative at every stage. There's really a place for both proactive and reactive design in every case, I think. You can use both, depending on your lead time and your scenario.
But yeah, in today's landscape, you need both. So, curious if anyone... that's all we got.

I understand what you're saying about the strength of the proactive approach. And from what I'm seeing here, it looks like it's always attached to a product, or a product kicking off the cycle. Is there any room for doing that on its own, just pushing out the boundaries of UX and then applying those lessons learned when a product comes up, or do you always attach it to a product?

I think I understand. So you're asking, is there room, before there is a product, to do some proactive research to understand, like, what even?

Yeah, yeah. Where I come from, everything's product-driven. We're doing all of our UX work to serve a product, and there's never really any time to just do UX on our own and then apply it to a future product. So I'm wondering if you get a chance to do that.

I didn't hear the total end there, but I guess from my perspective, yeah, we are thinking about it in terms of adding specific features to the product, but I'm trying to think. I can see it from a research perspective. I mean, I think there's always room for proactive design. If there are researchers in the room, they'd probably agree that talking to users, any time you have a large amount of lead time, you kind of can't go wrong. It's hard to argue with data. So whenever we're struggling with some of our design decisions, we often turn to the research team and ask them to run usability tests. Even if we've gotten anecdotal feedback here and there on random features, we often ask the research team to take a step back and just go talk to users, whether we give a certain direction or, in some cases, we don't.
They'll run a really open-ended research activity to try to get feedback on what users are actually thinking about, rather than us dictating, "tell me about this specific product or feature or page." It's more, what do you do today? What do you use today? How do you go about your everyday work? What can we learn from them, and bring that data back? But I don't know if that answers your question.

I think the other thing, too, is that at Red Hat we have product managers who set the direction of the product, right? They're out there talking to customers every day, so they have that future vision of where we want to go. Oftentimes, by the time we start working on something, there's already an epic that says, okay, this is what we want to do in the next release, here's what we want to focus on, because they've already done their research based on competitive analysis as well as talking to customers about how they're using the products today. So our level of research and interaction with customers is different; it's more about usability than understanding the products at a higher level. So that helps. Thank you.

So user experience is far more than user interface. And I know Red Hat, as an old-school geek company, belt and suspenders, Linux world, a lot of old techies came from a very command-line, even pre-web focus, and you see that in a lot of the products. I think that's what you were dealing with in a lot of those early interfaces. But the risk of going the whole-UI way is you end up where Microsoft was, which is that you can only do certain use cases through the UI. I was wondering if, on your team, there's an emphasis on being able to span the whole realm of user experience, not just the web UI, but a new user being exposed to a system. Ideally, a web UI is a wonderful way to bring them in.
But to be able to think in terms of bringing them from new user through to power user, or to be able to say, okay, eventually they need to be able to take the workflows they're doing here as a one-off on the web into our automation tool chain, into Ansible or something like that. So there's a whole workflow there. Is that part of the language that you use when you talk about the round trip of user experience design?

I think we're getting there. I can give an example. For OpenShift 4.2, we're going to have a developer perspective, or developer console, right? And it has an associated CLI called odo. What we're talking about is making sure that any improvements we make in the user experience, we also match in the CLI, to provide consistency between the two.

Is the developer going to be able to do that? Like, "I just did this in the UI; I want to see the odo command to do the same thing from the command line." Sorry, are they going to be able to? So not just us, as Red Hat and the wisdom of the world, but putting that power in the hands of the developer to say, okay, I just clicked through; how would I do that from the command line? To be able to teach them that?

Oddly enough, there has been some discussion about that in the last couple of weeks. Whether or not it will happen in the next six months, I don't know. But that is how we talk about it, and there has been some increased discussion around it. Yeah, I'll give you the example of Knative and Serverless and how those all come together in the developer console. There was a face-to-face a couple of weeks ago where some of that started; we began talking about those things. But in my experience, that's brand new. I had not heard those conversations very frequently before, so hopefully that's where we're going. And I don't know.
I don't know if you on the OpenShift side, with oc, have any interaction like that? No, we don't. But we've started to hear a lot more from users when we talk to them: they're using the UI, or coming to the UI, to learn. New users are often trying to figure out, how do I get started? What are these different resources? What are these different workloads? They're there to learn. And then, like you said, they progress into more of a power-user role, and maybe at some point they're only using the command line. But because so many of the users we talked to are using the UI to learn, a lot of these feature requests come up where we want to help teach them through the UI, whether it's contextual help or pointing to different sections. One of the things we've talked about is adding hints: OK, you can only take this flow so far in the UI, and here's how you run the rest in the command line. So providing hints or tips, either for how you do the equivalent task through the command line, or for how you can expand on a flow to do more complex or advanced tasks through the command line, and providing that in the UI, in context, so that people can learn and eventually get to a place where they're doing those advanced things, maybe without the UI. We've started to talk about that, for sure.

So the other question is: I know how the Red Hat product cycle works. It's built on the idea of taking a successful open source product, locking it at a certain compatibility level, and backporting fixes and changes to it. So with RHEL 7, there's an expectation that if something works on RHEL 7, it will work through the entire RHEL 7 life cycle. And then when we go from RHEL 7 to RHEL 8, we can make big changes and break things. And we have the same thing with OpenShift going from three to four.
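The contextual-hint idea described above, where the console shows the command-line equivalent of a UI action, can be sketched as a simple lookup. This is only an illustration of the concept, not actual OpenShift console behavior: the action names and command strings below are hypothetical examples.

```python
# Sketch: map UI actions to equivalent CLI invocations so a console
# could surface a "here's how you'd do this from the command line" tip.
# Action names and command strings are illustrative only.

CLI_HINTS = {
    "deploy-from-git":  "odo create nodejs myapp && odo push",
    "scale-deployment": "oc scale deployment/myapp --replicas=3",
    "view-pod-logs":    "oc logs -f deployment/myapp",
}

def cli_hint(action: str) -> str:
    """Return a contextual CLI hint for a UI action, or a fallback message."""
    command = CLI_HINTS.get(action)
    if command is None:
        return "No command-line equivalent is available for this action."
    return f"Tip: you can also do this from the command line: {command}"

if __name__ == "__main__":
    print(cli_hint("scale-deployment"))
```

The point is the pairing itself: every UI flow that has a CLI counterpart carries that counterpart along as teaching material, so the UI becomes an on-ramp to the command line rather than a dead end.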
And I think we've seen that OpenShift 4 is going to break a few workflows, because there are some really, really different assumptions in there. And when a customer (my customers are banks, so they're fairly risk-averse) is going to make a change like that, it's a huge impact, and they'll probably put it off for a long time. Putting on your Red Hat hats: what requirements are put on you for user experience and UI changes within a single product line, reactive versus proactive? What can you do between OpenShift 4.2 and 4.3, and what would you have to wait for OpenShift 5 to do? What guidelines are you given in that regard?

I don't know. I mean, that's a good question. Yeah. So Colleen and I started working on OpenShift at the same time, and that was on 3.7, right? Yeah. So I'm trying to think of what the big change was between 3.7 and 4.0. Really, the 4.0 UI is Tectonic; it came out of the Tectonic CoreOS acquisition, right? That's the only major release we've been through, so I don't really know what distinguishes a major release. The difference between minor dot releases is more time-based, as far as I've seen. It's not that you're only allowed to do certain things; it's really just how much you can get in within a three-month or six-month cycle.

I'll give you an example that might frame what I'm trying to ask. In IPA, in IdM, we have a tab at the top called Network Services, and underneath there's DNS and there's automount. I know this intimately because I worked on it, and it was a sticking point. If you take it from a UX perspective, DNS is a dominant use case; I might deploy IPA solely to get DNS. The number of people who know how to do automount, and would do it through the UI, is probably zero. So we've taken a really important UI element and hidden it behind this idea of Network Services.
And this is something that should be changed. I filed the bug, and the bug was overturned, because the UI developer didn't talk to customers and didn't see it the same way; it was a functional-fitness argument. But I could see his point, which is that changing something like this breaks things, because there's an expectation that it's going to be there. He actually had a really good point: you shouldn't break that expectation, except... and I'm trying to figure out whether there are guidelines like that.

So it's like a regression, then. Essentially, if you removed it, you would no longer be able to satisfy that use case in the UI, and it might be tagged as a regression. I mean, the UI testing isn't automated, so it's going to be a human looking at it, and they could still do the task, right? If there were now a DNS tab at the top, it wouldn't break them, from that perspective; and this is a human judgment, obviously. Whereas if you thought, oh, I'm doing DNS, I need to go into Network Services and see DNS there, and it's not there, that would break them. And thinking about automount: OK, we have to do something with automount, and if we buried automount, that would break it. But just moving DNS up a level would not be, from an automation perspective, a non-backwards-compatible change. So do we think in these terms? Do we have restrictions or requirements along these lines? Do we talk about UI changes differently for a minor version versus a major version? I know it's a hard question; I don't know that I'm framing it that well.

I'm not sure. I think it's more that we just don't have much experience going through major versions within the same product. So for example, going from 3.x to 4: not only did we move to Tectonic, but in addition, a lot of the developer workflows are no longer available in 4.0 or 4.1. So that's a perfect reason for a major version bump.
And that's why, in 4.2, we're starting to introduce the developer console. I don't know if there's anything written down about that, though. Interesting question. Thanks. Thank you. Thanks, everybody.