All right, I think we can get started. Looks like we have quite a few folks interested in the topic. I'm going to be talking about how we could make Drupal a better out-of-the-box product. We're going to focus on Drupal core, some processes, and some learnings from the last 12 months about how we've approached things a bit differently than in the past.

If you don't know me, I'm Lauri Eskola. I'm a staff software engineer on Acquia's Drupal Acceleration Team. I'm also a Drupal core product manager — I actually changed from framework manager to product manager, so I moved from a more engineering-focused role towards thinking about what kind of product Drupal should be and how we can make it better.

Earlier this year, when I started getting interested in improving Drupal as a product, I did some research on how Drupal is perceived in the market. I went to look at social media like Reddit, TikTok, and others. One of the first things I found was this comment on Reddit where someone described their experience with Drupal. This is a quote: "Want to suffer the pain of ever-changing, half-done features? Go with Drupal. I've been doing it for over 10 years, and the number of times changes are made in core for the sake of a feeling that it will be more standards-compliant is crazy. I'm not even kidding. If I can ever do an update without it breaking seemingly everything, I'll be happy. Don't get me started on contrib. The only reason I'm still here is because of dollars."

That's not a good perception to have, and obviously it's something we want to work on improving. So besides looking at social media, we talked to lots of users and figured out what specific things people would like to see improved.
That's how we created something you might have seen on Drupal.org before: a roadmap for 10.2, 10.3, and beyond, which also describes our strategy for the next couple of years. We want to focus on three areas. First, reducing the time it takes for site builders to become proficient with Drupal — really honing in on the idea of the ambitious site builder being able to get started quickly and achieve great things by themselves. Second, we realized something really important: many site builders are not the end users of Drupal; they're accountable to their editors. What matters to site builders is that their editors are satisfied with the tool they provide for them. So we want to give site builders tools that empower them to create the experience their editors want to have. Third, we acknowledge that maintaining Drupal applications can be quite costly — there are lots of updates, and a lot of that requires manual work — so we want to take steps to reduce the cost of keeping Drupal applications secure.

All right, so how does change happen in Drupal core? You may have seen this before: this is the ideas queue, the product management process for Drupal core. How it works right now is that ideas go into the ideas queue, where we prototype them and create plans for them. After we've done the validation of those ideas, product managers approve them to become features in Drupal core, and then they get built in the Drupal core issue queue. Why are we doing that? Why do we have a process to manage the product?
I have here a picture of something very complex — an airliner cockpit — and you can think about how that relates to what we do every day. It's easy to think that adding more features adds more value: a clear correlation where we add one feature and it adds value. But that's not always true. What if one of those features is only used by one of our users, but we collectively maintain it just for that one user? It's not very smart to add features that only make sense for a single user. And if we created single-user features like that for every user, you'd end up with something like this cockpit: an extreme amount of complexity, where just finding the buttons you need is really challenging because there's an overwhelming number of them.

So there's a cost to adding features to core. It adds complexity to the product, and it reduces our velocity in delivering new features because we have to maintain that functionality. We have to be very careful about what we add to Drupal core; we don't want to add features that satisfy only a single user. That's why we have product managers. We are responsible for understanding how our users work and delivering the features that are most important for them — solving problems that large numbers of our users have. And that's why we use the ideas queue to facilitate that process.

All right, so there are two ways of getting a feature in: from the Drupal core issue queue, or from the ideas queue. Why would you want to go through the ideas queue instead of directly opening a feature request against Drupal core? The benefit of the ideas queue, from a contributor's perspective, is that it provides clear sign-off points for that contributor.
You can get feedback on your idea before you've actually done the work. If you have something in the Drupal core issue queue, you usually get a review when it reaches the RTBC queue, which means you already have production-level code — and that can take a lot of time to build. Only then do you start getting feedback on your feature request. The idea of the ideas queue is that you can propose an idea and have it validated before you've even built it.

It also gives us the opportunity to iterate on ideas before the production-quality code exists. If we get feedback on an idea, we can iterate and improve it further without writing production-quality code; we just ideate on the idea itself. It also helps involve the right stakeholders at the right time. In the ideas queue, reviews come mainly from the UX designers and the product managers, and you're not going to get feedback on the code itself. If your prototype includes code, it's very clear in the ideas queue that we should not be reviewing the code, because it's only there to demonstrate the functionality for the product managers and UX designers.

But we have some challenges with the current approach — not everything is working as we hoped when this was first introduced. Probably the biggest challenge with this process is that product managers still tend to get involved very late, so there is often a lot of production-quality code written by the time they do. The goal of the process was to lower the barrier at which you get feedback, so that you haven't already spent a lot of time building production-quality code.
Maybe you could have built a prototype in one day, demonstrated it to the product manager, and gotten feedback on that. Instead, sometimes we've had people work for months on production-quality code before getting feedback. The problem is that it makes it really hard for product managers to say no at that point, and it makes any iteration really challenging, because now you are managing production-quality code, even if it's in the ideas queue.

Another challenge is that approved ideas tend to be greenfield ideas — adding new features rather than improving what we have. This is somewhat anecdotal, but I also looked at the issues that have gone through the process: there were 20, and all of them were greenfield. If I look at the issues in the ideas queue waiting for review, a proportion of them are focused on improving existing UIs — actually, I was surprised how high that proportion was. So something is off in the process: we're not using it for improving existing things, we're using it mostly for greenfield ideas.

We've also had challenges with ideas being very large. We tend to only approve ideas that are multi-year projects, but it doesn't have to be that way — ideas could be smaller projects of a couple of months. And in the end, the main challenge is that we don't say no often to new ideas. We just leave them there, and when there's enough pressure, then maybe we say yes.

Some ideas for how we could improve this: make adjustments to the process so that we encourage more iteration in the early stages of an issue, and make sure the product managers have enough information to provide that feedback there.
I'm going to walk through a project we did earlier this year on the Field UI: how we used a similar process, and how some of those learnings could improve our existing process — using the ideas queue process to follow through on existing work rather than just building greenfield projects. Something else we should do is continue the discussion about the ideas queue and make the process more transparent, so that contributors interested in user experience or product management can get involved.

So, like I said, I'm going to walk through the process in the context of the Drupal 10.1 and 10.2 Field UI improvements. This is a case study of what the process looked like from that perspective. We did not necessarily follow everything I'm presenting here, because the process I'm now talking about did not exist when we started — it was created as the next step from this work — but I'm drawing connections to what we learned.

Here's a diagram of what product management looks like at a large scale. It's something I found on the internet that resembles what we're doing, and it's the ideal state, where all of your ideas map to a strategic vision. Everything starts from your strategic vision — for us, empowering ambitious site builders — so everything we do should somehow help that persona. Then you have the discovery team, responsible for validating whether the ideas are valuable and feasible. That's done through user research: discussions and different kinds of studies that involve your users. Once the discovery team decides something is feasible and valuable for us to work on, engineering works on it. That's how, from a product management perspective, we think this should look.
Now I'm going to convert this into what it would look like from the perspective of a single issue — how would a single issue go through this process? There's a model called the double diamond, introduced by the British Design Council in 2005, which maps pretty closely to the ideas queue process we have. You can only see one diamond here, and that's kind of our problem: we are very focused on the solution stage of the process. We explore solutions, then select solutions based on what works and what we can implement, and then we build those. What we know at the end of that stage in the ideas queue is that we have a validated solution — something we know we can build, something that could potentially have some impact.

But the real process contains a step before that, which the product managers and UX designers are responsible for: discovery. In the discovery stage, you first try to understand the pool of problems your users are having, and then you do more targeted research to validate which of those problems are worth solving. Once you've done that, you have validated problems to work on. The idea is that you don't even start prototyping solutions for problems that aren't worth solving — there's a step before prototyping. This has kind of been done, but very implicitly. Sometimes we start prototyping before we've even agreed on what problem we're trying to solve and whether it's a problem worth solving.

So, taking this back to Drupal and the ideas queue process, let's take a look at what this can look like in the context of Drupal.
On the left, we've now added a new step for discovery, where you would usually have mostly product management and user experience focused folks involved. That's not to say other folks couldn't be involved, but those are the skills this stage calls for. Once we're done with that stage, we move into prototyping, and that's when we start doing actual engineering work.

All right, so in the context of the Field UI, we focused on reducing the time it takes for site builders to become proficient with Drupal. That was the strategic outcome we wanted to achieve with this work. We started by doing some discovery in this area. This is usually very open-ended research, where you just see what problems exist in the world. And this is quite specific to product managers: if you are a user of Drupal yourself, you probably don't even have this problem, because you have some problems of your own that you can use as your guess at what the problems could be. But when you come from the product management perspective and don't have as close a sense of what it's like to use the product, you go talk to your users and ask very broad questions: How has it been to use Drupal for the last six years? Can you tell me about the last project where you used Drupal? Then they start talking about the things they care about, and you start picking up ideas about what kinds of problems they had in that project.

The great thing about Drupal being open source is that we have plenty of data online about problems people are having. We have the issue queues where people report issues — that's one source of information. There's also lots of information on social media, like what we used for discovery. That's a great source for discovering potential things to research.
In the case of the Field UI, we started with the assumption that the current Field UI is perceived as hard to use. How did we come up with this? It started from looking at the issue queue, and we looked at some of the past usability testing we've done. We noticed that historically there have been a lot of usability issues in the Field UI, but we hadn't actually delivered many solutions to those problems. So we had a sense that maybe this was an area we could potentially focus on.

There were also other sources of data. I had some interviews that were not related to this, not even intended as discovery, where someone told me a story about being in a workshop with their customer, using a competing proprietary SaaS CMS to build the content model together with that customer. The reason they were doing that, even though they were a company focused on Drupal, was that the SaaS tool was so powerful they could build out the content model in a half-day workshop, and customers tended to be very convinced by that. They even told me about cases where they went with the SaaS solution because the client was so convinced by seeing it themselves in the workshop. That's not what we want to happen; we want people to be able to have the same experience with Drupal.

There are other reasons to focus on the Field UI too: it's a very strategic area for Drupal, because content modeling is one of the key reasons people choose Drupal. So that's another argument for focusing on it. That's how we discovered it. Okay — the Field UI could be something that's hard to use; we had some data suggesting that, but we didn't know what the specific problems were, or whether it was even true.
That's when you start doing proper, targeted user research, which helps you understand who is actually impacted by the problem and why the problem exists. What does that mean? "Why does the problem exist" is the specific diagnosis of the problem: what kinds of things do your users run into with the Field UI? What contributes to it being hard to use? It could also be that we discover the problems are not that bad and people are actually quite satisfied — then the result of the research would be that we were wrong at the start and probably should not work on this. In this case, though, we got lots of data showing that there were aspects that made it harder to use than it should be.

Something to keep in mind at this stage is that you are dealing with lots of opinions, so it's good to remain mindful that your own opinions might influence the findings. We ended up going back and re-listening to a lot of the sessions we did, because you might find new things when you re-listen. If you just take notes based on what you first heard, you usually only capture the things you care most about; when you re-listen, you can see beyond that.

Common methods for conducting user research include user interviews — which could happen at DrupalCon, just discussing with people in the hallway track, something I know a lot of people are doing already. It could be focus groups, or BoFs — a great way to collect data from a large group of people. Or it could be usability testing, where you have a more well-defined plan to walk your users through a certain process, which gives you data about usability issues.
Usually you have a relatively small sample size for this, which might be surprising: after you've talked to five or six people about the same problem, you start realizing that everyone is talking about the same things. Doing more research then becomes less and less valuable, because the additional things you'd learn are increasingly niche. What you want to focus on are the problems a lot of people are running into; if it's one out of ten people, it might not be the most important problem for you to solve. We at least started seeing lots of repetition after just a couple of usability testing sessions.

Some tips for conducting user research. Try to be as open-ended with your questions as you can, and avoid leading the user to a certain conclusion. Instead of asking "Is the Field UI hard to use?" — which doesn't give them much room to explain their thoughts — ask something like "How do you find using the Field UI? How has your experience been?" Then they can reach their own conclusions about what it's like to use the Field UI.

Something very important to keep in mind is that we should not believe what people say they will do in the future, because the best predictor of your users' future behavior is their past behavior, not what they say they will do. So instead of asking "Would you use this product?", ask whether they have run into the problems your product is trying to solve, or whether they have already tried to solve that problem themselves. It's also important to remember that behavior may depend on the context they're coming from. So ask lots of follow-up questions — "Why do you think this is a problem?" — and they may be able to expand on why something is a problem and what context they're coming from.
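The diminishing returns from small sample sizes described above match the classic problem-discovery model from usability research (Nielsen and Landauer). A minimal sketch, assuming each usability problem is spotted by any single test user with probability L — their often-cited average is around 0.31, though real values vary by product and task:

```python
# Expected share of usability problems found after n test sessions,
# assuming each problem is found by a single user with probability L.
# L = 0.31 is the commonly cited average from Nielsen & Landauer's data.

def problems_found(n: int, L: float = 0.31) -> float:
    """Proportion of problems expected to be discovered after n users."""
    return 1 - (1 - L) ** n

for n in range(1, 7):
    print(f"{n} users: {problems_found(n):.0%} of problems found")
```

With these assumptions, five to six users already surface the large majority of problems, which is why additional sessions mostly repeat what you've already heard.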
Even if you're in a situation where your user doesn't seem particularly interested in the problem you're asking about, keep your ears open and continue listening, because that's the best opportunity to make those little discoveries. That's also how we discovered the page building experience problem. I was doing research on something completely unrelated — another project people were particularly interested in — and it turned out that in the majority of those interviews, people started talking about the issues they have with page building. That's how we realized, okay, maybe that's something where we should do more focused research.

In the context of the Field UI, we conducted a usability study focused on content modeling. Our goal was to recruit six users: three users new to Drupal, with less than one year of experience, and three users with extensive Drupal experience — over 10 years, in some cases a lot more. Some of the problems we discovered were quite surprising to us; we weren't anticipating them at all. Things like reusing fields — we didn't even think there could be problems with that. It was very refreshing to see how you can discover very surprising problems when you do usability testing like this.

The next stage: we've now done qualitative research to understand why this problem happens and what kinds of people are having it. The next stage is to combine that with quantitative data, which helps put the findings in perspective. Qualitative data is not very good at answering how many users might be having a problem, but it tells you the why and how and who. With quantitative data, you can see that these patterns are happening with this number of people.
So that's something that helps you put things in perspective. This is a really challenging stage for us in Drupal, because we have very limited access to analytics, which means we mostly have to use surveys. And even surveys can be challenging, because it can be really hard to reach the users you want to reach. You want to go beyond the inner circle of Drupal when doing this type of research, because people in the inner circle tend to have a different perspective on these things. Reaching editors in particular can be very difficult, because they don't necessarily consider themselves Drupal users, even if they use Drupal as their daily tool. We've been able to get fairly good results by reaching out to agencies or site builders to send surveys to their editors, but obviously that requires more work and more thought behind the research.

Some of the quantitative data we used came from card sorting. We had a hypothesis, for example, that the way the field types are grouped is not working well. So we did quantitative research to learn, from a broader pool of users, how they think about field types, and we discovered there were some changes we needed to make to the field type groupings.

All right, so now we've done the research and essentially validated our problem. (It could also be that we discovered this is not a problem we want to work on — at this point it could go either way.) What we do next is write a brief: what insights did we make during the research, who is suffering from the problem we researched, and what specific problems did we identify that we could solve to achieve the outcome we were hoping for? Any data we collected during the research gets published as part of this brief, along with the ideas we have for solving the problem.
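The card-sorting analysis mentioned above can be done with simple tallying: for each card (field type), count which participant-chosen category it most often landed in, and how strong the agreement is. This is a hypothetical sketch; the card and category names are invented for illustration, not the actual study data:

```python
from collections import Counter, defaultdict

# Each dict is one participant's open card sort: card -> chosen category.
card_sorts = [
    {"Text (plain)": "Text", "Number (integer)": "Numbers", "Email": "Text"},
    {"Text (plain)": "Text", "Number (integer)": "Numbers", "Email": "Contact"},
    {"Text (plain)": "Basic", "Number (integer)": "Numbers", "Email": "Contact"},
]

# Tally, per card, how many participants placed it in each category.
placements = defaultdict(Counter)
for participant in card_sorts:
    for card, category in participant.items():
        placements[card][category] += 1

# Report each card's most common category and the level of agreement.
for card, counts in placements.items():
    category, votes = counts.most_common(1)[0]
    agreement = votes / sum(counts.values())
    print(f"{card}: '{category}' ({agreement:.0%} agreement)")
```

Low-agreement cards are the ones whose grouping in the UI is likely to confuse users.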
Here I want to emphasize that ideas are our hypotheses for outputs that could lead to the desired outcome. We don't have to know for a fact yet that these outputs will achieve that outcome, so they can be quite opinionated. You don't have to put too much thought into researching them: you just write down your ideas, and later on you validate whether they are actually good ideas.

Some examples of the findings we made: users struggled with understanding when they could reuse a field — whether they should reuse a field or create a new one, and whether a field available for reuse was compatible with the content model they wanted to build. Some of the common field types were hard to find; for example, the text field type was at the bottom of a very long list, even though it's a very common field type. So that was a challenge we identified — maybe text should be easier to find. And we found some issues that were just pure bugs, where after going through certain steps, some configurations were not what users would expect. That was great too, in the sense that we weren't only finding usability issues; we were also finding bugs.

Then we went into the prototyping phase, and that's when we start validating whether our hypothesis for the solution is right. The goal of the prototyping stage is to spend as little time as possible building the prototype. Any code you write at this stage, you should be willing to throw away — let's put it that way: any code you write at this point should be so bad that you want to throw it away immediately. And that's how it was for us. It's actually a lot of fun, because you focus on what's in the UI — what the user sees — instead of what's in the code.
If you run into problems at the code level, maybe document the kinds of problems you ran into, but don't focus on solving them now. It's fine if you don't want to do any coding at this point — if building a prototype in Figma is easier for you, go for it. It doesn't have to be in code at all. I've just noticed that a lot of folks in the Drupal contribution space tend to prefer HTML, CSS, and PHP kinds of prototypes. One way or another, the point is that you don't focus on building good-quality code at this stage. It's also good to keep in mind that we should not rush to the next step, because it's so much cheaper to iterate now. If there's something we believe we can still improve, it's worthwhile to do another iteration on your prototypes at this stage.

Once you've built a prototype, you want to validate it. You show the prototypes to five or six users — again, a similar story, where after a certain point you start seeing patterns and aren't finding lots of new things anymore. Make sure you're solving the problem you're trying to solve, but also focus on what new problems you discovered that you could still solve by iterating on these prototypes. You can use usability metrics to put your progress in perspective: things like the success rate, the time a task requires, the error rate, or the user's subjective satisfaction. You can collect those at the different steps of your iterations to help put the progress you're making in perspective — so you know you're making progress.

And when you're ready to build, you can use the prototype as your blueprint for the actual implementation.
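The usability metrics mentioned above — success rate, task time, error rate, subjective satisfaction — can be tracked across iterations with very little machinery. A minimal sketch; the session data here is invented for illustration, not real study results:

```python
# Average per-session measurements into one row of usability metrics.
def summarize(sessions):
    n = len(sessions)
    return {
        "success_rate": sum(s["completed"] for s in sessions) / n,
        "avg_time_s": sum(s["time_s"] for s in sessions) / n,
        "avg_errors": sum(s["errors"] for s in sessions) / n,
        "avg_satisfaction": sum(s["satisfaction"] for s in sessions) / n,
    }

# Hypothetical sessions from an early and a later prototype iteration.
iteration_1 = [
    {"completed": True, "time_s": 240, "errors": 3, "satisfaction": 2},
    {"completed": False, "time_s": 420, "errors": 6, "satisfaction": 1},
    {"completed": True, "time_s": 300, "errors": 2, "satisfaction": 3},
]
iteration_4 = [
    {"completed": True, "time_s": 150, "errors": 1, "satisfaction": 4},
    {"completed": True, "time_s": 180, "errors": 0, "satisfaction": 5},
    {"completed": True, "time_s": 160, "errors": 1, "satisfaction": 4},
]

for name, data in [("iteration 1", iteration_1), ("iteration 4", iteration_4)]:
    print(name, summarize(data))
```

Comparing the rows across iterations shows whether each prototype change actually moved the needle.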
Do not use the code you wrote — what you see in the UI is your blueprint. This helps you avoid writing a 10-page document about what you want to build, because you have a very clear blueprint in the form of the prototype. What's important at this stage is that we should not be too attached to the solution. We should not feel that we need to deliver a particular solution; we need to be obsessed with the problem — fall in love with the problem instead of the solution. If the solution you prototyped doesn't work, you need to be willing to throw it away and start again with a new approach to solving that same problem. It's so easy to get stuck in your head with a certain solution — "I want our users to have this" — but if it doesn't solve the problem you hoped it would, it's not worth sticking with. It's much easier to make the business case, and much more valuable, to work on issues where we have confidence that we're solving a real problem.

In the context of the Field UI, building the prototype for the first round of validation took around one week. It then took around six months of engineering, with a lot of the work happening in the core queue, to turn it into production-quality code. That helps put it in perspective: six months of work, and you should be able to prototype it in a week. Obviously it depends on what you're working on — some things are easier to prototype, some are harder — but in this case, we were able to make pretty fast progress with the informal prototypes. The prototype itself was a bunch of hacks on the Field UI, and it was probably the worst code we've ever written.
There were thousands of lines of commented-out code in some of the classes, and that actually made things quite difficult, because we were struggling with some of the early iterations — we did not get very good results from the first ones. We had to make pretty radical changes to the prototypes, and by the fourth round it was a nightmare to keep the UI somewhat functional. After the fourth round, we probably would have had to rewrite it. But those iterations were pretty quick to build, even when we needed to make big changes — maybe one or two days per iteration. That was pretty nice; it kept the pressure low, because the iterations were so quick to implement.

So now that we've finally found a solution we can build for that problem, we get into the actual development stage. Now we're in the core queue, and we involve subsystem maintainers, release managers, and framework managers in our solutions — and now is the time to do all the bikeshedding we need.

All right, so one of the challenges we have with the core issue queue is that we tend to be very focused on the functional and reliability aspects of our MVPs, and then maybe trickle a little bit of usability on top — just the bare minimum required to pass some kind of gate. That's how it has worked in a lot of the initiatives I've personally worked on. Instead, we should build MVPs that have a very limited set of functionality — maybe they solve one specific problem — but that we know are usable, and maybe even have something users find desirable, something they really enjoy using. There are many benefits to this, even though it takes extra effort to build MVPs this way.
Oftentimes you actually have to invest more to deliver in MVPs than it would take to build the whole product in one go. But the reason we do it is that it reduces the risk of your project. Once you've shipped something, something is out there; you are no longer in that place where you have two years of work behind you but have delivered nothing. Without an MVP, you might realize that maybe you should be working on something else, but you can't change your focus because you haven't delivered on your previous project. Then you're stuck with that project. But if you delivered an MVP, even if it did not solve the problem fully, you delivered some value, and you can react to those changed priorities now. So it's kind of our way of having the freedom to move on to other projects once we've shipped the MVP. Another great benefit is that you start getting actual user feedback once you deliver an MVP. It shifts your focus from just building something to actually trying to solve the problems that your users are raising. And that's probably the biggest reason why you should deliver an MVP: just that mindset change that happens after you've delivered something. So keep in mind that building doesn't end when you've delivered your first solution. Remain focused on the problem you're trying to solve. Again, it is easy to think that, for example, in the case of the field UI, we wanted to deliver a better field UI and that's what we did. But if there are still specific problems that you could solve, it's worth continuing, because you've established a process and you know that context very well. If you work on a very old subsystem, it is really hard to start making changes there, because no one knows what is going on in it. If you've now worked a year on a subsystem, people know roughly how it works. It's in their heads.
So you're in the best position to continue working on it right now, because people have it fresh in their minds and it's, you know, alive in a way. People can give you reviews, and you're not blocked. Something else to keep in mind is that we don't always have to solve everything 100%. We need to think about what is sufficient, because at a certain point the problems can become really, really hard, and then it's worthwhile asking whether those solutions are going to be feasible. So we need to evaluate whether the solutions that we could deliver for those problems are feasible, because ideally we would mostly be working on problems that are both valuable for us to solve and feasible. That's an important consideration: whether it is feasible for you to actually build something for that problem. If it's not, then give some serious thought to whether you want to work on it at all. In the context of field UI, we kept a very tight scope on issues, tried to split the work into as many small issues as we could, and tried to stay pragmatic on those issues to help make great progress. That meant some issues had a handful of follow-ups; some had six or seven follow-ups that we needed to make, which is not always ideal. But in this case it worked well, because on some of those issues we were quite early in the process, so we had a lot of time before the release to fix those things. And if worst came to worst, we could always have reverted the original issue later if those follow-ups didn't make it. Something else we had to do, because we split the work into so many different steps, was design work on the individual steps of the iterations. So we needed to say, okay, this is what this is going to look like after step two, and this is what this is going to look like after step four. And we ended up even doing user testing on those steps.
But that was, again, because we believed in the iterative process we were in, that that's the best way for us to deliver it. That's how we de-risk ourselves. That's how we continue getting feedback from our users. And in particular, I think in the context of field UI, it was helpful to start getting feedback from contributed module maintainers early on. That's an added benefit: once you actually get something into Drupal core, contributed module maintainers become aware of what is happening, and then you start getting feedback from them. So that's one form of user that we have to satisfy. So, some of the improvements that we shipped. This was the first one that we landed, fixing the field reuse problem. This used to be just a select list with the field name, and it was not even actually the field name, I think it was just the machine name of the field. Based on the information that we got from the user research, we knew what types of information users were looking for when they wanted to reuse a field, so we created a table that displayed that information for them before they even had to reuse the field. Some of the things were quite small in scope. One of the issues we discovered was with the default value widget: because it's very large, people thought it was a preview of the field, and somehow they thought they needed to fill it in when creating a field, which added some friction to the process. So what we did was hide the default value field behind a checkbox, and that helped significantly, especially in the use case where people are creating new fields, because then they didn't see that default value widget at all.
And the final improvement was to group the fields differently to match what we discovered in the card sort, but also to display information that users needed to consider when selecting between different field types. There's still a lot of work in this area: we need to review some of the description texts and maybe make the process smoother so that, for example, when you have a lot of contributed modules, all of this looks good in those cases as well. So if you're interested, tomorrow is a good time to help with these things. All right, so this all probably sounds like a lot of work. The reason we do this is that each hour we spend at the beginning of this process trickles down into saved hours later on. So while we are adding work to the discovery stage, if we can stop ourselves early from working on ideas that are not worth pursuing or that don't solve the problem, it can save tens or even hundreds of hours of contributor time later in the process. And that's the whole idea of the IDSQ: to give that feedback to contributors and avoid them wasting time by building fully fledged features that won't be accepted. All right, so that's pretty much it. I'm wondering if anyone has any feedback or questions, or we can continue the discussion after the event in Drupal Slack, and there are also usability meetings on Fridays where folks discuss things like this. Those are great places to get involved.