So, testing. For Plan in particular, people have been wanting requirements management and quality management. Requirements management is the really old-school, boring thing, and we're obviously not going to build that into GitLab as-is, but we're going to create something close enough and call it requirements management. That's the strategy, at least for the MVC. For example, GitLab already has issues, and we already have epics; we're going to have epics of epics, and issues of issues. So you essentially have a way to have multiple levels, and that is requirements, right? We're going to brand it requirements management. I'm going to talk to John, we'll figure something out, and maybe we'll find something to make it a little bit cooler. But that's what we're going to do. So I wanted to figure out the equivalent for quality management, and I've been scratching my head about it for a long time. My limited knowledge of quality management, or test management, is that it's very similar to requirements management in that you have traceability, the concept of a traceability matrix: you have requirements, then requirements that satisfy requirements, or a bunch of test cases that satisfy a requirement. And we already have that, the multiple levels I just described. So I'm definitely going to call that requirements management, and I can probably also call it test management or quality management. But what's the next step beyond that? I have a bunch of links in the doc that's in the agenda, and Mech has it open. Jason, it's in the invite; I've already put a bunch of links there.
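The multi-level linking idea described here, where issues serve as either requirements or test cases and levels of nesting give you a traceability matrix, can be sketched with a toy in-memory model. This is purely illustrative; it is not GitLab's actual data model, and the field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A node in the hierarchy: an epic, issue, or sub-issue in GitLab terms."""
    title: str
    kind: str                       # "requirement" or "test_case"
    children: list = field(default_factory=list)

def traceability_matrix(item, path=()):
    """Flatten the hierarchy into (requirement path, test case) rows."""
    rows = []
    for child in item.children:
        if child.kind == "test_case":
            rows.append((" / ".join(path + (item.title,)), child.title))
        else:
            rows.extend(traceability_matrix(child, path + (item.title,)))
    return rows

login = Item("Login", "requirement", [
    Item("2FA", "requirement", [Item("test_2fa_sms", "test_case")]),
    Item("test_login_ok", "test_case"),
])
# Each row says which requirement a test case satisfies.
print(traceability_matrix(login))
# → [('Login / 2FA', 'test_2fa_sms'), ('Login', 'test_login_ok')]
```

Any row whose requirement path has no test case would be an uncovered requirement, which is what a traceability matrix exists to surface.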
And I wanted to get your ideas specifically, Jason, because I know you're doing things like usability testing and accessibility testing, and all of that makes sense in the CI world. So if you have no questions or comments on everything I just said, maybe you can explain your vision of testing in the Verify stage, and we'll see if we can hijack any of that, or just call it quality management. I just wanted to get your vision there. Yeah, with a bit of creativity on the naming we'll be fine, but I don't know yet. Do you have the doc open? So I'm looking at usability testing and accessibility testing. One specific question I had: taking usability first, what does that even mean? Is that literally UX-type usability testing? Yeah, like a designer. Design feedback, that sort of thing. Right. And do you see that as part of CI, like there's a review app and you can go in, start clicking around, and measure it? Yeah, it's more tied to the review app side of things than the CI pipeline side, but yeah. Okay. You could imagine a feature where you go to a review app and you're able to provide feedback, some fields or something. Right. It could be external users too, but it's a way to collect usability feedback. You probably saw those category epics; they're all just TBD, TBD, TBD. Yeah, I'm looking at them. That makes sense. And is it the same vein for accessibility testing? No, accessibility is different. That one is more tied to compliance. Well, there are two major reasons for it.
One is that it's something people seem to care more about these days: making sure their sites are accessible, in terms of doing the right thing. Right. And making sure that you're serving underserved users. Sure. So that's one aspect of why people are prioritizing it. And then there's also, especially for the US federal government and maybe other countries' governmental rules: if you want federal grant money in the US, you need to be building an accessible website. So there are accessibility suites out there that make sure you don't have color contrast problems, and that you have features for blind users or users with other limitations. So is that at the code level, like security, something you're going to hook into GitLab, or is that separate, when you say there are test suites? It's more like front-end testing, so more like Selenium than security testing. Okay, but it is automated? It wouldn't be like. It would be automated, yeah. Okay. I think we have a few libraries already that we can reuse, like AATT from PayPal, and a bunch of other libraries where you can just run checks on the styles and UI components; there are a bunch of rules it checks against. There's more and more stuff out there for that. Right, that makes sense. So when I say "security," I mean in the sense that it's part of something you're going to bake into the product, like you're going to use some libraries. Yeah. Whereas the usability piece is giving users a UI to interact with. Yeah, though we are potentially providing a way to collect that feedback, and then maybe also have controls on it. Okay.
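The color-contrast rule mentioned above is a good example of what such accessibility suites automate. Here is the WCAG 2.x contrast calculation (the 4.5:1 threshold is WCAG's AA level for normal text); this is a standalone sketch of the formula, not taken from any of the libraries named in the discussion:

```python
def _linear(channel):
    """Convert an sRGB channel (0-255) to linear light, per WCAG 2.x."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    """Relative luminance of a 6-digit hex color like 'ff8800'."""
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG AA wants >= 4.5:1 for text."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

print(round(contrast_ratio("000000", "ffffff"), 1))  # black on white → 21.0
```

Mid-gray text like `#777777` on white comes out just under 4.5:1, which is exactly the kind of near-miss these suites catch and humans don't.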
You know how, if you go to GitLab.com logged in as an operations person, you can see that special toolbar at the top that shows you things? We can imagine building something like that, putting the web application inside another feedback control or something. I don't know exactly, but. That makes sense. Okay, I'll close those two out. And then I saw you closed the UAT epic, which as I understand it stands for user acceptance testing. Yeah, that's literally the business feature itself: business sign-off, where you get this person, bring them down, make them sit in front of the computer, and ask, is it good enough for you? Yeah, exactly. And we removed that because it's a manual thing, for one, and it also doesn't really fit with the GitLab way of working, or what our values are, having an executive come down and manually sign off. Right. That's not part of your delivery pipeline. If somebody wants to do that, they can, but we're not going to build it as a first-class feature. And a lot of it is honestly covered by the usability testing. Right, I imagine a lot of the features you'd build there overlap. Yeah, but that said, it is a different thing. It's just not a thing we really believe in as a company. Okay, makes sense. So then, Mech, when we say quality management, last time I checked in with you, that's yet a different thing from what Jason's been saying for the last ten minutes. It's more automated. Is it closer to the feature itself, or how do you think about it? So I think we need test case management functionality; it could be a valuable thing.
I think even though we wouldn't use it ourselves as test automation engineers, because we don't want any manual tests at all, we could use some of it to stub out exploratory test flows and dogfood it on our own. But also, I think there are a lot of customers still on waterfall, moving into agile, who have a quality suite they need to run manually. This would help a lot of customers and add a lot of value. So that's the first thing I'd help both of you look into, and then how we can link test cases to test plans and link those to requirements. That's the crux of it in many ways; they all have to go together: product requirements, test requirements, ship, go to production, boom. So that's the one on quality, the biggest one actually. And I have a demo set up for both of you to take a look at. I was involved with a Ruby conference organizer and they're using this tool to plan their stuff, and I think the table view is really awesome. It would be awesome if we could just build that into GitLab. You have something on your computer you want to show right now? One second, I'm in Notion. Let me log back in, I think it logged me out. Okay, let me prepare it. The other things I want to talk about are items I think are blockers for integrating a bunch of stuff in-house right now. One is Omnibus release management testing: Marin is working on the delivery team, they have their own pipelines, they have Slack bots for releases, and they want to tie that back to our projects; they want to see a multi-project pipeline. The other thing is we want to do cross-browser testing in the Selenium space, channeling our inner Sid: we should be ambitious.
So even if there's already Sauce Labs or BrowserStack or a visual diff engine out there, we shouldn't be afraid to bake it into our product to make it a one-stop shop. Yeah. So that's the other avenue I'd like to help both of you explore. That makes sense. So let me talk about that while you set that up, Mech. The way I see the test case management thing: when I look at the doc I shared, somebody already created a GitLab Quality Center proposal, and that's exactly what I think it would be. Regis, an old PM from way back, created this. But I don't think it's something we'd do as a high priority in the sense of making individual test case objects, or maybe, I don't know. I see it as similar to requirements management: we already have issues and epics, and we would somehow rebrand or repackage them in the UI as test cases. And since you can already link issues together, that's really natural, right? Whether an issue serves as a requirement or as a test case, it can link to others. So that makes sense, and that's why I wanted to bring this conversation back to that. Actually, before I go to the next one, let me show you something Sid suggested as an idea. If you click on "requirements dashboard concept from Sid," the third bullet point in the doc, Sid proposes something, and I wanted to get feedback from both Jason and Mech: does it make sense to you? Would it even be useful? It seems like a cool idea, but I don't even know how useful it would be. It's the third link in the doc. Jason, what do you think? You're unmuted, so go ahead. I'm just trying to understand what I'm looking at. It actually makes sense. I think CloudBees is also trying to do something like this.
I think it's called DevOptics or something, where they pull data from everywhere and display it in an easy-to-ingest manner. But how is this used? Is it used operationally, by an SRE? Is it used by a developer as part of CI? It's probably used by the Scrum master, well, you don't have Scrum masters, but the product lead. I think product managers will find this useful when a feature is about to be released, because then you have a list of issues and all the, sorry, merge requests. So this would be a way to see a summary of all the merge requests in your release and all the issues associated with them. Yeah, I'd say this may even replace the status in our kickoff document, where it says this feature is merged. You just go to your dashboard for that feature: it's all green, we're good to go, boom, you're done. Instead of all the project managers going through every issue, this is done, this is done. So this is maybe a thin layer that wraps around our issues, merge requests, and build results. Yeah, it's your status; it makes sense. How do test cases tie into this? Oh, right now a test case is essentially an issue; we call it a test plan issue, and that's it. When it's closed, the test automation is all done. We're just using the closed status on a test plan issue to signify that testing is done. So there's that one point we can leverage or lean on, but underneath it's all just linking related issues and merge requests, making sure the test automation issues are closed out, and we're using a combination of milestones and labels. There's a label for test plans, a test plan issue gets created, and then the data just flows back in. It's really basic, but it works.
When you say the test plan is closed, you mean the code has been written for the automated test cases? Right, and we don't close the test plan until everything is done. One challenge is that some of our test plans aren't closed until after a feature has shipped to production, and that's a problem of resourcing and scale; we just don't have enough people to work on it. Ideally it would be closed out together, and then you'd say, hey, your feature is done, and then you go to production. Okay, that makes sense. So for this particular feature, Mech, you said it's a great feature. I hadn't even thought of it from the perspective of a release or a sprint. It makes sense to me now that you've described it like a sprint retro, or like before you release something to production: it's in a certain environment, you look at it, everything is green, it looks good. So I think that's fine. But for the personas you're looking at, Jason, would they even care about this feature? Would they find it useful, or is it unrelated, do you think? Would it help solve any of the things you'd need? Well, I think having information, from a release perspective, about whether everything is tested is interesting. Okay. I'm not 100% sure I've understood what this is, though. It's a requirements dashboard, but I've just never heard "requirements" used this way. No, yeah, let me put it this way: I'm treating Sid's suggestion as literally a UI feature, and I don't even want to worry about where it falls. It seems related, so that's why I'm bringing it up here. But I'm asking you from first principles, both you and Mech: would this solve anything from your folks' perspective? Yeah, and the "what this is" part is what I'm not 100% sure of. Yeah, so if it isn't clear, then the answer may just be that it's a bad idea, right?
Like, that's my point, I just wanted to validate whether it's even a good idea. Avoid the feature factory at all costs. Right, yeah. So if it's not a good idea: Sid looked at this and he's saying it could fulfill requirements management and quality management, and I'm like, are you sure? It doesn't sound like it. It would be a nice-to-have as a single-pane-of-glass view, but we'd still be missing expanding on test plans. Right, right. I mean, one problem that sounds related to what I'm hearing, and that I know is important, and maybe it's the same thing you're saying, Mech, is that even for our own internal releases, as we get close to releasing 11.5, what is the state of the testing and all the issues and all the merge requests that are part of it? A view of that is super, super useful. A progress view, you mean? Yeah, progress, or risks: how are we tracking toward completion? What's off track? Are any of the issues seeing a higher percentage of test cases not passing? Excuse me, sorry. So if we want to talk about pass and failure, we need the test result object. Yeah, and that makes sense to me. The requirements-driven development part, with the links to NASA documents about how they do software development, I don't know. Right, yeah. Okay. So if this is NASA-style development, I don't know, but if this is a dashboard about a release, then it's interesting and valuable. Okay. So I'll create an issue out of this. From what you folks have said and what I've been thinking, I don't think it's a super urgent priority; it's a nice thing, something to explore further. But what I wanted to get out of this meeting is at least a direction for what I can call quality management.
And it seems like it has to be somehow related to test cases. So let me bring the discussion back to the thing I pinged you folks about a couple of months ago: the test run object. The concept is that you go back and update an issue or an MR based on a pipeline result. Does that still make sense? Could you call that quality management? How do you folks think about it? So yes, but the test run object has to come from a test definition somewhere. I think we'll need test case management, because for me to update something, you need to have created the test case first. That thing could be an issue, though, right? That's what I'm talking about. Sure, we could reuse issues and repackage them; since issues already scale, we might as well build on top of them. And so it does make sense from your perspective that a CI thing, a pipeline, a CI abstraction or object, would go backward and update some data? Yes. Some requirement, some test thing that a non-computer person, a human, defined at some point in the past. Right, that establishes the traceability. That's the whole point: it links the traceability, it verifies the traceability. Right. So what I've seen in the past is called a reporter mechanism, where you tag your test cases: hey, this test is this ID. And you can choose, if it's a new test, to have it dynamically create a test object for you, so a human doesn't have to go and create a hundred tests manually. You just run the tests once with auto-creation enabled, and boom, a bunch of objects are created for you, and then you disable it again. Right, interesting. Iterate, polish; iterate, polish. That's the direction I would take.
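The reporter mechanism described above, run once with auto-creation on so that unknown test case IDs create their own test case objects, could look roughly like this. It is a purely in-memory sketch under the assumption that the real backend would be GitLab issues or a test case API:

```python
class Reporter:
    """Collects test run results keyed by test case ID; with auto_create
    on, the first run of an unknown ID creates the test case object."""

    def __init__(self, auto_create=False):
        self.auto_create = auto_create
        self.cases = {}              # case id -> metadata
        self.runs = []               # (case id, passed)

    def report(self, case_id, passed, name=""):
        if case_id not in self.cases:
            if not self.auto_create:
                raise KeyError(f"unknown test case {case_id}")
            self.cases[case_id] = {"name": name}   # auto-created once
        self.runs.append((case_id, passed))

# A first run with auto-creation enabled seeds the test case objects;
# later runs would have auto_create off and only record results.
r = Reporter(auto_create=True)
r.report("TC-1", True, name="login works")
r.report("TC-2", False, name="2fa works")
print(sorted(r.cases))   # ['TC-1', 'TC-2']
```

The "disable it again" step matters: with auto-creation off, a typo in a test ID fails loudly instead of silently minting a new test case.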
That's interesting, because we could have something where the pipeline refers to an issue if you've given one, and if you haven't, maybe it creates an issue for you. Right. The issue being the test result object, or whatever you want to call it. Okay. Jason, I thought I saw you leave some comments on this, but this was a long time ago. Did you have any thoughts on how CI should integrate, if at all? Integrate with which part? Like, does it make sense that when a pipeline runs a set of tests, the end result of the pipeline, or individual jobs within it, would go back and update some test object, some issue object if that's what we call it in GitLab? It makes sense in a general sense; I just don't know what that thing would be, where they would find it, or how they would know what to map it to. So from a CI perspective, do you see users going in and writing a script, and as part of the script putting in issue IDs or a project URL in GitLab so it integrates back? Or would it be a native feature inside the Verify stage? Because I know that with pipelines you can do so much with the scripting itself, so sometimes I don't understand whether you offer something as a first-class feature inside GitLab or expect the user to code the script themselves. If the test plans or requirements are part of GitLab, then we should figure out some way to wire it automatically, I think. Okay. But like I said, I don't know what they are or how they would be looked up. I'd just think that if it's all within GitLab, it should be integrated, rather than users writing scripts to wire parts of GitLab to itself. Essentially, right.
And as we build this up piece by piece, right now there's nothing stopping somebody from doing everything we just said in the past five minutes. They can write it all in a GitLab CI script, a YAML file or whatever; it can all be done. It can create an issue in GitLab, it can call the API; all of that works today, right? Yeah, and there's JUnit: most test frameworks can output JUnit XML. Okay. And there are probably scripts to parse that XML and create issues. Right, okay. The question is, if we want this to work in a scalable way: where should the issue be created? Who should it be assigned to? What should it contain? Those things would be hard for us to make a generic rule for across all GitLab customers. And especially without knowing what exactly. Yeah, so if you work it backward: there's a pipeline running, associated with a SHA, a push, and a test case runs and fails. How do you find the magic issue, or whatever the object is, that the failure is supposed to be associated with? Right. In a general way. In a general way, exactly. My original thought was: does it make sense for that to be coded into the test case itself? This is really a Mech question, though you're both smart, so you should know the answer; I don't. When you're doing modern development, you write tests that refer to features, but you don't refer to issues, right? You don't refer to the issue where that feature was introduced. No, no, that would be too much of a hard link.
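Since most frameworks can emit JUnit XML, the first step of any such wiring is just parsing the failures out of the report. A stdlib-only sketch, with the caveat that real reports vary in which attributes they include:

```python
import xml.etree.ElementTree as ET

def junit_failures(xml_text):
    """Return (classname, test name, message) for each failed test case."""
    failures = []
    for case in ET.fromstring(xml_text).iter("testcase"):
        for failure in case.findall("failure"):
            failures.append((case.get("classname"), case.get("name"),
                             failure.get("message")))
    return failures

report = """
<testsuite tests="2" failures="1">
  <testcase classname="LoginTest" name="test_ok"/>
  <testcase classname="LoginTest" name="test_2fa">
    <failure message="expected 302, got 500"/>
  </testcase>
</testsuite>
"""
# Each failure is a candidate issue, once you know where to file it.
print(junit_failures(report))
```

The parsing is the easy half; the hard questions raised in the conversation (which project, which assignee, which existing object) are exactly what this sketch leaves open.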
I think we go by test suites, and test cases within test suites: one file per test suite, and within each file you have different test cases. Okay. That would be hard. But you can refer to something beyond the code, that's my point. Yes, yes. Okay. I've seen my team at past companies do it where, at the start of the test, you code in the ID of the test case. Exactly, that's my point, that's my question. And then at the end, a reporter makes a database call and, referring to that test case ID, updates a test run with it. A lot of teams are doing this already, and I know that Apple is using TestRail for manual tests and there's some integration there. I'd say we should build on top of that. JUnit is probably a good thing to look at, because Java is so big, though I don't think we're dogfooding a JUnit XML reporter on our side yet. You can also get JUnit output from lots of different tools; you can tell the Python test runner to output in JUnit XML format. It's a good common interface to start with. Yeah. Okay, that makes sense. So I think what I can do is go a little further on this issue and try to sketch something out. The important thing, and it sounds like Mech was answering it, is that from the CI perspective there needs to be some way to look up: I have a unit test that I ran; what does it relate to? What do I need to update? Okay. So this is super helpful. I just wanted to know this isn't ridiculous, and it makes sense. Whether we do it anytime soon is a different discussion, per what we've learned doing product at GitLab. But it's okay, we should have these conversations: take the time to at least know what it is and sketch something out at a high level, even if we never work on it.
That's okay, because a community contributor could come in and say, let's do this. Okay. So before I interrupted you, Mech, I think you wanted to say something, or was that Jason? I probably interrupted you both. Do you remember what you were going to say, Mech? Yes, I was going to give you a short demo of this. Oh yeah, there's also that. Yeah, let's see it. Let me start the demo first. Share screen, Google Chrome. Can you see this browser? I see a volunteers list, yeah. Yes, so this is a product called Notion.so; they're based in San Francisco. I was involved with the RubyConf organizers, we're doing a RubyConf in Thailand soon, and this thing is awesome. It's live, and it's sort of a table. And this speaks really well to modern test case management systems like TestRail. If you talk about test case management, it's always table-structured, because they need one column for the test name and then the test steps alongside. So it's something like this. This is something we could look at as inspiration for working on the quality center side of things, but it's really, really neat. I know what we're working on lives in issues, but if you could have tables as well, it would be awesome, and this is going to be huge. That's pretty cool. On a side note, they're doing a lot of things that we're doing here as well, like labels and such. So feel free to look at their product. They advertise themselves as working on top of Slack. They have labels, they have issues, they have tables. It's kind of your area, Victor. But it seems very biz-opsy, so not necessarily for development, just general work. Yeah, project work, project management. Project management and work collaboration. Yeah, okay, that makes sense. Notion.so, that's a cool domain.
Yeah, look it up. Okay, this is great, and super helpful. Like I said, now that I know a lot about requirements management, I think I can start moving toward figuring out what a quality management solution is. All right, I don't have to take more of your time if it's not needed. Anything else you folks wanted to chat about? I wanted to talk about my response to the feedback I received from the past group discussion. Again, we should be aggressive, and I'm looking forward to working with Jason on this. So I put a few more things in the doc from our side in quality, for the Verify stage, that we could look into. The first is multi-project pipeline support. I know there's an issue for this already, but I think it's going to unlock a lot of things on our side. I also want to talk about the roadmap from quality on our side: we're integrating more visual diffs and more cross-browser testing, and we're looking to use other tools for now, but I'm happy to help iterate and see where we can build this into the product. That's awesome. So what's the use case for multi-project pipelines that you want? I hear a lot of people say "multi-project pipelines," but everybody seems to mean something slightly different, so I'm curious what yours is. Right. The delivery team has a project for their own builds, ChatOps, and all that; let me share my screen real quick, it'll probably be clearer. But our team, the quality team, has our own projects as well, where we're responsible for the test automation runs of, say, Staging, Canary, and the nightlies. So from quality, we have a nightly project and a staging project, but, let me click on CI, Pipelines, we're only working on the tests on staging.
We're not worried about the deployments or anything beyond testing, so we need to glue these together, and there are a lot of hacks going on right now to make that work. It would be nice to provide this support natively in GitLab as a feature. Yeah, okay. There's one issue about "triggered by": you'll be able to set triggered-by as a parameter for, say, your test suite, so you could say this test suite is triggered by these other pipelines in other projects. Maybe that will help. Okay, can you please link the issue? Yeah, I'll link it in the doc, wherever you want it. Thank you. Okay, I don't want to take more time, but I'll go through the rest quickly. That is the most important one, though, in terms of making sure we really understand what we mean, because people use "multi-project pipelines" to mean really different things. Okay, two other things. We're looking to use visual diffs in our testing as well. That's also going to help overlay accessibility testing, where we compare snapshots from Firefox with accessibility color sensitivity applied: if a color changes such that somebody with color vision limitations can't see that object anymore, we catch it. So we're looking at a tool called Applitools. They've been in the market for a while and are very mature; I think they have machine learning layered on top of the visual diffing as well. I used it at a past company, and they're really good with unstructured data, like data from drones: day one versus day two, they can help verify what changed and what's an unintended change, because there's a lot of noisy data. So we're looking to use this in a limited capacity, but I'll be happy to pull you in for any product discovery here. The second one is Selenium support.
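Until native multi-project pipeline support lands, one common glue mechanism is GitLab's pipeline trigger API (`POST /projects/:id/trigger/pipeline`). This sketch only builds the request URL and form body for that documented endpoint; the host, token, and variable names are placeholders, and actually sending the request is left out:

```python
from urllib.parse import urlencode

API = "https://gitlab.example.com/api/v4"     # placeholder instance

def trigger_request(project_id, ref, token, variables=None):
    """Build (url, body) for triggering a downstream pipeline, passing
    variables such as the upstream SHA so the test project knows what to test."""
    data = {"token": token, "ref": ref}
    for key, value in (variables or {}).items():
        data[f"variables[{key}]"] = value
    return f"{API}/projects/{project_id}/trigger/pipeline", urlencode(data)

url, body = trigger_request(42, "master", "t0k3n",
                            {"UPSTREAM_SHA": "abc123"})
print(url)
print(body)
```

This is the shape of the "hacks" mentioned above: the delivery pipeline posts this request at the end of a stage, and the quality project's pipeline picks up the SHA from the variable.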
We're setting up additional help from other providers to fill in the gaps on browsers we don't want to maintain infrastructure for: for example Firefox, Android mobile browsers, and Apple mobile browsers. That's a lot of coverage we don't have the manpower to build all the infrastructure for, so it's a natural fit to bring in somebody else to fill those in. We're looking at both Sauce Labs and BrowserStack, but I don't want to say we should replace BrowserStack with our own stuff; I think we can if we have the resources. But another easy thing to do: there are issues for integrating Selenium views with Sauce Labs and BrowserStack, where you display the screenshot, or if we can figure out some kind of integration with their tools, we can just display information from their side. That's another one. And I think you're aware of this already: there's a Selenium screenshot view inside CI, so if a test has a screenshot, you just display the screenshot in GitLab. That's on the roadmap, but a bit later. Yep. So those are all the top-of-mind items for me so far. Okay, cool. I also added these to the epic for improvements for the quality team. Awesome, thank you. Great, anything else? Thanks for that. No, I think that's it. Yeah, there's a lot already, so. Okay. Last thing: do you mind if I put this on YouTube? I didn't hear anything that seemed super confidential. I don't think there's anything confidential; feel free to edit out anything I said if it turns out to be. Well, I'm not going to bother reviewing the video, so either you're comfortable and it's fine, or I just won't put it on YouTube. Jason? I think it should be fine. I think it's fine. I'm just sick; they can listen to me sounding sick, nobody cares. My kid was sick this morning, that's why he was roaming around, and my wife too, so whatever.
No, I don't mind, it's fine. Okay. Always working open and transparent. Yeah, appreciate it. All right, talk to you folks later. Bye now. Thank you. Bye-bye.