Greetings everyone again. We are recording this. Welcome to the Secure retrospective for 11.7, and today we are going to start with Fabio. Fabio, sorry. He's kidding me. Stop doing that. I guess we won't have Fabio with us, so we'll speak for him. We significantly changed the scope of issue 6666, which is the auto-remediation issue, and we especially changed the scope significantly after the start of the cycle. That's a problem, because when we start the cycle we commit to something, and in this case we didn't know what we were committing to. There are some comments in the doc from Camille, Tetiana, and myself. For the sake of time I will try to summarize; please feel free to speak up if I'm getting something wrong. We need more time before the kickoff to make sure that we go through all the issues and spot all the missing points and the gaps that could appear during the next release. That's also part of the improvement, and I think we've discussed it a lot already. We've been asking the engineering team to spend some time on the issues for the next iteration, but I have the feeling that just saying it won't solve the problem at all, so we need to change something. I think the solution, from my point of view at least, would be to make sure that we end all development by the end of the month rather than at the feature freeze, so that we have some time to discuss with reviewers if needed, to spot and fix bugs in the current iteration before it gets shipped, and to work on the things that would be left over for the next iteration. What do you think of that? I see everyone nodding; no one wants to comment. We still have some time to kill because Olivier is next in line, so I will move to his points. Is Lucas with us, by the way? Yes, I'm with you. I will move to your points if you don't mind. No worries. Yeah, what went badly? We had an issue that was caught late; that's the one that was resolved today.
And I don't know if the better solution would have been to not ship the project filter, because we didn't have enough time. Sorry, my brain is already farting. And I also think there are some UX things, but that's fine: it aligns with our iteration value that we can just go and improve things in the next iterations. Yeah, Lucas, I have a ticket in for that. That aligns with the ticket you put in for the UX stuff. I think we got crunched on time again. I think the GDK, or some issues with how data is coming in, was a problem. When I reviewed it, I don't think there were more than 20 projects to see, so it's hard to spot that issue right away. But again, there are some issues getting parity between what we're seeing in GitLab that's live and what we're testing against or developing with, right? So it's a common mistake, and for us it's because of the missing seeding. It's hard, you know, and we run into issues, whether they're just UX issues or also performance issues, when suddenly a project has 10,000 vulnerabilities or something like that, right? That just happens, and you only encounter these issues once you actually use the feature. And the fact that people are encountering these issues is actually good, because it means they started using it and they found something. I mean, we will never be 100% safe, and issues around having nothing or having too much will always be there. So yeah, because otherwise I'm just impulsive, I just wanted to put something on the bad side of things. Maybe Olivier is here now and we can switch over. And I think it also converges with the idea of having this last week, between the end of the month and the feature freeze, to do that kind of thing.
If we have something deployed on staging or dev, you can ping me on this; it's really good to have another pair of eyes on that kind of feature, because as you can see with that issue, with just a few seconds and a few clicks I was able to spot that kind of bug. I'm not saying that I'm better than the others at spotting that kind of bug, but I'm coming from a different context, and just stepping back from time to time is good for spotting that kind of thing. So I would like to spend that time at the end of the iteration to really test the feature end-to-end without anyone behind my shoulder saying you should click there. I should click myself and go through the whole process, the whole user experience, myself. That's the best way to spot gaps in the UX and in the feature itself, the bugs and all those things. I'd say that this is exactly what the review app is aiming at, because I think we should do that at the time of merging the feature, at least when it reaches the final merge request, because we have a lot of merge requests right now. So just before closing the issue, the review app for the last merge request should really be the place to do the QA of a feature. I mean, to be perfectly honest, I think it's fine what happened with the project drop-down. I don't think it's fine that we caught it so late, but I think it's perfectly fine that we merged something where we can just filter for many projects and then iterate on it. It's just a bit unfortunate that we had to do another MR that now needs to be picked by the release managers, but other than that, I think it's fine. And you're completely right: it would be amazing if we could leverage review apps earlier in the process. That could help, but not in that case, if you take the review app for issue 6666. Just a comment. Excuse me, Philippe. Just a comment on review apps: I don't find them usable, actually, at least during the last week.
They are hanging constantly, or giving me 500 errors. Is that behavior actually common and widespread, or is it just me? The Quality team in the discussion group yesterday talked about that, and this is actually one thing they are focusing on: improving the reliability of the review apps, because it also impacts some smoke tests, since some of the test jobs run on the review apps too. So they will be fixing that. The other thing that is currently blocking us on the Secure side, which is something we need to work on, is that the review apps don't correctly reflect what the security features need. So it's not the best place for us today, but it's really the goal we should aim at. Thanks, that's exactly what I wanted to add. We don't see that correctly, and in the case of this issue, we are also waiting for some other features to store the data in the database. So even if we had the review app in place, we wouldn't be able to test that correctly, because the database would be pretty empty. This is something that we should improve in the future. Olivier, you have the two next points; if you want to talk to them, please. Yes, the first one: we are aware of it, but it's worth mentioning that we delayed the release of the dependency scanning results way too much, again, because it's the second time. It's not really bad for the end user, for the customers, because we are still in time for the 22nd, but it makes the release process and the QA process hang for a long time. And we are obviously making mistakes, like I did: in this case, I forgot to enable the feature flag that turns on dependency scanning in the production environment. It just went out of my mind. It was before the Christmas holidays, and we only figured it out this week, when Philippe tried to display dependency scanning vulnerabilities on the group security dashboard.
So we shipped to production, because it was a release candidate. In production, we shipped the addition of dependency scanning vulnerabilities to the group-level security dashboard, but it was empty. It's not a big issue, but it demonstrates that we should force ourselves to stick more to the process and avoid delaying. Even if it's technically possible, this should stay really rare, and we should take more care about staying within the process. I just keep saying the same thing again, so I will stop talking. Actually, you have the next point. Did you cover that one? Yeah, sorry. The second one is about storing container scanning reports in the database. It was merged without being reviewed by someone from another team, and it actually went into master with some flaws. So we added a feature flag in a rush right before the code freeze, so that we are able to do some fixes before the 19th, because when you are behind a feature flag, you are able to do some changes until the 19th of the month, with the approval of the release manager, of course. In the end, I think we finally disabled it, though the decision is not yet taken. We may end up disabling container scanning entirely, due to other issues, and because we also figured out that the model is not the best right now to cover upcoming needs for container scanning, and it might not be a good idea to start filling the database if we want to change it or to add more properties in the next iteration. Yeah, makes sense. Anyone want to comment on this? Okay, let's move to what can be improved. I mean, performing the engineering evaluation with front end and back end: I guess the only thing we can change to improve that is making sure that we keep the last week. If you see something else, feel free to add it as a comment in there. Otherwise, if we just say it, I'm really not sure it's going to solve the problem.
We've been there, I think, in many retros already, and we have so many issues that we don't have any spare time to work on what's coming next. My suggestion to Fabio was to create some kind of office hours, like was suggested for UX, and make sure that we have one meeting per week to discuss the issues of the next iteration. That's the only way for everyone to come with their homework done and actually discuss it. Otherwise, I'm completely fine with saying you should do that, please do that for next time, but I have the feeling that at the next retro we will have the same problem to solve. So let's see how it goes. I think it's also due to how our team is made up and what we're working on. I mean, for the code diff, the feature that was delayed or had its scope reduced, about providing the patch for dependency scanning updates: it's something really complicated for people not familiar with this kind of feature. On top of that, we are also, as I said, a new team, not really familiar with the GitLab code base either. So maybe with the upcoming changes in 2019, we will be more focused on a specific area, and we will have more knowledge about the upcoming features, because for some of the issues that are planned for the next iteration, we are just totally blind about how to do them. And that doesn't help with planning them correctly and estimating them correctly. Yeah, I believe it. There are two things that we can also add on this; maybe we'll add them to the doc. The first one is that sometimes we are expecting some dependency, a component that is developed by another team, and we don't know if it's going to be ready or not. For example, the reports: we were waiting for them, not for that feature especially, but you see what I mean, we were waiting for the reports, and if they are done, we can use them; if they are not, we need a plan B. So that's a lot of expectations, and that's a lot of planning in advance.
So sometimes it's just better to wait until the last minute to have all the details. And the second point is that, as you said, we rarely spot all the edge cases of a feature before starting to work on it, and I'm not sure that this is something we can work around or find something better to do about. If we keep switching between subjects, we can't; there is no way we can. I mean, when you're playing with SAST, and then in the next iteration you're playing with DAST, you're playing with totally different projects and features, and it's not easy to think in advance about upcoming issues. Yeah, I agree with that too, especially because all of the different reports act so differently. When I designed them, I designed them to output to the user in the same way, and as we've seen in a few new issues, it doesn't really work like that. So it's even hard for me to understand how we should show this data to the user in a way that's beneficial, if it's so very different, right? Yeah, while I'm writing my comment, do you have the next point, on the feature flags? Yeah, I just want to advertise feature flags. I already talked about this in the weekly, but it's worth talking about in the retrospective too. Please, if you're working on the GitLab Rails application, of course, because we don't have such an option on our side projects: please use feature flags. They are really handy, and while it's not a good thing to rely on them, they kind of help in some cases. As I said, you can kind of skip the code freeze pressure if you have your feature behind a feature flag. It's not good in terms of procrastinating and doing the job later, but it's good for avoiding the review rush and merging things in a hurry because there is a code freeze date. Instead, you can have one or two more days to correctly review and eventually fix some little things before merging. This is really easy to set up. Please use it; it's really handy.
So the default approach for us is to put any new feature behind a feature flag while we're in an alpha state. Even if we are not formally in an alpha state, because it's really rare at GitLab to use the term alpha or beta on a feature, since we are constantly iterating on every feature, actually; so kind of everything would be alpha or beta. But yes, basically, if you're introducing something, you're better off introducing it behind a feature flag. I also forgot to mention one important thing, about the default_enabled option. You have to keep in mind that you can act on the feature flag on our own environments, like staging or the production environment, but you can't act on the feature flag for the on-premise instances of our customers. So it's really important to consider that if you set default_enabled to false, the feature will be disabled on all our customers' on-premise instances. And the opposite is true: if you set the default to true, like is the case for container scanning, I currently have a merge request open because I shipped it with default_enabled true and disabled it for the production environment, but I have to submit another merge request to remove the default_enabled true, because we don't want the feature to run on the on-premise instances. So please keep this in mind when using feature flags. I mean, I think it's more about practice: whether you think that something should be enabled by default, or whether you should have to explicitly enable it. From my perspective, I'd say that everything should be enabled by default, and only as a last resort should you have to disable something. It just makes configuration, maintenance, and staying on the green path simpler. If you see that staging is okay, it basically means that you don't have to do anything else on production, and you know that it's working.
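The default_enabled behavior described here can be sketched in a few lines of Ruby. This is a minimal illustration, not GitLab's actual Feature implementation; the class and flag names are hypothetical. The key point is that a flag that was never explicitly toggled falls back to its default, and customer on-premise instances always take that fallback path, since we can only toggle flags on our own environments:

```ruby
# Hypothetical feature flag store, for illustration only.
class FeatureFlags
  def initialize
    # Explicit enable/disable actions, as we can perform on staging
    # or production -- but never on customer on-premise instances.
    @overrides = {}
  end

  def enable(name)
    @overrides[name] = true
  end

  def disable(name)
    @overrides[name] = false
  end

  # A flag that was never explicitly set falls back to default_enabled.
  # On-premise instances always resolve flags this way.
  def enabled?(name, default_enabled: false)
    @overrides.fetch(name, default_enabled)
  end
end

production = FeatureFlags.new
on_premise = FeatureFlags.new # nobody can flip flags here

# default_enabled: false -> off everywhere until explicitly enabled.
production.enable(:group_security_dashboard)
production.enabled?(:group_security_dashboard)  # => true
on_premise.enabled?(:group_security_dashboard)  # => false

# default_enabled: true -> on for every on-premise instance, even if
# we disabled it on production (the container scanning case above).
production.disable(:container_scanning_reports)
production.enabled?(:container_scanning_reports, default_enabled: true)  # => false
on_premise.enabled?(:container_scanning_reports, default_enabled: true)  # => true
```

This is why shipping with default_enabled true and then disabling the flag on production is not enough: the override only exists on our environments, while every self-managed instance still resolves the flag to its default.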
If it's not working, you can just disable the feature ahead of time, before it gets deployed. And if it's still not working and you don't manage to fix it, you can quickly prepare a merge request and still merge it before we finish the actual release, not just a release candidate. So I mean, it depends on the case. I would say that if something is finished, and you assume that it's finished, it should rather be enabled by default. If something is in flight, it should be disabled by default, because that piece alone doesn't make sense to be enabled. So I think it's basically based on how complete the feature is: if it's an incremental iteration that is not finished, I would say disabled; if it's done, just enable it by default, so we don't have to change the status later on. Yeah, that makes sense. All right, for the sake of time, we have the opportunity to keep this meeting under 30 minutes, so I think we are going to aim for that. We just have three points in what went well, and we will all be able to attend the company call, because there are a lot of announcements today. Let's start with Tetiana; you wanted to talk about small MRs. Yeah, so we have this big feature about filters in the security dashboard, and because it was split into smaller tasks across different MRs, it allowed us to iterate inside the iteration and synchronize with the front end. It's a pity the front-end side is not here, because maybe they have a different vision of this process, but from my point of view, it was great to have small MRs. Great point. I didn't add my comment in the doc, and Fabio is not here, but Fabio also did a great job on auto-remediation by splitting the work into many small MRs, even if in the end we just had one big MR.
But we had the chance to see every step, every part of the feature, in a different MR, and that was super useful, even if in the end we were just closing them because they were not adding any value or not working as expected. That was super helpful; we should do that whenever we can. Lucas, please. Yes, so despite all the scope changes and everything, it still landed with docs and with screenshots. That's awesome. I want to thank everyone. And it landed despite the holidays, which is awesome, and which is my next point. Yeah, that's really interesting to me, because we basically would normally have had, I think, 21 working days in the release cycle, and taking off Christmas and New Year's means five to seven days off, which is like a quarter of the release cycle. So yeah, that's that. Any comments? I just wanted to add that this is also the month that we started the reactive rotation, which means having one team member who is not working on deliverables. On the other side, we had new team members coming in. But that's still a point. All right, is there anything else you want to mention, to discuss, to improve? I think we are in January; we'll take the two minutes remaining, or four minutes remaining. I really like the idea of having a once-a-year meeting with the team. Very casual. The best would be a retreat, but we're not there yet, and I don't want to organize that right now. It's going to happen in 2019, don't worry, but not really soon. So I want something earlier than that to see what we could improve: not a retrospective of the previous iteration, but a retrospective of the previous year. There are probably a lot of areas, in QA or elsewhere in our processes, that we can improve on, and we never take the time to discuss that kind of thing quietly. So I will organize that, if you don't mind, and I will invite all of you; we will have a one-hour session, a kind of brainstorming session.
And I'm pretty sure that a lot of good ideas could come from that kind of meeting. Also some new features: this is the kind of meeting where, when we spot something that is missing for us, we can convert it into a feature that will be helpful not only for our team but for a lot of customers as well. So this is something that I will put in place, and I will ping you once it's there. And with that, if you don't have any more questions or comments... nope. Thank you very much, enjoy the rest of your day, and see you at the company call. Bye-bye. Bye-bye.