Hello everybody, it's good to be with you today. My name is Elizabeth Sweeney and I'm a product manager on Chrome. I'm here today with my friend Jeremy from Zalando. Jeremy, can you tell us a little bit about yourself?

Yeah, hello Elizabeth. I'm an engineering manager at Zalando. For those who don't know us, we are an online fashion platform serving over 3,000 brands across 17 countries, and our main products are a website and an app.

That's awesome. I was very curious to talk with you today about your experience using tooling to improve Core Web Vitals. Based on that experience, what best practices can you share about using Lighthouse tooling to improve Core Web Vitals?

Yeah, let's jump right in; I have a nice story to tell. Last year we conducted a very large user research study, which revealed that our customers perceived us as something of a faceless giant and that our website did not really stand out. So when we had to ship a new release with new features like the virtual wardrobe, which is a way to keep your wardrobe on Zalando and be able to declutter it, we also wanted to give the website a visual facelift and be more playful. We had custom fonts, and we had a color snapping feature where our images match the page's background. We also took the opportunity with this release to move to a more centralized way of shipping features at scale, with a highly reusable set of components. If you're curious, you can learn all about this on our engineering blog.

But as with anything in software engineering, this big release didn't go as smoothly as we planned. When we tested it with our beta users, the website turned out to be much slower. On the slide you can see a table labeled with the name of our release, and it looks pretty red over there. We were measuring First Contentful Paint and also what we call primary hydration, which, since we use React, is essentially the time until React hydration kicks in on the client side after server-side rendering. Those were the metrics we were using at the time.

Of course, we didn't want to leave it at that; we wanted to fix the release and deliver these exciting new features to our customers. So we set up a performance task force and started working on performance improvements. For that we leaned heavily on Chrome DevTools. We are big fans of the Performance tab, which is very nice for spotting your performance bottlenecks. But as we started iterating, it was really hard to know which leads and which fixes would be the most impactful. Since we didn't have good performance feedback at this point, we had to experiment, try different things, and release to production again so we could measure with field data from our real users. That essentially meant a one-day performance feedback loop, with these day-to-day tables to report progress. We were not iterating as fast as we felt we should have been, and we really wanted to ship faster.
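As an editorial aside: a custom metric like the primary hydration Jeremy describes can be captured with the standard User Timing API. Here is a minimal sketch for a server-side-rendered React app; the file name, metric name, and /metrics endpoint are illustrative assumptions, not Zalando's actual code.

```js
// entry.client.jsx — minimal sketch: timing client-side React hydration
// of server-rendered markup. Metric name and endpoint are illustrative.
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

performance.mark('hydration-start');
ReactDOM.hydrate(<App />, document.getElementById('root'));
performance.mark('hydration-end');

// Record the span between the two marks as a User Timing entry.
performance.measure('primary-hydration', 'hydration-start', 'hydration-end');

const [entry] = performance.getEntriesByName('primary-hydration');
// sendBeacon delivers the report even if the user navigates away.
navigator.sendBeacon('/metrics', JSON.stringify({
  metric: entry.name,
  durationMs: entry.duration,
}));
```

In a real app this would feed the same analytics pipeline as the field Core Web Vitals data discussed later in the conversation.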
So at this point we knew we needed a better way to measure performance. As I mentioned, we were already using Chrome tooling, especially Lighthouse audits, but mostly on our local setups. And of course, my local setup is not representative of our users' setups in the field. We know that most of our users are on mobile, for example, and maybe not on good connections. So we needed a more reproducible setup in a lab environment.

We had our eyes on Lighthouse CI very early, but at that point, with a release to save, the integration effort was deemed too high to adopt the whole suite out of the box. So we started with a first step: Lighthouse as a node module (sketched below). When we made changes and deployed our GitHub branches to a production-like environment, we could test against this setup, which worked more like a service. That gave us a much shorter feedback loop, around one hour. And, spoiler: we got those numbers back to green and saved our release.

But we didn't want to leave it at that, in the sense that we now had the opportunity to be not only reactive about performance but more proactive. As I said before, we wanted to integrate with Lighthouse CI. After that release shipped, the infrastructure team had a bit more breathing room, so we set out to do a proper integration. We hooked into the status checks in our GitHub Enterprise, and we set up a nice Lighthouse CI server, so we now have a timeline of performance. What this means is that every developer shipping web features on our website gets performance feedback right on their pull request, and we can set thresholds on our most critical pages, like our product detail page (see the configuration sketch below).

We also wanted to raise awareness of Core Web Vitals and move towards reporting more user-centric metrics. As I mentioned, we were measuring React hydration time, but our users don't really care about when React is hydrated on the client side. They care about when they can click a button, or when they can add an item to the cart. For that, First Input Delay (FID) from Core Web Vitals, or its lab equivalent, Total Blocking Time, is much more relevant. That's why we wanted to make this shift in awareness. And I think a nice dashboard, with the recommended thresholds standing out, is also a good way to engage our stakeholders and drive improvements.

So in a nutshell, for us, lab and field data are very complementary. Field data is still very relevant because it is the final judge: it is what your users perceive, how they experience your website in production, so it always needs to be in the picture. But lab data was really the way for us to reduce the performance feedback loop to something around 15 minutes. We also now treat performance as a feature, so it's okay to approach it iteratively like any feature: we started with Lighthouse as a node module, and now we have the Lighthouse CI integration.
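For reference, the "Lighthouse as a node module" step Jeremy mentions uses Lighthouse's documented programmatic Node API and might look roughly like this; the target URL is a placeholder, not Zalando's environment.

```js
// audit.js — minimal sketch of running Lighthouse programmatically
// against a deployed branch environment. The URL is a placeholder.
const chromeLauncher = require('chrome-launcher');
const lighthouse = require('lighthouse');

async function audit(url) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = {
    logLevel: 'error',
    onlyCategories: ['performance'],
    port: chrome.port, // connect to the Chrome instance we just launched
  };
  const { lhr } = await lighthouse(url, options);
  console.log(`Performance score: ${lhr.categories.performance.score * 100}`);
  console.log(`FCP: ${lhr.audits['first-contentful-paint'].numericValue} ms`);
  await chrome.kill();
}

audit('https://my-branch.staging.example.com/');
```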
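The later Lighthouse CI integration, with status checks, per-page thresholds, and a timeline server, is typically driven by a lighthouserc.js file. A sketch under assumed URLs and budgets; the numbers are illustrative, not Zalando's actual thresholds.

```js
// lighthouserc.js — illustrative Lighthouse CI configuration.
// URLs, budgets, and the server address are assumptions for this sketch.
module.exports = {
  ci: {
    collect: {
      url: [
        'https://staging.example.com/',
        'https://staging.example.com/product-detail',
      ],
      numberOfRuns: 3, // several runs per URL smooth out variance
    },
    assert: {
      // assertMatrix lets different pages carry different thresholds.
      assertMatrix: [
        {
          matchingUrlPattern: '.*/product-detail',
          assertions: {
            'categories:performance': ['error', { minScore: 0.8 }],
            'first-contentful-paint': ['error', { maxNumericValue: 2000 }],
            'total-blocking-time': ['warn', { maxNumericValue: 300 }],
          },
        },
        {
          matchingUrlPattern: '.*',
          assertions: {
            'categories:performance': ['warn', { minScore: 0.7 }],
          },
        },
      ],
    },
    upload: {
      // Push results to a Lighthouse CI server for a performance timeline;
      // the server also reports status checks back to pull requests.
      target: 'lhci',
      serverBaseUrl: 'https://lhci.example.com',
      token: process.env.LHCI_BUILD_TOKEN,
    },
  },
};
```

Running `lhci autorun` in the CI pipeline then collects, asserts, and uploads in one step, which is what surfaces as a pass/fail check on each pull request.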
What matters now is that, as with any feature, we have non-regression in place with these GitHub status checks, and that we measure very early in development, because, as with any bug, catching a performance problem early makes it much cheaper to fix. So in the end, after slowing down the website, I think we redeemed ourselves using Lighthouse CI, and I hope this gives other people a push to try it out. Thank you.

Well, thank you for sharing your experience. Honestly, it is just endlessly exciting for me to see you using Lighthouse in this way; it excites me to no end. I wanted to ask a few questions. First: if you were sitting down with somebody who was just getting started with integrating performance tooling into their setup, what would you recommend as the first step?

Okay. Actually, one of the first steps I recommend is to have a review session. I remember we had a session back in the day where you presented Core Web Vitals and the ecosystem around it, because it didn't come out of nowhere. And what you said then was: just start by measuring. So go to PageSpeed Insights, put in your most important pages, and start getting that feedback. Another thing that is very easy to set up is to go to the web-vitals GitHub page; the library is very easy to integrate into your website to report these measurements (a minimal sketch follows). It's about a one-hour task, and then you can put the numbers on a dashboard and get started. As you gather this data, you can look around at all the tooling in the ecosystem and build on it. That's what I would recommend.
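The library Jeremy refers to is web-vitals (github.com/GoogleChrome/web-vitals). A minimal field-reporting sketch using its v2 API (later versions renamed the functions to onCLS, onFID, onLCP); the /metrics endpoint is an illustrative placeholder.

```js
// report-vitals.js — minimal sketch of reporting Core Web Vitals field
// data with the web-vitals library (v2 API). Endpoint is illustrative.
import { getCLS, getFID, getLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // 'CLS' | 'FID' | 'LCP'
    value: metric.value, // ms for FID/LCP; unitless score for CLS
    id: metric.id,       // unique per page load, useful for deduplication
  });
  // sendBeacon survives page unloads; fall back to keepalive fetch.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/metrics', body))) {
    fetch('/metrics', { method: 'POST', body, keepalive: true });
  }
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```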
That's very solid. I'm curious: now that you have this setup instrumented, is it part of your design process? As product managers or marketing think about what functionality they want to bring to users, is performance now part of the conversation? How do you see this integration changing that aspect of how you work?

Well, as in any big company, the team maintaining the infrastructure is not always the same team shipping the new features. We are more of an enabling team, and we are not always part of the early discussions on design or new functionality. But we are now much more confident, because when these features start to see the light of day and we start to test them, the GitHub status checks can give us early warnings if the performance is not there. So I would say we are not yet at the top of the pyramid, where performance is integrated by design and everybody, product managers included, is aware of it by default. But now developers get feedback, maybe during a retrospective, that a feature was delayed because it didn't meet the performance threshold of, let's say, the infrastructure team. My hope is that in the next release or the next cycle, the product manager or product owner will take that as an input, and things will follow from there.

Yeah, that makes a lot of sense. And the last question for you: where do you want to go from here in terms of performance tooling, or what is the next exciting thing that you're keen to sink your teeth into?

Yeah, so the thing is, as I mentioned, we are building this infrastructure. We have a front-end engineering team of around 50 to 60 developers, which is quite large, and we don't believe in one size fits all: different pages have different performance requirements. So what my team is exploring now is giving more granular control to the people contributing to the website and to this Lighthouse integration, so they can set their own performance thresholds on different pages and configure different throttling mechanisms. The idea behind that is also to give more performance ownership to the development teams themselves, so that performance is included in the whole lifecycle. It's nice that different teams can contribute to different parts of the website, but if every team owned its own performance, each could be a guardian of performance as well. So yeah, that's exciting: giving back ownership of performance at scale, thanks to this tooling.

Awesome. Well, I look forward to hearing about that when you do it. In the meantime, thank you again so much for taking the time to be with us today. It was a pleasure.

Yeah, thank you.