Hi everyone, I'm incredibly excited to be here. My name is Maël, and today we're going to talk a bit about frontend infrastructure, which is something I've worked on for the past few years. More specifically, we'll start by discussing what it actually is, then go over its core parts, and finally we'll have a more abstract discussion about how to be effective when you're part of a frontend infrastructure team, which can be surprisingly tricky. My hope is that by the end of this talk, you'll have a better idea of how to improve the tooling you actually use every day. So the first question we should ask ourselves before going further is: what actually is frontend infrastructure? We all have this picture of infrastructure being about maintaining network edges, Kubernetes clusters, sometimes data centers. So what does it mean in the context of frontend? Frontend infrastructure is about building the platform that will leverage your developers' strengths. It's about giving them the tools to focus on the core proposition of your project. Frontend infrastructure is made up of two equally important pieces (perhaps even three, but we'll come to that later, in the abstract part). The first piece is developer experience. How do we make it so that engineers don't have to spend their time tweaking the tooling we use, be it webpack, ESLint, Babel, TypeScript, Jest, or any other tool we fancy? How can we let them fix bugs and develop new features as fast as they can? So that's the first piece, but it's not the whole story. An equally important part lies in stability. How can we make sure that frontend developers don't have to worry about their changes? How can we be confident that the next deploy will work just as well as the previous one? You need to have strong answers to both of these questions, because otherwise you'll eventually run into a wall, which is never a great experience. First, let's talk about developer experience. 
There's a lot to say about it, but today we're going to focus on three core axes: automation, something that I call clone-and-go, and iteration speed. Automation is about decreasing the cognitive complexity of joining a project, making it very approachable, be it an open source project, which often needs external contributors to survive, or a company product, which onboards new developers every year, if not every month or week. Automating processes decreases the amount of knowledge one has to acquire before being productive, and makes it much easier to evolve procedures over time. To reach this perfect onboarding state, the first rule is to make sure that workflows require as few commands as possible. If installing a repository requires you to first set up the database, import the fixtures, build the templates, and finally start the dev server, then why not abstract all of that behind single init and start commands? You can even have a nice visual interface showing the various steps the script is playing out, if you want to. The point is, it needs to be that simple. There's just no reason nowadays to gate working on the product behind the technical implementation of your infrastructure. You shouldn't assume that people will know how to use it. The second rule of automation is that your infra needs to be the source of truth. We have so many great tools today; we should use them. We can enforce a common formatting with Prettier, we can validate the semantics of our code with TypeScript, we can check for dangerous patterns with ESLint. My point is that we can, and should, automate the detection of most problems as much as possible. First, it makes sure that whatever lands in master is right by your standards, but it also makes it much easier for any new contributor to jump into the code, knowing that if anything's wrong, the system will catch it. This slide is an example of what we do in Yarn. 
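To make the single-command idea concrete before we look at the Yarn example, here's a minimal sketch of such a wrapper script in TypeScript. Everything in it is hypothetical: the step names and the `scripts/*.js` paths are placeholders for whatever your project actually needs (migrations, fixtures, a bundler, a dev server).

```typescript
import { spawnSync } from "node:child_process";

// A tiny "init" runner: each onboarding step is a named action,
// executed in order, with progress output kept in one place.
type Step = { name: string; run: () => void };

function runSteps(steps: Step[]): string[] {
  const completed: string[] = [];
  for (const step of steps) {
    console.log(`[${completed.length + 1}/${steps.length}] ${step.name}`);
    step.run();
    completed.push(step.name);
  }
  return completed;
}

// Helper for steps that shell out to real tooling.
function sh(command: string, args: string[]): () => void {
  return () => {
    const result = spawnSync(command, args, { stdio: "inherit" });
    if (result.status !== 0) throw new Error(`${command} failed`);
  };
}

// Hypothetical steps; the scripts referenced here are placeholders.
const initSteps: Step[] = [
  { name: "Set up database", run: sh("node", ["scripts/db-setup.js"]) },
  { name: "Import fixtures", run: sh("node", ["scripts/fixtures.js"]) },
  { name: "Build templates", run: sh("node", ["scripts/build.js"]) },
];
```

The value isn't in the code itself, which is trivial, but in the fact that the ordering knowledge lives in the repository instead of in a README or in someone's head.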
We have a dedicated GitHub Action that simply runs all the sanity checks we can think of. Some are fairly basic, some are more complex; some are security checks, some are linting. Not all of them run every time, only when they are actually relevant to the changes. But in general, the idea is that they can catch a good deal of the problems you might run into while working on your pull requests. And on top of that, they also tell you how to fix them. Now, that's only the first iteration, and we are still working on it. The next evolution will be to automatically fix the problems as much as possible. For example, in the case of conflicts, it might be to run Yarn itself in order to automatically merge them. Or it can be to run ESLint to apply its autofixes. Anyway, let's continue with clone-and-go. I use this term to express that we want developers to be able to just jump into the project and be immediately productive. As you can see, it's a direct follow-up to what we discussed with automation, but it definitely goes further. Where automation means that we want to execute code without user interaction, clone-and-go means that we don't actually want to execute code at all. The ideal project should allow you to start running basic commands like build and test as soon as you clone it. And if something is missing, it should explicitly tell you how to fix it, without you having to read the ten-foot-long internal documentation that got outdated three weeks after being written. Interestingly, clone-and-go goes even further than the code itself, because the exact same thing can be said about the editor tooling. Of course, everyone in your company has their favorite editor, but usually you can find a top three, and perhaps even a top one in certain cases. Most of those IDEs now offer the ability to store project settings directly within the repository. By leveraging this, you can be sure that everyone's on the same page. 
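For example, with VS Code, committing a small `.vscode/settings.json` is often enough. This is only an illustrative sketch; the extension ID shown (`esbenp.prettier-vscode`) is the common Prettier one, and the right settings depend entirely on your stack.

```jsonc
// .vscode/settings.json (checked into the repository)
{
  // Format with the shared Prettier config on every save
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  // Use the workspace TypeScript version, not the editor's bundled one
  "typescript.tsdk": "node_modules/typescript/lib"
}
```

Pairing it with a `.vscode/extensions.json` listing the ESLint and Prettier extensions under `recommendations` means the editor will even offer to install them on first open.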
Whether you use EditorConfig, Prettier, or ESLint, all those extensions critical to the developer experience can now be configured the same way for everyone. Of course, you won't want to store user settings there, like the font or the editor theme, but for everything absolutely required for a good developer experience, it really makes sense to ship it out of the box. Finally, clone-and-go is something we really took into account when designing Yarn 2. In the past, the workflow when switching from one branch of a JavaScript project to another was to run yarn install to make sure the dependencies were up to date. The problem is that this has a cognitive cost. You don't really want the engineers to have to remember to do this. At the same time, you can't really run it automatically, because it's just too slow. Even the fastest installs of a few seconds will still make it a pain to work with in anything but the simplest workflows. You don't have a few seconds to spare when you're checking out a new branch. Yarn 2 comes with a solution to this, so I won't get too deep into the specifics here, but the idea is that by mirroring your dependency archives into your repository, you never have to worry about running yarn install ever again, even when switching branches. I expect this kind of integration to become more and more important in the future, because something we've noticed in computer science is that to make things faster, we sometimes have to make them run in parallel. The same is actually true for developers working on features: by giving them the ability to jump between branches, by removing the context-switch cost, your developers can capitalize on their time efficiently and produce more. Finally, a good developer experience is a lot about the feedback loop when it comes to the development itself. How can developers see if their changes work? You will often be tempted to use pre-commit hooks to run various sanity checks, but if I can give you one piece of advice: do it sparingly. 
We recently had to disable TypeScript on pre-commit inside our own codebase, because it was causing unnecessary slowdowns that prompted engineers to entirely disable pre-commit. Keeping it light actually makes it more likely that early mistakes will be spotted, especially since IDEs are now so good at surfacing type errors. As far as product development goes, I find it very important to make sure that the hot-reload story works. If product engineers have to refresh the page between each change, they will lose momentum recreating the state. Speaking of this, the React team unveiled a few months ago a project called React Refresh, which aims to fix most of the long-standing flaws in previous incarnations of hot reload. Next.js integrated it in their offering not too long ago, and the generic webpack plugin is now almost stable, so now is probably a good time to start looking into that. Once the development is done, you will want to see the changes in production before actually deploying them, or to share them with your project manager, or designers, or co-maintainers. To do that, you will need a deployment service that supports deploying any branch repeatedly. Thankfully, most providers, like Vercel or Netlify, now support this kind of workflow out of the box. But even if you have your own deployment pipeline, you can still easily implement this kind of logic. The only thing you really need is a way to tell your backend to load the assets based on the branch name. Hashing the name and storing it in a dynamic subdomain is more than enough in most cases, without need for anything more complex. I think we'll stop there for the developer experience part. There are a lot of other areas of improvement we could list, but my goal here was to highlight the main ones and show you some easy wins that often go unnoticed. 
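For illustration, the branch-to-subdomain mapping really can be that small. This is a sketch, not any provider's actual scheme; hashing simply guarantees that characters like the `/` in `feature/login` never leak into the hostname.

```typescript
import { createHash } from "node:crypto";

// Map an arbitrary branch name to a stable, DNS-safe subdomain.
// Truncating the hex digest keeps the label well under the
// 63-character limit for a DNS label.
function previewUrl(branch: string, baseDomain: string): string {
  const hash = createHash("sha256").update(branch).digest("hex").slice(0, 12);
  return `https://${hash}.${baseDomain}`;
}

// The backend applies the same hash to the branch it deployed,
// so a request to that subdomain resolves to the right assets.
```

Because the function is deterministic, the deploy pipeline and the asset-serving backend can compute the mapping independently without sharing any state.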
Before we move on to the next point, one last thing I want you to note is that the principles we've discussed here are as important for open-source projects as they are for companies. As maintainers, our role isn't necessarily to write all the pull requests, but rather to put our contributors, which often includes ourselves, in a position where they can easily work on the project without being burdened by all the maintenance aspects of things. Sometimes it can be as simple as making sure that running yarn test will actually run the tests. Anyway, let's keep going and discuss the second pillar of frontend infrastructure: stability. Stability is critical. Its goal is to make you confident. It leads to faster bug fixing, because you know that your infrastructure isn't responsible for the bugs, and it makes you sleep easier at night, knowing that your fellow teams won't need to page you just because something unexpectedly broke and deploys don't go through anymore. But how do we get there? The first step toward a stable infra is a very simple one: control your dependencies. Don't rely on external ones that can change at any time, and I don't only mean JavaScript dependencies. Take the network. The network is a dependency like any other. If you suddenly lose access to it, your deployments will stop. If your registry loses access to it, same thing. Of course, you can't just stop relying on the network, not when so many of our deployments occur in the cloud, but we certainly can decrease its surface. In particular, don't install your packages from external registries. That's a bad idea. They often go down, and when they do, they completely block you unless you are prepared for it. Thankfully, solutions exist. Yarn has this concept of an offline mirror to shield against this class of issues, but even if you use something else, you can still use the likes of Verdaccio to set up a local registry. 
There's actually a talk about Verdaccio at this very conference, and I really recommend you check it out. So that's it for the network. But installs in general are tricky. By Murphy's law, any code that you need to run is code that will eventually fail. So to be really safe, you need to cut down the amount of code that needs to run. Cutting down installs is a fairly new concept, so there isn't a lot of support yet. Yarn supports it with the zero-install mode we talked about, but that's pretty much it at the moment. Still, it's really something you should consider. For production settings, where installs are responsible for so much of the CI time, removing installs altogether brings very significant improvements, both in terms of speed and user experience. And finally, one last piece of advice about dependencies: don't use native dependencies, or any dependencies with postinstall scripts, really. Postinstall scripts, quite apart from the security aspect of running external code, have a high tendency to fail. Sometimes it's a remote URL that gets rate-limited; that actually happened in one of my pipelines not too long ago. Sometimes it's a local library that isn't there. Sometimes it's literally just a Monday, and the postinstall script isn't made to work on Mondays. That's a true story. I talked about how code that runs is code that fails. This is especially true for postinstall scripts, which you don't control at all. So be mindful of what you use, try to prefer WebAssembly packages instead of native ones, since they are pre-compiled, and avoid packages with postinstall scripts. A stable infrastructure isn't only about deployments. You also need to be sure that your toolchain works, and writing it in TypeScript is a very good way to help with this rigor. It's funny, because TypeScript is often found in frontend code nowadays, but I still find a lot of infra scripts written in plain JavaScript. 
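As a concrete aside on that postinstall advice: you can audit what you already depend on. Here's a rough sketch of such a check; the manifest data is inlined for illustration, whereas a real script would read each `node_modules/*/package.json` from disk.

```typescript
// Flag dependencies whose package.json declares install-time scripts.
// npm and Yarn run these lifecycle hooks around installation.
const INSTALL_HOOKS = ["preinstall", "install", "postinstall"] as const;

type Manifest = { name: string; scripts?: Record<string, string> };

function findInstallScripts(manifests: Manifest[]): string[] {
  return manifests
    .filter((m) => INSTALL_HOOKS.some((hook) => m.scripts?.[hook] !== undefined))
    .map((m) => m.name);
}

// Hypothetical manifests standing in for real node_modules entries.
const example: Manifest[] = [
  { name: "left-pad-ish", scripts: { test: "jest" } },
  { name: "native-thing", scripts: { postinstall: "node-gyp rebuild" } },
];
```

Running something like this in CI gives you an early warning when a new transitive dependency quietly introduces install-time code execution.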
I don't know if it's because infra scripts don't seem to deserve the same level of attention, but I find that too bad, because they're just as critical as the code we write for the product. Something to realize is that you don't actually need to transpile TypeScript into JavaScript before being able to run it. You might have heard about Deno lately, but even without going that far, we already have pretty good solutions in the Node world. Babel can add support for executing TypeScript files directly in Node. ts-node can do that as well. And together with the official TypeScript compiler, which can type-check without transpiling the sources, I think we have all we need to make sure that the code we write for our toolchain doesn't suffer from typos, wrong argument types, missing awaits, and all this class of errors that is already solved for frontend code. So we've talked a lot about specific points we can improve in the infrastructure of many projects. But now we're going to discuss something slightly different, yet just as important. How do you know what matters? How do you know what to prioritize? I could go on and on listing particular pain points that are often true today, but the fact is that the impact of those changes will vary from one project to another. So rather than keep enumerating a fixed list, I think we should now look at how to find our own action items. And with that, we enter a more abstract part of this talk, called monitoring. The idea is simple. There's only so much time in the day and only so many resources to spend on an infrastructure, right? So how do you pick the task that will have the most impact for your team? You also need to measure whether your project actually improved things, so that you can iterate and refine your approach over time. One problem is that infrastructure is hard to measure. Really hard. As we previously saw, a good part of frontend infrastructure is the developer experience. 
And developer experience is driven by a loose concept of happiness. So how do you measure happiness? There are a few tricks for that. The first strategy is called passive, because you wait for the data to come to you. You aggregate all kinds of objective data points and look for trends that emerge. Two good tools for that are Yarn and webpack, which both offer hooks allowing you to retrieve accurate usage metrics over time. The second strategy is active. This time, you directly go to your users and ask them what they think. You can do this as a mass sweep with company-wide surveys, or you can schedule one-on-ones with your colleagues, but the point is, you need to go to them. Finally, the third approach is to be reachable. It won't be enough to get your peers' feedback once in a blue moon. You will need to make sure they have the proper channels to share their own ideas. In my experience, something that works quite well is GitHub issues. Infrastructure tasks are often long-lived, and it can become hard to track progress. Good old-fashioned issues are a good way to let everyone share their problems, avoid duplicating recurring ones, and subscribe to progress on the ones they care about. I'd like to go over the passive strategy a bit more. What you can see on screen is the number and duration of the type-check command runs, in Datadog, day after day. We have a Yarn plugin that collects this kind of information and sends it straight to our dashboards. Through this, we get a good sense of what problems the engineers may face, quickly detect regressions, and triage feedback. We do this for many other metrics too: which scripts are used, what's the size of the codebase, how many ESLint disables there are, what's the size of the build, what's its duration. All this automated information gathering helps paint a kind of real-time picture of our work. So we have found ways to gather metrics, but now there's an important question that we need to answer. Which of them matter? 
Which metrics matter is a question between you and your users. See, metrics are an insight. They give you a snapshot of the data, but the way you interpret them will be shaped by your users' perception. For example, you might think that, let's say, the deploy time is too high, because it appears so when you look at the graphs; but when talking to the engineers, you might find out that none of them really see a benefit in bringing it down, perhaps because they are doing something else in parallel. And since infrastructure is about making your developers' dreams come true, you will have to take that into account in your plan. That's actually what frontend infrastructure is: a repeated cycle of interpret, validate, track. First, you find candidate problems, then you validate them with your users, then you find a metric that will be impacted by the fix, and you can finally track it. The key is to validate, because without it, you run the risk of working on an idea that you find useful, but that won't be perceived as such by the people it was supposed to help, which isn't good for anyone. Okay, so I've talked a lot, and there are so many more things to say about frontend infrastructure. I think this is only the first of many dives we could make into the subject. To recap, by investing in developer experience and stability, you get to have a multiplying effect on the teams you support. To do that efficiently, you will need to plan your work ahead. You will need to have a long-term vision, while still staying flexible enough to adapt to new OKRs that could appear on your radar. Finally, you will need to make sure that everyone's aware of the value your work will bring once completed. If they aren't, it may be a sign that you haven't communicated well enough, or perhaps that the value just isn't there. It happens, and it's not a big deal. The key is to use that to find out more about what you could do that would have more impact on your users. I hope you liked this talk. 
I experimented with a new approach and first designed the plan using Excalidraw, a tool for building diagrams with an emphasis on the content rather than the visuals. I will share the plan on Twitter after the talk, so if you forgot to take notes, feel free to join me there. Finally, here are some links that you can look into to find more information about the various topics we discussed. I will leave them up for a few seconds so that you can take a screenshot, like right now, for example. Yeah, now is the time. I will join you for a Q&A right after, so please feel free to share what you thought of this talk. It's the first time I've given it, so I'm kind of in uncharted territory here, and I would really appreciate having your input. Thanks, everyone, for listening. Have a great day!