Okay, so we have Nicholas. Finally, we have someone who has actually implemented something in the browsers: a couple of the performance APIs. So it will be really interesting to hear more about how that works. So, Nicholas, please.

All right. Hello, everyone. Like Peter just said, I'm going to talk about shipping a performance API in Chromium, and I'm going to talk about it from the perspective of how we worked to ship the Element Timing API. I'm Nicholas, and I work on Google Chrome on the speed metrics team. In particular, my focus is on defining and implementing new web performance APIs, the ones you can use from JavaScript. I'm part of the Chrome team working on Blink, Chrome's rendering engine, which is basically where we implement the JavaScript APIs.

The talk has basically two objectives. First, I want to explain the process involved in standardizing a new web performance API, so that developers get a clearer sense of the steps involved and can maybe also understand why it sometimes takes so long. In fact, we have a 42-step checklist so that I don't forget any of the required steps. The second objective is to encourage the web developers and web performance enthusiasts here to get involved. We love to get feedback about what we're working on, the new APIs, and also about what we should be working on, that is, what new APIs we should be surfacing to web developers. So that's the other purpose of the talk: to get some help from you, some feedback about what we should be working on.

Now let's go over the steps to ship an API. The first step is to identify a problem. In the case of performance APIs, the problem will basically be a gap in measurement: there's something web developers want to measure, but they can't do it right now, or they can only do it in a very hacky or unreliable way. In the case of element timing, the gap in measurement is being able to measure when an image has rendered on the screen for the user.

Throughout the talk I will show some screenshots from websites to illustrate what I'm trying to say. For example, here I have a screenshot from stevesouders.com, and the blog post is about how to measure image render time. The idea is that you have an image tag containing hero.jpg, your critical hero image, with an onload handler that calls performance.mark, which gives you a high-resolution timestamp of when the method was called. That gives you timestamp hero one. In addition to that, you append a script right after the image tag, and that script makes another call to performance.mark, so you get a second high-resolution timestamp, in this case hero two. To get the estimated image render time, you take the maximum of those two marks, as in the sketch below. That is the proposed solution.

Does it sound reasonable? No, of course not. It's very hacky, and in addition to being very hacky, it's also inaccurate. Why is it inaccurate? Well, those two calls run on the main thread, which is the only thread that can actually run JavaScript, but browsers generally render content in separate processes. That means the timestamps will not really capture the actual time when the image was rendered.
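To make the hack concrete, here is a minimal sketch of that pattern (hero.jpg and the two mark names come from the blog post as described; the rest is illustrative):

    <img src="hero.jpg" onload="performance.mark('hero1')">
    <script>performance.mark('hero2');</script>
    <script>
      // Later, once both marks exist, estimate the render time as their maximum.
      addEventListener('load', () => {
        const [hero1] = performance.getEntriesByName('hero1');
        const [hero2] = performance.getEntriesByName('hero2');
        console.log('estimated render time:', Math.max(hero1.startTime, hero2.startTime));
      });
    </script>

Both marks are recorded on the main thread, which is exactly why the estimate can be off.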
Now that we have identified a problem, let's start by writing an explainer. So what are the parts of an explainer? The first and most important part is to present the problem. In the case of element timing, we have the following problem: developers know what content is important to them and want to know when it has been rendered on the screen for their users, while browsers are the ones who actually paint that content, so they are the ones that can compute when it has been rendered. From Shubhie Panicker's explainer, it can be summarized as: web developers want to know when the critical elements of a web page have been displayed on the screen.

Once we have presented the problem, we can add some use cases. What are the user needs satisfied by the new API, or the user needs we want to satisfy? And what are some examples of how this new API could be used to solve real developer problems? In the case of hero element timing, we have two examples. One is the time it takes for images to be displayed upon page navigation. Some images are displayed immediately after navigation: the user doesn't have to do anything to see them, they just show up right away. The other use case, a little trickier but still applicable, is measuring the time it takes to display hero elements that are caused by user interaction. For example, you can click on something that causes a new giant image to appear on your web page. Of course, in that case it doesn't make sense to compute the delta with respect to page navigation, because it depends on when the user interaction occurred.

Now that we have the use cases, there is a third component, which is usually in explainers but which I will call optional: proposing a solution, that is, a proposed API for how we could solve the problem. I say it's optional because the idea of the explainer is really to present the problem and to make sure people agree this is something we want to solve; the solution is just an optional "maybe this is one way we could do it." In fact, we discourage people from putting very concrete proposals in their initial explainer, because even browser engineers are human beings, right? We get attached to the solution we propose first, and then it's harder to make drastic changes to the initial proposal if it was very concrete from the very beginning.

In the case of element timing, our initial proposal is basically: let's annotate the hero elements. The web developer has to do this, using the elementtiming HTML attribute to say "this is an element I care about." And let's expose information about those annotated elements via the PerformanceObserver, which is the interface that exposes most of the web performance APIs in JavaScript. A sketch of what that looks like is below.
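Roughly, the proposal in use would look something like this (a minimal sketch; the identifier and image are made up, and the entry fields shown reflect the API as it eventually shipped):

    <img src="hero.jpg" elementtiming="hero-image">
    <script>
      // Observe render timing for annotated elements.
      const po = new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          // renderTime is when the browser painted the element.
          console.log(entry.identifier, entry.renderTime);
        }
      });
      po.observe({ type: 'element', buffered: true });
    </script>

The buffered flag lets the observer also receive entries that were dispatched before it was registered.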
So once you have written an explainer, which presents the problem and the use cases you're trying to address by solving it, you can start socializing the explainer. There are several ways in which you can do this. For the web performance APIs, we take mostly two steps. One is to present to the W3C Web Performance Working Group. We basically share the explainer and talk to them to see what they think and what concerns they have. Do they think the use cases are valid? Is this something that could be implemented in all major browsers? Things like that. There are several ways of communicating with the Web Performance Working Group. We have a mailing list, which is in the slides, but we also have bi-weekly video conference calls: every two weeks we have an agenda, gather together for an hour, and talk about whatever topics are on it. And we have a roughly yearly face-to-face, where we gather for, let's say, a full day and try to solve as many problems as we can in this space.

Another way to socialize the explainer is via the Web Platform Incubator Community Group, or WICG, Discourse. I have also put a link to that Discourse, which is basically a forum. Oh, I should point out that the members of the Web Performance Working Group are mostly browser engineers: people like me from Chrome, and there are also people from Firefox and Safari. But in the case of the Web Performance Working Group, there are also web developers from large companies as well as from smaller ones; Wikimedia participates, for example. And analytics providers like Akamai participate in the working group too. So basically it's open to anyone who is really enthusiastic about web performance APIs.

Now we have socialized the explainer, and ideally we partner with people that are really interested in solving the same problem, and we develop a concrete proposal. This proposal will usually live in WICG; that is kind of its purpose. The idea is that WICG is the place where we incubate new proposals before they become actual web standards. In addition to moving the explainer to WICG on GitHub, we also request a design review from the Technical Architecture Group, or TAG. It is made up of web experts who are familiar with the majority of major web features. They do a high-level design review and provide feedback in terms of privacy concerns as well as design concerns for the API.

In addition to this, we send an intent to prototype, the idea being that we're signaling to the world that this is something a Chrome engineer is looking to implement. It used to be named "intent to implement," but we decided to rename it to "prototype" because that is actually more accurate: the feature is implemented behind a flag in Chromium, but the flag is disabled by default. So it is code that is not shipping to any user; it is basically the playground of the browser engineer, where they can implement whatever they want without fear of it reaching final users yet. This intent to prototype is sent to blink-dev, which is a public forum, obviously composed primarily of Blink engineers, but anyone can see the intents that are sent to it. It's basically a Google group. No approvals are required at this stage, and that makes sense, because we're not launching any code to users yet.

Now, proposals can take multiple iterations. As you can see for element timing, we went through quite a few iterations. The first is the original proposal, and it links to the updated version. You click on that, and there's another proposal which says "obsolete," with a link to the updated version. You click on that: oh, it has been moved to WICG. And finally that link takes you to the actual current explainer. We have multiple iterations because, like I said, the point of socializing the proposal is to make changes based on feedback.

Once we have a more solid proposal, and after we've sent the intent to prototype, we can actually start prototyping the proposed API, that is, implementing it in Chrome behind a flag that is disabled by default. In parallel to that, we add web platform tests. These tests help prevent the "greatest joy" of web developers, which, like they said in the previous talk, is basically browser vendors not behaving the same way. The web platform tests ensure that you can test the same thing on all major browsers. I can go a bit over what they look like. At the beginning you have some imports of scripts which are the test harness; they allow you to create test functions and do assertions inside the HTML file, because that's what the test is. Then we have some content for the actual test, some style there, and a hero image which is annotated with the elementtiming HTML attribute. Then comes the core part of the test, which uses the performance observer to obtain the render timing of that image. Inside the performance observer you have a callback, and in that callback you do some assertions to check properties of the information you received. A sketch in that spirit follows.
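This is not the actual test, but a minimal sketch of the shape such a web platform test takes (the image path, identifier, and assertions are illustrative):

    <!DOCTYPE html>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <img src="resources/hero.jpg" elementtiming="my-hero">
    <script>
      async_test((t) => {
        const observer = new PerformanceObserver(t.step_func_done((list) => {
          const entry = list.getEntries()[0];
          // Check some of the properties of the entry we received.
          assert_equals(entry.entryType, 'element');
          assert_equals(entry.identifier, 'my-hero');
          assert_greater_than(entry.renderTime, 0);
        }));
        observer.observe({ type: 'element' });
      }, 'Element Timing reports an annotated hero image.');
    </script>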
In parallel to implementing, or prototyping, the API and writing web platform tests, we draft a spec. The specification of an API is meant not only to inform web developers about how it works, but also to inform other browser engineers about how they should implement the new API. Some characteristics: it is composed of both prose and algorithms, so some of it is paragraphs, but other parts are algorithmic steps, things you should do in sequential order. It is written in Bikeshed or ReSpec, special formats just for specs, so that it renders in a very specific way when translated to HTML. And usually a new specification will interact with existing specifications, like for example the HTML or the DOM specification. The reason is that, well, you need to call your new algorithms from somewhere, so usually there will be hooks into older specifications. One very key property of a spec is that it shouldn't contain any Chrome-specific jargon. Many specs are written by Chrome engineers, but the idea is that when someone from, say, Mozilla or Apple reads the spec, they should be able to understand what it's saying and implement the steps of the algorithms. So it should not include anything that is specific to Chrome. To give a flavor of the format, see the sketch below.
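A Bikeshed source file is mostly HTML plus a metadata block and algorithm markup. Purely as an illustration (these field values are assumptions, not the real spec's metadata):

    <pre class=metadata>
    Title: Element Timing API
    Shortname: element-timing
    Level: 1
    Status: CG-DRAFT
    Group: WICG
    URL: https://wicg.github.io/element-timing/
    Editor: An Editor, Google
    Abstract: This specification gives web developers timing information about the rendering of annotated elements.
    </pre>

Bikeshed compiles this, together with the prose and the numbered algorithm steps, into the rendered HTML spec.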
Now, once you have a draft spec, we can start the internal launch review. In the case of performance APIs, our main concerns are privacy and security. Does Spectre ring a bell? No? Yes? Well, there are tons of privacy and security concerns with exposing high-resolution timers on the web, so yes, we need to do a very careful review of any new feature we're trying to introduce. In addition to that, of course, the Web Performance Working Group or the TAG can also surface concerns, and we have to make sure all of those are addressed before we actually ship the API.

Now, an optional step in the process is to do an origin trial. I have a link there explaining what an origin trial is, but the basics of it is that it's a way of allowing experimentation before we actually ship the web feature for real. The idea is that browser engineers can get some early feedback. Have I mentioned that we love feedback? Maybe I have, yeah. We love early feedback, and the reason is that after you ship a web feature, it's very hard to make breaking changes to it, right? Because once web developers start relying on that feature, if you make breaking changes you might break, well, real websites, right? And then people will be mad, and they'll be mad at us. We don't like that.

So the idea is that a web developer, let's say they're interested in an origin trial for this fantastic new web feature, signs up and gets some tokens for their domains, which get wired into their pages as in the sketch below. Some small portion of page loads can use those tokens and will actually be able to access the experimental web feature. Why only a small portion? Again, so that the website does not start relying on these APIs to work properly. And once they start using this experimental API, which is disabled by default but can be used in this controlled environment, then we can get some feedback from it.

I forgot to say that in order to launch an origin trial, you need to send an intent to experiment. Again, this is sent to blink-dev, and it requires approval from one API owner. You will ask: what is an API owner? It's a web expert, who does not necessarily work at Google (we do have an API owner who doesn't work at Google), but who is very familiar with Blink, with Blink's mission, and with what we are trying to achieve. They can assess the interoperability risks and the benefits, because there are always trade-offs associated with a new API. Once you get approval from one of these people, then, yeah, you launch your origin trial and you can get some feedback.

In this case, I'm highlighting feedback from Peter, hi Peter, which basically says the API looks promising, at least for Wikimedia web pages: it looks like it is accurately capturing the image render time of the critical images on the pages that were tested. In general we get some feedback from origin trials, but, well, you can understand that most web developers want to use features that have actually shipped, not features that may ship in the future. Are they going to ship? I don't know. So we don't get a ton of feedback from origin trials, especially for performance APIs. But if you're interested in that, let me know, because I'd love more early feedback.
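Participating is lightweight. A token can be served either in an Origin-Trial HTTP response header or in a meta tag, roughly like this (the token value is a placeholder, and the guarded feature here is Element Timing):

    <!-- Token obtained from the origin trial signup, scoped to this origin. -->
    <meta http-equiv="origin-trial" content="PLACEHOLDER_TOKEN">
    <script>
      // The interface only exists while the trial token is valid and active,
      // so guard any usage behind a check.
      if ('PerformanceElementTiming' in window) {
        new PerformanceObserver((list) => console.log(list.getEntries()))
            .observe({ type: 'element', buffered: true });
      }
    </script>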
Now we need to polish the proposal before we actually ship the feature to users, and this involves many steps. For example, we need to know what other browser vendors think about the new API, so that's getting signals from browser vendors. But we also need to know what developers think. Do they actually find it useful? Or are we just working for no reason? Because if it's not going to be used later, then what's the point? We also need to polish the draft spec, which is hosted in WICG, make sure there are no major bugs in the Chromium implementation behind the flag, and address feedback from the TAG review. These are examples of polishing the spec: pull requests to the element timing GitHub repo. And these are examples of polishing the implementation: Chromium code changes related to the Element Timing API.

Now let's say we are all done and our API is super perfect, right? Because code is always perfect. Now we're ready to ship. So what do we do? We send an intent to ship to blink-dev. This is again signaling to the world, including other browser vendors, that we are planning to actually release the feature to users. For real this time: not an origin trial, but enabled by default. This is obviously a more rigorous step, because it is now the real deal, so it requires three approvals from those special people, the API owners. Once you get those approvals, we make sure Chrome Status is up to date. Because, I don't know, does anyone use Chrome Status at all? So yeah, some people here raised their hands. Some web developers use Chrome Status, and our dev rel also uses Chrome Status, so it would be nice if it were updated, right? Otherwise, who is going to promote the new web platform APIs? Well, no one. And then we can flip the implementation flag so that the feature is enabled by default.

Even after shipping, the work is not done. There are several things we need to continue doing. The most obvious one is to remove the experimental flags, though maybe only after a while, to be sure: if someone says "oh no, you need to unship this" while the flag still exists, unshipping is very easy, so once we're sure we're not going to unship, we remove the experimental flags. We also want to continue the conversations in the Web Performance Working Group, because we're not shipping a Chrome API, we're shipping a web platform feature. So what do we want? We want everyone to implement and ship the new feature, right? We address issues that are surfaced on the GitHub repository; that's one example there. In addition to that, we monitor usage and crashes. You can't tell from the screenshot whether that's usage or crashes, but it's going a little upwards, so I'm going to say it's usage, right? And we also remove features that do not have multi-implementer support and have very little usage. That is to say, even after we ship a web platform API, we may change our minds in the future if we see that it was actually a failure, because sometimes, like, everyone makes mistakes, right? Even us. So, yeah. In the meantime, while other engines haven't shipped a feature, web developers can feature-detect it, as in the sketch below.
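A careful consumer of a not-yet-universal API might use a guard like this (a sketch; the helper name is made up):

    // Only observe Element Timing entries where the browser supports them.
    function observeElementTiming(callback) {
      if (!('PerformanceObserver' in window)) return;
      const types = PerformanceObserver.supportedEntryTypes || [];
      if (!types.includes('element')) return;  // unsupported engine: do nothing
      new PerformanceObserver((list) => list.getEntries().forEach(callback))
          .observe({ type: 'element', buffered: true });
    }

    observeElementTiming((entry) => console.log(entry.identifier, entry.renderTime));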
So, a very simple process; here is a summary. I don't know if you can see it very well, but I'll try to be concise in summarizing it. The idea is: you have an explainer, and the point of the explainer, again, is to present the problem. After you have presented the problem in the explainer, you can start socializing your idea, in the W3C Web Performance Working Group or in WICG. Then, once you have socialized the idea (well, we don't partner a lot with people to develop the original explainers, but it would be great to partner, right, to share the work), perhaps you partner with people that are really interested in the proposal, and together you write a more detailed explainer and move it to WICG. In addition to that, you can request a TAG review and send an intent to prototype, saying: we're ready, because we have a concrete proposal for how to solve this problem, and we're going to start implementing behind a flag. After you've done all those steps, you implement behind a flag, and while you implement, also very important, you write web platform tests, so that not only Chrome knows how to implement the API. In addition to that, you write a draft spec for the same reason, and then do a launch review; that's more internal, but it helps surface potential problems. After all of that is more or less done, you can start an origin trial and perhaps get some feedback from it. In addition to that, keep polishing your implementation, spec, tests, and whatnot.

Finally, once your API is in good shape, and once developers are like, "when are you shipping? Give it to me now," you can send an intent to ship, because you have concrete evidence that this is going to be useful as a web platform feature, and perhaps you already have concrete signals from other browsers saying, yeah, we think this is interesting. Sometimes they may disagree, but that doesn't block us from shipping. Once you get approval on that, you can actually release the feature to all users. And like I said, the work is not done even then. There are some steps after you release the feature: cleaning up the code, and encouraging other browser vendors to actually ship the feature if they haven't done so already. We're not always the first ones to ship a feature, by the way, but almost always. We then monitor usage, right? Make sure it's actually being used by web developers, because if it's not, then what's the point, right? And that is the end of my talk.

Oh, something's happening. That's like the signal that you're done, I think. Is it?

How many quarters is my manager willing to let me work on something like this? I would say I'm more of a multitasking kind of guy. Web features can take a very long time. As you saw when I showed the multiple iterations, that's multiple people writing up similar ideas until eventually someone actually pushes one forward and ships it. So it can take a very long time from when the initial explainer is written to when the feature actually ships. In terms of actual intensive work on it, maybe a quarter could be reasonable. Good question.

Why are new APIs always in Chrome? That's a good question: why do other browser vendors not implement many of these APIs? Well, it depends how you define standardized, but the main answer is resources. Chrome has more resources than other browser vendors, both Mozilla and Apple, at least for the web platform, right? So they choose to prioritize some things and not others. In particular, performance is perhaps something they don't prioritize as much as I'd like them to, right? Because, well, I work on performance, and I'd love to see them ship the APIs shortly after we do. That would be great. But that's not the case, because of prioritization and the resources they have. One way to help them prioritize is by shouting at them when they don't have the APIs implemented; I hear that works wonders. But yeah, the reason is resources and the prioritization they have.

Do we do any snapshots, like from real websites? Yeah, we do have a bunch of lab tests, and testing in the lab is really hard, because unfortunately the web is not very sequential, right? With so many resources arriving at different times, there can be multiple race conditions, and the truth is that loading a website twice won't necessarily give you the same waterfall. We don't use WebPageReplay, I think, but we do have our own lab setup where we can gather traces from lab page loads, and then we analyze those traces, and that helps us catch regressions in our own Chromium code. So when things get obviously much slower, we are able to use the lab to detect that before it reaches real users, which is one of the main benefits of the lab, as well as less noise, of course. So yeah, we do use that. I'm not the person most familiar with it, but if you want some tips, I'm happy to connect you with people who know more.

Sorry, why is vendor... oh, yes.
The idea is that we want to make features easy to use, not hard to use. So yeah, I understand what you mean. Before, APIs had a vendor prefix, but then basically the problem is that web developers don't test in all browsers. So browsers with less market share would just copy the predominant prefix, and that just looks bad, right? Instead of putting their own prefix on it, which nobody is going to use, they would just copy the predominant prefix, like webkit-something. And why would Firefox want to ship something that's named webkit-something, right? The idea is that these APIs are for the web platform, not for WebKit or for Blink specifically.