So, yeah, I'm from the Los Angeles area, and today I want to talk to you about measuring performance with the User Timing API, which works in Chrome, and there are polyfills for other browsers if you want to use it in other places. This is more of a quick story about my tinkering and experimentation. I blog about these things at pixelhandler.com, so you can actually see the gist of what I'd like to share today already posted over there. I'm going to jam through some information that I've already shared at our local meetup.

My curiosity about performance is combined with a conviction that performance matters, so I should start measuring things, take some observations, and make decisions based on what I'm measuring. I'm not very scientific about it yet; I've learned that I can get better. So this is also kind of a demonstration of learning from yourself and how you can improve. And I've had some support from the people that are staying with us at our Airbnb house. I wanted to compare Handlebars 1.3 and HTMLBars and see if there were any gains in the application that I tinker with on my evenings and weekends. And in case you missed it, we have HTMLBars now. Thank you to the folks who did that for us; I really appreciate it. So that was my curiosity: let's see what this thing can do for me.

So where should I begin to measure the performance metrics for rendering a template? We've been talking about Ember's primitives, and we have a router that has this great hook for rendering the template, and apparently everything's done and ready to go when you enter this hook. So right there I can just mark the time and measure it. But what about after? When is it done? It has to be done at some point. Well, great: the run queue kind of tells me, as I enter the afterRender queue, that stuff's done.
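The two measurement points described here, marking when the route's rendering hook fires and measuring once the afterRender queue runs, can be sketched in plain JavaScript with the User Timing API. The helper names below are my own, not the speaker's actual mixin:

```javascript
// Minimal sketch of the two measurement points: performance.mark and
// performance.measure are the User Timing API; the helper names
// (markRenderStart, measureRender) are illustrative only.

function markRenderStart(name) {
  // Call this when entering the route's render hook.
  performance.mark(name + ':start');
}

function measureRender(name) {
  // Call this from the afterRender queue, once rendering has settled.
  performance.mark(name + ':end');
  performance.measure(name, name + ':start', name + ':end');
  const entries = performance.getEntriesByName(name, 'measure');
  return entries[entries.length - 1].duration; // milliseconds, sub-ms resolution
}

// Usage sketch (in an Ember route you would schedule the second call,
// e.g. with Ember.run.scheduleOnce('afterRender', ...)):
markRenderStart('render:index');
// ... template renders ...
const duration = measureRender('render:index');
```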
So I can just capture that measurement and know that the timing should be useful. Should I just use a timestamp, or is there something better? That's why I started searching, and I found the User Timing API, which has high resolution timing, sub-millisecond resolution, and I thought: that's probably faster than I can think, let's go with that. Later I realized, as I was capturing these things, that I could just send them to Google Analytics, which can do nice reporting and charts for me. So I'm going to send some of that data there as well as capture it in my own database.

So I created a mixin, and I'd like to share it with you real quick because it's really easy; you may want to try it. It's super simple: I need to mark the beginning and the end, and there are two points, when I enter this hook and then in the afterRender queue, and I just need to capture this measurement. I also have a utility; sorry, let me skip ahead real quick because I'm going a little too slow. There are these utilities, right? Like: does window.performance exist, so I can fail silently if it doesn't, and if it does, I can use that API to mark and measure. You can learn more about that just by searching online; I have some links I'll share with you after. And when I want to report it, I just post it to my API, sending the measurement as a JSON document, and also send it to Google Analytics for tracking purposes. I'm not going to go through all this code now because it's available online on my website. You'll find the links for these performance APIs as well, and some great tutorials from HTML5 Rocks; that's kind of where I started too. I decided to capture a few extra things that sound very analytical, like when the application was ready, thinking that might be useful.
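The utilities described here, feature-detecting window.performance so the code fails silently, plus reporting the measurement as JSON and to Google Analytics, might look roughly like this. The function names, the endpoint URL, and the analytics call are illustrative assumptions, not the speaker's actual code:

```javascript
// Guard every User Timing call so environments without the
// performance API fail silently, as described in the talk.
const perf = (typeof performance !== 'undefined' &&
              typeof performance.mark === 'function') ? performance : null;

function mark(name) {
  if (perf) perf.mark(name);
}

function measure(name, startMark, endMark) {
  if (!perf) return null; // fail silently when the API is missing
  perf.measure(name, startMark, endMark);
  const entries = perf.getEntriesByName(name, 'measure');
  return entries.length ? entries[entries.length - 1].duration : null;
}

// Reporting sketch: POST the measurement as a JSON document to your own
// API, and optionally forward it to Google Analytics as a user timing
// hit. '/api/measurements' is a hypothetical endpoint.
function report(measurement) {
  if (typeof fetch === 'function') {
    fetch('/api/measurements', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(measurement)
    }).catch(function () { /* fail silently */ });
  }
  if (typeof ga === 'function') {
    // analytics.js user timing hit: category, variable, value (ms)
    ga('send', 'timing', 'rendering', measurement.name,
       Math.round(measurement.duration));
  }
}
```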
Sometimes I dig into these metrics a little more, but this is just a gist of what I'm capturing. What was important for me was this duration value, which gave me something I can compare. So right away you might ask: well, what did I find? Honestly, I can't really tell you, because all I did was observe the findings; I didn't really set up proper benchmarks. Kris Selden helped me realize that yesterday as I was talking to him about it: there's probably a better way to summarize this data, by using normalized data and a geometric mean. All I did was look at what was fast and what was slow and compute some averages, and that might help me look a little deeper. So I'm going to do that next. For now I'm just going to jam through a couple of quick measurements to show you what I saw, without really talking about them, because, like I said, at this point they're observations and not necessarily conclusions. But either way, now I have something in my app that matters to my users, something I can at least observe and at some point do something about.

I think I'm at time. Is it five? Okay. All right. There's more on the blog, so you can read the rest of it. Thank you, guys. I appreciate your time.
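The suggested summary technique, normalizing the measurements and taking a geometric mean rather than a plain average, can be sketched like this. The sample numbers are made up for illustration:

```javascript
// Geometric mean: the n-th root of the product of the values, computed
// via logs to avoid overflow. For normalized benchmark ratios this is a
// fairer summary than an arithmetic mean.
function geometricMean(values) {
  const sumOfLogs = values.reduce((sum, v) => sum + Math.log(v), 0);
  return Math.exp(sumOfLogs / values.length);
}

// Normalize each HTMLBars duration against its Handlebars 1.3 baseline,
// then summarize the ratios (hypothetical numbers, in milliseconds).
const baseline  = [120, 80, 200]; // Handlebars 1.3 render durations
const candidate = [90, 60, 150];  // HTMLBars render durations
const ratios = candidate.map((v, i) => v / baseline[i]);
const summary = geometricMean(ratios); // below 1 means the candidate is faster
```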