Hey, Jeff, John, good to see you. How are you? Right, we're doing well. Properly caffeinated. Yes, we are. One second, my speakers seem to be, hold on, they're coming out the wrong place. Why do the audio settings on these things never remain the default? It's random, it's always random. Okay, there we go. Now it's working. Terrific. Perfect, how's it going, guys? You sound good. Can you share your screen? I sure can. One second, let's grab that. There we go, and we have screen sharing. Can you all see that? Yes, there we go. Terrific. Fantastic. And let me just move that out of the way. I think we're set. Excellent. The screen is yours, Benjamin. There we go, all right. Well, welcome everybody who's watching, wherever you are in the world. Good morning from the UK. My name is Benjamin Howarth. I'm an ASP Insider and an independent ASP.NET consultant, and today I'm going to be discussing browser mechanics for ASP.NET developers. Normally this is a longer talk that I give at various meetups and conferences about non-.NET tools for measuring website and web app performance, and in fact Jeff's been kind enough to host me on his Twitch stream before to discuss several of these tools. So we're going to delve straight into some specifics. We're going to discuss, firstly, what web performance really is, which I think can be grouped into three distinct buckets: the HTTP requests themselves, how frequent they are and what sort of size they are, along with things like network latency; how long it takes the server to render something, whether that's a JSON API endpoint or HTML and CSS; and browser load and paint, how long it takes your web browser, be that on a tablet or a mobile, to draw what's being delivered and present a working app or site to your user. 
So there are a number of different tools that exist within this space, and we'll get straight into how we go in and measure performance. On the front end, we're going to briefly touch on a project called Lighthouse in Google Chrome, and a great hosted service, initially started by Google but now an open source project, called WebPageTest. On the server side, we're going to discuss JMeter and MiniProfiler in the ASP.NET sphere. So without further ado, let's get right to it. I'm going to jump straight into an old project that I've got. This is a website I used to look after called verbia.com. It got taken offline last year. It runs on Umbraco, it runs using old Web Forms technology, and it really hasn't been optimized for mobile at all. And I like to use this as a great example of what Lighthouse can do. Now, if we use the F12 DevTools, we'll find all the usual tabs. Let me just maximize this. Elements, Console, Sources, Network. And over the last few months and years, there's this new tab that showed up, which is called Audits. And this is Google Chrome's Project Lighthouse. This will run a number of different tests on your site or your app. You can choose whether you want performance, progressive web app, best practices, accessibility, and SEO. You can also apply throttling, so if you're expecting your users to be working on 3G or 4G networks, you can actually simulate a network slowdown to see what your users are experiencing out in the wild. You can also choose between desktop or mobile emulation. And what we're going to do here is run this test now. I'm just going to move this over to the side so you can see what it looks like. And as you can see, it's now actually using a mobile render to try and load the page and gather some metrics about it. 
Interestingly, all the tools I'm going to demonstrate, or the majority of them, with the exception of MiniProfiler, can also be run in an automated environment, because manual performance testing is nice, but automated performance testing, alongside your unit testing and integration testing, should be something you're considering as part of your application build. So we'll notice that this scores a 40 out of 100, which isn't particularly good, but it does give us some places to improve on things. For example, we have, let's see, serve static assets with an efficient cache policy. These are items that are being served up, be they pictures, CSS, or JavaScript, and basically they're not being served with an Expires header or a Cache-Control header. What this means is that your browser does not understand that it should be trying to hold onto these resources between page navigations, and as a result it's going to try and load them every single time, and that's just going to use more and more bandwidth. For example, this 340-kilobyte CSS file is going to be loaded on every single separate page load as you navigate around the site. So that's a pretty big overhead that we're looking at there. We've also got some other bits like removing unused CSS. Turns out we can actually save 328 kilobytes; there's a large amount that can actually be removed from that CSS file. And, let's see, eliminate render-blocking resources. So this will be a lot of CSS that needs to be modified to avoid interfering with the paint of the page. What else do we have? Let's see, reduce JavaScript execution time, because we're using things like jQuery. And in fact, I think this website's a really great example of something done badly. It's got both jQuery and a legacy UI library called MooTools in there, and both of those are creating UI overheads which shouldn't necessarily be there. 
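To make the cache-policy point concrete, here's a small illustrative sketch (not part of the talk's demo, and far simpler than real browser heuristics) of the decision a browser makes: with no Cache-Control, Expires, or ETag on a response, it has nothing to validate or reuse, so the asset gets re-downloaded on every navigation.

```python
def is_cacheable(headers):
    """Rough check: can a browser reuse this asset between page navigations?"""
    cache_control = headers.get("Cache-Control", "").lower()
    if "no-store" in cache_control:
        return False
    if "max-age" in cache_control or "public" in cache_control:
        return True
    # With no Cache-Control, an Expires date or an ETag still allows reuse
    # (the ETag via a cheap revalidation round-trip instead of a full download).
    return "Expires" in headers or "ETag" in headers

# The audit's 340 KB stylesheet ships with no caching headers at all:
print(is_cacheable({"Content-Type": "text/css"}))                   # False
print(is_cacheable({"Cache-Control": "public, max-age=31536000"}))  # True
```

Every `False` here means another full download of the same bytes on the next page view.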
And then it will also give you a list of things that have passed as well: properly sized images, minified CSS and JavaScript, and so on and so forth. Now, this is built into Chrome, but it can also be run, as I mentioned, as part of a CI/CD pipeline, and it's something I highly recommend. So that's Lighthouse. The next thing we're going to look at is WebPageTest. And if we quickly just nip back here. So this is WebPageTest. It's a free online hosted service; as I mentioned, it was originally started by Google. What it allows you to do is put in a URL and then choose a location from where you want to test, including lots of different profiles. So you can choose Android, desktop, and iOS. You can choose Chrome, Firefox, Edge, and IE 11. You can choose different locations. In fact, if you want to target users in a specific location, you'll notice that there are endpoints all around the world: Africa, the Middle East, Asia, Oceania. Let's just go with London, UK, and let's go with that one. And this is a website I used to work on for the Formula E motorsport championship. So we're now going to hit Start Test. Oh, and now it wants a captcha verification. There we go. And let's run this. What this will be doing is running three separate instances of a test, and then it will be measuring all sorts of performance issues, such as large images, using a CDN, minification, all the things that Lighthouse individually tests for. But there's a great feature that's part of WebPageTest, which is something that a lot of sites and projects don't necessarily think about, and that is the actual bandwidth cost to the user. This is a very, very important metric which we'll come to in just a second, because the average web page over the last few years has just grown and grown and ballooned in size. We're now looking at the average site constituting, I think, three megabytes of code. 
And that's JavaScript and CSS and assets. And that is larger than the original Doom game on floppy disk. In fact, Jeff and I coined the expression: how many Dooms is a web page? How heavy is it, in number of Dooms, in terms of bandwidth size? So as this runs, we now get the waterfall breakdown here, which is the same as you get in DevTools when you're looking at the network operations. It tells you what's blocking, what's not, how long things are taking. And yeah, first view in 6.9 seconds. That's particularly slow. We'll just wait for this to finish running through and see what that second view looks like. Oh, that does not look like it's improved much either: 6.91. And hopefully we should get our third result coming through just now. Now, I picked this site as a particular example because Formula E is a motorsport championship; it's designed to be serving up big images, you know, very high-resolution, fast-paced action images for media, press, and fans. So here we go. Yes, if you look at the actual breakdown in terms of bytes, 70% of the entire home page is just images, and that's two meg. Now, the feature I want to talk about is, as I mentioned, the cost. Here we go. And this opens a little project called What Does My Site Cost?, and this is a great feature because it actually demonstrates what your site costs an average consumer on a data plan in various different countries around the world. So for example, on a post-paid plan in Canada, that page would potentially cost 40 cents to a user. And that's pretty expensive: if you haven't got a good caching policy, within three or four pages you've already racked up a Starbucks coffee. So this is a fantastic feature, and WebPageTest also has a Node wrapper around it, so you can again integrate it with your CI/CD testing. And we'll come to that a little later on. 
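The "how many Dooms" measure and the data-plan cost are both simple arithmetic over page weight. A quick sketch, taking the original Doom at roughly 2.4 MB (an approximation) and using a made-up per-megabyte price; looking up the real per-country prices is exactly what What Does My Site Cost? does for you:

```python
DOOM_MB = 2.4  # rough size of the original Doom on floppy disk (approximation)

def dooms(page_mb):
    """Page weight expressed in Dooms."""
    return page_mb / DOOM_MB

def cost_per_view(page_mb, price_per_mb):
    """Bandwidth cost of one uncached page view on a metered data plan."""
    return page_mb * price_per_mb

# A 3 MB average page on a hypothetical plan charging $0.13 per MB:
print(round(dooms(3.0), 2))                # 1.25 Dooms per page view
print(round(cost_per_view(3.0, 0.13), 2))  # $0.39 per uncached view
```

Without caching, that hypothetical $0.39 is paid again on every navigation, which is how three or four pages add up to the price of a coffee.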
But this is an important metric that a lot of people do not consider when they're building their sites: the actual cost to the consumer. Especially if you look at places like Sub-Saharan Africa and the Indian subcontinent, you'll find that 75% of mobile users are still only on 2G or 3G data plans, and as a result they may not necessarily have the sort of data plan that you or I do in the Western world. So if you are looking to target those sorts of audiences, tools like this really put a fresh perspective on what that performance, what that byte overhead, is actually costing your consumers for using your website. And it's an important one to bear in mind. It offers a fresh perspective on how much it's really costing you in terms of development, and in terms of potential user attrition if you're losing users because your site is quite literally too expensive for them to access. I believe YouTube did an experiment with an app called YouTube Light, and they noticed that once it had been deployed, the average load times had gone up. Now, the reason was not that the service had gotten slower, but that people who previously couldn't access the service at all were now prepared to wait a few extra seconds for their videos to load. And that's a very important thing: the average response time may have gone up, but the number of users they got increased exponentially. So this is an important thing to take away. All right, so we've covered some front-end tools here and discussed things like minifying your CSS, some basics like header management and cache control, and the actual overall cost to your users. What about on the server side? Well, the first thing we're going to talk about is another little tool that I have, where are we, called JMeter. Now, JMeter is a project from the Apache Software Foundation. 
It's mainly used as a load testing tool, but it has so many versatile uses; it's quite remarkable what it can achieve. So we're going to quickly pop this open. All right, so what we do is we set up a test plan, and this is basically stored as an XML file. And what we can do is set up, here, a number of threads, or users. So we can simulate anything from 10 to 100; theoretically speaking, we could simulate 100,000 users. What's great is that this test plan does not just have to run on a single machine or a single server. You can effectively set up slave agents to run tests. So let's say you want to run 100 users on each machine. You could set up 10 virtual machines in the cloud, and suddenly you've got 1,000 users, each looping over a single request 100 times. That gives you 100,000 requests to your website with some very, very low overheads. And what you can do is simply set up an HTTP request, and we've got the name of our website in here and the path we want to test. Interestingly, the number of tests that we can actually conduct is quite substantial. If you look, there is a sampler for, here we go, the common one, HTTP requests, but you also have FTP requests, JDBC requests, LDAP requests, and SMTP and TCP samplers. If you really want to go down to low-level traffic and test your infrastructure, the amount of information that JMeter will allow you to collect from all across your infrastructure, whether it's API endpoints, whether it's testing that your load balancers are correctly spinning up extra instances in the cloud, whether your services are correctly serving even the right bytes, you can test just about everything you want using JMeter. I've yet to meet an example where JMeter could not successfully monitor some sort of web traffic. It's very, very low level, very powerful, and completely open source and extensible. It also allows you things like conditional controllers. 
So you can go if, or loop, or while, or foreach; there are lots and lots of different ways you can run very customized and very extensive testing. So we'll just quickly have a look at the graph results, and we'll just quickly run this here. And let's have a look, start with no pauses. Over here in the top right, we can see that the number of running threads is 100, and we start to get some graph results in. We can now start to see how quickly the responses are coming back from that Formula E website. So we've got the average and the median and the deviation, and that seems to be doing pretty well in terms of performance. The average response time is about two seconds. Given the amount of traffic that we're generating, that's quite impressive. We'll just go ahead and terminate that, and then we'll kill this, because that normally ends up with a thread lock. Now, JMeter is fantastically powerful, and you can save this file, as you can see by the file extension up here, as a JMX file. It's a custom XML file that you can include within most of your CI/CD setups, and in fact most common cloud providers have some sort of JMeter agent setup. So you can have a master server that runs the report and coordinates with the slaves; it sends the test runner over to a slave, and then the slave will run your test accordingly. So as I say, if you want to be able to successfully test that your app can handle a million, two million, five million people within a set period, you can do that. And more importantly, it's not just limited to web pages. You've got every single option for every single HTTP verb within the request, so you can test API endpoints. Maybe you want to test that your API can handle 100,000 inserts; let's say you're expecting a rush of signups if you're a new startup. You can go ahead and extensively test that. So that gives you JMeter. And lastly, we're going to quickly go over to MiniProfiler. 
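The distributed-load numbers above are just multiplication, and the average, median, and deviation columns in the graph listener are standard summary statistics. A quick illustrative sketch (the sample times below are invented for the example, not the live results from the demo):

```python
from statistics import mean, median, stdev

# Total requests = machines x threads per machine x loops per thread
machines, threads_per_machine, loops = 10, 100, 100
total_requests = machines * threads_per_machine * loops
print(total_requests)  # 100000

# Response times in milliseconds, as a graph listener might report them:
samples = [1800, 2100, 1950, 2400, 1750]
print(mean(samples), median(samples), round(stdev(samples)))  # 2000 1950 262
```

Watching the deviation as well as the average matters: a 2-second mean with a huge spread means some users are having a far worse time than the average suggests.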
Now, MiniProfiler is a fantastic tool built by the guys over at Stack Overflow, and you can see it in action over here on the left. What it does is it injects some CSS and JavaScript if you're running in debug mode, and it will then tell you, view by view in your MVC views, how long those views are taking to actually render. And it will do that for every single request that's been run over time. So these are all individual requests, and it's telling me how long each one took, and then it will also tell me how long the entire overall page took as well. So if I reload this, now this will probably take a while because I suspect the app pool has just fallen asleep. Ah, there we go. And suddenly that's a lot slower. Here we go. And we can start to see even things like duplicate queries; we can start looking at query execution time. There's a lot here that we can look at. Now, this is a site based on Umbraco, which is why I don't look at a lot of this in too much detail, because a lot of the duplicate queries have good indexing on them. But if you're building your own custom site with things like Entity Framework, there's also support for RavenDB and various other providers. This is one of my favorite tools for diagnosing problems within a site: it tells you where it's taking however many milliseconds to render certain partials or certain views, and then you can go into Visual Studio and look in some extensive detail at what might be causing those slowdowns. There is another tool that I want to give an honorable mention to here, which is called Glimpse. Now, Glimpse provides you with a dashboard, a bit like ELMAH does for ASP.NET logging. Unfortunately, it doesn't appear to have been updated in the last couple of years. It does have a Node.js plugin, and it does have an ASP.NET plugin; I'm not sure if that means it's just stale from lack of contributions from the community. 
But I'm not sure if it's still out there, so if anyone does know anything about Glimpse, please leave something in the comments; that would be great. Now, I mentioned how we measure all these things, and we've got loads of different metrics from Lighthouse, from WebPageTest, from JMeter, and from MiniProfiler. So I'll quickly go over a few bits and pieces, some easy things which should be straightforward to do to improve your overall website and app performance. Firstly, make your requests as small and as infrequent as possible. Use things like cookie-free domains. Move your static assets over to blob and CDN storage; it's cheaper than a coffee per month. Gzip and deflate static resources if you can; it will save you probably between 20 and 40% on your resources, and in fact Azure's own CDN supports this out of the box. If you want to gzip or deflate dynamic resources, there are a couple of security risks involved in that, so it's generally not recommended. I also want to briefly touch upon HTTP/2, which is effectively the upgrade to HTTP/1.0 and 1.1. It uses a single connection, but multiplexes multiple resources over that one connection. Now, whilst I think that's great as a concept, it's only available on later versions of Windows Server 2016 and Windows 10, so on older platforms it's not available and won't be available. So it does not necessarily solve everything, because not only does your server have to support delivering content over HTTP/2, but so does your client. And if you're looking at devices with older browsers, older versions of Android that maybe aren't being updated, again, I refer back to 75% of people still only being on 2G or 3G, and the chances are they've probably got feature phones rather than what we'd expect to be modern smartphones. 
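That 20-40% figure is easy to verify for yourself. Here's a small sketch using Python's standard gzip module; text assets like CSS are highly repetitive, so in practice they often compress far better than that conservative estimate:

```python
import gzip

# Repetitive text, standing in for a typical stylesheet:
css = ".button { color: #333; padding: 8px 16px; margin: 0; }\n" * 200
raw = css.encode("utf-8")
compressed = gzip.compress(raw)

savings = 1 - len(compressed) / len(raw)
print(f"{len(raw)} -> {len(compressed)} bytes ({savings:.0%} saved)")
```

The same trade-off explains why compressing dynamic responses is riskier: attacks like BREACH exploit compression of pages containing secrets, which is why the advice above applies to static resources.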
So whilst HTTP/2 is a great step in the right direction, you can't necessarily rely on it to solve everything you need. Caching on static resources is straightforward. You can use ETags, which are generally a hash representation of your file, so that if the hash changes, your browser knows it needs to re-request that resource. Cache-Control requires a little bit of configuration if you want to use that header; private versus public can catch you out, especially if you're in an enterprise environment and you're using a proxy in the middle which could accidentally cache things without you knowing about it. And the Expires header is the one that clients fall back on when Cache-Control isn't set. All right, and reducing asset size, this should be a given these days. Let's have a look. So you want to minify your CSS and JavaScript. If you're building single-page apps, my suggestion would be: don't lazy load unless you know it will work reliably. A good example is using an app, let's say, on the London Underground or the New York subway. If you've got Wi-Fi in the station, that's great, but you've only got it for two minutes. If you lazy load your views in and then go through a tunnel, there's no Wi-Fi, and the app ends up being a broken experience. Use an in-memory cache to boost speed: if you need a cache to store data in temporary places, whether that's RavenDB, Redis, or Memcached, use it. And send 304s wherever possible, which is simply a 304 Not Modified. So again, I refer back to the mechanism of ETags, and I think the latest version of IIS 10 supports this out of the box. In terms of server rendering, we've discussed JMeter and MiniProfiler: JMeter for isolating potential issues within your load-balancing infrastructure and your server response times, and MiniProfiler for isolating those instances down within your Razor views. 
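The ETag-plus-304 handshake can be sketched in a few lines. This is a simplified illustration of the mechanism (using MD5 as the hash; real servers choose their own scheme), not IIS's actual implementation:

```python
import hashlib

def make_etag(body):
    """ETag as a hash of the file contents: it only changes when the file does."""
    return '"' + hashlib.md5(body).hexdigest() + '"'

def respond(body, if_none_match=None):
    """Return 304 when the client's cached copy is still current, else 200."""
    return 304 if if_none_match == make_etag(body) else 200

asset = b"body { margin: 0; }"
tag = make_etag(asset)
print(respond(asset))                          # 200: first visit, full download
print(respond(asset, if_none_match=tag))       # 304: cached copy still valid
print(respond(b"body { margin: 1em; }", tag))  # 200: file changed, re-download
```

A 304 response carries no body, so the client pays only the round-trip, not the download, which is exactly why sending them wherever possible is cheap bandwidth savings.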
One other word of caution, based on experience: using the dynamic keyword, whilst it might be popular for handling JSON objects, can cause a massive memory overhead. I've worked on projects where a home page used dynamic to cast content out of a database into what was basically a five-page carousel, and because it was using calls to dynamic, it was taking roughly 3,500 milliseconds just to produce the HTML. Once we'd replaced the dynamic calls with calls to static types, the total render time for the entire page was reduced to about 142 milliseconds, which is roughly a 96% improvement. Dynamic can have its uses if you don't know what the contract is, for example with social media APIs, but generally speaking it should really be avoided from a performance perspective. All right, and lastly, ongoing monitoring. As much as it's great to measure your performance on an ongoing basis and make sure that you have targets you want to hit as part of your CI/CD, you also want some ongoing monitoring. So there are application monitoring tools such as New Relic, which I've used on a number of projects and personally quite like. Gibraltar Software's Loupe is a fantastic project which builds on PostSharp's AOP framework and is mainly designed for desktop environments, or environments that don't have MVC, but it allows you to log up to the cloud; I believe they also have a free tier. And of course, this wouldn't be .NET Conf without mentioning Azure's own Application Insights, which I believe also has various free tiers as well. In terms of automating your performance testing, I have discussed Lighthouse, WebPageTest, and JMeter, and I'll briefly just go over to the NPM package repository to show you that Lighthouse can actually be run as a Node module. So if you take a look at the Node CLI down here, you can install it as a global tool and then run it against one or multiple URLs. 
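As a sketch of what that CI step might look like, here's a small helper that assembles a Lighthouse CLI invocation. The flags shown (`--output`, `--output-path`, `--chrome-flags`) are the commonly documented ones, but verify them against `lighthouse --help` for the version you install:

```python
def lighthouse_cmd(url, output="json", output_path="./report.json"):
    """Assemble a Lighthouse CLI invocation suitable for a CI build step."""
    return [
        "lighthouse", url,
        f"--output={output}",           # json, html, or csv
        f"--output-path={output_path}", # where the report lands for CI to read
        "--chrome-flags=--headless",    # no visible browser on a build agent
    ]

# This list would be handed to subprocess.run(...) on an agent
# that has Lighthouse installed globally via npm:
print(" ".join(lighthouse_cmd("https://example.com")))
```

The build step can then parse the JSON report and fail the build if, say, the performance score drops below a threshold you've agreed on.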
There are lots of different options for what you want to do: whether you want to save those logs, whether you want any particular Chrome flags to be run, and, let's see, an audit mode which will process saved artifacts. You can then set an output path, so you can specify JSON, HTML, or CSV output, and I believe most modern CI/CD environments support reading the JSON output from this. And of course, WebPageTest has an NPM package that wraps it as well, so you can then create automated WebPageTest runs from locations all around the world, whether you want to test in multiple locations, depending on what sort of, excuse me, what sort of information you need and how frequently you want to run tests. For this, you do need to request an API key, and generally speaking they will let you use the global service with an API key. However, if you're performing an extensive number of tests, it's recommended that you download WebPageTest from GitHub yourself and run your own private instance to do your own automated testing. So there are plenty of options. You can specify an individual server, you can specify an out file as well, different locations, connectivity profiles, as you can see: 3G slow, 3G, 3G fast, 4G, LTE, EDGE, and so on and so forth. The number of runs you want, a label if you wish to label your tests; there are lots and lots of extensive options here for you to really put your web app or your website through its paces. So unfortunately, in the words of Jimmy Kimmel, apologies to Matt Damon, we've run out of time. I do have this in a Docker image that's not quite ready yet, but if you want to follow me on my GitHub, at some point, I will make sure that that's released later on today. All right, that was a quick blast around the various tools of the open source arena in terms of what you can do for performance measuring. 
I really hope that was informative for everybody, and if you want to find out more, as I say, follow me on GitHub and follow me on Twitter. Oh my gosh, Benjamin, that was great. You know, we got a number of comments in the chat room as you were showing the tools: hey, how does this work with Blazor? And folks, we're following along with you and trying things with the WebAssembly stuff. So certainly some of the things you're saying about JavaScript and CSS definitely apply going forward as we look at WebAssembly. Oh, absolutely. And in fact, one of the things I've chatted about, so Steve Sanderson is working on WebAssembly, and one of the things, I haven't had a chance to get into the performance side of WebAssembly yet, but one of the things I have noticed is that there are some issues around lazy loading and selective assembly loading, and there's a lack of bundling and stuff in there at the moment. And I do actually want to try and contribute to that at some point. So I'm not 100% sure how this is... I mean, as much as these tools will be relevant for Blazor, what I want to do is get into some of the detail to find out, when you actually do a full deployment, what sort of build mechanisms can be tied up with this stuff to make sure that your bundles are as small as possible. Because obviously, we are going to want to start delivering Blazor applications in the same sort of way, and we're going to want to measure that sort of performance as well, using tools like Lighthouse. So yeah, there is something to be said there. The answer is I don't know yet how Blazor handles that, but I want to find out, and if you want to follow me on Twitter or GitHub, I'm fairly certain I'll be digging into one of those at some point. There's your Twitter and GitHub over there. That's the one. Thanks so much. Right on there. And fantastic samples. I love the real-world application. I mean, that's great. Thank you. 
No, hey, no worries. I mean, you know, one of my specialties is, basically, I've been working for the last five years or so on high-traffic, high-performance ASP.NET sites, going from, let's say, 10,000 users up to five, six, seven, ten million, and it's something that's really missing. You know, we have all these fantastic .NET tools to build all these amazing things, and then we go, wait, hang on, how do we actually get the scaling right and still manage to keep our hardware costs low? As much as the cloud is great in terms of scalability, you still need to think about the balance between what your budget allows for what you're running, along with the budget of your consumers. You know, if your site is fast for a consumer on a slow phone, it'll be fast on a fast phone; that's basically the rule. And that's why I keep one knocking around here: a really old feature phone that's got something like Android 3 on it and only supports 3G, and I just go, right guys, if it works fast on here, it'll work fast anywhere else. Well, thanks so much, Benjamin, and we're so glad you could join us for .NET Conf 2019. And now I think it's time for a code party. It's time for a code party. We'll catch you later, Benjamin. Fantastic. Thanks, guys, real pleasure. All right.