Yeah, thank you. Good morning. Welcome to the Clue Stage, which we don't know why it's called like that. I want to talk about performance awareness, and then a bit about optimisation as well. Once upon a time, pre-COVID, before my two children, before a lot of other things, I studied computer science, and as part of that we did a practical lab where we improved an operating system. So, managing I/O, memory, paging, scheduling tasks, and so on. As part of that, there were specific tasks and specific places in the operating system's life cycle, whatever you want to call it, where some things were measured. We didn't know that in advance. I think we were teams of two people, maybe 10 or 12 teams, and the team that I was on won. That surprised me because, again, we didn't know about the measurements, and we didn't do any optimisation. What we did do when we built the operating system was just think about what could happen: what kind of different states could we be in, where do we want to go, and what kind of overhead could that be? If there are two or three things, is there an order in which they should be checked?

So, to make clear what this talk is not about: it's not a front-end performance, SEO, Lighthouse kind of talk, which you might expect when you hear "performance", at least not specifically. And this is also not a talk where I tell you these are the things that you should always do in any kind of project. That's not how this works. So, what is this going to be? It's a set of best practices and guiding principles that I discovered myself or adopted from other people, and most of them are common sense. So you won't learn a lot of new things, I would say. But being reminded of things that we should know and should follow is also good. This is about adopting a performance mindset, not applying performance tactics.
It's thinking about how to write things, systems, software, and build hardware that perform well. There are code examples, but they're just for illustration. If you can just copy them and your use case allows for that, great. What you really should try to do is understand the concept of what we're doing there and then translate that into your context, your project, your stack, whatever you're doing. Depending on the context, some things that I will show or talk about are not as important, or maybe not true at all; they just don't apply. For example, do you have garbage collection? Then maybe you don't have to care about memory as much as in other contexts. Is there some form of compiling, transpiling, or optimizing the code? Maybe you don't have to care about things that matter when, say, PHP literally parses the code on every request, unlike other languages.

Also, there are always competing interests in software development. Let's say we want to improve performance, but there are other aspects that play against performance. Software should perform well, but it should also be usable. The code should be readable. You should be able to easily debug the code. You should be able to maintain your software project easily. And then there's the balance between processing resources, or capacity, and storage, or memory. If something should really be quick, you can try to store stuff in memory, but if you don't have memory, it cannot be quick. If you want to optimize, you have to pick one over the other, and there has to be a balance. So, in the end, it's not always about performance. It depends on your context; if you're building an operating system, maybe it is. But what is performance? One of the definitions of performance is how well a task is being executed. What does that mean?
If we're talking about technology, IT, it's for example the response time: how long does it take to execute the task? If we're talking about a system, about hardware, what about the processing speed? How long does it actually take to process something, as opposed to the full request life cycle of reading something, processing, post-processing, and pushing something somewhere? What about the resources? We mentioned processing resources, CPU, GPU; what about the memory? If we're talking about a whole system, how available is your system? And when we talk about music, how do you like the performance of that artist? That's also personal preference. It's based on expectations: what are your expectations, and what do you base "performing well" on? And of course there are many more.

The guiding principles, as I said, are mostly common knowledge. We start with: don't do work too early. When we realize there is work to do, we shouldn't do that work over and over again if nothing changes. Also, we don't need to do unnecessarily complex work. Sometimes the task involves complex things, or maybe multiple things, but if they're not necessary, we shouldn't do them. Also, don't optimize prematurely. I guess you've heard that before. You're building something, and while you're building, or maybe even before, you're already thinking about optimization. That doesn't really work unless you've built it and tested it in some context. However, you should measure performance if you care about performance. That is, you build something, you test it, you use it, and then you measure. Then you know where stuff is not performing well, based on your criteria. Also, we need to clean up after ourselves. That's true in life and in software development.
And of course, if there are multiple tools, multiple approaches, multiple libraries or algorithms, you should know about some of them, and then you need to understand how they differ and which of them is the best or the right tool for the job you're doing.

So let's start with the first one. Failing early is something that's easy, and sometimes it's not done for no apparent reason. One way to fail early is to check the context. That means: should we actually do something here? Consider this (again, just example code). Let's say there's a function, we call the function, and then there is a condition check. We don't know what the condition is here, but it needs to evaluate to true. If it doesn't, we don't need to do anything. We don't need to talk to some remote API. We don't need to write to the database. We just check this condition, which of course may itself involve talking to other resources, but we don't perform our actual task. One example in the WordPress context: you do something to a post, but you're only interested in published versus non-published posts. It could also be that you have multiple checks and multiple conditions: is it a published post or not? Is it a special post or not? And this is where you can start to think about these things. This is not really about optimization; it's just about awareness. You should know about the data in your project. Do we have more published or more non-published posts? More special or non-special posts? Maybe you want to switch those conditions around. A comparison is just a simple equal or not-equal sign, but it is an operation, and if you know that 90% or 95% of your posts are published, maybe you should check that first. Maybe not. But I guess everyone understands that if there is an expensive check, you maybe don't start with that, but with a simple one. Or maybe you do start with the expensive check.
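As a sketch of this condition ordering, here is a hypothetical `shouldProcess` helper. The `status` field, the expensive check, and all the names are made up for illustration:

```javascript
// Hypothetical expensive check; imagine a remote API call or heavy
// computation instead of this trivial property access.
let expensiveCheckRuns = 0;
const expensiveCheck = (post) => {
  expensiveCheckRuns++;
  return post.featured === true;
};

function shouldProcess(post) {
  // Cheap, highly selective check first: if most posts fail it,
  // we rarely pay for the expensive check at all.
  if (post.status !== 'publish') {
    return false; // fail early
  }
  return expensiveCheck(post);
}
```

Calling `shouldProcess` on a draft never runs the expensive check at all; whether to flip the order instead depends on how selective each check is in your data.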
If you know that it will catch, like, 95% of cases, that's up to you. You need to know the project; you need to understand the code base, the context, the data, and how they interact.

What about data? Let's say we have a function that processes data based on some ID, some object, or a URL. We do something, then we get the data based on the post ID that we have, we do something again, we process the data, and we do something again. This could, for example, be a batch process, a CLI command. Where do we get the data? By that point, we may already have done things: written to the database, opened stream connections, who knows what. If we don't loop over data, because we don't have data, why should we do all that? Maybe we need to do some minimal things. For example, if it's a CLI command, we should print something like "checking for data" and "no more data, we're done". So we could check for the data first and then be done, or, as I said, do some minimal messaging, whatever the context requires, so that the person using our software sees that something is happening.

Caching. I expect caching is one of the first things that people think of among the aspects of optimization and performance. We should cache expensive operations. We don't want to spend time or resources on something that we know doesn't change but that we might ask for five times. If you want to bake a cake and you don't have the recipe, you don't call your mom, do the baking while she's on the phone, and then realize, oh, I wanted to bake three cakes, so you have to call again. What you would do is ask for the recipe once and write it down, so you can go back to it again. Caching expensive operations could mean we cache something in the local context. That's, for example, a static variable, or in a class context an instance property. It could even be a hyperlocal cache, just within one function.
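A sketch combining the empty-data bail-out with such a hyperlocal cache; all the names here, including `fetchItems` and `expensiveConfig`, are invented for illustration:

```javascript
// Stand-in for an expensive lookup (a query, a remote call, ...).
let configReads = 0;
const expensiveConfig = () => {
  configReads++;
  return { factor: 2 };
};

// Illustrative batch runner.
function processBatch(fetchItems, log = () => {}) {
  const items = fetchItems();
  // Fail early: no data means no connections, no loop, no writes,
  // just a minimal message for whoever is running the command.
  if (!Array.isArray(items) || items.length === 0) {
    log('No items to process, done.');
    return [];
  }
  // Hyperlocal cache: fetched once, reused for every item.
  const config = expensiveConfig();
  return items.map((n) => n * config.factor);
}
```

With no items, the expensive lookup never runs; with items, it runs exactly once instead of once per item.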
We store something there because in that one function we need to check or use the thing three times. We don't need to ask for it three times; we can just store it in a local variable. Caching could also happen in the session context, or in the request context. For example, in WordPress there are global variables that share context or state within a request. There could even be an actual long-lived cache, object caching for example, and then of course we come to expiration and invalidation, famously one of the hard things in software development.

We should also cache repetitive operations, regardless of their complexity. Again, maybe the data is static anyway, or we want the data not to change. For example, in a long-running process we ask for the thing, then some time passes, and if we ask for the thing again, maybe it has changed. But maybe we want to refer to the version that we had before. Maybe we have conditions based on it: we check for something, branch in one direction, do something, check again, and, oh, now we have to branch over here. That may not be the right thing for the process. Repetitive just means: let's keep this handy. Instead of having, like, three references to the same lookup, we just store it. This could look like: we call a function, store the result in a variable, and we're done. We could also have nested lookups that are not that expensive individually, but if we nest five things and we call that lookup five times, that's 25 lookups. For what reason?

Memoization is nothing but a special kind of caching. We want to cache the output, the result, or the behavior of a specific function for a specific set of arguments. There are examples of this in the WordPress and the web context. Another thing is that we want to avoid intermediaries. Let's look at that.
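Before looking at that, memoization as just described can be sketched in a few lines. This version keys the cache on JSON-serialized arguments, which is an assumption that only works for serializable inputs; libraries like Lodash's `memoize` are more robust:

```javascript
// Minimal memoizer: cache results per argument list.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args)); // the real work runs once per input
    }
    return cache.get(key);
  };
}

// Illustration: count how often the underlying work actually runs.
let lookups = 0;
const cachedLookup = memoize((id) => {
  lookups++;
  return id * 2; // stand-in for an expensive lookup
});
```

Calling `cachedLookup(21)` twice performs the underlying lookup only once; a new argument triggers one new computation.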
So this is a function, or could be part of a function, where we get some data, then we filter the data based on something, then map the data to another data structure, and maybe even apply a filter in the WordPress sense on top. Do you notice anything here? Is there anything that catches your eye? How about this: we do the same thing, but we mutate the variable that we have. We don't need `data` after `filteredData` anymore. We don't need `filteredData` after `normalizedData`. And again: do we have garbage collection? Is there some kind of memory and resource management? Is this in the global scope, or in a function that ends right after this? That all matters. But is there a reason not to mutate the local variable? If you're thinking about debugging: you can still inspect all the states of that piece of data. Alternatively, you can just nest the function calls, mutate the one variable that we have, and be done. We don't need to create intermediary data structures.

For example, if we want to process data and we have an array here, we could do it like this: in lines 2 to 5, we create a new array, take what we have, and add to it, and then in the next iteration we create a new array again. How about not doing that? The same is true for objects. We could create new objects from the structure that we have, but we could also just add to the existing object, or maybe override properties, depending on what we have.

In the WordPress context, and maybe elsewhere as well, there are functions that expect either an object identifier or an object. It's illustrated here as post or post ID. Let's say we do have a post ID. What we could do is call the first action with the post ID, then the second action, then the third. But what we also could do is fetch the object once and pass the object around. That might not make a difference depending on what you're using. For WordPress, it does make a difference.
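The intermediary-free version from the slides might look like this; the concrete filtering and mapping steps are invented:

```javascript
// Reassign one local binding instead of naming every stage
// (data -> filteredData -> normalizedData), so no stage keeps
// an extra reference alive.
function normalize(items) {
  let data = items;
  data = data.filter((n) => n > 0); // drop what we don't need
  data = data.map((n) => n * 10);   // map to the target structure
  return data;
}

// Likewise, push into one array instead of creating a new array
// (via spread or concat) in every iteration.
function collectPositive(items) {
  const result = [];
  for (const n of items) {
    if (n > 0) result.push(n);
  }
  return result;
}
```

The same idea applies to passing one fetched object around instead of an ID that every callee resolves again.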
For example, `get_posts` fetches data from the database, because the literal translation of `get_posts` is: get the post data from the database. There is no caching, and that's by design. So all these functions that take the object or the ID will ask the database for an object that didn't change since we last fetched it.

There's also control over how functions are being executed. One way to do that is throttling the function call or execution. This just means: we will do what you want us to do, but at the pace that we decide on. For example, there is an event listener that reacts to the mouse being dragged over the window or the document. Depending on your CPU, browser, and whatnot, that will fire way faster and more often than you need. So we can throttle this dragover callback and decide: regardless of how many times and how frequently you ask, this will be executed just once within this time frame of, what is it, 200 milliseconds.

Another way to slow down requests or function execution is debouncing. That means: we do this, but only after some cooldown. It doesn't matter how often you want to call it or when the previous call happened; we wait for a specific time, and if you request this action again in the meantime, the waiting starts over. In this example we have a change event listener: you type something in, and implemented naively, you send a request for every single keystroke. That's not what we want. We can instead wait for 300 milliseconds after the last keystroke and only then ask the API. If you're typing slowly, that still means multiple requests, which you should abort (I'm coming to that later), but it's much better than firing 20 API requests for a sentence of 20 characters. You don't need that; you will never use the data you get back.
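Minimal sketches of both patterns; real projects would typically reach for Lodash's `throttle` and `debounce` instead of hand-rolling them like this:

```javascript
// Throttle: run at most once per `wait` milliseconds (leading edge).
function throttle(fn, wait) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn(...args);
    }
  };
}

// Debounce: run only after `wait` milliseconds of silence;
// every new call resets the timer.
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}
```

A throttled mouse handler fires immediately but ignores the flood that follows within the window; a debounced input handler stays silent until the typing pauses.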
This is an illustration in case anyone doesn't know the concepts or the naming of debounce and throttle. At the top you see just random executions of some user event. If we debounce that, we do it once and then again only after the cooldown time; you should see that the time between each execution and the previous one is the same. At the bottom is the throttle: yes, we do it on the first request, but not on the second or the third; we do it again only after the threshold that we decided to use.

Now I'm talking about memory leaks, more or less. One thing that you should do, especially in a dynamic, interactive context, is remove event listeners if you don't need them anymore, if you don't need that request to happen. We need to clean up leftover things that might cause side effects or break things, visually or functionally. One way to do that, for example in a React context: we have a useEffect hook, we attach an event listener, and then in lines 4 through 6 there is this cleanup callback. When the context of this component changes, the cleanup is called, and if the component re-renders and something changed, we may add the event listener again, but the one that we added before has been removed, so there's only ever one. That's the important bit here.

For event listeners there's also self-removal, if we're talking about one-time events. That's kind of new; I think it was introduced three or four years ago in the addEventListener options. It basically means: react a single time to that action on the given target, and then remove yourself.

Related to that is something we saw before without noticing: setInterval. If we repeat something based on a specific interval, we should stop that from happening if the context that required the data, or wanted this thing to happen, is no longer there.
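That self-removal is the `once` option of `addEventListener`. Here it is on a plain `EventTarget`, which exists in browsers and in modern Node, so the behaviour is easy to verify:

```javascript
// A listener registered with { once: true } removes itself after the
// first invocation; no manual removeEventListener needed.
const target = new EventTarget();
let fired = 0;
target.addEventListener('ping', () => fired++, { once: true });

target.dispatchEvent(new Event('ping')); // runs, then self-removes
target.dispatchEvent(new Event('ping')); // no listener anymore
```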
So if a component needs something to happen, that thing usually doesn't need to happen anymore once the component is gone. Again, there might be cases where it's still relevant, but most of the time it's not. So, again using useEffect, we have this cleanup: we store the interval ID, or timeout ID, and then we clear it. The same applies to setTimeout. If you use higher-order components, in Gutenberg there is actually one for this, withSafeTimeout. There is none for setInterval; I don't know why, but I guess most modern projects don't use class components and higher-order components anymore. Still, you could use it. Here at the top you see we use that higher-order component wrapper; at the bottom we wrap our component inside, and then we have access to an instance method called setTimeout, and on componentWillUnmount it's cleared for us.

We should also abort pending requests. We ask the API for some data, and then the thing processing or displaying the data has gone away. We don't need the data anymore. Let's say we use the apiFetch library. We can use AbortController, a stable browser API. We pass the signal to the request, and in lines 17 to 19, again in the cleanup function, we call abort on the controller, which basically says to everyone listening to this signal: you don't need to do anything anymore. What I find interesting is that this actually surfaces as an error; we see that in line 9 and following. It lands in the catch callback as an error, but we can check what kind of error it is: is it because the request has been aborted, or is something else going on? If it was aborted, maybe we don't need to do anything. Maybe we want to update something, but we definitely shouldn't try to update data or state, because most likely that context, the component, has gone away.

We should also cancel scheduled callbacks. We saw before that there was a debounced callback.
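Both cleanup patterns can be sketched without any network involved: aborting via `AbortController`, and a debounced function with a `cancel()` method mirroring what Lodash attaches to its debounced functions (the implementations here are illustrative, not the library's):

```javascript
// 1) Abort a pending request: the cleanup side calls abort(), and
//    everyone listening on the signal can stand down.
const controller = new AbortController();
let requestStatus = 'pending';
controller.signal.addEventListener('abort', () => {
  requestStatus = 'aborted';
});
controller.abort(); // e.g. called from a useEffect cleanup
// In a real catch callback you would check for the abort, roughly:
//   if (error.name === 'AbortError') { /* expected, do nothing */ }

// 2) Debounce with a cancel() method for scheduled-but-not-run calls.
function debounce(fn, wait) {
  let timer;
  const debounced = (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
  // Cancel any pending call, e.g. when the component unmounts.
  debounced.cancel = () => clearTimeout(timer);
  return debounced;
}

let saves = 0;
const debouncedSave = debounce(() => saves++, 10);
debouncedSave();        // schedules a call
debouncedSave.cancel(); // component gone: the call never runs
```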
If I request something to happen, it happens the first time, and then maybe the component goes away, but it would still happen a second time, because I asked for it and the cooldown hadn't elapsed. If we use Lodash, there's a cancel method on the debounced function. So, again, we have this debounced callback, and if the component goes away, we cancel any potentially scheduled function calls.

That's it about the performance mindset, being aware of what you do. Now let's look at some high-level, simple ways to measure performance. In the WordPress context, I think the main thing everyone should know about is the Query Monitor plugin, and also the extensions for the Debug Bar plugin. Inside there are things like database queries: you can just look at the queries, filter them, sort them (they are already grouped by caller or by component), and you explicitly see duplicate queries that you may want to look into. Another thing, when we talk about PHP in general, is profiling your process, profiling your application, and one way to do that is Xdebug's built-in profiling functionality. I'm just mentioning some things so that you know in which directions you can think or explore. Depending on your computing or hosting context, it might even be covered for you; for example, AWS has the X-Ray service, which stores a lot of data, depending on how you configure it, and lets you retroactively replay a request.
You can see what happened with what kind of data, what external database and API requests were made, all with timing; you can have flame graphs or icicle graphs, similar to profiling with Xdebug. Of course, as a developer, there is an array of browser dev tools. For example, you can look at the network tab: what happens in terms of resources and time, what is synchronous, deferred, asynchronous. There's the performance tab. There are extensions for things like React itself and Redux, which you can use in the respective context. I know that both Chrome and, I think, Firefox have added something like a performance insights panel, which gives more insight into the performance space. I don't know if the plan for either browser is to replace the performance tab with that; there are definitely some experiments going on.

Thinking about the database: it's possible in MySQL or MariaDB, and in other database systems, to turn on profiling. For example, if you have direct or indirect access to execute SQL commands, locally or in some online environment, you can turn on profiling, execute whatever you want, and then look at the profiles, or at one specific profile, and you see the timing and the things that happened.
Another aspect: if you want to understand queries, you can ask SQL to EXPLAIN them, based on the indexes that you have and other things. Which brings us to: get to know your tools and your environments. This is unique for everyone, for your stack, for your language, for your target (are you front-end focused, is your application front-end focused?). This is unique for every environment: is it in the cloud, is it local, is it shared, is it some complex multi-tenant system, do we have certain PHP extensions turned on or not? And of course it's unique for every use case. In general, you should just be interested in these things on a high level.

And if you want to optimize things, or not have to optimize things: maybe you don't need to reinvent something that already exists, and if there are multiple libraries, pick the one that's the most used and the most maintained. Things like functions in Lodash: yes, you can write your own debounce, but maybe the Lodash one is better. And there's no real reason in terms of bundle size anymore, although you don't need to pull in all of Lodash if you want one Lodash function. For example, in the WordPress context: don't use meta tables, post meta, term meta or so, if you query on the value, and if you really have to, maybe you should add an index, at least a partial one. And of course there's a lot more, but I ran out of time.