I'm a senior software engineer. I currently work at a company called DEV — you may know it better as dev.to, or as The Practical Dev on Twitter — and at DEV I've been fulfilling my passion for coding, learning, sharing and building communities. If you've not heard this accent before, it's one of the many South African accents. I hope that some of you have been to South Africa, particularly Cape Town, which is beautiful. If not, I hope that you'll visit sometime soon. In addition to my job, I also run a non-profit organization in South Africa called Cassimets. Cassimets is an after-school program that exposes students in under-resourced communities to STEM. Working within the environment of Cassimets has allowed me to appreciate the challenges that we as Africans face, and it has inspired me firstly to write this talk to share my experiences on a global level, and secondly to build my own applications with performance at the forefront of my mind, to compensate for the challenges that we encounter in developing communities.

On this map, the countries marked in purple are developing countries. The darker purple indicates the higher-income developing countries — unlike most of Africa — but they are nevertheless still developing. So why am I showing you this map? Because whatever challenges Africa experiences, at least 80% of these developing countries — all the countries marked in purple — are experiencing them too. That means if you don't tailor your apps to cater for emerging markets, these are the people who will either not be able to use your apps at all, or will have difficulty using them. Living on the African continent, I have a lot of exposure to the challenges that we experience here, and I'd like to go through some of them. The first challenge that I'd like to outline is load shedding, and load shedding is really the bane of my existence — and of other Africans' existence too.
For those who do not know what load shedding is, it's the deliberate shutdown of electrical power in parts of a power distribution system, generally to prevent the failure of the entire system when demand strains its capacity. I think they're calling it rolling blackouts in the US, but it's a little bit more impactful here. The implication is that there's no power for a minimum of four hours in South Africa on most days, rotating around the different parts of the city. The graph on this slide shows the plant breakdowns of energy generation over two months — it is absolutely insane to think that our plants break down this much. In other African countries like Zimbabwe and Zambia, as you can see from these tweets, the power has turned off unexpectedly for around 12 hours at a time. As a result of having no power, we tend to have little to no cell reception, because the cell phone towers are being overused and are unable to handle the increased load during these periods. No cell phone reception means no 2G, 3G or 4G. Thus, with no internet access, we are unable to use the full functionality of applications.

Another challenge that a large portion of Africans face is that many of us have very basic digital literacy, probably because a lot of public schools have no computer labs and students don't have access to computers at home either. Hence, complex and unintuitive interfaces prove to be a hurdle. Data costs in South Africa are also extremely high, as they are in other parts of Africa. South Africa on this chart sits between China and Canada at about $10 per gig. You may say this is not too different from the U.S. However, $10 is not proportional to the earnings of South Africans, who earn less than half of what employees in the U.S. or Canada earn. And while smartphones are common in Europe and North America, sub-Saharan Africa lags in ownership.
In South Africa, around 51% of people own a smartphone, which is below the global median of 59%. Most countries like Ghana, Nigeria, Kenya and Tanzania are even lower. There is still a huge population in Africa that uses feature phones, which are more affordable than smartphones because they sell for between $20 and $25. Being low-cost, feature phones tend to have slower CPUs, less RAM, less storage and older OS versions, and some of these devices can even be restricted to just 2G, or 3G at maximum. Many run outdated browsers, and they often don't even have touch screens — instead, they have a keypad or D-pad for navigation. At Cassimets, where most of the students have either no devices or low-end ones, they go to tech hubs in the community where desktop computers and internet are provided. However, there is a time limit on the use of these computers.

Internet penetration is around 75% globally; in South Africa it's just about 59%, whilst other African countries like Ghana, Kenya and Tanzania are below 40%. The infrastructure is lacking in under-resourced communities in South Africa and other parts of Africa, so very few people have constant internet access. Instead, most people end up using 2G or 3G networks where they can, and sometimes there's no connection at all. If we look at the statistics for the top apps downloaded in Africa on something like the Google Play Store, you will see that Lite versions of applications, when available, always end up at the top of those charts. We need to optimize for fast loading and performance relative to these devices. This means optimizing for CPU, memory, battery and bandwidth usage.

Before diving into some of the techniques to optimize performance, let's go over some important metrics typically used to measure the performance of web applications or sites. These metrics can be measured at different phases of the loading cycle. The first one is first paint.
First paint marks the point, immediately after navigation, when the browser renders the first pixels to the screen. Depending on the structure of the page, this could be just the background color being displayed, or it could be the entire page being rendered — it simply depends on how the app was structured and how intentional its makers were about performance. We want to optimize for most of the page being rendered. First contentful paint is the point when the browser renders the first bit of content from the DOM — which may be text, an image, or any other element. For site visitors, this time signifies when actual content has loaded on the page, not just any change. First meaningful paint measures when the page appears meaningfully complete. First CPU idle marks the first time at which the page's main thread is quiet enough to handle input. And time to interactive measures when a user can consistently interact — meaning touch or click — with all the page's elements.

In the subsequent sections, I will be providing a high-level overview of some techniques that we can use to optimize for performance within emerging markets. They will include, firstly, reducing the bundle size; secondly, server-side or static rendering; thirdly, the implementation of service workers; and fourthly, some other smaller tips. In each of these sections, we will reference the metric that we are optimizing for, and we will also give examples within the context of Ember where applicable. So let's dive right in.

In modern times, most of our web applications are heavily, heavily reliant on JavaScript, and we ship so much JavaScript to users that it has become one of the most expensive resources on the web. We have bloated our applications without thinking of the cost implications for both the hardware and the network of a user's device. The consequences of loading too much JavaScript on feature phones or low-end smartphones in emerging markets are substantial.
On such devices, the JavaScript can end up blocking the main thread for a significant amount of time, thus increasing the time to interactive of the application. In addition, the parsing of the extra JavaScript can overwhelm that thread, causing applications to sometimes just run out of memory, hang or crash. This leaves users feeling extremely frustrated, as they end up clicking around the interface without seeing any effect. In Addy Osmani's article "The Cost of JavaScript in 2019" — which I recommend that every person reads — he outlines how on mobile it takes three to four times longer for a median phone like the Moto G4 to execute Reddit's JavaScript compared to a high-end device like the Pixel 3, and over six times as long on a low-end device like the Alcatel 1X.

Similarly, downloading loads of JavaScript and CSS files on a slow network connection increases the first meaningful paint time, thereby leaving the user with exactly the same feeling of frustration. According to Google's DoubleClick, when comparing sites that load in five seconds to sites that load in 19 seconds, the faster sites had 70% longer average session lengths, 35% lower bounce rates, and 25% higher ad viewability than their slower counterparts. That's a lot of increases.

Knowing that if the first meaningful paint and the time to interactive are too high, users will leave our site and most likely not return, how can we then reduce the bundle size? Some very simple solutions exist. The first one is minifying and concatenating your JavaScript build. We can improve the overall performance of our sites and applications by minifying our JavaScript to reduce file size, and concatenating the relevant files to reduce the number of file requests.
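To make the idea concrete, here is a toy sketch of what minification does — this is not what a real minifier like Terser runs (real ones also parse the syntax, rename variables and eliminate dead code), just an illustration of why minified files are smaller on the wire:

```javascript
// Toy illustration only: strips comments and collapses whitespace to
// show why minified bundles are smaller. Real minifiers do far more.
function naiveMinify(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, '') // remove block comments
    .replace(/\/\/[^\n]*/g, '')       // remove line comments
    .replace(/\s+/g, ' ')             // collapse runs of whitespace
    .trim();
}

const original = `
  // add two numbers
  function add(a, b) {
    return a + b; /* simple */
  }
`;
const minified = naiveMinify(original);
console.log(minified.length < original.length); // true
```

Concatenating the minified files into one bundle then cuts the number of HTTP requests, which matters a lot on high-latency 2G/3G connections.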
In Ember CLI, we are fortunate that the JS files are already minified by default in production using broccoli-uglify-js, and that all our files are already concatenated into just one JavaScript file. However, we can improve on this even further with code splitting. So, instead of shipping all the JavaScript at once, we can split the JavaScript by page, route or component. This means that we can ship the minimum amount of JavaScript — prioritizing what a user will need — and thereafter lazy load the rest. We can fetch the additional bundles either in the background when the user is idle, or in response to a user-initiated action. Whilst we haven't reduced the overall amount of code in our apps, we have avoided loading code that the user may never use, and reduced the amount of code needed during the initial load.

Another common way that we bloat our JavaScript is by importing loads and loads of add-ons and libraries into our applications — we just keep throwing them in there. Instead, we could sometimes gain the same functionality by writing just a small custom JavaScript function, or by importing only a portion of that library. Importing only a portion of a library is now possible with tree shaking. With tree shaking, we can take advantage of static import statements to pull in only the specific, relevant parts of ES6 modules, hence eliminating dead code.

It's possible to utilize code splitting and tree shaking with Embroider. For those that don't know, Embroider is a modern, fully-featured build system that works in tandem with Ember CLI. It natively embraces the ECMA standard for importing ES6 modules, which is what makes tree shaking achievable. However, it is important to note that Embroider is currently in beta, and there are some risks to be aware of when using it in production. You can read more about Embroider and its status on its status page. And remember: you can't improve if you don't measure.
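As a framework-agnostic sketch of the lazy-loading idea: in a real app the loader would be a dynamic import('./charts') triggered by a route or a click; here it is stubbed with a plain promise-returning function (loadChartModule and renderChart are hypothetical names) so the memoization is visible:

```javascript
// Sketch: defer an expensive module until first use, then reuse the
// cached result so repeated user actions don't re-download it.
function lazy(loader) {
  let cachedPromise = null;
  return function load() {
    if (!cachedPromise) {
      cachedPromise = loader(); // loader only invoked on first call
    }
    return cachedPromise;
  };
}

// Hypothetical heavy module, stubbed for illustration. In production
// this would be: const loadChartModule = lazy(() => import('./charts'));
let loadCount = 0;
const loadChartModule = lazy(() => {
  loadCount += 1;
  return Promise.resolve({ renderChart: () => 'chart rendered' });
});

// Two user actions trigger only one actual load.
Promise.all([loadChartModule(), loadChartModule()]).then(([a]) => {
  console.log(loadCount);       // 1
  console.log(a.renderChart()); // "chart rendered"
});
```

The same shape works whether the trigger is a user-initiated action or an idle callback that prefetches in the background.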
So some of the tools that we can use to audit our site include Google Chrome's Lighthouse tool for performance audits. There are three audits that will be useful to look at whilst reducing our bundle size, and they are very aptly named. The first one is the "JavaScript boot-up time is high" audit, which reveals how much CPU time each script on the page consumes, along with its URL. Then there's the "unused JavaScript" audit, which reveals JavaScript downloaded by the current page that is never used. And the "minify JavaScript" audit compiles a list of unminified resources that it finds on the current page. From there, you can simply take action by minifying those files manually, or by augmenting your build system to do it for you.

The second tool is the ember-cli-bundle-analyzer. It analyzes the size and the content of your Ember CLI bundled output using a really cool, easy-to-understand visualization. Most specifically, you'll be able to see which individual modules make it into your final bundle — if you look at the picture, you can see the different packages that have gone into the bundle. We can also find out how big each contained module is, including the raw source, minified and gzipped sizes. Finally, by looking at the diagram, we can spot packages that have made it into our bundle by mistake, and then optimize our bundle size.

The third one is Bundlephobia. Before I actually install a package into my application, I'd like to know how it will impact my bundle size. Bundlephobia does this by showing the cost of an npm package: it provides information on the size of the package, how fast it downloads on a 2G network, how fast it downloads on a 3G network, and what percentage of the package its dependencies comprise. We can then make an informed decision on whether to add it or not.
While JavaScript single-page applications can be quite snappy once they are fully loaded, there is this time between load and time to interactive where users are usually presented with a blank screen. This is because, for most single-page applications, the initial document returned by the server is empty, thus resulting in an increase in the first contentful paint. So what if I wanted the best of both worlds — a quick initial load time, but also snappy successive interactions? This is achievable using server-side rendering or static rendering for all or most of the popular pages in your application. This can result in an almost instant first contentful paint, which is particularly useful in developing countries where network connections may be unreliable. Remember that the first contentful paint, which we discussed in one of the earlier slides, measures the time from when the page starts loading to when any part of the page's content is rendered on the screen. Using this technique, we are then able to display useful information — that is, content — to the user instantly.

When our first contentful paint is less than a thousand milliseconds, users are really happy. When the value is between one thousand and three thousand milliseconds — one to three seconds — users are usually a little less pleased, but still pretty happy. Over three seconds, say three to five seconds, frustration starts to set in. And over five seconds, users have just completely lost interest.

It's really simple to server-side render pages in Ember. All we need to do is install ember-cli-fastboot with prember, which statically renders the application and allows us to pre-render any list of URLs into static HTML files at build time. As a result, the pages can be served statically, and we get a fast first paint of the HTML content from FastBoot.
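The thresholds above can be turned into a small helper for bucketing real-user FCP measurements — a sketch with names of my own choosing, not part of any library:

```javascript
// Sketch: the talk's first-contentful-paint thresholds as a helper you
// could use when logging real-user metrics. Function name is my own.
function rateFirstContentfulPaint(ms) {
  if (ms < 1000) return 'happy';       // under 1s: users are really happy
  if (ms <= 3000) return 'pleased';    // 1-3s: a little less pleased
  if (ms <= 5000) return 'frustrated'; // 3-5s: frustration sets in
  return 'gone';                       // over 5s: interest is lost
}

console.log(rateFirstContentfulPaint(800));  // "happy"
console.log(rateFirstContentfulPaint(4200)); // "frustrated"
```

In the browser, the input would come from a PerformanceObserver watching 'paint' entries.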
So all we do is install ember-cli-fastboot, install prember, and then configure the URLs that we want to pre-render. Now that it's set up, we can even just view the rendered HTML via a cURL. Since the most important value here was the first contentful paint, we can run the performance audit once again, and we notice how the value decreases — we can see the value in the Lighthouse Chrome tools.

It's great progress that we are able to render our assets and our content optimally, especially to ease the burden of slow network connections, high data costs and low-end devices — but what about taking it one step further? Service workers and caching can reduce users' data costs even further, and they can render pages even faster with the least amount of processing time. Not only that, but we can also render our applications offline, for those sporadic connections during load shedding or whilst passing through poor network areas. A service worker is essentially a background script that runs separately from the main browser thread, responding to events including intercepting network requests, caching or retrieving resources from the cache, and delivering push messages.

The implementation of service workers is pretty simple using Ember add-ons, so here's a very brief look at how to integrate service workers into our application. The DockYard ember-service-worker documentation makes it really easy. Let's take a look at the following commands. The first one is "ember install ember-service-worker", which just registers a service worker when the page loads. Then we run "ember install ember-service-worker-index" and "ember install ember-service-worker-asset-cache", which essentially cache our index.html page and all the other static assets.
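Under the hood, what an asset-caching service worker does in its fetch handler is a cache-first strategy. Here's a minimal, framework-free sketch — the cache and network are passed in as plain objects so the decision logic is visible and testable; in a real service worker they would be the Cache API and fetch():

```javascript
// Sketch of a cache-first ("offline-first") strategy, the kind an
// asset-cache add-on applies to static files. Not the add-on's code.
async function cacheFirst(request, cache, fetchFn) {
  const cached = await cache.match(request);
  if (cached) return cached;          // serve from cache, no network used
  const response = await fetchFn(request);
  await cache.put(request, response); // prime the cache for next time
  return response;
}

// Stub cache backed by a Map, for illustration only.
const store = new Map();
const cache = {
  match: async (req) => store.get(req),
  put: async (req, res) => { store.set(req, res); },
};

let networkHits = 0;
const fakeFetch = async () => { networkHits += 1; return 'index.html contents'; };

cacheFirst('/index.html', cache, fakeFetch)
  .then(() => cacheFirst('/index.html', cache, fakeFetch))
  .then((res) => {
    console.log(res);         // "index.html contents"
    console.log(networkHits); // 1 - the second load never touched the network
  });
```

The zero-network repeat load is exactly what makes pages work during load shedding and keeps data costs down.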
So at this point, if we disconnect our internet connection — or turn on offline mode in our browser's dev tools — and refresh the page, it still loads. Then we can take it one step further by installing ember-service-worker-cache-fallback, which caches any non-static resources, like requests to the API. And just like that, our service workers are up and running. We just need to browse the app for a few moments whilst online, which essentially primes the fallback cache. Afterwards, we can put our browser into offline mode and try to load a page that we have visited before — and guess what? It now serves the API responses from the cache.

But we can make our applications even more functional. I mean, if I think about it, when I'm without internet during times of load shedding, I not only want to view data, but I might want to fill out forms, bookmark some data, or maybe even send some information through to the application. I have the memory of a goldfish, and once I close an app, I'm really not going to remember to come back to it. This is where ember-pouch comes into play for us: it provides data persistence. ember-pouch allows data to sync automatically once a connection is restored. In the background, the data is saved on the client side using IndexedDB or WebSQL, and we just keep using the regular Ember Data store API.

Once again, in order to test our improved performance and offline strategy, Lighthouse comes to the rescue. You can see your cached files, you can use the network requests panel to determine where a request is being answered from — the cache or the API — and you can also use something like Google Analytics to track metrics that will help us assess the impact of our service workers. Finally, we come to other performance tips.
So there are a couple of techniques that I'm not able to explore as deeply, but I'd still like to list some of them, because I believe that when they are used appropriately they will provide better accessibility for emerging markets. Some of them include, firstly, using SVGs, and optimizing images by using tools like Squoosh or ImageOptim. HTML srcset attributes are also really useful to serve different images to different visitors. A company called Fundspace reduced their image payload by 86%, resulting in a 65% reduction in load time. This improved user experience helped double Fundspace's e-commerce purchase conversion ratio, cut bounce rates by 20%, increase mobile revenue by 7% and dramatically improve SEO.

Optimizing our network requests by sending through only the necessary keys in our JSON payloads also helps, and we should serialize and compress adequately as well. In addition, adopting a technology like GraphQL can automate the optimal batching of data requests, thus increasing performance. Instagram increased impressions and user profile scroll interactions by decreasing the response size of the JSON needed for displaying comments — by 33% for the median and 50% for the 95th percentile.

And finally, we can use techniques like adaptive loading to tailor the experience based on the user's constraints. Adaptive loading uses signals to determine the network speed, CPU core count and memory, and based on those values we can conditionally load more highly interactive components or run computationally heavy operations, whilst not sending those scripts down to slower devices. However, it's useful to note that the web properties used to determine network, CPU core count and memory are not available via the web APIs in all browsers, especially older ones.
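A sketch of the adaptive-loading decision: in the browser, the signals would come from navigator.connection.effectiveType, navigator.deviceMemory and navigator.hardwareConcurrency — all of which may be undefined on older browsers, hence the fallbacks. The function name and thresholds are illustrative, not from any library:

```javascript
// Sketch: decide whether to ship the heavy, highly interactive
// experience, based on network/memory/CPU signals with safe fallbacks.
function shouldLoadHeavyExperience({ effectiveType, deviceMemory, cores } = {}) {
  const slowNetwork = effectiveType === 'slow-2g' || effectiveType === '2g';
  const lowMemory = (deviceMemory ?? 4) < 2; // GB; assume mid-range if unknown
  const fewCores = (cores ?? 4) < 4;
  return !slowNetwork && !lowMemory && !fewCores;
}

console.log(shouldLoadHeavyExperience({ effectiveType: '4g', deviceMemory: 8, cores: 8 })); // true
console.log(shouldLoadHeavyExperience({ effectiveType: '2g', deviceMemory: 1, cores: 2 })); // false
```

When the function returns false, the app would serve the lightweight path — smaller images, no heavy interactive widgets — rather than nothing at all.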
This unfortunately does not make it extremely useful for African markets as yet, but it is a useful technique that will most likely be used more commonly in the near future. Using all or most of the strategies that I have outlined allows us to build more efficient applications that deliver an improved experience within Africa. However, our professional environments usually consist of teams of multiple developers working on one application together, and within these teams, not all developers are knowledgeable or even mindful of performance. So how do we go about proactively maintaining a codebase, as a team, for optimal performance? This is where performance budgets, integrated with a CI tool, become really useful.

So, what is a performance budget? A performance budget is a set of limits for pages or components which the team is not allowed to exceed. Addy Osmani, the performance guru, mentions three important kinds of metrics that we should use in order to create this budget. The first is milestone timings, which are based on the user experience when loading a page — things like time to interactive and first contentful paint, the set of metrics that I showed you at the start of the presentation. We may need to pair several of these metrics together in order to represent the full story. Then there are quantity-based metrics, which are based on raw values — the weight of the JavaScript, the number of HTTP requests — that are directly correlated to the browser experience. And finally, we have rule-based metrics, which are scores generated by tools such as Lighthouse or WebPageTest; they often provide a single number, or a series of numbers, to grade the site. Furthermore, we can apply different budgets to our mobile applications versus our desktop applications, and across device classes, because the underlying hardware — like CPU and memory — and the connection capabilities differ across these different experiences.
So, an example budget for my personal website could include something like: the home page must ship less than 170 kilobytes of JavaScript; it should include less than 2MB of images on desktop, but maybe 500KB on mobile, on page load — and then we can lazy load the rest afterwards; it should load and get interactive in less than 7 seconds on an Android Go device, which is one of the more popular devices in sub-Saharan Africa; and maybe the score needs to be greater than 80 on a Lighthouse performance audit. These are just some examples of what I could put into my performance budget. It is useful to know that there are some standards which we can adhere to — for example, for mid-range mobile devices on slow 3G connections, a good target for the first load is for the page to load and be interactive in 5 seconds or less, and for subsequent loads, a good target is to load the page in under 2 seconds. This is where the developer has the opportunity to set a precedent of being inclusive of other emerging markets.

But the trickiest thing about creating a budget is usually coming up with the performance metrics themselves. One way to do this is to use a calculator like performancebudget.io to get a baseline, and then configure the budget based on your knowledge of the type of application being created, as well as your target market. This here on the slide is performancebudget.io, and it's a really good experience.

Once we've crunched the numbers, we also want to proactively stay aware of them throughout the development process. This can be done by integrating something like webpack performance hints, which issues command-line warnings or errors when the bundle size grows over the limit — perfect for your development process. Thereafter, once we start deploying, we can integrate with CI to automatically enforce size limits on pull requests, so if the test fails, the pull request is prevented from being merged.
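A tiny sketch of what such a CI-side budget check could look like — the limits are the example numbers from the talk, while the asset names and function are hypothetical (real setups would use an off-the-shelf tool rather than hand-rolling this):

```javascript
// Sketch: enforce a performance budget over measured asset sizes.
// Budget numbers are the talk's examples; asset names are made up.
const budget = {
  'app.js': 170 * 1024,        // home page ships <= 170KB of JavaScript
  'images-mobile': 500 * 1024, // <= 500KB of images on mobile page load
};

function checkBudget(sizes, limits) {
  return Object.entries(limits)
    .filter(([name, limit]) => (sizes[name] ?? 0) > limit)
    .map(([name, limit]) => `${name} is over budget (${sizes[name]} > ${limit} bytes)`);
}

const violations = checkBudget(
  { 'app.js': 200 * 1024, 'images-mobile': 300 * 1024 },
  budget
);
console.log(violations.length); // 1 - only app.js exceeds its limit
// In CI you would exit non-zero here to block the pull request.
```

The measured sizes would come from the build output, and the non-zero exit is what makes the pull-request check fail.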
Some CI options that can be used are bundlesize and Lighthouse Bot. Finally, I recommend also using SpeedCurve for reactive monitoring, so that we can actually monitor some real users and improve against a baseline. As a result of using a performance budget with these integration tools, developers on the team have performance at the forefront of their minds, and hence they tune their development practices accordingly.

In conclusion — yes, this graph again — there are really no downsides to making your application performant; there are only benefits. A performant application opens you up to a whole new market: millions of additional people. It is also a proven fact, based on the statistics and case studies — some of which I have cited in previous slides — that a performant app increases traffic to your sites and keeps users engaged for longer periods. On a fast site, users are known to consume more content. Some of these changes, especially server-side rendering or even static rendering, allow our apps to have enhanced SEO, thus increasing our lead-to-sign-up conversion rate. In fact, a study shows that rebuilding Pinterest pages for performance resulted in a 40% decrease in wait time, a 15% increase in SEO traffic and a 15% increase in the conversion rate to sign-up.

Finally, for every traveller like myself, PWAs are hailed as life-savers. I mean, who doesn't want to be able to check the train schedule or navigate around a city without any network? I know I do. That being said, I hope I've now convinced you to think about performance and about expanding your apps onto African soil. Why not? Thank you so much.