Ok, hello everyone. So, before starting: maybe some of you met me at the party yesterday, and there was some confusion about what the theme was or why I was dressed like this, right? So I wanted to clarify that it's an Iron Maiden reference from the 80s; that's what the costume was referring to. So yeah, I'm glad to clarify for those who didn't know. Anyway, my name is Noel, I work at Moodle HQ, I joined in 2019, and actually this is my first time doing a live presentation, so I'm very excited about that. I work on the Moodle app, in the mobile solutions team, and if you want to speak with us, you can find us at the Moodle stand on the first floor. If we are not there, you will find someone from HQ and you can ask for us, and you will be able to reach us for any questions. So, what we do: we build the Moodle app, in case you don't know what it is. It's a native application for Android and iOS that users can use to connect to a Moodle site. And it has some additional features that you wouldn't find in the LMS. For example, it works offline, so you can download courses for offline use. It also has support for some plugins, so it's not only Moodle core functionality, and it's open source and free to use. Right now we have around 8 million active users per month, although this is something that changes every month, depending on how schools are using it. But it's a big user base. In the apps team we also have some premium services; in this case, premium plans for the app. They give some additional features, but this is something that site administrators have to pay for, so not every student has to pay: the school pays to give these features to students using the app. And we also have a service called the Branded Moodle App that you can use to have an app customised for your institution. But if you want to learn more about this, you can visit these links or contact us.
But this is not what I'm here to talk about today. Today I'm going to talk about testing, and I'm curious: how many people here would say you test your code, or you have tests? Ok. Maybe half the room, maybe less than half. So one of my goals is to convince the ones who didn't raise your hand that you should test, and that it's not as difficult as you may think. And you can start doing it in an existing project, as you will see from how we did it. So first, a quick introduction in case anyone is not familiar with testing or what I mean by that. Testing is basically making sure that your code or your application is doing what you want it to do, and that it keeps doing it. And there are different types of tests to achieve this. For example, the ones that you should have the most of are called unit tests, and the idea is that they test units in isolation. A unit means something like a class, in code terms. You usually use unit tests to cover edge cases, because you will have a lot of edge cases, and these tests are usually really fast to run, so it's more effective. The next ones are integration tests. These test different units working together. The idea is that you would use these to cover complex interactions, and they are also quite fast; maybe not as fast as unit tests, but still fast, right? And finally we have end-to-end tests, which are the ones you use to test the most important parts of your application, like the login page or things that are critical. The drawback is that they are slower; that's why we don't have as many. Otherwise we would do everything like this, because they are more trustworthy. The idea is that they exercise different systems working together, not only small units, and not even only your own project.
Like in our case, for example, when we run this type of test we are not only running the Moodle app, we are also running the LMS core, even though it's not what we develop, to make sure that everything is working together properly, right? And finally, ideally you wouldn't have these, but there are also manual tests. This is basically people checking that it works, or you as a developer, before releasing a new version, making sure that everything is working properly. These are performed without automation, and they are usually the most expensive, so the idea is that you minimize them, right? And in case you need some convincing about why you would test anything: one reason is peace of mind, because with a well-tested code base you can sleep at night; you are more confident that your code is working properly. Also to avoid regressions: basically this means that you can make changes to your app or your product without being afraid of breaking anything that already exists, right? Also to reduce bugs, and with this I don't mean that if you write tests you will have fewer bugs in your production apps. What I mean is that whenever you find a bug, if you fix the bug and do nothing else, it's possible that this bug appears again; but if you fix the bug and also write a reproducing test, this bug will never appear again. That is one of the good points of testing: whenever you find a bug, you can write a reproducing test so that it doesn't come back, right? Also to handle the "handshake" overhead. This is maybe a bit of a weird term, but what it means is that as you keep adding features to a product, the features increase linearly, right? You have more features, but the feature count itself doesn't explode. However, the interactions between these features grow much faster, like handshakes in a group of people, so every time you add a new feature, you are potentially breaking a lot more things, right?
But if you have tests covering these interactions, it's easier to keep adding features safely. And finally, to not repeat yourself, because the reality is that you are probably already testing: nobody writes code without executing it and checking that it does what it should. It's only that we are used to testing it ourselves, and we don't automate it. But if you learn some of the techniques around testing, in the end it doesn't take longer to test and automate the test, so it's worth keeping in mind that you are already testing; it's just that you are probably not automating it. In our case, in the Moodle app, there are some challenges we have compared to testing, for example, the LMS core. For example, we support old devices all the way back to Android 5.1 and iOS 11. We also support all versions of the LMS, although I have to caveat this by saying that officially we only support the supported versions of the LMS, so right now that's 3.9, right?
But the application also works with 3.5 sites and upwards, and we have not removed any of the code keeping it compatible, so in practice you will be able to use it even with older sites. If you find some bug that may be very difficult to fix, we might tell you that we will not fix it, because it's not a supported version, but in practice it works with older sites as well. We also have offline functionality, as I mentioned before, and for testing this is important, because it's not something we use while developing the app: most of the time we have an internet connection, so the offline functionality is not something you would exercise in your day to day if you are not a user. So this is also something you need to test. And media playback, like videos or audio, is a bit more complicated on mobile devices than it may be in a browser. But we also have some advantages. For example, the Moodle app doesn't have all the functionality that the LMS core has; it's mostly focused on students. We also have some functionality for teachers, but it doesn't include, for example, things for administrators, or course creation, or course editing, so that simplifies the functionality. Also, we do global updates. I wasn't sure if I should put this under advantages or challenges, but in the end I added it to advantages, because we usually only have to keep one version of the app in mind: when we release a new version, the previous version of the app is no longer recommended, so if anyone has a bug or something, most of the time we tell them that they should use the latest version, right?
So, to have a sense of the scale of what this means: global updates. Normally, when a Moodle site is updated, or when a new release of the LMS core comes out, each administrator decides when they will get this new version on their site. But with the Moodle app it's not like this. If some institutions have a BMA (Branded Moodle App) or a custom app, then it works like that, but for most sites that are using the official Moodle HQ app, the application is updated for everybody when the new version comes out. And here are some stats from 3 years ago, from 2020, where doing a new Android release would impact almost 7 million users on the same day, right? So this is what I mean by global updates. And basically the meat of this presentation is this part: this is a timeline of what has been happening in the Moodle app regarding testing. The first dot here is when the app was released in 2013; it was based on the unofficial Moodle Mobile app. If you are interested, you can follow the issues in the tracker to see more of the history. And these are some screenshots of what it looked like; it has changed a lot since then. The team size was one person, Juan Leyva, who is today the head of mobile, and total tests: zero. In reality, I'm sure he tested things to see that everything was working properly, but when I say zero I mean there wasn't anything structured or done in an automated way, right? Then in 2016, with version 3.1.2, we started doing manual tests in an organized way, and the first version of this that we have had 156 tests, so that you know what I mean by this. And this is an example of a real spreadsheet we had at the time: we have a list of use cases of things we have in the app, and manually we go through each of them and test it on Android and iOS, using different devices and different Moodle sites, right? As you can imagine, this is very time consuming, but it's something we do to make sure that the quality of the app is good and it works properly, right?
Nowadays we have improved this a lot. We now have a plugin we developed for our internal use, and it's basically the same as before, but a bit more structured and easier to use. We also have some instructions, some QR codes to do a quick login, etc., but the basic idea in the end is the same: we do all of these tests manually every time a new version of the app comes up. And this is an example of the typical testing: with different devices, testing on different operating systems, etc. Then in 2019 we finally started getting something related to automation, and this was actually a community contribution by Sam Marshall from the Open University. What he did is add some Behat test steps into core that worked for testing the Moodle app. Behat is a technology that already existed for the LMS, and it's basically a program that drives a browser and does what a user would do, like click a link, fill a form, etc., and it can do it in an automated way to check that the outcome is the expected one. This is an example of what that looked like at the time, and today it's mostly the same: you just write the steps of what should happen, and this is executed every time to see that everything happens as you expected, right? Later, in June 2020, we finally added these steps into the app itself, so we use this functionality to test that the app works properly, right? And this is something that has been changing a lot since then. If you are interested in the technical details, you can follow these issues, and you can also read the documentation, where you will find how to use this for your plugins too. So if you are developing a plugin that has mobile support, you can also use Behat tests to make sure that it works properly and keeps working, right?
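To give a flavour of the "write the steps of what should happen" idea, here is a minimal, hypothetical Behat scenario in Gherkin syntax. The exact step wording and field names are illustrative assumptions, not copied from the real test suite:

```gherkin
Feature: Log in to a Moodle site from the app

  # Hypothetical scenario: the step phrasing below is illustrative only.
  Scenario: A student logs in with valid credentials
    Given I enter the app
    When I set the field "Username" to "student1"
    And I set the field "Password" to "secret"
    And I press "Log in"
    Then I should see "Course overview"
```

Each line maps to an automated browser action or assertion, which is exactly the "click a link, fill a form" behaviour described above.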
Nowadays we use these Behat tests in our integration pipeline, so whenever we make a new pull request to add code to the application, we execute all the Behat tests to make sure that it doesn't break anything. We also have some Jenkins jobs making sure that these Behat tests work with different combinations, like different versions of the LMS, etc. And as you can see, right now they take 70 minutes to run, so this is why I was saying before that they are slower and you should minimize them whenever possible, to make your CI pipeline quicker, right? Later on we added unit tests. These use Jest, and in this case the unit tests of the application may be more important than you'd imagine from the LMS core perspective, because the application is built entirely with JavaScript and TypeScript, so there is a lot more surface area that can be covered with this type of testing, right? You can also read the documentation in case you are interested in doing any of this yourself, and this is an example of what it looks like. As I was mentioning, one of the common things to do is to test edge cases: here we are checking a URL of a Moodle site, and we are seeing how it would infer the correct Moodle domain from this URL. This is something we use in the login of the app, and if we had to write a Behat test for each of these use cases, you can imagine: instead of 70 minutes it would be 3 hours, or even worse, right? So that's the idea: cover edge cases with unit tests as much as possible. Then in 2021 we added some performance tests, and this is something we are still working on, but the idea is that whenever you change something in the application, we want to make sure that the performance is not degraded, right? There is one tool called Lighthouse, maybe you've heard of it, and the stats we are tracking are similar to that: how long it takes to render for the first time, how long it takes to log in, etc.
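The edge-case idea mentioned above, inferring a usable site address from whatever the user typed, can be sketched like this in TypeScript. The function name and normalization rules here are my own illustrative assumptions, not the app's real implementation; the point is that each input variant becomes one cheap unit test instead of a full browser-driven Behat run:

```typescript
// Hypothetical sketch: normalizing a user-typed site address on the login
// screen. The rules below are illustrative assumptions, not the Moodle
// app's actual logic.
function inferSiteUrl(input: string): string {
    let url = input.trim();

    // Default to HTTPS when the user omitted the protocol.
    if (!/^https?:\/\//i.test(url)) {
        url = 'https://' + url;
    }

    // Strip trailing slashes so comparisons are consistent.
    return url.replace(/\/+$/, '');
}
```

A unit test would then feed in the awkward variants (extra whitespace, missing protocol, trailing slashes) and assert the normalized result, with each case running in milliseconds.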
Although, in the end, we removed these performance tests, because we had them running for a year and they were more trouble than they were worth. I think in a year they never helped us detect a regression in performance, but there were often failures due to random circumstances, right? So this is also something to keep in mind when you are testing: your tests need to help you. If you have been adding tests and they are annoying, and you never find a bug or anything thanks to them, you should rethink your strategy, because in the end tests are there to make you more comfortable with your code, not to give you more headaches, right? And the latest thing we added in terms of code is snapshot tests. This is actually something that came out of the Moodle stand last year: someone came talking to us and gave us some ideas, and we have implemented this in the app. You can also use this for the LMS, even if you don't use the app at all, right? The idea, basically, is that we keep a screenshot of the UI of the application, and we detect the changes that happen every time something changes in the code. For example, this one on the left happened when we upgraded the Font Awesome version: you can see, just visually, the changes that happened. This is something that you wouldn't detect with a Behat test, because it's very specific, right? And the one on the right is from when we added some new profile fields to the user profile, right? The idea is that when something like this changes, you include the change to the screenshot in the commit as well. This also comes in useful when someone is looking at the commit history and sees something changing the UI: maybe they don't know what actually changed just by looking at the HTML diff, but it's very useful to also have these screenshots in the commit itself, to see what actually changed in the UI, right?
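The record-then-compare workflow behind snapshot testing can be sketched in a few lines of TypeScript. This is a deliberately simplified assumption of how such a check works (the real tooling does perceptual pixel diffs and emits a third "diff" image on failure, rather than a byte-for-byte comparison):

```typescript
import * as fs from 'fs';

// Hypothetical sketch of a snapshot check: on the first run the reference
// image is recorded; on later runs the new render is compared against it.
// Real snapshot tools do a pixel-level diff instead of a byte comparison.
function checkSnapshot(name: string, rendered: Buffer, dir = 'snapshots'): boolean {
    const file = `${dir}/${name}.png`;
    if (!fs.existsSync(file)) {
        fs.writeFileSync(file, rendered); // first run: record the reference
        return true;
    }
    return fs.readFileSync(file).equals(rendered); // later runs: compare
}
```

If the comparison fails and the change was intended, you commit the new reference image alongside the code change, which is what makes the UI history visible in the repository.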
So basically that's everything we have done so far in the app itself, and the latest thing is that we have integrated all this Behat suite into the tracker. Something that usually happens is that changes in core break functionality in the app. Ideally this shouldn't happen, because one of the testing instructions for making sure that the code getting into core works properly is that it doesn't break the app, but the reality is that sometimes there is a use case so obscure that, if you don't know the app in depth, you cannot find it, and things end up breaking anyway. By doing this we expect to reduce that from happening. We are still fine-tuning it, so maybe if you get one of these errors, someone from the apps team will come and give you more details of what is actually happening under the hood. And basically that's what we have done, right? As I was telling you before, even if you already have code that you have been working on for a long time and you have not automated anything, it's never too late to start. We have been improving this recently, and we still have a lot more to go, but we have seen that doing these things has improved a lot the confidence we have in our code and in our process of testing, QA and ensuring quality, right? This is what testing looks like today. You may notice it's an inverted pyramid, so the opposite of what I said at the beginning that you should have. To be honest, this is the reality. Ideally, yes, you would have the other pyramid, but in a project like ours, which started without tests, if you have to start adding tests, the more important ones are the ones at the top, because they are the ones that cover the golden path, right? And when you already have everything in the golden path covered, little by little you can start adding things at the bottom, right?
But if you can only choose a few, it makes sense to choose the ones at the top, so this is why this is the situation today, and it's something we hope to keep improving. The app is also written in TypeScript, and we have been improving how we use TypeScript. Other than this, the team size has grown quite a bit: today we are 15 people. To be fair, some of them don't work on the code of the app itself; they are more about communicating with clients, doing support, etc., but there are more people who have eyes on the app today. There is even one person, Issa, who is dedicated to QA, and this is also very useful, because we developers are very used to the quirks of the app, or we may understand why something doesn't work as you would expect, but it's important to have someone with an outside view who can help you see things like a user would. And if you want to help us, you can join the beta testing program: you can join on Google Play for Android and TestFlight for iOS, and you will get the new version of the app a few weeks before it's released, so you can tell us about any issues you find before it hits the final user, right? And finally, that's it. Thank you, and if you have any questions, let me know. There's a question there: you're putting before and after screenshots in the commit, how are you doing that? Are they going in the message somehow, or are you putting them in a resource file and committing them?
Yeah, there is a plugin, so if you look at the GitHub repository for the plugin, there is documentation and you can see how it works. But basically there is a folder in your repository called snapshots, and it's a folder with images. Whenever you run the test it generates this image, so the first time you run the test on your machine it will generate this image for you, and then the next time you run the test it will compare against it. If it's the same, the test will pass, and if it fails, it will give you three images: the one you had before, the new image, and one comparing the two states, which is the one you saw here. So the idea is that you just commit the new one if it's an intended change; if it isn't, then you fix the code and it should be working again. Do you write tests before you write the code, or after? Good question. Ideally... I don't believe in TDD 100%, so I don't believe you should always write all your tests first, but as I said at the beginning, we already test while we are coding, and the idea is to not repeat yourself. So for some new features we actually write the test before, and they are even useful for development. For example, if you are testing something in the login workflow and you have to develop it, it's very annoying, every time you change one line in the code, to log out, log in again and do everything, right? But if you have everything in Behat, it's easier to debug and it's easier to develop. So many times we do this, we write the test before, but some other times we don't, just because it's quicker to do it without writing the test first. And also, as I've mentioned, we only started adding tests recently compared with the lifetime of the application, so there is a lot of code in the app that is not tested. So that's my opinion on that: ideally yes, but we don't always do it, so it depends on the use case. Yeah. So you said that not everything is tested right now; are you trying to achieve 100% test coverage? That's another topic. Ideally yes; in practice,
I mean, even if we could do it, I don't know if we would want to dedicate so much time. I guess it depends on the project. In the Moodle app in particular, the code base is really big, so as you can imagine, I don't think it's feasible. And even if we had it, 100% test coverage is usually achieved with unit tests most of the time, so you can still have bugs, you know. So no, I don't think we will reach 100% test coverage. I think the goal would be to invert the pyramid, that for sure, but 100% test coverage, I don't see it. Anybody else? You want it? He's coming. Do you use fuzzing? If we use what? What is that? Randomly generating user input, just nonsense. No, at the moment we don't. Maybe it's something we would like to do at some point, but at the moment we don't. If someone is not familiar with what he's asking: basically the idea is that if you are testing, for example, the UI, and you always put in the same hard-coded thing, it will always work, but it's nice, for example, if you have a form that takes a date, to try random dates to see that nothing breaks with some date that maybe you didn't intend. But this is something we are still not doing; it would be nice at some point. When you test, do you do it only in English, or do you test other languages? Because things are sometimes longer. At the moment we only do it in English; at some point it would be nice to do it in different languages. And, maybe off topic: does the desktop version of Moodle Mobile still work? We discontinued it, because technically you are able to install APKs and iOS applications on the desktop now, so we discontinued it because you should be able to do it with the mobile version of the app. But we do test the app and make sure it works on tablets as well, so it should work in landscape and portrait. I will just cut in here: I'm on the LMS team, and I've recently, with some acceptance tests, started using other languages. Say, for instance, you'll see with the filtering of the name, the "all" button sometimes
changes languages, so if you say "I select all", it doesn't work if you're using, say, Japanese, because it's not "all", it's something else. You can still use different languages, but you have to install the lang pack while writing the tests. Anyone else have questions? I think we've got time for a couple, maybe a minute or two. Ok, there is one there. We have one more maybe. I just wondered about your deployment strategies: how do you incorporate testing there? Because you mentioned you test along with writing the code. We have it written in GitHub Actions, so actually, if you go to the repository of the Moodle app, you will see there is a file called ci.yaml or something, and it's there, so you can look at the source code. But basically it's GitHub Actions: you can configure some scripts to run, and they run Behat using the CI jobs. Anyone else? Ok, thank you very much.
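As a footnote to that last answer about CI, here is a minimal, hypothetical sketch of a GitHub Actions workflow that runs an acceptance-test job on every pull request. The file layout, job names and npm script are assumptions for illustration; the app's real workflow file is more involved:

```yaml
# Hypothetical sketch of a CI workflow running Behat on pull requests.
# Job names and commands are illustrative, not the Moodle app's actual config.
name: CI

on:
  pull_request:

jobs:
  behat:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install dependencies
        run: npm ci
      - name: Run Behat acceptance tests
        run: npm run behat   # assumed script name
```

The effect is what the answer describes: every pull request triggers the configured scripts, and the acceptance suite runs as a CI job before the change is merged.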