All right. We're recording and we are live. Hello, everyone. Welcome to the workshop, Contributing to WebdriverIO. I hope that in this session we can explain how you can start contributing to WebdriverIO, to the project and to the community. And we are all excited to have you, as we are looking for more contributors, more people who want to get involved. So, who are we? I want to say a big hello from the technical steering committee of WebdriverIO. That is Adam, me, Erwin, Kevin, Nikola, and Will. Actually, Erwin is on the call and will introduce himself in a second. So, I want to introduce myself first. I'm Christian, working as a software engineer in the open source program office at Sauce Labs. And I've been working on WebdriverIO for almost, I don't know, seven, eight years now as a side hobby project. And now it has become more and more my job, which is exciting. And yeah, Erwin, please introduce yourself. Hi, everyone. I'm Erwin, Erwin Heitzman. I'm working for the Testers. And I've been working with WebdriverIO, I think, for about four or five years now. I was using the tool before I started contributing myself and later on developed into one of its members. Awesome. So, for this session, we wanted to give you a little project overview and tell you what kind of packages exist, how the code is structured, and how the project, how WebdriverIO, is structured in different packages. And we want to show you what the typical development workflow of the WebdriverIO project looks like: when you want to contribute, how you can do that, how you get started, how you run the tests, and how you check out the repository. Then we want to explain the governance and how we run the daily business of the project, and explain what a contributor is, what a project committer is, and what we do as a technical steering committee.
And last but not least, I want to give you some information about how you can contribute to the project and what the best start for you is. And for the last hour of this session, we want to, you know, pick up some tasks and just start coding and start to contribute to any of the bugs that we have, or to anything that you would like to contribute to the project. And we are here for you to, you know, help you with questions and any confusion that you might have. So that should take two hours and I'm really excited for it. And so we start the session with the project overview, and I give it to you, Erwin. Thank you very much. I do think you need to update your slides because they haven't moved, Christian. Where are the slides now? At the contributing part. So we didn't see the updates, I think. Okay, let me check. I actually see project overview. Oh, you do? All right, then it's just me. Sorry. All right, so are you at the "how does browser automation work" screen? I am. All right, thank you. So WebdriverIO is a tool that can be used to automate the browser. And here you see an overview of how this actually works. So in between the browser and WebdriverIO can be a number of things. You can put the driver there, you can put a server there like Appium, or a grid, for example, which then in turn steers the drivers or a service like Appium. The drivers are used to automate the browser. And Appium and Selendroid, for example, are used to automate mobile applications. There are also mobile web and hybrid apps, for example, and there you have to switch context between the two, and they will actually use the driver and Appium under the hood. On the right side, you actually see tools like Cypress and TestCafe, Puppeteer, stuff like that. They are increasingly popular. I want to use this to go to the next slide, where we actually see it a little bit differently, where we don't have the driver anymore for Puppeteer.
For example, on the bottom. But on the top you see that WebDriver, the protocol that we use overall, actually needs to have this driver, Chromedriver in this case as an example, to talk to the Chrome browser. So on the left you see WebdriverIO. When WebdriverIO is started and runs your tests, and you have a driver installed, in this case, for example, Chromedriver, it will use the WebDriver protocol to talk to this driver, and the driver will communicate that back to the browser. And there are different drivers for every browser. So we have Chromedriver, geckodriver, the Edge driver, and so on. When you look at Puppeteer, Puppeteer currently supports Chrome, and I think there's a beta or something for Firefox as well. It talks to the browser directly, and it does this through a certain protocol, and the same protocol is actually used by the drivers. Can we go to the next slide? So when you look at the layers that we have in WebdriverIO, on the left side you see the protocol packages. As an example, we've put the WebDriver protocol here, but there are more protocols. We have the Appium protocol, the JSON Wire protocol. So this is just a way of communication. You can take out a protocol and replace it with another, or you can easily add a new protocol if you would like to. The webdriver package itself can be used as a library. We call this standalone mode. I'm not going to go into this, but it's a way that WebdriverIO can be used. And on the right side we have WebdriverIO as a framework, and WebdriverIO as a framework includes the actual test runner. So, to highlight the way that the protocols are actually used, on the next slide here's an example of how WebDriver can be used directly to create a session on Firefox.
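To make the protocol talk above concrete, here is a small sketch of what "create a session on Firefox" looks like on the wire: a single HTTP POST to the driver. The host and port (and a locally running driver behind them) are assumptions; nothing is actually sent here, the sketch only builds the request a client like WebdriverIO would issue.

```javascript
// Assumed: a WebDriver-compliant driver (e.g. geckodriver) listening locally.
const driverBaseUrl = 'http://localhost:4444'

// Creating a session is one POST to /session with the desired capabilities,
// following the W3C WebDriver "New Session" command shape.
const newSessionRequest = {
    method: 'POST',
    url: `${driverBaseUrl}/session`,
    body: {
        capabilities: {
            alwaysMatch: { browserName: 'firefox' }
        }
    }
}

// The driver answers with a session id that prefixes all subsequent commands,
// e.g. GET `${driverBaseUrl}/session/<sessionId>/title`.
console.log(`${newSessionRequest.method} ${newSessionRequest.url}`)
```

Every further command (navigate, find element, get title) is just another HTTP request against that session, which is exactly what the protocol packages encode.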
And to highlight how important the commands here are: for example, the hash that you see when you do the element send keys command, that hash is actually a static part that is used in order to send these commands. And how this looks is: on the left side we have the property, which is the element hash. On the right side you actually have the element ID that will be used in any of the requests that will be sent to the browser. Next slide, please. So when you look at Puppeteer, there's another example here where we use the devtools package. DevTools is another name for what Puppeteer is using; it's using the DevTools protocol. And that protocol is mapped to the WebDriver protocol in order to easily swap out the automation protocol for something else. The example that you see here is exactly like, or almost exactly like, the WebDriver protocol. And that's because we have a mapping for this that we can use, so we can easily swap out one or the other. And next slide, please. So when we take a higher-level overview of WebdriverIO itself, WebdriverIO uses the protocols under the hood. As you can see here, WebdriverIO is imported as a library. It will then use, for example, Chromedriver. And it will use different commands, because all these commands are, under the hood, mapped to the right protocol for your needs. The protocol then contains the mappings that it needs to know, and it will automatically figure out what it needs to do based on that. We also have some cool things, like a retry middleware where we automatically wait on elements when we execute the commands. When there's a stale element reference, for example, we try to refetch the element for a certain amount of time. And if it succeeds, it will just continue. And if it fails, it will throw an error saying that, well, even after retrying, we could not find this element for you. It can also wait for elements. So we have a refresh of stale elements.
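Two of the details above can be sketched in a few lines: the element "hash", which is a fixed key from the W3C WebDriver spec under which drivers return element IDs, and the idea behind the retry middleware. The function name, error matching, and the example element ID below are illustrative stand-ins, not WebdriverIO internals.

```javascript
// The W3C WebDriver spec defines one fixed "web element identifier" key
// under which every compliant driver returns element IDs.
const ELEMENT_KEY = 'element-6066-11e4-a52e-4f735466cecf'

// A typical "find element" response body from a driver (ID is made up here).
const findElementResponse = { value: { [ELEMENT_KEY]: 'fe0c1b-example-id' } }
const elementId = findElementResponse.value[ELEMENT_KEY]
// elementId then goes into request paths like /session/:sid/element/:id/click

// Sketch of the retry idea: re-fetch the element and retry the action
// whenever the driver reports a stale element reference.
async function actionWithRetry(findElement, action, retries = 3) {
    let lastError
    for (let attempt = 0; attempt < retries; attempt++) {
        const el = await findElement()            // re-fetch on every attempt
        try {
            return await action(el)
        } catch (err) {
            if (!/stale element reference/.test(err.message)) throw err
            lastError = err                        // stale: loop and re-fetch
        }
    }
    throw lastError                                // give up after all retries
}
```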
We can also wait on elements that we want to do an action on. And this only applies to actual actions like click and setValue, etc. Some people call this support for lazy loading of elements. Next slide, please. So we also have the command line interface. On the top left, you can see how you can actually install it. The command line interface is just a really easy way to create a setup that you like, including all the services, reporters, browser drivers, etc. that you would like to install. We also have some frameworks that we support, like, for example, Mocha, Cucumber and Jasmine. And here it's very easy to create a really nice, easy way of installing everything that you need. You can also use the REPL interface that we have, which is a way to run WebdriverIO without installing a whole setup, actually. Next slide, please. So then, what packages do we actually have? When you look at the overview, we have some core packages, which in this case are, for example, the webdriver and the devtools protocol packages, which we need to create the mapping that we just explained. Then we have the webdriverio package itself, which can be used without the test runner. It can be used programmatically or, like, imported as a library. And then we also have the CLI, which is a core package needed when you want to install any of the other packages. Then we have some helper packages that we use, for example, the config package to create the configuration inside of the command line interface. And then we have a utils package that's used among many different packages. Then we have the protocols, the REPL interface that I just talked about, and so on. Then we have some reporters. These reporters are used to translate the actual test results, and all the actions that happen along the way, to your terminal. You can, for example, expand this functionality to output a file which contains all this information.
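To picture what a reporter boils down to: an object that reacts to test lifecycle events and turns them into output. The event names below mirror the documented hooks (onTestPass, onTestFail, onRunnerEnd), but the class itself is a self-contained stand-in that skips the real @wdio/reporter base class, so this sketch runs on its own.

```javascript
// Simplified stand-in for a custom reporter; the real one would extend the
// base class from the @wdio/reporter package (assumed, not imported here).
class SummaryReporter {
    constructor() {
        this.lines = []
    }
    onTestPass(test) {                 // called for every passing test
        this.lines.push(`✓ ${test.title}`)
    }
    onTestFail(test) {                 // called for every failing test
        this.lines.push(`✗ ${test.title}`)
    }
    onRunnerEnd() {                    // called once the runner is done
        return this.lines.join('\n')
    }
}

const reporter = new SummaryReporter()
reporter.onTestPass({ title: 'opens the landing page' })
reporter.onTestFail({ title: 'submits the contact form' })
console.log(reporter.onRunnerEnd())
```

Writing the collected lines to a file instead of the console is exactly the kind of small extension the talk mentions.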
You can change the log levels. You can add any kind of feature that you would like, basically. People have created HTML reports with nice formatting and UI. Basically, if you want to have something that's not there yet, you can easily create your own reporter and add it to this list. The services are basically a way of expanding the current functionality other than the reporters. Services are basically like a small addition to WebdriverIO. I'm not really sure how to explain it any further because, like, with the way that services work you're pretty limitless, basically. There are services for Appium, for Applitools, for BrowserStack, so for all the service providers, and we have seen really, really interesting features that people have added over the years. And if you think you have found something that's really useful, you can always shoot the team a message saying, hey, I've created this. I think it's really awesome. And I would like to put this forward, and then we can add it to the website if it's a really cool feature. Next slide, please. So then we also have two runners. We have a local runner, which is the main way of running WebdriverIO, but then we also have a Lambda runner, and this is to run WebdriverIO in Lambda functions. We have the framework adapters, which are the frameworks that I talked about, the Cucumber, Jasmine, and Mocha frameworks, which are basically test runners that are used to run WebdriverIO. When you use WebdriverIO as a library, it's a different story; it will not run these, for example. And then we have some other packages, which you see a little list of. Next slide, please. And then we have a small overview of the GitHub project. So on top you see the GitHub templates. We have some workflows there; for example, we have the GitHub Actions there as well. There are some markdown files to generate the project documentation.
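Going back to services for a moment, they can be pictured much like reporters: a class whose hooks run around the test lifecycle to add behavior (starting a driver, collecting metrics, and so on). The before/after hook names below follow the documented service interface; the class itself is an illustrative stand-in, not a published package.

```javascript
// Illustrative service stand-in: measure how long the whole session takes.
class TimingService {
    before() {                          // runs before the test session starts
        this.startedAt = Date.now()
    }
    after() {                           // runs after all tests are done
        this.durationMs = Date.now() - this.startedAt
    }
}

// The test runner would call these hooks itself; here we drive them by hand.
const service = new TimingService()
service.before()
service.after()
console.log(`session took ${service.durationMs}ms`)
```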
I think it's very, very clear how we separate all the logic. For example, we have an end-to-end test folder as well. There are some smoke tests there, which we will get into a little bit later. If you have any questions about this, please let me know. But I think that this is a very, very clear overview of all the files that we have in the project. Next slide, please. So then we have some wdio commands. This is just an overview that I took from the website. All the flow charts that we have available can be found on the link that you see on the bottom right. So what we see here is that the command line interface that we just discussed will take the arguments that you pass, if you pass any of these. And then you see that you can run the wdio command, with just help, for example. And if there's a command included, it will take either one route or the other. We have a config route where we can create a configuration file for you. We have the REPL where we can run the browser environment without installing anything. We have the install and the run commands. So whenever you're going to work on the project and you ask yourself, how is this all connected, then the flow charts are a really good way to get a grasp of how everything is intertwined. Next slide, please. So now we've discussed the important things that you need to know before we actually talk about the code. We know how the GitHub page looks, how the project is structured a little bit, where you can find more information, and how everything is connected. And we also know how the tool talks to the actual browser. So let's talk a little bit about code now. So how can you actually get started? First you check out the code. So we clone the project to a local directory by doing a git clone. And then we move to the directory. And then of course we install all the dependencies that it needs.
And in this case, you can do an npm install and then npm run setup-full, which will install all the dependencies that the whole project needs or requires. And then once everything is installed, of course you want to know: am I starting with a clean slate? So we would like to run the unit tests, and for that you can run npm run test:coverage. You can also run these commands for just a single package if you'd like, for when you're working on a small isolated area which does not connect in any way with the other packages that we have. You can find in the README or the CONTRIBUTING markdown file how you can actually run these for just a single package. And then the smoke tests can be run by doing npm run test:smoke. On the next slide we see a little overview of all the guards that we, so to speak, have. We do some linting, we have a dependency checker, we have the typings tests, which are based on the generated typings; I think it uses the JSDoc notation. We have some unit tests, we have some smoke tests and we have some end-to-end tests. So depending on the change that you make, run what's applicable here, basically. For example, when you just change a little isolated thing, it might be that you just want to run the unit tests that you've added and, of course, all the other unit tests. But sometimes you want to also run the smoke tests and the end-to-end tests as well. These are all run when you push your code, when you create your pull request basically, so the CI/CD environment will also run these for you. So on the next slide, we see the "Get ready for development" slide, I think now, is that correct? Yep. All right. So everybody likes to work differently, but I believe that Christian uses this way of working where he watches the files. So he does an npm run watch, which compiles a package for you when you make changes.
And then you can run Chromedriver on a specific port, using verbose. So you see the --verbose flag assigned to it. I think verbose actually logs more, I'm not sure. But yeah, when you make changes, you can also do this with the test runner, I think, where you watch for any test changes. So when you combine this, whenever you make some changes you will see if your tests are actually passing or not. And I think with that, if we check the next slide, I think we have one more example. Yeah, we're pretty much ready to start coding. But there's one last slide where we have some test examples. These files can be found under the examples directory in the GitHub project, where we have a small overview of different kinds of examples: mobile testing, cloud connections, how we use multiremote, page objects, all that kind of stuff. So we have some examples there as well for you to look at if you have any questions. And I think that wraps up the development part. Thank you, Erwin. I would like to continue with the governance of the project. As I mentioned before, the project governance pretty much codifies how we project maintainers handle the day-to-day business, meaning who is allowed to merge pull requests, who is allowed to release new packages, who is allowed to do something else. And the governance really helps us here to codify that and make sure that we treat everyone, or that we do this properly, in a documented way. So when you read the first sentence of the project governance description, it says: the project wants as much as possible to operate using procedures that are fair, open, inviting and ultimately good for the community. For that reason, we find it valuable to codify some of the ways that the project goes about its day-to-day business.
We want to make sure that no matter who you are, you have the opportunity to contribute. We want to make sure that no corporation can exert undue influence on the community or hold the project hostage. And likewise, we want to make sure that corporations which benefit from WebdriverIO are also incentivized to give back. This document describes how various types of contributors work within the project. So the main reason why we have this governance is that we want to make the decision making fair, equal and democratic. There's no one who should just own the whole project. We want to avoid the bus factor effect, which means that whenever someone decides to leave the project and doesn't want to contribute anymore, there are other people that can take over, that have access to the code and to the packages. We also want the project direction to be something that can be influenced by anyone. So whoever contributes to the project and invests into it should be able to influence it as well. And then we want the people that are engaged to also be promoted and, you know, get something out of contributing to the project. And we've defined essentially four different kinds of roles in the project: there are users, that are people who use and advocate the project; there are contributors, someone who has contributed to the project in the form of a pull request; there are project committers, that are people that have shown a constant record of contributions; and there's the technical steering committee, which essentially leads the project. Well, as a user, you pretty much, you know, use WebdriverIO, and anyone can become a user without even knowing it. Once you use WebdriverIO in your company, once you tweet about it and say how awesome WebdriverIO is, you become essentially a WebdriverIO user and part of the community.
As a next step, you kind of use WebdriverIO a lot and, like me, want to, you know, think about contributing something: you found something missing in the documentation and you want to change that. So with that, you create a pull request and change something anywhere in the code. And that makes you automatically a project contributor. So anyone who has done one or multiple pull requests becomes that. Then, once you show a bit more engagement, once you have helped other people with issues, helped on the support channel and have made a bunch of PRs to the project, you become a committer. And with that, we invite you to the WebdriverIO organization. You will get write access to the repository and you can help us even more by, you know, closing issues if questions are answered, by committing code and so on. It's just one more level of engagement with the project. And last but not least, there's the technical steering committee, which consists of people that have, you know, shown a high record of contribution, have led certain initiatives in the project and have an overall understanding of the code. You become a TSC member once you have roughly contributed over 20 qualifying pull requests in whatever shape or form, and you get nominated by one of the existing technical steering committee members, which is, you know, fairly easy. So once you're part of the technical steering committee, you can release packages of the project and you have even more influence on its direction. So now that we know the code, now that we know how we can contribute to certain things, how we can pull the code and what my role in the project looks like, let's find out how we can actually contribute. Before we go into the topic though, we should answer: why should I contribute to the project?
You know, why should I spend my free time to contribute to a project that I use at work? There are a lot of incentives for different kinds of people. I put some of mine in here, where I would start with giving back to the community; I think that is one of the biggest incentives for me to contribute to the project. I know that almost everywhere I use open source projects where people have invested their free time so I can use them for free. And I think it's just important to be a good citizen, to just give back to the community and help the project that I use the most. It also helped me, when I started working on WebdriverIO, to understand the framework better. And as I write tests in my day-to-day business, it helped me to understand where the errors in my tests come from and how I can fix them. When you just use WebdriverIO as a framework, you are sometimes not aware how things work under the hood, and this will really help you to understand that. It also helps you to understand what the limitations of certain automation practices are, because as you get more familiar with the limitations of the framework, you also understand better what you can do and how you can test certain things. It's of course always good if you are able to influence the project, you know, if you have a specific requirement in your day-to-day job that you want to have done by WebdriverIO. Being a contributor or a technical steering committee member really helps you to more easily add or propose such features and implement them. Contributing to open source in general helps you to improve your coding skills as well as build up a reputation that you can leverage for your own career. And last but not least, you meet wonderful people on the way, and it is personally really fun to do. Christian, sorry to interrupt you.
I know we didn't discuss this, but I just wanted to mention that actually all of this really applies to me. Like, when I started contributing to WebdriverIO, I actually got my current job as a result of all the hard work that I did, because people saw this. And someone who was also active back then in the WebdriverIO project actually invited me to their company. So this is like a perfect example of how these things can lift you up to better places. Absolutely. So to get started, there is a variety of ways in which you can contribute to the project. Most people think contributing to open source means you have to contribute code, but that's not always true. Every project looks at how they can improve the documentation. And as you use WebdriverIO more and more, you find places in the documentation where you think: this can actually be improved to help people understand this area better. You can help us out on the Gitter support channel, where we have over 5,000 people, I think, asking questions every day. And we are just a handful of people and we want to help everyone out there. It's always good to create educational content. That's how Kevin actually joined the project and became a technical steering committee member. He created the Learn WebdriverIO video course and he has a bunch of great YouTube content where he explains how WebdriverIO works. So he's an awesome guy. You can also contribute by just spreading the good word via Twitter or any other social media. You can help us discover bugs and create bug reports. You can make feature requests if you think something could be added to the project and would be useful. And just be creative. I know you all have talent and you can just apply your talent to the project and help out. To get started on the issues, we have a labeling system that allows you to quickly find bugs that you can start working on.
We highlight all bugs with the bug label, or the enhancement label when it is something new, a new feature. I would like to highlight the first-timers-only label, which comes with a good description of how this specific problem or feature has to be implemented. It gives you a better starting point compared to some other issues that need more information. Good first pick is similar to that: it's a good first issue that has a limited scope and helps you to get your feet wet. And every issue that has a help wanted label is something where we as a team don't have enough time to work on it, so we are actually actively looking for people that want to get involved. If you create an issue, there are a couple of good practices that you should follow. One of them, for instance, is that it's always good to follow the issue template, where we ask you to provide the version number and a reproducible example, which is really important for us to reproduce the problem and the bug on our side. Otherwise it is really difficult to understand how and where to fix the problem, and a minimal reproducible example can really make a big difference. Think about it: if we are to help you fix the bug, then we kind of need to understand where we need to fix it, and the reproducible example does that. And you don't need to, like, copy all the code that you are working on into a new GitHub repository. It really just helps us to have a simple file that can reproduce the issue, or a simple test run or project that reproduces the issue. Another important part is providing error logs and as much logging as possible. Use a Gist for that, or any other place where you can dump logs. And especially if you paste these logs into the issue thread, make sure you use proper markdown format with the three backticks. Otherwise it will be really difficult to understand where the logs start and where they stop.
Of course, always provide a descriptive title and description. There are a lot of issues that just say "X doesn't work", and that doesn't really tell us what exactly is going on. And at the end of the day, for questions, we kind of cancelled the support for questions on GitHub, because our experience was that on the Gitter support channel we can interact with you much more easily through the chat. And there's also a bunch of people there that can help you out beyond just the collaborators on the project. So let's say, here's a simple example: if you want to start working on any bug or issue, you go to the repository website, you filter the issues based on the good first pick or first-timers-only label, and you pick up one of them. And really, if you see something that is not described well enough, or you need some more information to solve it, always feel free to post your questions in the issue, and either the person that has created the issue or one of our contributors will help you out. There's also another way to contribute to the project, which is looking at the roadmap and, you know, providing a new feature to the project. Even though we have a roadmap, it doesn't necessarily mean that we are completely focused on it; it kind of helps to steer in the right direction. But if you come with an idea where we say, yeah, that definitely makes sense to add to the project and to the framework, then we definitely add it to the roadmap without question. But the current roadmap gives you kind of a hint where you could get involved. There are some interesting projects, for instance. There is the WebdriverIO Fiddle platform, where we want to build a website where you can run WebdriverIO code in the browser, connecting to a cloud provider or to, you know, a Docker container somewhere in the cloud to execute the test. We already had something like this and we found it really valuable.
And so if you are a front-end engineer that likes to build websites, then, you know, help us out with the Fiddle platform. Or if you'd like to create videos, then help us out and build a video for a certain command or for a certain section of the documentation page. These all give you a direction where you can get involved with your own ideas and your own suggestions. And making a PR is really as easy as creating a fork, pushing to your fork on GitHub, and then making a pull request to the master branch in the WebdriverIO repo. We currently don't really have a format for how these pull requests should look. The most important part for us is that the tests are passing and that the change that you're proposing makes sense. And then, you know, it goes as quickly as saying "looks good to me", we merge it, and it will be part of the next release. And you're not alone in this journey. If you want to start contributing, if you have, you know, issues with that, you can always come to the Gitter support channel, and instead of asking questions on how to use WebdriverIO, you can ask the contributors directly if these questions relate to contributing to the project. We also have a Slack channel. And again, if you come across an issue where, you know, the documented information is not complete and you have questions about it, never hesitate to just ask for information in that issue. And with that, I want to also announce a new thing that we want to start in the project, which is the contribution office hours. So if you have an issue that you want to work on, you can schedule a one-on-one session with me. I have blocked two hours a week for that. And we can block one hour a week to just actively work one-on-one on this particular bug or feature or whatever you want to contribute.
And I'm happy to help you, you know, along the way. With that said, let's get started. So I hope that somehow explains how you can contribute to the project and how you can get started. I would like to ask you to, you know, start looking into the repository and filter the issues for good first pick or first-timers-only. I recently added a couple of good issues that you can start with. We have, for instance, fixing the tests that we are currently skipping in our unit tests. There are some issues and bugs that need attention that have a fairly limited scope. And there are some, you know, TypeScript issues that we can resolve, or you can just think about any other contribution that you might want to do. And we are happy to help you on the way here via chat, or you can just unmute yourself and ask questions if you have any. And yeah, we are here to help you out. Does anyone have questions so far on the presentation or anything related to the contribution process? Okay. I do see that we have a question from Quinn in chat: missed the development workflow, I think. Is that still the case? The development workflow? Yeah, I can actually showcase that live. Let me share a different screen. So, okay. Can you see my IDE and stuff like that? For me it says that you started screen sharing, but I don't see the screen. Let me try that again. Desktop one, share. I think it's actually sharing it not in Zoom but in the other chat, maybe. I think the others can see it, but I don't. We need to try something else. Let me just... Yeah, it's visible for everyone but me. Do you see my IDE now? Yep. Okay, perfect. So here I've checked out the WebdriverIO code. What you of course do, as was explained before: once you have checked out the code, you should install the dependencies with npm install.
This should install the project dependencies that we use to work on the project, like Lerna, and all the dependencies that we need to build all these packages. What you then should do is run npm run setup-full, which resolves all the dependencies of the sub-packages that you see here. WebdriverIO uses a mono-repo setup based on the Lerna framework, and all the packages that we have in our mono repo and that officially come from the WebdriverIO organization live in here. Everything besides the assertion library, actually, but other than that, everything lives here. You see that the naming format is pretty much the same as on npm: the wdio-dot-reporter directory is then the @wdio/dot-reporter package on npm. Once you run that, it can actually take up to a minute or two, depending on how fast your system is, because we have something like 36 packages now and resolving all the sub-dependencies can take a while. And once you have that... ah, I can have a tmux session here, that's great, because then I can have multiple terminal sessions, which I need, because I want to run the watch command, which watches all the JavaScript files in the project. So every time you make a change, Babel is triggered and compiles the files again, so that you can have a really short cycle of making changes and testing them, making changes and testing them. It then continues watching the files. And when I want to make changes, I usually start up ChromeDriver, because it's really fast to start up a Chrome session and test changes against it. You can run tests on Firefox as well, but I usually start ChromeDriver.
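As a cheat sheet, the setup steps from this walkthrough look roughly like this. The script names are taken from the talk and may have changed, so check the repository's package.json for the exact ones:

```shell
# Hedged sketch of the local development setup (script names assumed from the talk)
git clone https://github.com/webdriverio/webdriverio.git
cd webdriverio
npm install          # installs the root dev dependencies (Lerna etc.)
npm run setup-full   # bootstraps and links all sub-package dependencies
npm run watch        # recompiles packages with Babel on every file change
```

The watch command is typically kept running in its own terminal (or tmux pane) while you edit.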
And then we have the examples section, which Erwin mentioned before. For us as contributors it is good not only for documentation purposes but also to run certain scenarios: when you want to change something and make sure that it works, we don't want you to spend a lot of time creating that example yourself, so we have a bunch of things already there. We have a bunch of scripts, some that connect to different cloud vendors like BrowserStack or Sauce Labs. We have a script that runs the DevTools service and makes some performance tests. We have a script for multiremote, a little script that logs into a chat, makes some interactions, and connects to a chat channel, so you can use that if you have changes for multiremote. There are page objects, if you want to test how page objects work. But I mostly go into the wdio section, which has an example for the test runner. I usually use the Mocha one because I usually write tests in Mocha. So in my terminal I go to the examples directory, into the wdio directory, and it has a package.json. If you look into the package.json of that specific directory, it has all the prepared commands to run the scenario for a specific framework. For my Mocha example, I just need to copy that and say npm run test:mocha. You can see that ChromeDriver now kicks off, and, you probably don't see it yet, but in the background my Chrome starts and my test passes. The test is really minimal. It just essentially opens webdriver.io and asserts the title of the page. So this is your basic setup. This is all you need to get started. Let's say we want to change something in the spec reporter. We have the spec reporter here, and we can see its output. And let's say we want to modify the prefix of every line of this reporter. For that we look into the source code of the spec reporter.
The source code, once compiled, is moved to the build directory, and that is what is used and also published to npm. But we work in the source directory. The package has two files, so we look into index.js, which is essentially the reporter. Here is a function that is executed once the runner ends; it's called printReport. And it prints the header, then an empty line, the results, the amount of tests passed, and the failures if something is broken. So let's look at where the prefix comes from. I see something here like a prefix. The prefix is here: it gets the environment combo based on the capabilities, whether or not it's multiremote, and the CID. And so we can just go ahead and change something here, say "hello world", press save. You see that it was automatically recompiled. And if we now run the test again, the Chrome browser starts up, the small test runs, and there we go: we have changed the spec reporter to have a "hello world" output. And this you can do with pretty much every reporter that we have, with every service that we have, and every utility package that we have. You just watch your files and start ChromeDriver. And I can even show an example without ChromeDriver. As Erwin mentioned, WebdriverIO can run with two protocols: the WebDriver protocol as well as the DevTools protocol using Puppeteer. So if I close ChromeDriver, there's no driver listening anymore, and WebdriverIO is smart enough to detect this. Actually, it still does not open the browser, because it wants to connect to localhost on port 4444; setting those values essentially says, hey, I want to connect to a WebDriver server. But since I don't have that, I comment this out. And now it has no information about what to connect to, and it uses Puppeteer under the hood to run Chrome. This takes a little bit longer, because starting Chrome with Puppeteer is not that fast, but it does the same thing: it runs Chrome
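To make the reporter tweak concrete, here is a minimal, self-contained sketch of what such an environment-combo prefix helper might look like. The function name, signature, and output template are illustrative stand-ins, not the actual spec-reporter source:

```javascript
// Illustrative sketch of a spec-reporter prefix helper (names are made up;
// the real implementation lives in the @wdio/spec-reporter source directory)
function getEnvironmentCombo (capability, isMultiremote, cid) {
    const browser = capability.browserName || 'unknown browser'
    const version = capability.browserVersion ? ` (v${capability.browserVersion})` : ''
    const prefix = isMultiremote ? 'MultiRemote' : `[${cid}]`
    // changing this template is the "hello world" tweak from the demo
    return `${prefix} ${browser}${version}`
}

console.log(getEnvironmentCombo({ browserName: 'chrome', browserVersion: '85' }, false, '0-0'))
// prints: [0-0] chrome (v85)
```

With the watch task running, saving a change like this triggers a recompile, and the next test run picks it up immediately.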
With the same commands that we use with WebDriver, just using the Chrome DevTools protocol under the hood. That's essentially it. Any questions on that? Awesome. Then I can go back to the slides. Okay, let's take a look into the issues. I think, as well, if you want to start working on an issue, make sure that you leave a note that you are taking it on so I can assign it to you; otherwise more than one person might work on the same issue, which would be confusing, and someone's work would be for nothing. So if you start working on an issue, make sure you comment on it and say, hey, I'm taking this on and working on it, so I can assign it to you. But essentially, if we look into the "good first pick" issues, a really good way to start looking into the packages and understanding the code is to help with this specific ticket, where we are looking for someone who helps us document the README files of every package that we have. If you look into these packages, you will see that, for instance, @wdio/cli has almost no documentation. It would be helpful for users to understand what this package is doing. So it would be great if someone could write some external documentation about how to run specific commands, or what the commands are. Actually, we could orient ourselves by pretty much copying and pasting what we have here, because we have all this documentation on the website already, and someone could just copy the content from there and move it into the README of the CLI package. So whenever someone looks at the @wdio/cli package on npm, they have more information than just this little note. Furthermore, there is the @wdio/logger package, which has a good description of how you use this internal tool and how you can work with it.
But there are other packages where this is not really well described. I would say @wdio/protocols is well described. Let's see. All the services are usually well described, because we copy the content from the README into our documentation page, so it automatically gets released on the website. But for packages that we use internally, for instance @wdio/config, there is also almost no description. There are different ways you can document that; I think the most important part is documenting the interfaces that the package provides, so that if someone wants to use it internally to build a feature, they have some information to work with. That would not be bad. So there are multiple independent contributions hiding in this ticket. Another one that has a fairly low-level scope is the issue about moving the docs to Docusaurus 2. The WebdriverIO project uses docusaurus.io to build the documentation page, and they have recently released version 2 of it, which I think might already be out of the beta state, because I don't see any notification about that in here anymore. So we want to update our documentation to use Docusaurus 2 as well, and you can help us here with that. What else? There's another documentation issue on how to run tests in GitHub Actions. This is already well described, because we have the Jasmine boilerplate that already uses GitHub Actions to run WebdriverIO tests. There is an example there: checking out the code, installing the project, and then running the command. Then there's an issue on a reporter that has a problem when the DevTools protocol is used; it seems to not emit certain information. This could also be something that someone can work on. Does anyone have questions about one of the issues? Feel free to unmute yourself and just say something. Someone is already asking a question about creating the pull request, that's awesome. Oh yeah, there's also —
We also have some other packages, like expect-webdriverio, the assertion library that we embed into the test runner since version 6. This lives in a different repository because it does not have that many dependencies on other packages. So here we have, for instance, a code coverage of 63%, which is way below expectations; the main repository, I think, has over 99%. I think good coverage is key if we want to release with confidence. So you can just click on this Codecov badge, which will show you the areas where we don't have enough coverage, and click yourself through to the source. And then you can see how many percent are already covered, and I think the matchers don't have a lot of coverage. See, this function here: this function has no coverage at all. You can use this to start testing this part of the code, because it's currently not tested at all. And every single contribution to code coverage is super important and helps us release the package with much more confidence. With the amount of tests that we have, it is very unlikely that we break anything if the tests are passing. And if we still introduce a problem, or if we still introduce a regression, then it's usually because we didn't write a test at that point. Maybe I can also go a little bit more into the specific tests in general. Let me switch back to my IDE and share my screen. So we have a bunch of tests, as Erwin has described, that you can use to make sure that you can run certain parts of the code, or to just test if your changes are working. In general, every sub-package has a tests folder where you see tests for every specific file. Usually, for every file there is a test that goes with it: for utils.js, there is a utils.test.js. And in this test file, you see that we write tests using the Jest test framework.
And you can just go ahead and take some code and write tests for it. Those are the unit tests. Usually the ESLint checks should work out of the box if you use an IDE that supports it. So if I add a semicolon, which is against our coding standards, or not coding standards, but the way we write code in this project, then it will tell you the error and the problem immediately in your IDE. And if not, you can always run — if I save this — you can always run npm run test. No, this is the wrong directory, so I go back to the root and run npm run test, and that runs ESLint among other things. This will run the linter for the whole project and should hopefully find the problem that I have introduced there. There we go: extra semicolon in this test file on line 79. If we remove this, we're good to go. If you want to run a specific unit test, let's say you want to add one specific test for your feature, I always go ahead and say, okay, I write a test that I only want to execute, so I add it with test.only. And then I write some useful description of the test, say here's my test, and I just do a console.log for now. And while I'm developing, I actually never care about ESLint and other things; I just write code and fix all the ESLint issues at the end. So we have added one single test in one of the sub-packages. What you don't want is to run the unit tests for all packages at the same time, because that will just take minutes, and I want to make quick changes and see if my unit tests pass. So what you can do is use the Jest CLI to address exactly your specific test: you say npx jest, which will automatically use the Jest that you installed with WebdriverIO, and you point it at the tests of the @wdio/cli package in the utils file. And since I want to see my progress while writing my tests, I watch my changes with the watch argument.
And since we always measure coverage, you can silence that by setting the coverage reporters to lcov. This way it will not print a huge list of all the files. It doesn't skip the coverage entirely; it just keeps the reporting really minimal, so that you can see everything on one screen. It will only run this test that I have marked with test.only; everything else will be skipped. And we see it here — it takes a while — and then I see my "some useful description" test passes, but I don't see my hello world. Let's see if I run this again. Okay, in that sense I sometimes also just comment things out, because I don't have enough display here. So I comment every other test out, and I see everything that I have here. So now I can write my test and use the Jest assertion to say, for example, expect one to be one; that should pass. But if I say expect one to be two, it should fail. And yeah, it takes a long time because it creates coverage; let me remove that coverage directory. Yeah, that's a problem of having a lot of unit test files: it takes long to collect the coverage. But you get the idea: now you can work on the individual function that you write. And one of the things that I like to do is, if we take a random function out of the class here — something of reasonable size, this function — I usually copy it into my test file, so I have the code that I want to test right there at the same time as writing the test. That way I can execute it exactly in a way that returns an expected value, and then I can see exactly how I can cover every line.
And it helps me not to have to switch back and forth between files, which takes a lot of time; I want to see the code that I'm testing at the same time while I'm writing tests. That's personal preference, but it might help you write these tests a little faster. So at the end, we enable all tests again, run them, and make sure that running these tests in a row also passes. There we go. Erwin, do you know anything else I could look into? No, not right now, because I'm still watching a black screen. Okay. But let me think of something in the meantime. I think that, for me personally, the flow charts are very helpful for getting an idea of where you have to start or where you have to look. Getting a grasp on the issue that you're looking at is oftentimes the trickiest part, and then it's deciphering all the little changes in the code as you go along. Understanding how the code is structured, which we've already discussed, alongside the flow charts, should give you a pretty good idea of where to get started. Can you see my screen now? Maybe it works now? No, but it was working fine for the others, so I think it's a me issue and a you issue. Can anyone confirm that the screen sharing works? I can see your screen. Perfect. Not currently. Okay. Yeah, maybe leave for a little bit to see; you can just rejoin the session and it might start working. All right. Okay, so yeah, I guess that was a good suggestion, to go through the flow charts. Someone actually contributed these to the project, which was really awesome: someone who liked this library and liked making flow charts helped us create them, and we were just happy to introduce them into our documentation page. So there are flow charts for different areas, one for the CLI tool, where you see the different commands that can be executed and what they do. So let's say here you say wdio run, or wdio install, one of these commands.
Then there is another flow chart where you can click through, and it shows you what happens if you, for instance, say install. If you say wdio install, it asks you for the type and the name of the service, installs the package using npm or Yarn, whatever you have installed, and then tries to add it to your wdio config. This needs to be used carefully, because depending on how you have formatted your config, it could make a wrong search and replace. So always check that it has applied the modification to your config file properly, but essentially it helps you to add a service without making changes in the code. Then we have the config command, where you get asked a bunch of questions, and based on whether you want to run in sync mode or not, it will install @wdio/sync or not. And if you use it with --yarn, it will use Yarn as the package manager to install everything. And then at the end it creates a wdio config file. And you can actually map that flow chart directly to the code: the person who created it looked into the code and at what's happening behind it. So you can basically see this as a high-level overview of the code. On the test execution side, which is easier to start with: when you kick off a test with the test runner, we have a launcher class in the CLI that starts a worker instance. Depending on what kind of runner you have, and currently we only have the local runner, it uses that interface to start an instance. For the local runner, this is a process, a normal process on your machine, where a Node script is being executed. So if you start the test runner, you have one wdio process, and then for every test that you run, there is a separate process where the test is being executed.
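The CLI commands from these flow charts look roughly like the following. The exact sub-command syntax is an assumption from the talk, so check npx wdio --help against your installed version:

```shell
# Sketch of the wdio CLI commands covered by the flow charts (syntax assumed)
npx wdio config                 # interactive questionnaire that generates a wdio config file
npx wdio config --yarn          # same, but installs packages with Yarn
npx wdio install service sauce  # installs a service package and registers it in the config
npx wdio run wdio.conf.js       # kicks off the test runner with that config
```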
The local runner is responsible for starting this local process, the worker, and it pretty much listens to messages that the test runner sends to the process, for instance "now run the test" and "now get me all the results", so that we can get all the information from the worker and display it either in the standard output of the test runner, or propagate it to the services and reporters. So there are various kinds of messages that you can see in the code: the local runner sends a post message to the sub-process to start the test run, and some arguments are passed into the sub-process. And so if you have a service that changes the capabilities in its onPrepare hook, that information is propagated into the worker process. So with that, you can change, for instance, the connection details of your test run. Let's say with the ChromeDriver service or with the Appium service, we change the port and the hostname to connect to Appium, or to BrowserStack, or to Chrome, when you have that service installed. And then every runner, like the local runner or the Lambda runner that is still experimental and currently not usable, uses the @wdio/runner package to initialize the framework — Mocha, Jasmine, or Cucumber — kick off the process, listen to the events, propagate these events to the reporters, and make sure that the process and the browser are shut down gracefully and all the information is passed on properly. This is what happens in @wdio/runner, for the test runner as well: when the local runner has started a worker process and starts executing, it starts the runner, which essentially initializes all the test reporters, the framework, and the services. So if you said my test framework is Mocha, then it loads the @wdio/mocha-framework package.
It initiates all the services' beforeSession hooks, as well as the before hook, and then it runs the test framework. So the test framework, let's say the @wdio/mocha-framework package, has a run function in it, and this function will initialize Mocha in the end and start the browser. And when we start the browser, we check if there is any indication to start it with WebDriver. If you have a hostname and a port set up in your config file, then we know we want to connect to a WebDriver server. If there is no driver running and no indication for a driver, then we run the session with Puppeteer. We initialize the browser global, the browser variable, as a global instance, so you can just type browser.command in your test. You don't have to set up the session yourself; it's all done by the test runner, by the @wdio/runner package. And then it checks whether the test has run successfully. It propagates the messages to the reporters and back to the main process to show the information in the standard output. And that's almost it: it prints the summary and kills the worker session at the end. So the @wdio/cli kicks off the local runner, the local runner kicks off the runner, and the runner kicks off the framework. It is complicated, for sure, but it needs all these layers to properly abstract the complexity away and have this separated into multiple packages that we can replace with each other. So we can replace every framework, from Cucumber to Mocha, by just modifying the config. And we want to be able to let you run tests locally, or in a Lambda function some day, where you should essentially be able to run a thousand tests in parallel without using the CPU of your own machine. So that's why it needs this layering and all these complex functions.
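The WebDriver-versus-Puppeteer decision described here is driven purely by the config. A sketch of the relevant fragment of a wdio config file (option names as commonly documented for WebdriverIO; verify against your version):

```javascript
// wdio.conf.js (fragment): with hostname/port set, WebdriverIO connects to that
// WebDriver server; with them removed, it falls back to driving Chrome via Puppeteer
exports.config = {
    // hostname: 'localhost',  // comment these two out to trigger the Puppeteer fallback
    // port: 4444,
    specs: ['./test/specs/**/*.js'],
    capabilities: [{ browserName: 'chrome' }],
    framework: 'mocha'
}
```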
But the easiest thing to do is just follow along the code, get an idea of what every package is doing, and then see where you can apply your changes. That's an interesting point, Christian, because whenever you pick up an issue that has to do with a package that we have, or a service that we have, you'll notice that every package requires its own expertise. Somebody who is working on the Cucumber framework has to have some Cucumber knowledge; if you are going to be working on a reporter, for example the Allure reporter, then you will have to know a little bit about Allure. And it's this way for all packages. Yeah, good point. Like, I haven't contributed to the Allure reporter or to Cucumber a lot, and I always hope that someone else can jump in who has more knowledge about Cucumber and Allure than I have. So if you like Allure, you can just look into what the Allure reporter is doing, help us out with the bugs, and make the Allure reporter better over time. That would be awesome. And here is, again, a general project overview, which is really well described. So, as I mentioned before, there is the CLI, which kicks off the test. If you say wdio run and you pass a config file, then it uses the launcher to get information from your config file. And based on your runner information, which is currently always local, it starts the @wdio/local-runner. And that runner initializes the services that you have in your config file, the reporters that you have in your config file, and the framework that you defined to run your tests. And then it initializes the session with WebdriverIO and attaches the browser instance to the global scope, so you can use it in your tests. That's pretty much it.
Anyone with questions about a specific area of WebdriverIO, no matter if it's code, how we create the documentation, or anything about the contribution process? If not, I would use the chance to go back into the IDE and say something about the documentation page. So WebdriverIO has the documentation in the code. In the docs directory, we have all the documentation pages that you see on webdriver.io, everything that you see when you click on the Docs link at the top. We have the API markdown, the documentation for custom services; it's all written in Markdown. We have the flow charts here, and we have the API documentation too. And then we have scripts that look into the webdriverio package, scrape all the command definitions that we have, and pretty much create the documentation for a specific command out of this information. So, for instance, here is the implementation for the custom selector strategy, and at the top of the file we have a comment block with the documentation, a little code example, and some meta information for the parameters and the return values. And WebdriverIO uses all this information to generate the documentation as you can see it in the docs. So let me see: it parses this example block and creates a nice section with a highlighted code snippet to show how to use the command. And we use the parameter section here to create this little table that tells you what kind of parameters are expected and what they do. So that's actually interesting. We see here that there is no description for the strategy name and the strategy argument. And so someone could take this and add the documentation by just modifying this file and adding a proper description for these parameters.
So the strategy name should describe the custom strategy that is applied, I don't know, something like that. And for the strategy argument, as we can see here, this is passed to a function. So actually we should note this is an "any" argument, because once we add a selector strategy and give it a name, we apply a custom function to it that will actually be executed in the browser. So this can be any argument: you could pass in a number to get the third element of something, depending on what your custom strategy is. And then you can see here the parameters that are applied to the strategy. So we can save this. And now we build the docs, which you can do by going into the website directory, where our website is located, and just running npm start. It now compiles all the markdown files, downloads the service READMEs from external repositories, compiles the Sass styles, and starts the website on localhost:3000. So I will change my screen to look into the browser. You should now see the browser, and this is the website running on localhost:3000. And if we now click on API and go into the custom$ function, we see that the documentation has been applied to this table. We can compare with the currently deployed website on webdriver.io, custom$, and we see that these table cells are empty there. And now, with the change that we made to this specific command, we have some information about the parameters. So I don't want to take this change away from everyone on this call: in the workshop you can make a suggestion for how this documentation could look, based on the example. Also feel free to add the missing dot here, and any other documentation that could be useful for this command, and make a change for this simple documentation fix. I mean, we can just do this.
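The doc-comment format described here might look roughly like the following. The structure is inferred from the talk; the real command files live in the webdriverio package's commands directory, and the function body here is a made-up stand-in so the sketch runs:

```javascript
// Rough sketch of the doc block the docs generator parses (structure inferred from the talk)
/**
 * The custom$ command fetches an element using a previously registered
 * custom selector strategy.
 *
 * <example>
 *  :example.js
 *  const elem = browser.custom$('myStrategy', '.someSelector')
 * </example>
 *
 * @param {String} strategyName name of the registered selector strategy
 * @param {*} strategyArguments argument handed to the strategy function
 */
function customStrategySketch (strategyName, strategyArguments) {
    // stand-in body; the real command resolves an element in the browser
    return `strategy "${strategyName}" called with ${JSON.stringify(strategyArguments)}`
}

console.log(customStrategySketch('myStrategy', '.someSelector'))
```

Filling in the @param descriptions in a block like this is exactly the kind of change that surfaces in the generated parameter table.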
Let's say we create a new branch: git checkout -b to create a new branch, and then name it with my initials and then custom-dollar... I think a branch name should not contain a dollar sign, so just custom-doc-change; any random branch name will do. Then you commit the changes that you made to the command. Check what you have committed, and have a descriptive message for your commit. We don't have any rules here like other projects do, because we don't have any automation around commit messages yet, so just feel free to say "update docs". I'm not seeing these changes, I think, Christian. Oh, right, right. Thank you for mentioning that; let me switch back to my IDE. So, again: I just checked what has changed, then I added the file with git add, and now I make the commit. I have some shortcuts here, but essentially it's the same: you say commit "update docs". And then you push it to your remote branch, to your fork. If you don't have access rights to the repository yet, if you're not a project committer, you need to use your own fork. You can check your remotes with git remote; if you want to add your fork, you say git remote add, then a name like upstream, and then enter the Git URL of your fork. That would be github.com/username/webdriverio; that's usually the URL for it. In my case, I can use the main repository, so I just say git push; that should do the trick. There we go: git push --set-upstream with the branch. I've been a member of the steering committee for so long, and I didn't even do this before; I was always working from my own fork myself. I mean, yeah, there's no reason not to continue using your fork. And then, since recently, GitHub gives you a link that sends you to the page where you can make your pull request. Interestingly, it shows no changes; different branch...
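The branch-and-commit flow from this walkthrough, run here against a scratch repository so the sketch works anywhere; in the real workflow you would be inside your WebdriverIO fork instead, and the file path would be the command file you edited:

```shell
# Sketch of the branch/commit flow against a throwaway repository
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo "initial" > custom.js
git add custom.js
git commit -q -m "initial commit"
git checkout -q -b cb-custom-doc-change   # branch names should avoid characters like "$"
echo "parameter docs" >> custom.js
git add custom.js
git commit -q -m "update docs"
git log --oneline
# afterwards, against your fork: git push --set-upstream <your-fork-remote> cb-custom-doc-change
```

GitHub then offers a link to open the pull request from that pushed branch.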
I think if you want to show this, you have to switch screens again. Oh yeah, good point. Stop sharing, change here, share this one. Okay, it didn't copy this properly. There we go; this is a problem when working with this shell. So this is the pull request template that you usually get. You say "update docs" for the custom$ command, then write a short description about what you have changed. You mark it as a documentation change, or as a bug fix, or if you have something that doesn't apply, just feel free to add a custom checkbox, whatever. The checklist helps us make sure that you've read the contributing guidelines, that you added tests if necessary (for documentation it's usually not), that you have added the necessary documentation, and that you have added proper type definitions, which is also not applicable for this type of change. And the comments section you can leave out. And then I think we're ready for submitting it. But I will not make this pull request, because this is something that you could do: adding this little documentation change is a great first entry into contributing something to WebdriverIO. Let me switch back to the IDE. There are a bunch of commands where we could definitely use some better documentation and more examples. Let's say, I don't know, the switch commands, or in the element section, where we usually have one line of description that could be enhanced; whatever you find useful. But I hope this somewhat shows how you can set up the website, run it, and see how your changes to the documentation are applied automatically. All right, Christian. Yeah, I'm not sure if Quinn is still here; I think he already left. But he was asking if the video will be posted somewhere. Exactly: we will post the video on the contributing page on webdriver.io.
So on the top bar you see a Contribute link, and once we get the recording we will either upload it on YouTube, or, I think, the foundation will host the video somewhere, and we will embed it on that contributing page.

All right. I'm guessing still no questions from anyone. Did anyone start to work on a specific issue or topic?

Yeah, Justin is going to start on something and will report back if he faces any issues.

That's great. Do you know what you will try picking up?

I was just going through the issues that were raised, and I was thinking I'll pick up the DevTools issue that was recently raised; I think there was some issue with the nightly version. I was just going through it.

Yes, that's great. Anything in the DevTools area would be awesome. And again, the best way to start is to just write yourself a minimal script that runs the devtools package. Let me check and show you how my setup usually looks. This is my local checkout, and you see here I have files a, b, and c, which are random files that I created for myself. In a.js, for instance, I use the devtools package to just run a simple script, and if I want to test whether the devtools package is able to connect to Firefox, I can just work in this example script. Over time these scripts evolve; they help me to debug certain things and work on certain things. I have b.js with a minimal example to connect to Sauce Labs, and c.js with an example to run, I guess, Chrome or Firefox with some binary setup. Just have a simple script somewhere in the root directory that you can always execute: like here, I could just run a.js and it should try to launch Chrome and run that specific script. Maybe it's a good idea to actually add these to the project for people to just get started with.
Yep, absolutely. I guess what could be cool is to add them, not to the end-to-end tests, but to the devtools examples? Or what would be the best place? Or a standalone folder where we just have some standalone scripts like here?

Yeah, maybe, because if you put them separately, then it's clear what they can be used for. If you put them alongside the examples, it's not really an example; it's a script that can be used to debug while you're working on something.

Yeah. I've honestly never done this, and I think it would be very useful to have.

Like the simple script here?

Yeah, absolutely. Usually, if people make changes, they sometimes also try to link the packages from here into the project where they discovered the problem. That could be possible too. I've never done this myself, so I don't have documentation for it. If you work like this, feel free to add that documentation for us, or just file an issue with the documentation and we can find out where it fits on the website.

Yeah, I have tried to do that, but honestly I've run into some issues before. I can certainly write up what I know so far.

Yeah, I find linking projects hard, because sometimes your change spans multiple subpackages, and then you have to link all of those packages, which is why we have the smoke tests, for instance, in the `tests` directory. There's one runner file that uses the launcher to launch the WebdriverIO test runner. You see here, it launches this specific config file with an ad-hoc modification so that it runs these test files, and in these directories you can check, for example, test.js: it pretty much runs a user test the way someone could have it in their own project.
But instead of calling an actual driver, it uses an internal helper package, the WebdriverIO mock service, which returns pre-created responses. Let's look into that. This mock service defines mocks for all the protocol commands that we have. You see here: every time WebdriverIO calls the getTitle command, it always returns a 200 with this response. So if you run your smoke tests, calling browser.getTitle() will always return the same response, and you don't need a driver or anything to run these tests. They are like unit tests, but they run fully end to end, just without the networking part, without making actual requests. This is really helpful if you want to test the interactions with services and reporters and whether they work well together. For example, we have tests here that make sure the commands are executed synchronously or asynchronously; as you can see, you can use async/await as well without problems. We test our middleware, where we set up a specific set of responses, then execute the commands, and based on the set of responses that we expect, we see that, for instance, a stale element is being fixed automatically. For that one, you can see this scenario is a custom command added by the mock service, and you can find it here: it's added in the before hook, because this mock service is, like the Sauce service or any other service, a service you can use to do certain things on specific lifecycle events, such as before the test starts. And here we have the scenario that defines this set of responses: the first time you try to query the element, we return a valid response. The second time, we say "no such element". So we found the element, but once we click on it, it says we cannot find it anymore, which is what usually happens when there's a stale element exception.
And then after that, we find it again with a new element ID, allow the click on that new element ID, and the click happens. This is all described in this little scenario, which uses nock as a stubbing framework. So you can use the mock service to run the full WDIO test runner, similar to what we have in the examples, without having to set up a driver. You can run this with `npm run test:smoke`, and to filter it you can pass the name of the suite as an argument, here "Mocha test runner". If we apply that, it only runs the smoke tests for the Mocha test runner: four files are executed, and they pass.

I think it's worth mentioning that what you see over here are not the WebdriverIO commands; it's actually the WebDriver commands that are mocked, right?

Yes. WebdriverIO is actually executing all the commands and making the requests, but it's making them against WebDriver, and that's where we stub. And I think the request isn't really happening: the underlying request module is mocked so that the request doesn't go out, and nock intercepts it and returns the expected response. So you see that this.command.deleteSession or getTitle are all mapped to the protocol commands we have defined in `@wdio/protocols` for WebDriver. Here you see all the endpoints; take getTitle: this is a protocol command for this WebDriver endpoint, and we can modify it in the service by calling this.command.getTitle, which gives us the nock instance, where we can use nock's interface to say: always return 200 with this response.

I think it's maybe worth showing a WebdriverIO command that uses getTitle; getTitle itself is a protocol command, so there's essentially no code behind it.
But what I'm trying to say is: if you show a WebdriverIO command that uses getTitle, then you will see the protocol command get used inside it, which might make it more visible. So in one of the smoke tests we had essentially this, which under the hood calls browser.getTitle(), which is the WebdriverIO command. That in turn uses the WebDriver getTitle command, which is mapped to a specific URL, and as we showed in the protocol mapping, that URL can actually be called with a tool like Postman as well; it just makes an HTTP request.

That's the layering Erwin explained. The first layer is WebDriver: the `webdriver` package itself has just four or five files, really. It just runs a command based on the information it gets from the protocol definitions I've just shown, from the webdriver.json, where we have the command name and the parameters it expects. So here, where we have parameters, we expect x, y, width, and height variables, and based on that we can do some validation of the parameters you have provided, and essentially it just makes a request at the end. That's all the `webdriver` package is doing: it provides an interface and gives you the bare response back, which WebdriverIO can then use to build more complex commands. Take the keys command, which either used the sendKeys command that still existed when the JSON Wire Protocol was the thing, and now uses the perform-actions command; or the newWindow command, which does some fancy things to create a new window and switch to it automatically: it uses the execute command, the getWindowHandles protocol command, and the switchToWindow protocol command to open a new window and switch to that window.
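The layering just described, a protocol table driving parameter validation and request building, can be sketched like this. The inline table loosely mirrors the kind of definitions `@wdio/protocols` provides; all names here are illustrative:

```javascript
// Tiny stand-in for the protocol definitions (webdriver.json style):
// endpoint -> HTTP method -> command name + expected parameters
const protocol = {
  '/session/:sessionId/title': {
    GET: { command: 'getTitle', parameters: [] }
  },
  '/session/:sessionId/window/rect': {
    POST: { command: 'setWindowRect', parameters: ['x', 'y', 'width', 'height'] }
  }
}

// What the thin `webdriver` layer essentially does: validate the
// provided parameters against the definition, then build the request
function buildRequest (sessionId, commandName, args = {}) {
  for (const [endpoint, methods] of Object.entries(protocol)) {
    for (const [method, spec] of Object.entries(methods)) {
      if (spec.command !== commandName) continue
      for (const p of spec.parameters) {
        if (!(p in args)) throw new Error(`missing parameter "${p}"`)
      }
      return { method, url: endpoint.replace(':sessionId', sessionId), body: args }
    }
  }
  throw new Error(`unknown command "${commandName}"`)
}

console.log(JSON.stringify(buildRequest('abc123', 'getTitle')))
// -> {"method":"GET","url":"/session/abc123/title","body":{}}

try {
  buildRequest('abc123', 'setWindowRect', { x: 0, y: 0 })
} catch (err) {
  console.log(err.message) // -> missing parameter "width"
}
```

The resulting URL is exactly what you could also call by hand with a tool like Postman; everything richer, such as newWindow, is composed on top of these bare requests by WebdriverIO.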
switchWindow is similar: it switches to the window using the protocol command, with some abstraction on top, just a layer on top of the protocol commands. And for devtools, to complete the picture, we have implemented all the WebDriver commands but put a Puppeteer implementation behind them. For elementClear, which clears an input element, we have a specific command for that: a specific piece of JavaScript that we execute on the page to clear that input, using some parameters. The click command is a little more complicated. Oh no, this is elementClick: it checks the type and name, and otherwise it calls click on the element handle, so it uses the Puppeteer function to click on that element, wrapped in a mechanism so that if a dialog opens, it will be handled properly.

So yeah, I think we are almost at the top of the hour. Any questions before we close the session? I hope this was somewhat useful for you. If you have some time after the session, I would like to ask you to give your feedback on the session in the document on the Kiko platform. Let me switch my screen again. I think this one is shared now. Here on the Kiko platform, where you found the link to the workshop session, there is a section on session feedback; please provide some information about how you liked the session, or about how we can improve next time we give this workshop. Feel free to add any feedback there. Besides that, thank you all for participating, and I think I will stop the recording.