So, hello everyone. I will be presenting my package called ralger, which aims to facilitate web scraping in R. Before that, a few words about myself: my full name is Mohamed El Fodil, I am a PhD candidate in economics, and I tweet under Mo Fodil.

So let's dive into an example right away. Here you can find the link to a web page that ranks the highest revenues in the cinema industry. First, I am interested in extracting all the titles of the movies on this page. In ralger you only need two inputs: the link of the web page and the HTML or CSS element that references the needed information, in this case the titles.

To find that element, we can inspect the page and look for the tag, or we can use a great tool called SelectorGadget, which gives you the CSS selector you need with a single click, which is quite amazing. You click here, you can see the highlight, and you just have to copy this field and paste it into the ralger function.

The most basic function in ralger is scrap(). It takes at least two arguments: the link of the web page and the node. We have copied the node, as we have just seen, we have the web page link, we call scrap() with the link and the node, and we get our 200 observations. Just to check, I called head() on the data, and we can see Avengers, Avatar, Titanic, which corresponds.

Why did we get 200 observations? Because the web page displays only 200 at a time. Suppose I want to extract all the titles, say from 1 to 1000. That is possible and quite easy to do: you just have to find a pattern in the link. If I click next page, you can see a 200 here, which states that this page displays the movies after 200; if I click next page again I get 400, then 600, and so on, I think until 800.
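A minimal sketch of this first step, assuming the page is the Box Office Mojo lifetime-gross chart; the CSS selector is a placeholder for illustration, and in practice you would copy the real one from SelectorGadget:

```r
# install.packages("ralger")
library(ralger)

# Illustrative link and node; the actual selector comes from SelectorGadget
link <- "https://www.boxofficemojo.com/chart/top_lifetime_gross/"
node <- ".mojo-field-type-title"  # hypothetical CSS selector for the movie titles

# scrap() fetches the page and extracts every element matching the node
titles <- scrap(link = link, node = node)
head(titles)  # the first few of the ~200 titles shown on page one
```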
So each page increments the offset by 200. Using the scrap() function in conjunction with the paste() function or the glue package, we can replace this changing element with a vector, build all the links, and scrape the whole ranking. Here you have an example: I used the glue package, replaced the changing element with a sequence of numbers from 0 to 800 in steps of 200 of course, ran it through scrap(), and got 1,000 elements. It's that easy; you don't need to use any map or for loop or whatever.

We can also use a function in ralger called table_scrap(): you just provide the link of a web page that contains an HTML table and you get it, it's that simple.

But now suppose we have a website where the table is not in HTML format. For example, if you go to the IMDb website, say you want to extract the title of each movie, its rating, and its release year, and of course you want that in tabular format: a data frame whose first column is the title, second column the rating, and third column the year. With ralger you can extract a table directly from a website using the tidy_scrap() function. What tidy_scrap() does is take a vector of CSS or HTML elements and the corresponding column names, and it extracts a table for you directly, so that's quite interesting.

I think I'm running out of time, so thank you very much. If you have any questions, please feel free to reach out, and don't hesitate to check out the documentation; there are many other functions that can be helpful when scraping data. Thank you again.
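The pagination trick and the two table functions can be sketched as follows. The links and selectors are assumptions for illustration; only the function names and signatures come from the ralger package:

```r
library(ralger)
library(glue)

# Pagination: the offset in the URL grows by 200 per page, so we build
# all five links at once; scrap() accepts a vector of links directly
base  <- "https://www.boxofficemojo.com/chart/top_lifetime_gross/"
links <- glue("{base}?offset={seq(0, 800, by = 200)}")
all_titles <- scrap(link = links, node = ".mojo-field-type-title")

# table_scrap(): pull an HTML <table> from a page as a data frame
# (link is illustrative; choose = 1 picks the first table on the page)
tab <- table_scrap(link = "https://example.com/some-page-with-a-table",
                   choose = 1)

# tidy_scrap(): build a data frame from several CSS nodes at once
# (selectors below are hypothetical placeholders)
movies <- tidy_scrap(
  link     = "https://www.imdb.com/search/title/",
  nodes    = c(".title-node", ".rating-node", ".year-node"),
  colnames = c("title", "rating", "year")
)
```

Each element of `nodes` becomes one column of the resulting data frame, named by the matching entry of `colnames`, which is what gives you the title/rating/year table described above.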