We'll give you the next one. This is Jayjitya. He is going to talk about a KDE wiki client, right? Yeah. Go ahead. We're running a bit behind, so hurry up.

So, this is my project for GSoC 2016: the WikiToLearn desktop client. First of all, what is WikiToLearn? WikiToLearn is a collaborative textbook initiative started by KDE and open source contributors. It is run entirely by volunteers like you and me. Our goal is to create freely available textbooks for higher education, so that people can easily study mathematics and other complicated subjects. Content is written by both students and professors: students usually write the initial notes while studying for their exams, and professors review those notes when they have time.

So, why was a client needed? We believe that knowledge only grows when it is shared. The problem with WikiToLearn was that it was only available as a website; there was no application for Android devices or any other platform. In order to reach as many platforms as possible, we needed a cross-platform client, so we decided to use the Qt framework.

The technologies we used: the Qt framework builds the UI and provides cross-platform portability. C++ is used for the backend and performs most of the client's logic, such as downloading images, maintaining the database connection, and querying the API. Next are JavaScript and QML, which create the application's UI, because Qt Widgets are not really suitable for Android and other mobile devices; we used QML and Qt Quick along with JavaScript instead. Finally, an SQLite database keeps a record of all the information we track, like how many images have been downloaded and whether a page is fully synced with WikiToLearn or not.

So, the development phase.
I started by reading the Qt documentation. I was not familiar with Qt, so I read the docs, practiced some code, and tried some of the examples. Then I moved on to the WikiToLearn API: I read its documentation and performed some queries in the browser by making simple HTTP requests with JavaScript. You perform a query, get JSON back, and parse it to get the HTML pages.

When I was confident enough, I started to build a prototype. This is the initial prototype. As you can see, it had a very simple UI in the beginning: it could save a page, load a locally saved page, delete a page, and update a page, all through simple buttons. It was just a prototype. Even to add a page you had to open a pop-up and manually type its name, and the spelling had to be exactly right, or the client could not form a proper query for the API and the download would fail. Here I am just copying page names and pasting them into the prototype.

Even so, the prototype handled some basic tasks really well, like downloading all the images and dependencies of a page. Some pages could not be displayed properly because the WikiToLearn API did not provide the CSS for them, so I had to find a way to fetch the necessary CSS myself. Here I am loading a local page saved in the file system and browsing through it. If you notice, this saved page looks pretty much exactly the same as the web version, except for the CSS part, because the API did not provide the CSS needed for it. This page had a lot of dependencies and all of its images were SVGs, so I had to make queries for all the images, convert them to PNG, and download them locally.
An entry was also made in the database to keep track of how many images were actually saved, because sometimes, when you don't have a good internet connection, a download may fail. That is why there is a database. To delete a page, I had to type in the page's ID; it was far too manual and not at all user friendly. You provide an ID, the client queries the database, and if the record is found it deletes the page from the file system and removes the database entry too.

The challenges I faced at the prototype stage: first, making a cross-platform UI. As you have seen, the UI was really bad, barely usable even on a normal desktop, and it also had to work on mobile devices. Second, saving pages offline with all their dependencies. Like I said, the WikiToLearn API did not provide the CSS or the JavaScript needed to display the pages, so I had to find a way to get all that data somehow. Third, the search feature. In the prototype I had to type the exact name of a page and click "save page", which is not at all user friendly. It had to work like a Google search: you type something, it displays the results, and it takes you to the page you want. Fourth, internal changes inside our organization. WikiToLearn was still in its early stages, and there were changes to the API almost every day, so I had to redo some of my work to keep the client compatible.

Now comes version 1.0. This is the home screen of the current version of the WikiToLearn desktop client. As you can see, it is much better than the prototype. It has a home screen and a side panel where you can perform all the actions, and it is cross-platform too: it currently runs on Linux, and it has the same UI on Android devices and works exactly the same way.
Now, the client architecture: how does my client work? The WikiToLearn client acts as middleware. It makes GET and POST requests to the WikiToLearn API, fetches the data, gets JSON back, parses the JSON to extract the HTML, and then shows that HTML in a web view. If a user decides to save a page, it again queries the API, saves all the images in the file system, and makes an entry in the database. That is pretty much the architecture of the client.

So, let's see a demo of the client now. As I said, this is the home screen of the application. I start by searching for "one-time pad"; some of you might be familiar with it. A one-time pad is a cryptographic technique for encrypting a message or a file. In the prototype, as you saw, I had to type the exact name to save a page, but now there is a built-in search feature. I like this page, so I am going to save it offline in my local file system.

How does the application decide where to save? Because it is a cross-platform application, we need to find out where the public directory of that particular OS is. For example, on Windows it is the public Documents directory; on Linux it is different, and on Android it is different again. So I wrote a small piece of code that detects the OS type and finds the common path where the files should be saved; in my case it was a shared directory. And these are the same pages, offline in my file system. As you can see, the files are named after page IDs, not page names, because two pages might have the same name, but their IDs will always be unique. So I decided to store the files under their IDs instead of their names.
But when you see the pages in the client, they appear under their names, so you won't get confused. Next, let's try a more difficult page. This one has a lot of images and SVGs; let's see how it saves. I click the "Save offline" button, and now it is downloading all the images and making entries in the database.

Here you can manage your saved pages. Suppose you want to update a page. Every page has a revision ID attached. The client checks the local database, queries the WikiToLearn API, and compares the revision numbers: if they match, it does nothing, but if the revision has changed, it deletes all the old data, downloads the new data, and saves it in your file system. Here I have deleted one page. That is pretty much how the client works.

Now I am demonstrating a page in offline mode. I have disconnected my internet to show how the page looks offline. It looks exactly the same: it has all the CSS and all the images with it, so it works even when you don't have an internet connection. Just save the pages you like and read them on the go.

So, how can you actually contribute? I would suggest starting by going to the Git repository and cloning it onto your system, downloading and configuring the Qt framework, and building the client locally. The good thing about the client is that it is very generic. The way I have coded it, you can make your own client for your own wiki. KDE, for example, runs its own MediaWiki instance for its documentation. If a developer wants to make a client for that too, he can just take the code, change some of the API links and a few regular expressions, and the client will work flawlessly for that wiki as well. For example, take Wikia, the online fandom site, which uses a modified version of MediaWiki.
So, if you have access to the Wikia API, make some queries to get familiar with it, then make some changes to the client, to its queries and its database, and it will work for Wikia too. It's quite generic: it can work with most sites that use the MediaWiki API. I have even tried it with Wikipedia, and it works fine too.

And please join WikiToLearn in general; it is a very good initiative. You can contribute in other ways too: if coding is not your thing, you can do translations or write your own courses on WikiToLearn. Thank you.