Hello everyone, my name is Karthik and I'm a third-year computer science undergraduate from India. This summer I participated in Google Summer of Code with KDE, where I worked on face recognition workflow improvements in digiKam.

My journey with KDE started in December 2019, when I first came to know about Season of KDE. Even though I wasn't selected, I'm still glad I participated, because it helped me clear the initial hurdle of contributing to open source. I spent January and February contributing to KDE anyway, specifically to Rocs, a graph theory IDE, and along the way I found a ton of helpful developers who were always there in the IRC chatroom whenever I was stuck on a problem. Thank you for that. When the list of accepted organizations was ultimately announced, I immediately knew which one I wanted to contribute to.

My project with digiKam did not focus on introducing a single major feature; instead it aimed to address existing issues in the face recognition workflow, as well as to introduce a couple of new features to improve the user experience. Among the features I introduced: a new category of faces called ignored faces, which the user can use to tell the algorithm "I don't want this face to be recognized"; a new image sorting order that places images newly recognized by the face recognition algorithm before the other images, so the user can pay more attention to those faces; and automatic categorization of faces based on their similarity to one another.

Instead of just talking about these features, I think it is much more effective to show a demo. I have my local installation of my branch pulled up here, and this is my album of people which I'll run face recognition on. The results may look something like this: this is the set of photos that have been recognized by the algorithm, and these tags represent the various people I have tagged myself. After recognition is complete, the user can head over to a particular person, and the view will show the faces of just that person. This is the current state of digiKam. Actually, let me show a different example. In this person's view, the faces with a green overlay are faces newly recognized by the algorithm; it is asking me to either confirm or reject each of them, while these other faces have already been confirmed by me. The algorithm made its suggestions based on those confirmed images.

You'll notice that the new and confirmed faces all appear mixed together, with no order. This can be very tedious for the user, especially with a very large set of images, since the user has to jump from image to image to image. So I implemented a new sorting order, which the user can enable to place all newly recognized faces before the already confirmed images (a small sketch of the idea follows below). Now the user can simply select all of them and confirm them at once if they are correct. However, another issue remains in the unconfirmed view, which is responsible for showing the new results across every single person that has been recognized.
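The sorting itself is simple in spirit. Purely as an illustration (this is not digiKam's actual code, and the type and field names here are invented), a comparator that moves unconfirmed suggestions ahead of confirmed faces could look like this:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical face item: 'unconfirmed' marks a new suggestion
// from the recognition algorithm. All names are illustrative only.
struct FaceItem {
    bool unconfirmed;    // true = newly suggested, awaiting review
    long long dateTaken; // secondary key, e.g. photo timestamp
};

void sortSuggestionsFirst(std::vector<FaceItem>& faces)
{
    std::stable_sort(faces.begin(), faces.end(),
        [](const FaceItem& a, const FaceItem& b) {
            // Unconfirmed suggestions sort before confirmed faces;
            // within each group, order by date.
            if (a.unconfirmed != b.unconfirmed)
                return a.unconfirmed > b.unconfirmed;
            return a.dateTaken < b.dateTaken;
        });
}
```

Using std::stable_sort keeps the existing relative order within each group, which matches the behavior described above.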
This view has the problem that, even though it is sorted, all the people appear mixed together. For example, this face and this face have been recognized as the same person by the algorithm, yet they appear very far apart. It would be very helpful for the user if we could pre-group these faces, so that all the decisions can be made within a single view instead of going to every person's view one by one. To do this, I implemented a new categorization option: separating items by faces. This is set to be the default in the upcoming digiKam release. All faces that have been recognized as the same person, because they show some degree of similarity to one another, now get grouped under that person's name. So I can select all of them and confirm them, move on to the next person, select all, and confirm those too. This is much easier for the user.

Apart from these features, I also introduced the ability to reject suggestions. Earlier we could either confirm a suggestion or delete the face entirely; now we can reject it, that is, tell the algorithm that this is a face but not the person it is suggesting, and remove it. And, as I mentioned earlier, I can confirm all these faces at once.

I also implemented a new sorting order in the people view. You'll notice that, alphabetically, M appears before O, and earlier this whole view was sorted alphabetically; but now, the people who have new faces in them are pinned to the top. This is dynamic in nature: for example, if I confirm the two or three new faces from this person, they will move below the others. Everything is sorted in order of priority, which is helpful because, if the people view is very large, the user can immediately see which people have the highest priority and require his or her attention.

That's about it from my side. Thank you everyone for listening, and I would also like to thank KDE and the digiKam developers for selecting me for this opportunity. Thank you.

Hi everyone, I am Deepak Kumar, a Google Summer of Code 2020 student for the GCompris project. Today I'm going to speak about my Google Summer of Code 2020 project and my contributions to GCompris so far. Let's begin with the presentation. I have been contributing to the GCompris project for the past year. In the beginning, one of my major contributions was adding tutorials to the odd-and-even-numbers activity, to teach the child what odd numbers and even numbers are. The need for this was that if a child doesn't know what odd and even numbers mean and starts playing the activity directly, it can be difficult for them to play and learn the concept at the same time. I was also selected as a Season of KDE 2020 student.
My Season of KDE 2020 project was to improve the multiple datasets of the clock game activity and to add multiple datasets to the balance scales activities: balance scales, balance scales with kg, and balance scales with ounces. I started my Season of KDE work by improving the multiple datasets of the clock game activity. The clock game activity already had multiple datasets with two different levels to select; in order to fit the age-range-based learning approach, I needed to provide multiple datasets with five different levels. After finalizing the five-level datasets with the mentors, I added them to the clock game activity. I also added an OK button to the clock game activity to check the answer once the clock has been set by the child.

Afterwards I worked on adding multiple datasets to balance scales, balance scales with kg, and balance scales with ounces. All three activities share a common code base, the scalesboard activity. So for the addition of multiple datasets I only needed to make changes to the scalesboard code in order to load the multiple datasets for all of the different balance scales activities. I first made the code changes and implemented multiple datasets for balance scales; after testing that it worked fine, I added multiple datasets to the other two balance scales activities as well.

This is a multiple-datasets screen of the balance scales with ounces activity. On the left you can see that there are different weights available, along with the weight of the gift; the child needs to balance the weight of the gift using the different weights. The main goal here is to teach the child arithmetic operations, such as division, in order to balance the weight. On the right side you can see another multiple-datasets screen of the balance scales with ounces activity. You may notice that there are stars shown for each dataset; the stars indicate the difficulty level of that dataset. For example, two stars here means the dataset is meant for children between 5 and 8 years of age.

Coming to my Google Summer of Code 2020 project: its main goal was to implement multiple datasets for several memory activities, the mirror-the-given-image activity, and the guess count activity. I started my work during the Google Summer of Code coding period by adding multiple datasets to the memory activities. There are a total of 20 memory activities, but I didn't need to add multiple datasets to all of them; I only needed to add multiple datasets to 10 to 12 memory activities, while the other memory activities keep using their default datasets. So while adding multiple datasets to any of the memory activities, I had to make sure that the changes to the shared code base would not affect the memory activities that use the default dataset. I made the code changes in such a way that both the activities with multiple datasets and the activities without them load properly (see the sketch below). Among the memory activities, there are image-based ones such as the memory game, and the rest are based on arithmetic operations, such as the addition memory game, the multiplication memory game, and the subtraction memory game.
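GCompris activities themselves are written in QML, so purely as a language-neutral illustration of the fallback idea just described (all names here are invented; this is not GCompris code), a sketch might look like this:

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative sketch only -- not GCompris code. An activity either
// ships several levelled datasets or falls back to a single default.
struct Dataset {
    int difficultyStars;           // e.g. 2 stars ~ ages 5 to 8
    std::vector<std::string> data; // the level contents
};

struct Activity {
    Dataset defaultDataset;
    std::map<int, Dataset> levelledDatasets; // level -> dataset
};

const Dataset& resolveDataset(const Activity& a, int requestedLevel)
{
    // If the activity has multiple datasets, pick the requested level;
    // activities without multiple datasets keep using the default.
    auto it = a.levelledDatasets.find(requestedLevel);
    if (it != a.levelledDatasets.end())
        return it->second;
    return a.defaultDataset;
}
```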
For the arithmetic memory games, such as the addition memory game, there were in total two modes for each activity: one played against Tux and one without Tux. So the multiple datasets cover both of the modes. The next activity I worked on adding multiple datasets to was mirror the given image. In mirror the given image I added multiple datasets with three different levels: the first is a small grid of size three by three, the second a medium grid of size five by five, and the third a large grid of size seven by seven. Here you can see a multiple-datasets screen of the enumeration memory game: there are cards with butterfly images on them, and the child needs to count the butterflies and match each card with the equivalent card. That was all for my presentation. Thanks, and have a nice day.

Hello everyone, my name is Sharaf Zaman. I'm a GSoC student for Krita; my project is SVG mesh gradients in Krita, and my mentors are Dmitry and Wolthera. So let's get started.

What are SVGs? SVG stands for Scalable Vector Graphics. It's a vector image format for 2D graphics. What I mean by that is that a vector image format is defined in terms of points on a Cartesian plane, and with those points you can make lines, circles, triangles, and so on. Raster images, as their counterpart, are just a grid of pixels plus the information about how each pixel should look. One big difference between a vector graphic and a raster graphic, which is often stated, is that if you scale a vector graphic you see no pixelation or loss of quality, whereas in raster images pixelation does happen. That's one big feature of vector graphics. Other features: they're very lightweight, and they're customizable, that is, you can control the fill color, the gradients, the font type, and so on. So that is SVGs.

The next thing is: what are mesh gradients? By definition, mesh gradients in SVG are based on an array of Coons patches, basically a grid of Coons patches. So what is a Coons patch? If you take four Bézier curves and place four colors on the corners, and then interpolate between them, the surface you get is a Coons patch (see the formula below). In the image on the right there are four Bézier curves, one, two, three, and four, and four colors on the corners; when you interpolate between them, the final render you get is a Coons patch. When you have an array of them, it's a mesh gradient.

My project basically had four main objectives: one was to parse mesh gradients, the second was to render them, the third was to save them, and the fourth, which I'm still working on, is to create some tooling around them so artists can create mesh gradients in Krita.

Finally, what was my motivation for this? Primarily, mesh gradients are very easy for an artist to use, and they can be used to create lifelike drawings. What I mean by that is artists can clone real-world objects with mesh gradients easily, within a few minutes, given the right tooling. Like this pepper here, which is rendered in Krita but which I got from Inkscape, so obviously I didn't create it. As you can see, this pepper looks fairly real, and this is all due to mesh gradients. That's a big feature of mesh gradients.
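To make the interpolation in the Coons patch definition above concrete, here is the textbook bilinearly blended Coons patch formula (standard material, not Krita-specific notation). With boundary curves c0(u) and c1(u) on the bottom and top, d0(v) and d1(v) on the left and right, and corner points P00, P10, P01, P11:

```latex
S(u,v) = (1-v)\,c_0(u) + v\,c_1(u) + (1-u)\,d_0(v) + u\,d_1(v)
         - \bigl[(1-u)(1-v)\,P_{00} + u(1-v)\,P_{10} + (1-u)v\,P_{01} + uv\,P_{11}\bigr]
```

The four corner colors are then interpolated bilinearly across the patch in the same (u, v) parameter space:

```latex
C(u,v) = (1-u)(1-v)\,C_{00} + u(1-v)\,C_{10} + (1-u)v\,C_{01} + uv\,C_{11}
```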
Another thing is that we can use mesh gradients to simulate other types of gradients, so it's not just one thing. Another motivation is that it's a fairly mature standard: it exists in PDF and PostScript, Cairo supports it, Inkscape supports it, Adobe Illustrator supports it. Yet another motivation was to provide a second implementation: right now mesh gradients aren't part of the SVG 2 specification, because it hasn't been released yet; they are part of the draft. A second implementation can probably help them get into the standard, which will help artists, which will make us all happy. And the final reason is that it's fun. I like the math, I like Krita, I like Krita's team. So why not? That's all from me. Thank you for having me.

Namaste and greetings to all. My name is Sashmita Raghav. I am a second-year undergraduate student doing my bachelor's in computer science with a specialization in artificial intelligence at Amrita University in India. I began contributing to KDE in December last year, when I first came to know about the Season of KDE program. The KDE community has been nothing but welcoming and encouraging for beginners like me, and I couldn't have asked for a better community to be a part of. I have since been an active contributor to Kdenlive. For those who are unaware, Kdenlive is one of the most popular non-linear, easy-to-use video editors.

Today I'm going to talk about the work I have done as a Season of KDE and Google Summer of Code student with Kdenlive. My Season of KDE project dealt with improving the color palette for the timeline clips. The timeline is the component of Kdenlive where the user manipulates the audio and video clips. So what exactly are clips? Clips are containers for different file types in the timeline. The timeline consists of clips of different types, namely audio, video, title, image, slideshow, and color clips. The application earlier had default colors only for audio and video clips, with all clips except audio having the same color. Assigning a different color to each type of clip makes it convenient for the user to differentiate between clip types while working with a large number of audio, video, title, and image clips. I also worked on adding visual feedback on whether a clip has effects and whether it is a proxy clip. As can be seen in this slide, all the clips other than audio, which is in green, have the same blue color. With my project, the title, image, and slideshow clips got their own default colors along with the video and audio clips. Also, proxy clips and clips with effects can now be easily identified within the timeline through the proxy and clip-effect thumbnails.

Now I would love to talk about my Google Summer of Code project. This summer I worked on adding basic subtitling support to Kdenlive. First of all, what exactly are subtitles? Subtitles are text derived from either a transcript or a screenplay of the dialogue or commentary in a video. Videos in Kdenlive are edited by applying filters, or effects; however, the application is limited in its ability to customize and edit subtitles. How are subtitles actually handled in Kdenlive at present? At present, subtitles are handled as an effect, namely using the subtitle effect. The effect uses an FFmpeg filter to burn the subtitle file onto the respective video.
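For reference before the parsing discussion that follows, this is the standard SRT format (nothing Kdenlive-specific): each entry has a sequence number, a start and end timecode, and one or more lines of text, with entries separated by blank lines.

```
1
00:00:01,000 --> 00:00:04,200
Hello, world!

2
00:00:05,000 --> 00:00:07,500
Subtitles can span
multiple lines.
```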
The user, however, is unable to customize the subtitles according to their own convenience. This summer I worked on adding this support by extending the functionality of the existing subtitle effect, thereby giving users more choice over subtitle customization. The project was implemented in four basic steps.

The first step was to develop a parser to read the subtitle file uploaded by the user. I worked on adding a parser for two widely used subtitle formats, namely the SRT and ASS formats (see the sketch at the end of this talk). Next, the parsed subtitles have to be managed, so the second step was to add support for handling the parsed subtitles. This includes handling the addition, deletion, and modification of subtitles. Once the subtitles are parsed from the subtitle files and handled properly, a basic front end had to be developed to enable the user to edit the text and the duration of each subtitle. This involved creating a separate track specifically for subtitles and introducing a new type of clip, the subtitle clip. With basic subtitling support added to Kdenlive through the above three steps, the final step will be to add features enabling customization of the appearance of the subtitle text.

With this I would like to conclude my talk. I would love to thank the KDE community for the opportunity to present my project today. Stay safe, and thank you.
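As promised above, here is a minimal sketch of the parsing step for the SRT entries shown earlier. This is an illustration only, not Kdenlive's actual parser, and all names are invented:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Illustrative sketch only -- not Kdenlive's parser.
struct SubtitleEntry {
    int startMs = 0;  // start time in milliseconds
    int endMs = 0;    // end time in milliseconds
    std::string text; // subtitle text (may span several lines)
};

// Parse "HH:MM:SS,mmm" into milliseconds.
static int parseSrtTime(const std::string& t)
{
    int h = 0, m = 0, s = 0, ms = 0;
    char sep; // consumes ':', ':', ','
    std::istringstream in(t);
    in >> h >> sep >> m >> sep >> s >> sep >> ms;
    return ((h * 60 + m) * 60 + s) * 1000 + ms;
}

std::vector<SubtitleEntry> parseSrt(std::istream& in)
{
    std::vector<SubtitleEntry> entries;
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty())
            continue; // skip blank separators between entries
        // 'line' holds the sequence number; the next line has the times.
        if (!std::getline(in, line))
            break;
        const auto arrow = line.find(" --> ");
        if (arrow == std::string::npos)
            continue; // malformed entry, skip it
        SubtitleEntry e;
        e.startMs = parseSrtTime(line.substr(0, arrow));
        e.endMs = parseSrtTime(line.substr(arrow + 5));
        // Text lines run until the next blank line or end of file.
        while (std::getline(in, line) && !line.empty())
            e.text += (e.text.empty() ? "" : "\n") + line;
        entries.push_back(e);
    }
    return entries;
}
```

A real parser would also have to deal with the ASS format, text encodings, and malformed input more carefully.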