Okay, then let's start with the first speaker, which is Maria Niku from the Finnish Literature Society.

Hello everyone, I'm from the Finnish Literature Society, and unfortunately we are not members yet, but we are joining soon, so that's going to happen. The Finnish Literature Society was founded in 1831 for the advancement of written Finnish; in fact, the Finnish word for "literature" was invented for the purpose. We have large archives of handwritten Finnish documents, especially from the 19th century, so we are in a good position to develop public models for Finnish handwritten texts of the era. While there are already several models for 19th-century Swedish, such models do not yet exist for Finnish.

We are taking 19th- to early 20th-century diaries written in Finnish by significant cultural figures, going one writer at a time. We have two goals: firstly, to produce automated transcriptions that are good enough for fuzzy searches, in other words with edit distances of no greater than two characters; and, in the longer term, to create models trained on multiple hands that would be useful for many kinds of Finnish handwritten materials.

Mostly, or at least judging by our first example, Aspelin-Haapkylä, the diaries are pretty easy in terms of layout, with even lines and so on, but there are some challenges. The first is that diaries often span several decades, which means the style of writing changes due to changing conventions of writing and the writer's age. That creates the problem of selecting training material that is representative of all the materials but not too large compared to their total size. The second challenge with the diaries is that they are normally written for the writer's own use, which means they are careless, so to speak. For instance, with Aspelin-Haapkylä, the letter a is often left open, which means it looks the same as u or n.
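[Editor's note: the fuzzy-search criterion mentioned above can be made concrete with a small sketch: a transcribed word counts as findable if its Levenshtein edit distance from the search term is at most two. This is an illustrative Python implementation, not the Society's actual search code; the function names and example words are hypothetical.]

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, and substitutions needed
    # to turn string a into string b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]


def fuzzy_find(query: str, words, max_dist: int = 2):
    # Return the transcribed words within max_dist edits of the query,
    # i.e. the "distance of no greater than two characters" goal.
    return [w for w in words if levenshtein(query.lower(), w.lower()) <= max_dist]
```

For example, `fuzzy_find("Helsinki", ["Helsinky", "Helsingfors", "Helsimki"])` would match the two near-miss transcriptions but not the unrelated word.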
R and s look the same, h and k, and so forth. This kind of random carelessness is understandable to human readers, but it is a different situation for the machine.

As for the results so far, we are only just starting, with some of the first models created. We have a model trained on about 40,000 characters, and it is looking quite promising, with the lowest character error rate a bit under 5%. But as to our first goal, it isn't good enough yet. As you can see, there are a number of words with edit distances of over two characters, which does or doesn't matter depending on what the user searches for; it matters a great deal for personal or place names. At the moment, I'm not at all certain that there's anything to be done to improve the results. One possible means is the roughly 600,000 words, if I remember correctly, of Finnish training data that the National Archives kindly gave us, so we'll have to see if that works. That's about it for me. Thank you.
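[Editor's note: the character error rate quoted above is the standard HTR metric: the character-level edit distance between the model's transcription and the ground-truth text, divided by the length of the ground truth. A minimal sketch, with function names of my own choosing rather than from any specific HTR toolkit:]

```python
def edit_distance(a: str, b: str) -> int:
    # Minimum number of character insertions, deletions, and
    # substitutions needed to turn a into b (Levenshtein distance).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,
                            curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]


def cer(reference: str, hypothesis: str) -> float:
    # Character error rate: edits needed to turn the model's output
    # into the ground truth, normalised by the reference length.
    return edit_distance(reference, hypothesis) / len(reference)
```

A CER of about 0.05, as reported here, corresponds to roughly one wrong character in every twenty.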