Okay, let's continue. Deutsche Einheitskurzschrift, in the middle here, or German unified shorthand, is still in use as the official shorthand system in Germany and Austria. So, to come back to our question: can Transkribus deal with shorthand? From our experience, we can say yes, it can. (It's still not switching here. Can we switch? No? Nope.) Yes, it can, and we have to prove it. We have been able to train some promising models by now. You can see two examples here. It's still work in progress, but we can say that Transkribus can prove its capabilities in this case. Here are just some examples: it can differentiate between longhand and shorthand, it recognizes different positions of characters within the writing space, and so on.

But there are still two big challenges, on top of all the others that shorthand brings. As always, there is only a small amount of transcribed material from which you can create your ground truth, and few experts who can assist you in creating that ground truth. Nevertheless, you can address these challenges by optimizing your workflow, as in many cases. Here, just three things to point out. First, in the case of German shorthand, you can create synthetic training data and add it to your ground truth. Second, it might be useful to reduce the learning rate during model training. And third, it is useful to correct and refine the output using additional tools.

Speaking of additional tools, we played around with GPT-4 for post-correction of shorthand HTR results, and our results were mixed. With Deutsche Einheitskurzschrift, as you can see here, the quality was very good: almost all errors were corrected by the language model, as you can see with the green marks down below. But regrettably, for the important Gabelsberger shorthand system, the results were really bad. The model for Gabelsberger has a higher CER (character error rate), which means the transcription is worse than the Deutsche Einheitskurzschrift HTR results, and here GPT-4 hallucinated really badly. So we cannot use it for Gabelsberger right now.

To conclude: shorthand is tough, but it works to some degree. Reducing the learning rate during training seemed to help. As for synthetic data, we do have synthetic data for Deutsche Einheitskurzschrift, but not for Gabelsberger; it would be good to add synthetic Gabelsberger data to our training. Regarding the role of large language models: if an expert can correct the output and the CER is low enough, LLMs will help, but if the CER is not good enough, it won't make any sense to use them. That's our experience, and we are hoping for transformer-based models for shorthand in the future. Thank you.

Thank you for this very insightful talk on shorthand. Any questions? Yes, over there.

Yeah, thanks for this interesting talk. How did you create the synthetic data?

There's a tool online where you can input longhand text, and it will convert it to Deutsche Einheitskurzschrift shorthand. I think we have one more slide on this; we have prepared some more slides for you. Ah, back. Okay, yeah, there are different tools you can use. One of these, for example, is shown on the left side; that's what it looks like, and here you can see where to find it.
This one you can use: you just put in your digital text, and it transforms it into shorthand. On the same page you can also find some prepared data, like whole books that were already transformed into synthetic data. This is only for Deutsche Einheitskurzschrift. There are also tools for the Stolze-Schrey system, but for Gabelsberger, no, there is not yet a tool available online.

Other questions? There is one on the left. Okay. Thank you.

Can you explain how exactly you used GPT?

Well, for the time being, we just played around. We did some prompt engineering, asked GPT nicely to correct the transcription, and then conducted a qualitative interpretation of the results. If this goes to a production stage, of course we would use the API. But for the time being, the same prompt for Deutsche Einheitskurzschrift and for Gabelsberger yielded vastly different results.
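To make the API-based setup mentioned in that last answer more concrete, here is a minimal sketch of LLM post-correction of an HTR transcription via the OpenAI Python client. The model name, the prompt wording, and the helper function are assumptions for illustration only, not the prompt or code the speakers actually used.

```python
# Minimal sketch: LLM post-correction of an HTR transcription.
# Assumed setup, not the exact prompt or configuration from the talk.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def post_correct(htr_output: str, system: str = "Deutsche Einheitskurzschrift") -> str:
    """Ask the model to fix likely recognition errors in an HTR transcription."""
    prompt = (
        f"The following text is an automatic transcription of German {system} "
        "shorthand and may contain recognition errors. Correct obvious errors, "
        "but do not invent content that is not supported by the text.\n\n"
        f"{htr_output}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the correction as conservative as possible
    )
    return response.choices[0].message.content

# Hypothetical example call:
# print(post_correct("Di Sitzung wurde um zehn Ur eröfnet."))
```

A low temperature is used here as a design choice so the model sticks to correcting rather than rewriting; whether that is enough to prevent the hallucinations seen with Gabelsberger is exactly the open question raised in the talk.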
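Since the conclusion above makes the usefulness of LLM post-correction depend on the character error rate (CER) of the HTR output, here is a short sketch of how CER is commonly computed: the Levenshtein edit distance between the predicted and the reference transcription, divided by the reference length. This is the standard definition, not code from the project.

```python
def cer(reference: str, prediction: str) -> float:
    """Character error rate: Levenshtein edit distance / reference length."""
    m, n = len(reference), len(prediction)
    # dp[j] holds the edit distance between reference[:i] and prediction[:j]
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == prediction[j - 1] else 1
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + cost)
    return dp[n] / max(m, 1)

# e.g. cer("Kurzschrift", "Kurzschritt") -> 1 edit / 11 chars ≈ 0.09
```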