Among the essential features of the inverted classroom model in its mastery variant are formative electronic tests that link the digital content-delivery phase with the subsequent in-class phase. These formative tests, the so-called mastery worksheets, are not only used to find out whether our students have mastered the digital content; their results also shape the activities of the in-class phase. A high mastery level calls for little frontal input in class and leads to more intensive practicing, whereas a low mastery level requires more reteaching activities. Incidentally, the average mastery level in classes offered via the Virtual Linguistics Campus is around 67%, a high value, which means the in-class phase can usually be dedicated to practicing and deepening, with only a few selected frontal parts.

But what types of assessment do we use to generate the mastery level? Like most digital teaching scenarios, we have been employing multiple choice tests with either one or several correct choices. In this example, taken from the worksheet on language evolution, you simply have to find one of four alternatives. On the Virtual Linguistics Campus, such simple tests have in many cases been extended to what we call multiple choice plus, where each choice influences the remaining set of alternatives. In this test on distinctive features, students have to construct a matrix by making yes/no or plus/minus decisions.

Nevertheless, such tests have never been a standard way of assessment in our field, linguistics, nor do they involve sophisticated questioning principles. Multiple choice tests can often be passed by selecting the correct answers through mere guessing, or even accidentally. So we had to think about alternatives. One alternative is our input tasks, where questions have to be answered with simple text or, in some cases, even with short text passages.
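The link between worksheet results and in-class planning can be sketched in a few lines. This is a hypothetical illustration only: the threshold value and all function names are my assumptions, not the Virtual Linguistics Campus implementation, which only reports the resulting average (around 67%).

```python
# Hypothetical sketch: deriving an in-class activity mix from mastery
# worksheet results. The 60% threshold and the names are illustrative
# assumptions, not the Virtual Linguistics Campus implementation.

def average_mastery(scores: list[float]) -> float:
    """Mean mastery level (0-100) across a class's worksheet results."""
    return sum(scores) / len(scores)

def plan_in_class_phase(mastery: float, threshold: float = 60.0) -> str:
    """High mastery -> mostly practice; low mastery -> more reteaching."""
    if mastery >= threshold:
        return "practice and deepening, few frontal parts"
    return "reteaching activities before practice"

# Example: a class close to the reported 67% average.
scores = [72.0, 60.0, 69.0]
level = average_mastery(scores)        # 67.0
print(plan_in_class_phase(level))      # practice and deepening, few frontal parts
```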
The problem is that, despite elaborate parsing mechanisms, machine-based evaluation is not 100% reliable. We therefore only use input tasks in situations where answers can be kept short and unambiguous.

But how can we get rid of the unwanted multiple choice formats? One idea is to keep the multiple choice questions, with their usual four to six suggested answers, but to present the answers not all at once but successively. We have termed this alternative test format dynamic multiple choice. In such tasks, only one of the up to six possible answers is presented at a time, requiring an immediate judgment without having seen the rest. In other words, this assessment type combines single true/false judgments with multiple choice.

Let us demonstrate this task live. As you can see in this dynamic multiple choice task about proto-languages, the question is presented first with only one possible answer, and our students have to decide whether this answer is true or false. If their choice is correct, that's it. If it is false, the next answer is suggested, and so on and so forth. Here is another question; see what happens. You will probably agree with me that such a test format requires a much deeper understanding of the topic than multiple choice proper, and that the questions cannot be answered as easily by mere guessing.

The dynamic multiple choice format involves several interesting side effects. First, we can reuse our former multiple choice questions without any change; all we need is a new template for presenting them. Furthermore, and this is another advantage, each question can now be used several times, since in the case of failure the correct answer is never shown, so the question can be asked again. And last but not least, you can apply two evaluation mechanisms: an inclusive one, where each correct true/false decision is incorporated,
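The dynamic multiple choice flow and the two evaluation mechanisms can be sketched as follows. This is a minimal illustration under my own naming assumptions (`Alternative`, `judge_alternatives`, and so on), not the Virtual Linguistics Campus code:

```python
# Minimal sketch of dynamic multiple choice: suggested answers are shown
# one at a time, each requiring a true/false judgment made without seeing
# the rest. Names are illustrative assumptions, not the VLC implementation.

from dataclasses import dataclass

@dataclass
class Alternative:
    text: str
    is_true: bool  # whether this suggested answer is actually correct

def judge_alternatives(alternatives, judgments):
    """Record the student's successive true/false decisions.

    `judgments` holds one True/False decision per alternative shown.
    Returns one boolean per decision: was that judgment correct?
    """
    return [judged == alt.is_true
            for alt, judged in zip(alternatives, judgments)]

def score_inclusive(decisions):
    """Inclusive: each correct decision counts, e.g. 25% each for four."""
    return 100.0 * sum(decisions) / len(decisions)

def score_exclusive(decisions):
    """Exclusive: the question is correct only if every decision was."""
    return 100.0 if all(decisions) else 0.0

# Four suggested answers, one of them true; the student misjudges one.
alts = [Alternative("Answer A", False), Alternative("Answer B", False),
        Alternative("Answer C", True), Alternative("Answer D", False)]
decisions = judge_alternatives(alts, [False, True, True, False])
print(score_inclusive(decisions))   # 75.0 -> three of four decisions correct
print(score_exclusive(decisions))   # 0.0  -> one wrong decision fails it
```

The two scorers make the trade-off explicit: inclusive grading rewards partial understanding, while exclusive grading treats the question as a single all-or-nothing unit.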
Let's say you have four possible answers; then each correct decision is worth 25 percent. The alternative is an exclusive evaluation strategy, where the entire question is graded as either correct or wrong.

As a consequence, simple multiple choice tests are no longer an option on the Virtual Linguistics Campus. In most cases they have been replaced by dynamic multiple choice tests. These tests are far more challenging for our students: they involve more than just accidental clicks, and they can expand the question sets of classical multiple choice tests considerably. Nevertheless, they only constitute a transition towards more intelligent testing formats, in which the machine can reliably evaluate free user input and which allow more precise statements about a student's mastery level. The Virtual Linguistics Campus team will continue working on such assessment formats. Thanks for your attention.