The aim of structured expert judgment is uncertainty quantification when data are lacking. The ability of experts to quantify uncertainty is thus a central element of the classical model. The key idea of the classical model is the objective evaluation of expert assessments of uncertainty. The method proposes calibration questions to be used for the evaluation and aggregation of expert assessments. The calibration questions, or seed variables, are questions whose answers are known to the analyst, at the time of the elicitation or shortly after, but should not be known to the experts. Like the questions of interest, the calibration questions should concern uncertain quantities that experts can assess. As far as possible, the calibration questions need to cover the same domain of expertise as the questions of interest. That is because an important assumption of the classical model is that an expert's performance on the questions of interest is the same as their performance on the calibration questions. To consider a simple example, suppose the question of interest is: what percentage of India's population will be resistant to antibiotics by 2025? Then the question "how many cases of antimicrobial resistance were reported in the state of Kerala in 2017?" can serve as a calibration question.

Finding good calibration questions is a crucial step in any expert judgment study. One important rule is that calibration questions should not be almanac questions, by which we mean questions about information experts can easily recall. What counts as an almanac question is of course domain specific, but to give a deliberately silly example, consider the question of interest: when will men land on Mars? The question of when men landed on the Moon is one that every expert, and probably every non-expert, would know the answer to, and it would therefore not be an appropriate calibration question.

Usually, data coming from official reports, or data that are not publicly available, are used for the calibration questions. The ideal scenario is when the answers to the calibration questions become known soon after the elicitation. For example, suppose the annual report with statistics of high interest for the study is known to be released in September. Planning the elicitation a couple of months earlier would then be ideal. In this case, the data are referred to as predictions. Unfortunately, this is not always possible, and one then has to rely on data that are already available. The data are then referred to as retrodictions. As mentioned before, good sources are official but not yet public reports, or recent and again not publicly available data.

Sometimes the calibration questions cannot be chosen from the same domain as the questions of interest. This can be simply because no data are available yet for that domain; think of newly developed drugs or technologies, for example. In this case, an adjacent domain is chosen for developing the calibration questions.

Developing good calibration questions is not an easy task, but it is probably not the most difficult thing you will ever have to do. Moreover, the effort spent on developing good calibration questions will be reflected in the quality of the expert data. The same advice from Roger holds for developing calibration questions: one can do it well, or one can do it badly. Good luck developing good calibration questions!
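To make the evaluation step concrete, here is a minimal sketch (in Python) of how performance on calibration questions can be scored in the classical model, assuming the common elicitation format in which experts state the 5th, 50th, and 95th percentiles of each seed variable. The realizations are sorted into the four inter-quantile bins, whose theoretical probabilities are (0.05, 0.45, 0.45, 0.05), and the statistic 2N I(s, p) is compared with a chi-square distribution with three degrees of freedom. The function name and the toy numbers are illustrative, not taken from an actual study.

```python
import numpy as np
from scipy.stats import chi2

def calibration_score(quantiles, realizations, p=(0.05, 0.45, 0.45, 0.05)):
    """Calibration score of one expert on N seed variables.

    quantiles    : sequence of (q05, q50, q95) assessments, one per question.
    realizations : the true answers, known to the analyst.
    p            : theoretical probabilities of the four inter-quantile bins.
    """
    realizations = np.asarray(realizations, dtype=float)
    n = len(realizations)

    # For each question, find which of the four bins the realization falls
    # in: below q05, between q05 and q50, between q50 and q95, or above q95.
    counts = np.zeros(4)
    for (q05, q50, q95), x in zip(quantiles, realizations):
        counts[np.searchsorted([q05, q50, q95], x)] += 1
    s = counts / n  # empirical bin frequencies

    # Relative information (Kullback-Leibler divergence) of s with respect to p.
    p = np.asarray(p, dtype=float)
    nonzero = s > 0
    info = np.sum(s[nonzero] * np.log(s[nonzero] / p[nonzero]))

    # 2*N*I(s, p) is asymptotically chi-square with 3 degrees of freedom;
    # the calibration score is the corresponding upper-tail p-value.
    return chi2.sf(2 * n * info, df=3)

# Hypothetical toy data: one expert, three seed questions.
assessments = [(10, 20, 40), (0.1, 0.5, 2.0), (100, 300, 900)]
truths = [18, 3.0, 250]
print(round(calibration_score(assessments, truths), 3))
```

A well-calibrated expert's realizations fall into the bins at roughly the theoretical rates, so the statistic stays small and the score is close to 1; a score near 0 signals statistically poor calibration and, in the classical model, leads to a low weight in the aggregated assessment.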