If you are using generative AI systems in education, make sure you use them responsibly. For me personally, here are three rules of thumb to keep in mind when using generative AI systems.

First of all, be careful. Careful with what you put in. Careful with your own data and the data of others. Do not enter personal information, personal photographs, company secrets, or any other potentially sensitive data as prompts into these systems. You should assume that everything you enter ends up on a server in California and is out of your control. Also, be careful about which generative AI systems you use. Prefer systems based on data that is collected and processed in line with your own ethical standards. And be careful with the output of these systems. Don't believe everything that these systems produce. Don't treat these systems as a trusted friend, and certainly don't blindly take their advice on anything of importance.

Second, be transparent. Don't use generative AI to fool others, be it lecturers, students, or the general public. Always mark AI-generated material as AI-generated, and be open about the process through which you arrived at a particular result. Note down the prompts you use, and consider putting them in the caption of your figures or the appendix of a paper. If you embark on a long writing project where the use of AI is allowed, make sure that you document from the start which parts of the text are AI-generated, so that even months later you can be fully transparent about what your own role and the role of AI was.

And third, take responsibility. Generative AI may provide great tools, but it is certainly not error-free. And importantly, an AI is not a person that can be held accountable. You have to be ready to take responsibility for every text, image, sound, or other product that you create with AI.
And that means making sure factual statements are correct, that sources are properly credited, that unlawful and unethical output is deleted, and that discriminatory content is avoided.