Generative AI is a powerful new technology, but it also raises ethical issues that need to be taken into account.

A first class of worries concerns the generated text and images themselves, which may reflect harmful biases and stereotypes. If you generate images of doctors and nurses, for instance, most of the doctors will often be male and most of the nurses female. Generated text may likewise contain harmful stereotypes about gender, ethnicity and other personal characteristics. The output can also be unsafe in other ways, for instance by encouraging violence, giving people unsafe medical advice, or being hurtful or hateful. Partly to address such worries, the companies and institutions developing Generative AI have collected data in which harmful output has been manually labeled as such. That data collection has in turn raised concerns about the treatment of the workers involved, who have often been poorly paid and supported while having to work with upsetting material.

A second class of worries concerns the training data. Much of it has been scraped from the internet without consent from the original artists, writers and programmers. Many of these creators are demanding consent, credit and compensation, or are insisting that, at least from now on, new Generative AI systems allow for an opt-out. The training data is also heavily biased towards English and a few other high-resource languages, and towards images from rich Western countries, making the benefits of the technology far less available and accessible to people in other parts of the world. Finally, the data collection practices of the big players in this field have raised concerns that the privacy of users has not been protected well enough.

A third issue is sustainability. The amount of computing power needed to train Generative AI models and to run them for millions of users is enormous.
An important worry is therefore that it leads to a large carbon footprint.

Finally, there are many worries about how the use of Generative AI will affect society as a whole. People may deliberately misuse its power, for instance to generate fake news at a large scale, or to influence individual voters or consumers with custom-made messages. Students may use it to cheat on homework assignments or exams, and researchers may use it to generate fake articles to advance their careers. And even when the generated content itself is unobjectionable, unequal access to Generative AI may have adverse consequences, because it shifts the balance of power: between small companies and big tech, between citizens and governments, and between high-resource and low-resource languages.