A computer simulation can produce wonderful images. But apart from that, what is it good for? And how much can we trust the outcomes of simulations and other methods based on artificial intelligence? These are the questions that fascinate me.

My name is Claus Beisbart. I'm professor of philosophy of science at the Institute of Philosophy of the University of Bern. I started my career as a physicist, comparing simulations of dark matter in the universe with data from galaxy surveys. From a conceptual point of view, these simulations are pretty simple: they are based on physical theories and on a story that people can understand.

But during the last two decades, we have seen the advent of so-called machine learning. Here, the computer isn't pre-programmed with a fixed model, but rather learns or develops a model itself, for example a neural network. These methods have proven extremely powerful, for instance in classifying tumors in scan images of the breast. They will have a profound impact on medicine. But we don't really know how they work and what they do. They are black boxes, as people put it. This is why there is a quest for explainable AI, or interpretable software. But what exactly might explainable AI be? What kinds of explanations may increase our understanding? These questions are at the center of a research project we have just started at the Institute of Philosophy. For decades, philosophers of science have accumulated insights on explanation and on the strategies that increase our understanding. We apply their work to the debate on explainable AI.

The black-box character of novel machine learning applications is a particular concern when it comes to the choice of a therapy. Patients want to understand why they undergo a particular treatment, and they want to be treated on fair terms. It's a problem if a network specialized in diagnosing skin cancer works for white skin but not for black skin. In this way, digitalization in medicine raises ethical issues. One response is machine ethics. It says: if robots take decisions of ethical significance, they should learn to follow moral principles. But how exactly might this work? Which moral principles should be implemented in machines? And which decisions should be left to humans? These are important questions that need a broader discussion in society. As philosophers, we can provide useful input for this debate.

At CAIM, researchers are aware of the ethical challenges raised by artificial intelligence. Over the last few days, I have in fact discussed algorithmic fairness with two colleagues from computer science. This exchange is vital for us, and I believe it is beneficial for the scientists as well. As philosophers, we can't address the methodological and ethical challenges raised by artificial intelligence if we don't understand what's new about it. So I'm extremely happy that CAIM allows me to engage with other researchers from a wide variety of research fields.