That leads well to the next topic, whether the end justifies the means, and this raises a question of ethics. But not only that: there is a problem of transparency, a problem of open conversation, a problem of what we can or cannot say or talk about. And for this session, I'm pleased to have with us Daniel Andler, who is a professor at Sorbonne University and a member of the French Academy of Moral and Political Sciences. He will be accompanied by Arthur Stream, who is vice president of corporate development at Cellectis, a biotechnology firm, and a member of the Cordemine. So Daniel, I'll let you set the scene for this debate. Thank you.

Thank you very much. So ethics is important. Why is it especially important in healthcare? Because on the receiving end are people who, singly and collectively, have a lot at stake, are a captive market, and are vulnerable. And because, on the providing end, in both public and private arenas, the budgets are enormous, as are the opportunities for enrichment. And because, finally, research and clinic are intermingled yet pursue different agendas, raising serious conflict-of-interest issues. That much is obvious. The question before us today is whether the advent of hugely powerful, disruptive technologies alters the problem situation, and in what ways. Part of the problem is globalization, which both amplifies these technologies and is largely enabled by them. Their governance must accommodate interdependence between nations on pain of remaining ineffectual, and intergovernmental ethics is no simple matter. But first we must ask a more basic question: what is ethics? We know an ethical issue when we see one. When we hear about handicapped children having been injected with cancerous cells to further research programs in oncology, our ethical bell gives out a loud ring. When we find out that Boeing let the 737 MAX fly after the first crash, although they knew what caused it, our bell sounds again.
These are cases of what we think of as clear violations of ethical norms. A different sort of case is exemplified by end-of-life decisions in intensive care units. Ethics is involved, we clearly sense, but in the form of dilemmas rather than violations. Being familiar with a phenomenon doesn't entail being clear about it. Knowledgeable sources struggle to provide a definition of ethics. The best I can offer today is that of the philosopher Joseph Raz: ethics is the endeavor to give substance to the abstract category of the good. To give substance can be understood in two ways. If we allow ourselves to look back in time, we can imagine a moment when oncological experimentation on handicapped children was seen as a dilemma, not a violation. Physicians looking for a cure were laboring for the long-term benefit of humanity and pondered whether this noble end justified the means. Going back just a little further, it perhaps did not occur to physicians that it raised any ethical issue at all. It is precisely that sort of case which gave birth to the field of bioethics. And what these examples show is that ethics isn't just about making sure that established ethical norms are followed. It is also, in fact for the most part, about creating and discussing the norms to be established. Moral codes connect these two ways of giving substance to the good. They provide a temporary conclusion to the search for norms, and they make precise what it is to violate them. The Ten Commandments specify what it is to honor the good in a number of generic, familiar situations. It may be thought that such a code of conduct, suitably amended and completed, should suffice. It is important to recognize that it does not. First, because no code can come close to covering all the types of situations that people, organizations, and societies run into. Second, because when new possibilities arise and new practices emerge, they often require fresh ethical treatment.
The existing ethical blanket, so to speak, cannot be stretched to cover the new territory. And this is precisely what technology brings about: new possibilities and new practices. The more powerful the technology, the more areas it can penetrate, the more numerous the possibilities, and the more outlandish and possibly transgressive the practices. The potential for disruption is even greater when cutting-edge innovations converge, creating synergies that defy extrapolation. Examples in the health sector abound. We're about to hear from Arthur about genetic engineering and the ethical red line of germline modification, and in the next session, about enhancement and the goals of transhumanism. The commodification of DNA sequencing raises a series of ethical conundrums bearing on privacy violations and incidental findings. E-health can lead to the accumulation of untoward amounts of personal information on some or all members of a population, with the attendant risks of surveillance and control, or unequal protection and coverage. Generalization of systems of e-health can cause increased inequalities, either because the underprivileged lack the minimum skills needed to navigate the system, or because the more opulent sectors of the health system can afford the best up-to-date information and apps, or again because personal face-to-face care might increasingly become a privilege, and so on. So what is the right time for ethics? It is often suggested that intractable ethical issues arise when technology is allowed to release new tools before due consideration is given to what consequences may follow. Look at artificial intelligence, about which we are at last wondering how it can be redirected toward the good. Look at the web, which is due for a reset according to critics, including its inventor Tim Berners-Lee and our speaker in the next session, Carlos Marrero. Look at digital social networks, whose destructive effects are well known.
Notice also that these three are mutual enablers. So the suggested cure, in the face of these examples, is to think first. But this is generally inapplicable. One reason is that before the technology is at least somewhat developed and deployed, debate about its potential risks remains abstract and general; no consensus can be reached. Even when one can begin to discern the shape of the proposed device or setup, it is impossible to foresee how, once deployed, it will interact with other novel systems emerging at the same time. And, more importantly, it is impossible to guess what scenarios will play out as society at large and communities take hold of the new technology. So the right time for ethics is neither after nor before. It is now. Ethics is a permanent feature of human action. It is guided by action as much as it guides it. It is an ongoing task that proceeds by spurts, on the fly, as fresh challenges are brought about by new types of situations arising, new practices crystallizing, new expectations being expressed, new understandings emerging. Finally, how can ethics find its place in today's technological surge? The present technological wave creates urgent problems for ethics, and at the same time it makes things especially difficult. The responsibility for developing new technologies rests on a minuscule group of people with exclusive access to knowledge, power, and money, who answer to virtually no one. Deployment involves governments and thus, to some limited extent through democratic representation, a larger set of people. In practice, however, the decisions rest essentially on the technocratic structure. The social gap remains immense. Just as wide is the temporal gap. By the time a technology which has been selected for development and deployment hits the world, it has gone from emerging to already entrenched, and previous ways of doing things, or of inhabiting one's surroundings, have been foreclosed.
And as I said at the beginning, governance is to a large extent a global affair. National policies are mutually dependent and must be coordinated in order to have any lasting effect. Indeed, there are many obstacles standing in our way. Technological fatalism may convince too many people that any attempt to change the course of events is futile. The battle cry of putting humanity first founders on the issue of whom we take humanity to be: values, situations, and priorities differ. And we know from experience that when push comes to shove, ethics tends to be an afterthought. In the face of these obstacles, we need to be imaginative and tenacious. But there's no reason to despair, nor must we be naive. We're witnessing, as a matter of fact, a vigorous pushback against fatalism. I do have a worry, though. We also need to be patient, for, as Joseph Raz puts it, and I quote, "the new forms of the good take time and require the density of repeated actions and interactions to crystallize and take a definite shape, one that is specific enough to allow people to intentionally realize it in their lives or through their actions." What we're witnessing in AI, robotics, and above all biotechnology is the beginning of a revolution, or so we are told. The rush to dominance by nations, corporations, and scientists is underway. In such a moment in history, how on earth can we be collectively persuaded to slow down, so as to leave time for the new forms of the good to take shape? This is the question with which I leave you. Thank you.

Thank you, Daniel. Brilliant overview and introduction. Thank you very much. I'm sure you've triggered a lot of thinking among our participants, and more broadly as we communicate this.