I would like to summarize the five points we have been discussing. The first is that consumer trust is a crucial milestone that needs to be reached to enable further investment in and development of AI technologies. The second is that fear and mistrust around AI systems are rooted in a lack of understanding of the technology; a way to reduce fear and increase trust is to educate consumers and highlight AI's supporting functions, and I guess we will emphasize that a little more. The third point is that trust can also be built by ensuring good governance and transparency within companies and by focusing on explainability. The fourth is that progress in the field of AI can be sped up through collaboration, which can be ensured through practices like open source code, open data and the sharing of information. And last but not least, AI models reflect and amplify biases present in training data. The bias originates from text composed by humans, which explicitly or implicitly contains social stereotypes and norms. By removing gender bias from training data, downstream applications will become fairer and more equitable. Let's put it that way.

First I would like to say that I would rather talk about cognitive technology than AI, because when we talk about artificial joints, organs or limbs we think of them as neutrally as possible, but when we talk about artificial intelligence we are not really talking about copying the human brain. It is more about augmenting humans where they are weak. So what I am definitely not talking about when I talk about AI is science fiction, Terminators, and actually not superintelligence either, which today doesn't even exist in the labs. Today we are talking pretty much about single-purpose-driven algorithms, systems that are specified to handle a single or limited task. We call them either narrow or weak AI.
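The fifth point above, that training text composed by humans carries implicit stereotypes which models then amplify, can be made concrete with a small sketch. The toy corpus, word lists and pronoun-swapping mitigation below are all illustrative assumptions, not a real dataset or a production debiasing method:

```python
from collections import Counter

# Toy corpus (invented for illustration): occupations skewed toward
# one pronoun, the kind of implicit stereotype a model would absorb.
corpus = [
    "the nurse said she would help",
    "the engineer said he fixed it",
    "the nurse said she was tired",
    "the engineer said he was late",
]

# Count occupation/pronoun co-occurrences to make the skew visible.
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for occupation in ("nurse", "engineer"):
        if occupation in words:
            for pronoun in ("he", "she"):
                if pronoun in words:
                    counts[(occupation, pronoun)] += 1

print(counts)  # "nurse" only ever appears with "she", "engineer" with "he"

# One simple mitigation sketch: augment the data with pronoun-swapped
# copies, so each occupation co-occurs equally with "he" and "she".
swap = {"he": "she", "she": "he"}
balanced = corpus + [
    " ".join(swap.get(w, w) for w in s.split()) for s in corpus
]
```

Counting is of course only a crude proxy for bias, but it illustrates the speakers' point that the skew lives in the data humans provide, and that it can be inspected and corrected before training.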
Additionally we also talk about reinforcement learning systems: a training method based on rewarding desired behaviors and punishing undesired ones. In general, a reinforcement learning agent is able to perceive and interpret its environment, take actions, and learn through trial and error. It is important to me to add here that AI systems learn from data that comes from human beings, their environment and our sphere of action. So it is up to the humans behind them to ensure the accuracy of the data fed into the system. An algorithm itself is not biased; it is the human being, it is the data that we provide to the system, and it is our responsibility to reflect on and spot potential biases. Learning is always an iterative process. Indeed, AI can literally support and empower all of the UN's Sustainable Development Goals: from accessibility to education, preventing hunger and poverty, solving health issues and many more. But here I would actually love to hand over to my colleague Roland Sigurd to discuss some great use cases.

Yes, we already heard from Dalit that AI is above all a wonderful tool for analyzing big data. I think we all accept that today computers are much better at calculating with big numbers, and this is exactly the same. We now have more and more tools that can analyze all the data we collect through different sensors. A typical example: if you want to really measure the biodiversity in the rainforest, you first have to collect data. This can increasingly be done with drones flying over the rainforest, for example, but then you have to analyze this data in order to draw conclusions and intervene at the right point. Probably a closer application would be to do the same thing in agricultural fields, so that you can make the best intervention for wonderful growth of your plants without spreading large quantities of pesticides.
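The reinforcement learning loop described earlier, an agent that perceives its state, takes actions and learns from rewards by trial and error, can be sketched minimally. The corridor environment, reward of 1 at the goal, and all hyperparameters below are invented for illustration; this is tabular Q-learning, one of the simplest instances of the idea:

```python
import random

# Sketch of trial-and-error learning: an agent on a 5-cell corridor
# learns to walk right toward a reward at the last cell.
N_STATES = 5          # cells 0..4; reaching cell 4 yields reward 1
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the action; return (next_state, reward)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore sometimes; otherwise exploit the best-known action.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        # Reward desired behavior: nudge the value estimate toward
        # observed reward plus the discounted value of what follows.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, x)] for x in ACTIONS) - Q[(s, a)])
        s = nxt

# The learned greedy policy: the preferred action in each non-goal cell.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy steps right from every cell: the desired behavior was discovered purely through rewarded trial and error, with no explicit instructions, which is exactly the mechanism described above.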
Data allows you to analyze and know exactly where to intervene and where not to. I am convinced that through these types of applications we can reduce the spreading of pesticides by probably about 99% and have much healthier food and much less pollution of the environment, thanks to a combination of data, analytics on that data and, of course, the humans who actually have to feed in this data.

Artificial intelligence, and very often robotics as well, is always linked with the science fiction films we have seen and that keep being made, and of course some of these movies are threatening; but today we are far from having those types of robots. I think artificial intelligence, or these cognitive systems, is still mainly doing big number crunching on data, where computers are much better and can complement our weaknesses. We are not very good at that, but we are extremely good at finding creative ways. Now, it is probably not only science fiction; sometimes it is also industry and even scientists bringing out wonderful stories about their newest research results, which, if people do not look at them with a somewhat critical eye, give the feeling that these machines are taking over. I think we are extremely far from machines taking over, and we also have to understand that today artificial intelligence is really only calculating with numbers. These systems will never have self-consciousness or self-directed development, because they are not living creatures. These are machines that handle data, and they handle it according to what we ask them to do. I think we are probably all responsible. I am a scientist, and I think what we should do as scientists is be more open to speaking to the public at large. I am doing this myself: I give a lot of lectures so that people can actually understand what is going on in the research labs and what implications this might have.
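The agriculture example above, analyzing field data to know exactly where to intervene instead of blanket-spraying, reduces in its simplest form to thresholding per-cell measurements. The pest-pressure grid and action threshold below are invented numbers, a sketch of the decision logic rather than a real crop model:

```python
# Hypothetical per-cell pest-pressure readings (e.g. derived from
# drone imagery of a field), on a 0..1 scale. Values are made up.
pest_pressure = [
    [0.05, 0.10, 0.80, 0.07],
    [0.02, 0.03, 0.90, 0.04],
    [0.06, 0.01, 0.12, 0.03],
]
THRESHOLD = 0.5  # assumed action level above which treatment pays off

# Treat only the cells where measured pressure exceeds the threshold.
treat = [[cell > THRESHOLD for cell in row] for row in pest_pressure]

total_cells = sum(len(row) for row in treat)
treated = sum(cell for row in treat for cell in row)
print(f"treating {treated} of {total_cells} cells, "
      f"{100 * (1 - treated / total_cells):.0f}% less area than blanket spraying")
```

In this toy grid only 2 of 12 cells get treated, which is the mechanism behind the large pesticide reductions the speaker describes: the data, not a fixed schedule, decides where intervention happens.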
Of course all this development in artificial intelligence will change our lives. It will hopefully, in most cases, have a very positive influence, but people have to know what to expect in the near future. They should actually understand how this will come about; it is not something that arrives from one day to the next. It is a very complex technology, especially if you think about robots with an artificial intelligence component in them; they take a long time to become actually applicable. So it is not something just around the corner; it is something society can learn about and evolve with, and therefore I think we should not be afraid.

Thanks. First of all I would like to underline everything Roland said, and an additional point I would like to emphasize here is: what is trust? Trust is actually based on our cultural background, on our ethics, on our morals. In Europe we have different views on ethics than, for example, in China. So, saying that, in my opinion we need a common consensus, like we have for human and children's rights. This is pretty much driven by the UN, and, as Roland said, every single person needs to stand behind it. We need to take responsibility. So one of the most important things, and I think this was nicely discussed in the use cases, is the accuracy of data. The accuracy of data also underlies trust in the results, and there, one of the things I always like to say is that now we have the chance to unbias the bias. It is an iterative process: looking at the results and asking ourselves what we did wrong. Was the data wrong, or does the result simply come from a cultural difference? How can we reconcile these things? These are very core topics: it is up to us as individual human beings, but it needs to be driven by governments and definitely by the UN, as with human and children's rights.