Today I want to talk about open source and explainable AI. I'm a master's student, I just finished my thesis, and I also do research at DFKI, the German Research Center for Artificial Intelligence.

I have a little story from when I was working on my thesis, which was a hard period of my life because there was so much to do in the research lab. But I got a lot of help from open source software. Whenever I was looking for something, I would find it in open source, and I got so much from it that I felt I needed to contribute back. So I want to share this knowledge with the community, especially the open source community, and that is why today's topic is open source and explainable AI.

Before we get to explainable AI, we should note that since the era of deep learning began, AI has had a strong open source tradition. Many important models, datasets, and frameworks were built in the open, and open source has been a core part of the development of AI technology.

There are several reasons why open source matters. When we use open source software, we have more control: we can examine the code, change parts of it, and use the software for any purpose we want. We can study and learn from experts by reading publicly available code, share that knowledge with other people and with the community, and avoid repeating the mistakes others have already made. It is a place to learn and to practice building better software and AI systems. Open source is also more secure, because anyone can view, modify, and correct errors in the code, and when many people contribute, issues can be fixed and updates shipped quickly.
Moreover, the software is more stable, because the code is publicly available and distributed, which makes it a good fit for long-term projects. The last reason is community collaboration: as a community, we produce, test, use, and promote software that we love.

Open source also has a long history, and here are some memorable milestones. Learning this history helps us understand the past and the current state of open source software. Looking back, the concept of freely sharing information in a technological ecosystem existed long before the computer itself. It goes back to the automobile industry: in 1911, Henry Ford won a legal challenge against the Selden patent, which had been used to try to monopolize the automobile industry, and that victory opened an era of open collaboration, especially in that industry. During the 1950s and 1960s, sharing source code was widespread; most software was produced by academics and research labs, which have long-standing traditions of open sharing and collaboration. In 1969, ARPANET, the precursor of the internet, was launched, making it much easier to exchange software code over the network. In 1991, Linus Torvalds released Linux with fully modifiable code under the GNU GPL. In 2005, Git, a distributed version control system, was released to the public for software projects. And since the era of deep learning began around 2012, AI has had a strong open source tradition, with many important AI systems, frameworks, models, and datasets developed in the open.

Over the last few years, the use of deep learning and artificial intelligence has grown significantly. We now have more than 17 billion devices, carrying trillions of sensors, connected to the internet, and they continuously generate streams of data.
AI systems can use this data to automate decision-making, and they are improving rapidly at tasks such as language understanding, image recognition, and speech recognition. This impacts many aspects of businesses, companies, and organizations, and many of them want to leverage AI systems. But it is not easy to trust a machine to make decisions, especially in domains that normally require a human expert, such as healthcare, transportation (for example, self-driving cars), law, and other sectors. Sometimes the final decision matters less than the process of reaching it. And with some recent models, like deep neural networks, we can only see the input, the model, and the output; we have no knowledge of the internal process. This is called the black box problem.

Let's take a closer look at it. The black box problem in AI refers to the difficulty of understanding how a machine learning model arrives at a specific decision or prediction. In many cases these models are so complex that it can be difficult, or even impossible, to determine the reasoning behind their output. For example, in the first image the model says "cat," not "bottle," but we don't know how the machine decided that, even though the image contains both a cat and a bottle. The second example is a self-driving car: the car just stops, but there is no explanation of why. Was a pedestrian or another car detected by its lidar sensor? Why did the car choose that specific action? This lack of transparency can be a significant issue in fields such as healthcare, where the stakes are high and decisions need to be justified. In the medical image shown here, is the tumor benign or malignant? The decisions of our models can also lead to errors, bias, and discrimination,
since it might not be clear why certain decisions are being made by the AI system. To address this black box problem, we need explainable AI models, which bring better transparency and explainability to our AI systems.

Explainable AI refers to techniques that make the decisions of an AI system explainable and understandable. It can help expand the use of AI in critical and sensitive domains, where several criteria must be met, not only high accuracy. This stands in contrast to the black box setting, where we cannot explain why the model made a specific decision. When facing real problems, the challenge is not only how to build a complex and sophisticated model, but how the machine's behavior can be understood by humans. Consequently, transparency and explainability become critical factors. We need them so that people can understand why and how the model made a specific decision, not just how accurate it is. Users will then understand why the model chose one decision and not another, know when it succeeds and when it fails, know when to trust the machine, and know why a machine learning algorithm made an error.

When we talk about machine learning algorithms, there are many kinds of models. Deep neural networks, for example, offer enormous advantages in terms of accuracy, but they lack the ability to explain their results. There is a trade-off in the degree of explainability: in general, the higher the quality of the results, the harder the model is to explain, and the easier a model is to explain, the lower the quality of its results. The chart shows model performance versus model explainability, with the most widely used machine learning algorithms depicted on it.
They range from simple models like rule-based models, linear models, and decision trees up to complex models like deep learning architectures, including GANs, CNNs, and RNNs. The ideal solution is high explainability together with high performance. However, the models that are easy to explain, like linear models, rule-based models, and decision trees, generally have lower performance. In contrast, complex models like deep learning can achieve higher performance, but it is hard to explain their decisions.

Several explainability methods have been proposed for deep learning models. One of them is the saliency map, an important concept in deep learning and computer vision. Saliency methods explain the decision of an algorithm in terms of its input components: they assign to each part of the input a value that reflects its importance and its contribution to the decision. Saliency is usually visualized as a heat map, where the "hotness" of a region shows how strongly that part of the image contributed to the prediction of a specific class. Examples of saliency methods include class activation maps (CAM), Grad-CAM, layer-wise relevance propagation (LRP), and others. In healthcare, they can be used when processing medical images; in robotics, for object detection; and in self-driving cars, for detecting other cars, the road, and so on.

Next, we come to the new trend in AI: generative AI. Generative AI is a class of machine learning that learns from context such as text, images, and ideas in order to generate new content from an input prompt. In contrast to common machine learning algorithms, which learn to produce decisions, generative AI produces artifacts as output, and these can have a wide range of variety and complexity. Generative AI models have very large numbers of parameters, and they are able to process prompts in multiple modalities.
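Going back to saliency maps for a moment, the idea of scoring input regions by their contribution to a prediction can be made concrete with a tiny sketch. This is a minimal perturbation-based (occlusion) saliency map, using a toy scoring function in place of a real network; the function and parameter names are illustrative assumptions, not part of any method mentioned in the talk.

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=2):
    """Perturbation-based saliency: mask each patch of the input and
    measure how much the model's class score drops. A bigger drop
    means a more important region (hotter in the heat map)."""
    base = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0  # black out one patch
            heat[i:i+patch, j:j+patch] = base - score_fn(occluded)
    return heat

# Toy "model": scores an image by the brightness of its top-left
# corner, so the saliency map should highlight exactly that corner.
def toy_score(img):
    return img[:2, :2].sum()

img = np.ones((4, 4))
heat = occlusion_saliency(img, toy_score, patch=2)
print(heat)  # only the top-left 2x2 patch gets a nonzero score
```

Real saliency methods like Grad-CAM use gradients through the network instead of brute-force occlusion, but the output is the same kind of per-region importance heat map described above.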
Looking deeper, many generative AI systems use models like generative adversarial networks, variational autoencoders, RNNs, and transformer models. The image shown here is the transformer model. The transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data, and it is used primarily in natural language processing and computer vision. The core components of the transformer are the embedding layer, the attention blocks, and the feed-forward blocks. The attention block maps the input into query, key, and value matrices and splits them into an array of heads; the heads are then concatenated, giving us multi-head attention.

So the important question is: how do we achieve explainability in generative AI based on the transformer? Explainability in generative transformers has become increasingly complex because of their large number of parameters and their ability to process multiple input modalities. Given their ever-increasing size, transformers are exceptionally challenging for explainability, and most explainability approaches for transformers focus on the attention in the last layer.

Let's look at two examples. The first image shows explainability based on the cross-entropy score: it shows the probability assigned to each candidate next word. The second image is an example of a multimodal prompt: when we prompt with an image, the generative AI answers with a description of its content, something like "a lonely cabin on the edge of the lake with a truck nearby." Alongside the answer, it also shows a heat map of the image regions that contributed most to that specific answer. Similar ideas can be applied to image generation as well.
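The query/key/value and multi-head mechanics described above, and the attention weights that most transformer explainability methods inspect, can be sketched in a few lines. This is a minimal NumPy illustration with random, untrained weights; the matrix names and sizes are assumptions made for the example, not taken from any real model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, n_heads):
    """Scaled dot-product attention split into heads, as in the
    transformer. Returns the output and the attention weights;
    the weights are what attention-based explainability inspects."""
    seq, d_model = x.shape
    d_head = d_model // n_heads
    # Project the input into query, key, and value matrices.
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    # Split each projection into heads: (n_heads, seq, d_head).
    def split(M):
        return M.reshape(seq, n_heads, d_head).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)   # (n_heads, seq, seq)
    out = attn @ Vh                   # (n_heads, seq, d_head)
    # Concatenate the heads back into (seq, d_model).
    out = out.transpose(1, 0, 2).reshape(seq, d_model)
    return out, attn

rng = np.random.default_rng(0)
seq, d_model, n_heads = 4, 8, 2
x = rng.normal(size=(seq, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, attn = multi_head_attention(x, Wq, Wk, Wv, n_heads)
# Each row of attn is a probability distribution over input positions:
# it tells us which positions the model attended to for each output.
```

Visualizing `attn` as a heat map over the input tokens (or image patches) is exactly the kind of last-layer attention explanation mentioned above.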
Still, on top of all that, explainability for generative AI remains an open research problem. So, as the last thing in this session, I'd like to leave a key message for the community. The community, especially the open source community, has a big role to play. Explainable AI tries to solve the black box problem in machine learning, and we as a community can contribute together to make our AI systems safer and more trustworthy. We can also build the foundation of the AI systems of the future, just as Linux, which came from the open source community, became a foundation of modern technology. There is a real chance for the open source community to construct the foundation of the next generation of computing with AI, building safety, transparency, and reliability into the future of the technology. Thank you.