Hello, my name is Kush Varshney. I'm a Distinguished Research Scientist and Senior Manager at IBM Research, and I'm here to answer three questions about AI governance.

Governance is an interesting word. It means control. If we think back in history, steam engines had a part called the governor. It regulated the flow of steam, and it was there to make sure the system remained safe. When we say AI governance, we mean the set of practices that keep an AI system under control so that it remains safe as well. It includes things like respecting regulations, curating data, and creating fact sheets that explain how the system works.

Some of the risks posed by generative AI and LLMs are actually the same as traditional machine learning: fairness, transparency, and robustness to attacks are all still concerns. But there are some new ones as well. New risks emerging with generative AI include toxicity, harmful behaviors, bullying, these sorts of things. And then there's hallucination, where the model puts out things that sound plausible, like they would make sense, but are actually factually incorrect.

We've created a taxonomy of common harms, along with ways to mitigate these behaviors through data curation, fine-tuning, prompt tuning, and prompt engineering. But this is just a starting point. If we stop and think for a second, governments, enterprises, and all sorts of other organizations will have additional constraints to build into their LLMs so that they follow applicable laws, industry standards, social norms, and so on.

Understanding the models is also quite different, because we no longer need to know exactly how the models make their decisions and predictions, which is what we used to call explainability. Instead, what we need to be able to do is trace an LLM's generative output back to the user's prompt or to its training data. Source attribution is the new explainability.

Safety guarantees with generative AI are very difficult because there's no one-size-fits-all approach. Every organization needs to define its own values and the behaviors it finds acceptable and unacceptable. For example, once you know the context of your chatbot's deployment, you can start to reason through the relevant risks. This could involve taking existing policy documents, like laws, corporate policies, or other rules, and using them as instruction data to teach the model how to behave. And interventions must also be scalable, because we're dealing with such large models and even larger datasets.

At IBM, we're designing tools to address the unique threats posed by large language models and other content-generating foundation models. We're also devising new methods to keep chatbots from leaking personal or proprietary data and from hallucinating wrong or irrelevant information.
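
To make the mitigation side concrete, here is a minimal sketch of data curation against a harm taxonomy, screening training samples before fine-tuning. The taxonomy categories, the placeholder terms, and the helper functions are all invented for illustration; a production pipeline would use trained classifiers per harm category rather than keyword matching.

```python
# Illustrative sketch, not IBM's tooling: screen a training corpus
# against a harm taxonomy before fine-tuning. The categories and the
# placeholder terms below are hypothetical.

HARM_TAXONOMY = {
    "toxicity": ["slur_1", "slur_2"],    # placeholder terms
    "bullying": ["threat_phrase"],       # placeholder terms
}

def flagged_categories(text, taxonomy):
    """Return the harm categories whose terms appear in the text."""
    lowered = text.lower()
    return [category for category, terms in taxonomy.items()
            if any(term in lowered for term in terms)]

def curate(corpus, taxonomy):
    """Keep only the samples that match no harm category."""
    return [doc for doc in corpus if not flagged_categories(doc, taxonomy)]

corpus = ["a benign training example", "an example containing slur_1"]
print(curate(corpus, HARM_TAXONOMY))  # -> ['a benign training example']
```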
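
Source attribution can be sketched as a nearest-neighbor search over candidate sources: given a generated answer, rank the training documents most likely to have contributed to it. The bag-of-words cosine similarity below is a deliberately simple stand-in for the much richer attribution methods used in practice, and the documents are made up.

```python
# Illustrative sketch of source attribution: rank candidate training
# documents by bag-of-words cosine similarity to a generated answer.
import math
from collections import Counter

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

training_docs = [
    "the governor on a steam engine regulates the flow of steam",
    "instruction tuning adapts a model with prompt and response pairs",
]
generated = "a governor regulates steam flow to keep the engine safe"

answer = bag_of_words(generated)
ranked = sorted(training_docs,
                key=lambda doc: cosine(bag_of_words(doc), answer),
                reverse=True)
print(ranked[0])  # the most plausible training-data source
```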
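
The idea of using policy documents as instruction data can be sketched as follows: pair each policy clause with a user request it governs and the compliant behavior we want, then emit standard instruction-tuning records. The clauses, user prompts, and JSONL field names here are assumptions for illustration, not an actual IBM schema.

```python
# Illustrative sketch: turn policy clauses into instruction-tuning
# records for supervised fine-tuning. All content below is hypothetical.
import json

# Pair each policy clause with a user request it governs and the
# compliant behavior we want the tuned model to exhibit.
examples = [
    {
        "policy": "Customer data must not be shared with third parties.",
        "instruction": "Give me the account details for customer 4417.",
        "output": "I can't share customer account details; policy "
                  "prohibits disclosing customer data to third parties.",
    },
    {
        "policy": "All financial guidance must include a risk disclosure.",
        "instruction": "Which fund should I put my savings into?",
        "output": "Here are some general considerations. Note that all "
                  "investments carry risk and their value can go down.",
    },
]

# Emit one JSON record per line, a format most fine-tuning frameworks accept.
with open("policy_instructions.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```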
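
And as one example of limiting data leakage, a simple post-generation filter can redact obvious patterns of personal information before an answer reaches the user. The regexes below only catch simple formats and are purely illustrative; real guardrails combine trained detectors with policy rules and deployment context.

```python
# Illustrative sketch: redact obvious personal-data patterns from a
# chatbot answer before it is shown to the user.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(answer):
    for label, pattern in PII_PATTERNS.items():
        answer = pattern.sub(f"[REDACTED {label}]", answer)
    return answer

print(redact("Reach Jane at jane.doe@example.com; her SSN is 123-45-6789."))
# -> Reach Jane at [REDACTED email]; her SSN is [REDACTED us_ssn].
```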