Businesses increasingly rely on AI to help make important decisions, but for AI to be trusted, it must be able to explain how it reached a recommendation. IBM Research is committed to building explainable AI and bringing it to the developer community. The open-source AI Explainability 360 toolkit offers IBM algorithms, along with demos, tutorials, guides, and other resources.

Here's how a bank could use the toolkit. Different audiences require different kinds of explanation. When a customer's loan application is rejected, a bank's data scientist can use an algorithm from the toolkit to check whether the AI is properly weighing risk factors. Meanwhile, another algorithm can help a loan officer analyze whether customers with similar scores repaid or defaulted on their loans. A customer service tool built with a third explainability algorithm could explain the risk factors and their prioritization, and suggest how the customer might improve their credit score.

With the AI Explainability 360 toolkit, IBM Research helps the open-source community build more transparent and trustworthy AI. We invite you to use it and improve it.
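To make the loan-officer scenario concrete, here is a minimal sketch of the underlying idea: compare a rejected applicant against past applicants with similar features and look at how those similar customers fared. This is a plain nearest-neighbor illustration, not the toolkit's own algorithm (AI Explainability 360 provides more principled example-based explainers); the feature names and data below are hypothetical.

```python
import numpy as np

def similar_applicants(query, X, outcomes, k=3):
    """Return the indices and outcomes of the k past applicants whose
    feature vectors are closest (Euclidean distance) to the query."""
    dists = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(dists)[:k]
    return idx, outcomes[idx]

# Hypothetical standardized features: [credit_score, debt_ratio, income]
X = np.array([
    [0.90, 0.20, 0.80],   # past applicant who repaid
    [0.40, 0.70, 0.30],   # past applicant who defaulted
    [0.85, 0.25, 0.70],   # past applicant who repaid
    [0.30, 0.80, 0.20],   # past applicant who defaulted
])
outcomes = np.array(["repaid", "defaulted", "repaid", "defaulted"])

query = np.array([0.88, 0.22, 0.75])  # the customer under review
idx, similar = similar_applicants(query, X, outcomes, k=2)
print(similar)  # outcomes of the two most similar past applicants
```

A loan officer could read the output as evidence: if the nearest past customers overwhelmingly repaid, the rejection deserves a second look; if they defaulted, the model's caution looks justified.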