Hi, I'm Somya, a researcher at the MIT-IBM Watson AI Lab. We're presenting a tool at NeurIPS this year that reduces bias in an AI model without having to retrain it. Our method lets you rank all the data used to train your model according to how much each point contributes to a particular biased decision. Say you have a biased content moderation algorithm that flags internet comments as toxic based on things like race or gender. Our technique can find the training data points that are causing this bias to show up and force the model to ignore them, so you can fix the bias without retraining the model. That's a huge deal if you're working with foundation models, say with billions of parameters trained on enormous amounts of data, where retraining can be extremely expensive and time-intensive. We're excited to see whether this technique can be applied to other situations where you'd want to understand how a model's training data influence its decisions.
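To make the ranking idea concrete, here is a minimal toy sketch of one common data-attribution approach: scoring each training point by how well its loss gradient aligns with the gradient at a flagged decision. This is purely illustrative and is not the actual tool or method being presented; the model (a tiny logistic regression), the data, and the scoring rule are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy synthetic training set and a small logistic-regression "model".
n, d = 20, 3
X = rng.normal(size=(n, d))
true_w = np.array([1.0, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit weights with plain gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(500):
    grad = X.T @ (sigmoid(X @ w) - y) / n
    w -= 0.5 * grad

def per_example_grad(xi, yi, w):
    # Gradient of the logistic loss at a single data point.
    return (sigmoid(xi @ w) - yi) * xi

# A flagged prediction we suspect reflects a bias (here just one toy point).
x_flagged, y_flagged = X[0], y[0]
g_flagged = per_example_grad(x_flagged, y_flagged, w)

# Rank training points by gradient similarity to the flagged decision:
# points whose loss gradient aligns with it contribute most to that output.
scores = np.array([per_example_grad(X[i], y[i], w) @ g_flagged for i in range(n)])
ranking = np.argsort(-scores)
print("most influential training indices:", ranking[:5].tolist())
```

The top-ranked indices are the candidates a practitioner would inspect (and potentially tell the model to ignore); the actual NeurIPS method presumably uses a more sophisticated attribution and editing procedure than this gradient-dot-product heuristic.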