One future challenge with the evolution of AI technologies is that we're not only in a world where AI assists our decision-making, but one where it is increasingly making decisions for us, and it may evolve to a point where the human is cut out of the decision-making process entirely. So shifting our trust from human oversight of decision-making to automated decision-making is one really interesting research area moving forward. One interesting example right now is the decision-making algorithms used in college admissions. Increasingly, unstructured data from places like social media activity is being fed into tools that predict, say, whether a student will stay at their university for all four years and complete their education, or that assess other factors in the admissions decision. That's a real transition from a purely human review and oversight process to a metric that is completely codified. So if we continue on that path, and these technologies become more efficient, or in some cases perhaps even fairer in their outcomes, at what point does it become okay to completely remove human oversight from those choices?