So I think we need to be really concerned about the way that AI, big data, and machine learning will reproduce existing forms of structural inequality unless we pay a lot of attention to countering that inbuilt bias. For example, we're building systems to help us make decisions about all kinds of things, everything from whether someone gets medical insurance to how long someone's sentence will be or whether they'll be held in pretrial detention, across all aspects of life. We're doing that in part by training software systems on large data sets that were themselves assembled through historically biased practices. So for example, we're using these systems to help us allocate policing resources by feeding them data sets of police records, which we know were historically assembled through racially biased policing practices, where police were making more stops, more arrests, more stop-and-frisks, and so on in communities of color. So unless we're really careful, AI is going to reproduce existing forms of bias such as racism, heteropatriarchy, cisnormativity, bias around gender identity, sexual orientation, disability, both physical and cognitive, and language bias in terms of first-language ability. So throughout every sphere of life, from the way that we interact with our devices to the way that major decisions that will impact our futures are made, structural biases get reproduced in these systems unless we think really, really carefully about how to counter that.