The term "deepfakes" refers to a collection of tools that anyone can download and use to, among other things, insert someone's face into a video. Some of these videos are so good that the average person would not be able to tell the difference. The organizations interested in this problem receive a lot of imagery, a lot of video and image data, and they would like to know whether that data has been modified or altered; that's what this program is all about.

We collect a lot of these video samples, and then we use these fake video examples to train our machine learning algorithm. What training means, basically, is that we have some source of data, a repository of videos, and we keep feeding the system more and more of them. The system analyzes each video frame by frame: it extracts information at the frame level, puts it together in the temporal sense, and builds a model from that, and the model can become more and more complex as it sees more video. By analyzing the video this way, we can see whether the face is consistent with the rest of the information in the video. If we detect these subtle inconsistencies, we can flag that video as suspect.

I think our approach is unique in that the more data we feed it, the better it gets over time. As long as we discover more deepfake videos, we can add them to the training set, and the system will learn to spot them, so it stays robust as time passes and the data grows; it gets better and better all the time.

It's part of a larger problem we're working on here at Purdue, this whole area of media forensics: being able to detect whether an image or a video has been modified or tampered with. These tools are getting better and better, and we're not the only team in the US, or even in the world, working on this problem. It's a hard problem, and we're going to keep working on it, because we like working on hard problems.
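[The pipeline described above, frame-level analysis followed by temporal aggregation and a per-video decision, maps onto a common detection pattern. The PyTorch sketch below is a minimal illustration of that idea only, not the Purdue system itself; every module name, layer size, and hyperparameter here is an assumption for demonstration.]

# Minimal sketch: per-frame CNN features aggregated over time by a GRU,
# producing one "manipulated" score per video clip. All names and sizes
# are illustrative assumptions, not the system described in the transcript.

import torch
import torch.nn as nn

class FrameFeatureExtractor(nn.Module):
    """Small CNN that turns one RGB frame into a fixed-length feature vector."""
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pool -> (N, 32, 1, 1)
        )
        self.fc = nn.Linear(32, feature_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (N, 3, H, W) -> (N, feature_dim)
        x = self.conv(frames).flatten(1)
        return self.fc(x)

class TemporalDetector(nn.Module):
    """Aggregates per-frame features over time and scores the whole clip."""
    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.frame_net = FrameFeatureExtractor(feature_dim)
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # one logit: "manipulated" score

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.frame_net(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.rnn(feats)            # (1, batch, hidden_dim)
        return self.head(last_hidden.squeeze(0))   # (batch, 1) logit per clip

if __name__ == "__main__":
    model = TemporalDetector()
    fake_clip = torch.randn(2, 16, 3, 64, 64)   # 2 clips, 16 frames each
    probs = torch.sigmoid(model(fake_clip))     # probability each clip is manipulated
    print(probs.shape)                          # torch.Size([2, 1])

[Retraining such a model as new deepfake examples are collected is what lets it improve over time, which is the point made in the transcript; the specific architecture above is just one way to realize the frame-then-temporal structure.]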