So this is Joi, the director of the Media Lab, and I'm just telling him what our project is and what we're working on. In this corner over here, we have Akran working on the prototype. We're essentially creating a data set so that we have different artifacts of American Sign Language: facial expressions, gestures, and body positions. We're going to crowdsource some of that information through a game app. What we've seen in some of the translation apps and voice recognition apps is that a lot of the technology has gotten better, and part of the reason is that there are data sets to support those languages. We haven't seen that with American Sign Language, so we're hoping to contribute to the field in that way.

And this is a good place to start, because we have the prototype right here. We have an app, very, very simple, but it allows you to record a video of yourself signing. When you're happy with it, you just upload it. Then you'll be able to tag it with what the text is and which version of sign language it's in. You also pick the copyright, which is Creative Commons: it should be Creative Commons Attribution, or CC0. With CC0 you don't have to include attribution. And then a model release, because otherwise, when people try to use the data later, they won't have the rights.

We're also thinking of the Duolingo model, that kind of format where you can capture both sides of the communication: you take a phrase from one language, verify it in another, and then give it back to be verified in the opposite direction, in reverse.

Part of the challenge is that the sign language community is so small, such a small subset of the population. So we're hoping to reach people who are learning sign language and people who are interpreters, so that we can engage more members of the community and have more hands on the project.
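To make the upload flow concrete, here is a minimal sketch of the metadata that could travel with each clip: the text being signed, the sign language variant, the license (Attribution or CC0), and the model release. All names and fields here are assumptions for illustration, not the project's actual schema.

```python
from dataclasses import dataclass

# Licenses mentioned in the description: CC BY requires attribution, CC0 does not.
ALLOWED_LICENSES = {"CC-BY", "CC0"}

@dataclass
class SigningClip:
    video_path: str
    text: str            # what the signer is saying, as text
    sign_language: str   # which version of sign language, e.g. "ASL"
    license: str         # "CC-BY" or "CC0"
    model_release: bool  # signer consents to later use of the footage

    def requires_attribution(self) -> bool:
        # CC0 waives attribution; CC BY keeps it.
        return self.license == "CC-BY"

    def usable_in_dataset(self) -> bool:
        # Without a model release, downstream users won't have the rights.
        return self.model_release and self.license in ALLOWED_LICENSES

clip = SigningClip("hello.mp4", "hello", "ASL", "CC0", model_release=True)
print(clip.usable_in_dataset())  # True: released under CC0 with a model release
```

The point of the `model_release` flag is exactly the issue raised in the demo: a clip without it can't legally be reused, no matter how good the recording is.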
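The Duolingo-style loop described above, where a submission is verified in one language and then handed back to be verified in reverse, can be sketched as a simple state machine. The state names and flow are hypothetical; the actual app may stage this differently.

```python
# Hypothetical two-pass verification pipeline: a clip is first checked by
# someone who reads the signing, then checked in the reverse direction by
# someone who starts from the text.
STEPS = ["submitted", "verified_from_sign", "verified_from_text", "accepted"]

def advance(state: str) -> str:
    """Move a submission to its next verification step; 'accepted' is final."""
    i = STEPS.index(state)
    return STEPS[min(i + 1, len(STEPS) - 1)]

state = "submitted"
state = advance(state)  # "verified_from_sign"
state = advance(state)  # "verified_from_text"
state = advance(state)  # "accepted"
```

Routing each pass to a different pool of contributors (signers, learners, interpreters) is one way to stretch a small community across both directions of the check.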