Apple and scientists at UC Santa Barbara released an open-source multimodal large language model. Multimodal means it can handle text and images, and sometimes other modalities; in this case, it's text and images. Imagine you've got a plate full of chocolate donuts with sprinkles and you say, "give the donuts strawberry glaze," and the model adds the glaze, leaving the sprinkles on top and the chocolate visible just underneath at the edges. I think the future of this stuff, for good or ill, is real time: you say "donut," and, whoop, there's a donut. Very basic. Turn it a little bit, and it turns. That's what Apple's model is doing here.

Exactly. If you're asking me from a workflow standpoint, what we were doing prior to this is impressive on its face. You're like, whoa, look, I just made a pirate ship, or whatever it is you made. But then it took forever to do. That's going to feel like dot-matrix printers pretty soon. We're going to go laser on this stuff, and it's going to be real time, natural language, and that's very interesting.