There's been a trend in foundation models where the models have been getting larger and larger every year. It seems like each year a new, dramatically larger model comes out. Now, you might think that bigger is better. And while that is true to an extent, recently that trend has started to come apart. In particular, what we're finding is that smaller models trained on the right kind of data can often do just as well. So for instance, if you wanted a model that was a specialist in one particular kind of text or one domain, you could train a smaller model on just that domain. And what this tells us is that a small expert can often outperform even a larger, general model. It's sort of like asking an expert to help you out rather than simply finding the smartest person you know and hoping they know something about that topic. So as we see foundation models evolve in the enterprise, we're starting to see a trend where a set of smaller, specialized models may be better suited to the task.