We are going to present an oral paper titled Interpretable and Globally Optimal Prediction for Textual Grounding using Image Concepts. This paper is an excellent example of the close collaboration between IBM Research and the University of Illinois at Urbana-Champaign through its Center for Cognitive Computing Systems Research, C3SR.

Given an image and a textual phrase referring to an object in the image, textual grounding identifies the location of the corresponding object in the form of a bounding box. Existing algorithms generally formulate the task as selection from a set of bounding box proposals obtained from deep-net based systems. In this work, we demonstrate that we can cast the problem of textual grounding into a unified framework that permits efficient search over all possible bounding boxes. Hence, the method is able to consider significantly more proposals and does not rely on a successful first stage hypothesizing bounding box proposals. Moreover, we demonstrate that the trained parameters of our model can be used as word embeddings which capture spatial image relationships and provide interpretability.

Our students have access to world-class experts in all the areas of our concern. My collaborator and counterpart, Dr. Jin Zheng, has been extremely successful in helping our students and faculty access the technology and the people, and in many cases also the source of real problems and an understanding of the real challenges.
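To give a feel for what searching over all possible bounding boxes can look like, here is a minimal sketch, not the paper's actual implementation: it assumes the score of a box decomposes into a sum of per-pixel scores (for instance, word-prior responses for the query phrase). Under that assumption, an integral image prices any box in constant time, so every axis-aligned box can be scored and the globally optimal one returned without a separate proposal stage. The score map, grid size, and helper names here are illustrative.

```python
import numpy as np

def integral_image(score_map):
    """Cumulative sums (zero-padded) so any box sum is an O(1) lookup."""
    return np.pad(score_map.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def best_box(score_map):
    """Exhaustively score every axis-aligned box via the integral image
    and return the globally optimal one as (y0, x0, y1, x1), exclusive ends."""
    ii = integral_image(score_map)
    h, w = score_map.shape
    best, best_score = None, -np.inf
    for y0 in range(h):
        for y1 in range(y0 + 1, h + 1):
            for x0 in range(w):
                for x1 in range(x0 + 1, w + 1):
                    # Standard four-corner integral-image box sum.
                    s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
                    if s > best_score:
                        best_score, best = s, (y0, x0, y1, x1)
    return best, best_score

# Toy per-pixel score map: positive where word priors fire, negative elsewhere.
scores = np.full((6, 8), -1.0)
scores[2:5, 3:7] = 2.0          # the region matching the phrase
box, value = best_box(scores)
print(box)   # (2, 3, 5, 7) — exactly the positive region
```

The quadruple loop is only for clarity; in practice the same decomposable objective admits branch-and-bound style pruning so the global optimum is found without enumerating every box.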