Hi, my name is Kush Varshney. I'm a research staff member with IBM Research AI, and I'd like to tell you about one of our upcoming papers at NIPS. It deals with recommender systems, and in particular it takes demand into account in two different ways.

The first way: in product recommendation, in contrast to media recommendation, the fact that someone doesn't purchase something doesn't necessarily indicate that they don't like it, so we've taken that into account. The second way we take demand into account is that oftentimes people will purchase big-ticket items, such as fridges or televisions, and then immediately get flooded with recommendations for more of the same. That is a failure of the underlying algorithm, because for many item categories, known as durable goods, people actually take a long time between purchases within the category.

So in our approach, we estimate both the inter-purchase duration for item categories and the inherent appeal of a product to a user. By doing so, we can offer recommendations that are both timely and just good in general. We've tested this on a few data sets from the commerce domain and shown superior performance, both on the form utility, which is the inherent appeal of the product, and on the time utility, which is the right timing for making that recommendation. This has been done in comparison to six state-of-the-art recommender systems.

The main technical challenges we've had to overcome deal with the size and dimensionality of the data, which can reach millions of users and millions of items. We're able to handle data of that size by relaxing the inherent tensor completion problem into a matrix completion problem, and by relaxing the original nested hinge loss formulation into a form that we can solve using an alternating minimization that takes the sparsity of the underlying data into account.
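To make the idea of combining form utility with time utility concrete, here is a minimal illustrative sketch. This is not the paper's actual formulation (which involves tensor/matrix completion and a nested hinge loss); it simply assumes we already have an appeal score for a user-item pair and a mean inter-purchase duration for the item's category, and it gates the appeal with a logistic ramp so that a durable good bought recently is suppressed until its typical replacement time approaches. The function names and the logistic form are my own assumptions for illustration.

```python
import math

# Illustrative sketch only -- NOT the paper's model. We assume two
# hypothetical inputs: a form-utility (appeal) score for a user-item
# pair, and the category's mean inter-purchase duration in days.

def time_utility(days_since_last_purchase, mean_duration, sharpness=0.1):
    """Logistic ramp: near 0 right after a purchase in the category,
    rising toward 1 once the typical inter-purchase duration elapses."""
    x = sharpness * (days_since_last_purchase - mean_duration)
    return 1.0 / (1.0 + math.exp(-x))

def timed_score(appeal, days_since_last_purchase, mean_duration):
    """Combine form utility (appeal) and time utility multiplicatively,
    so a recommendation must be both appealing and well-timed."""
    return appeal * time_utility(days_since_last_purchase, mean_duration)

# A fridge with high appeal (0.9) and a roughly 3000-day replacement
# cycle: bought 30 days ago, its score is suppressed; for a user whose
# last category purchase was 10 years ago, the full appeal comes through.
recent = timed_score(0.9, 30, 3000)
overdue = timed_score(0.9, 3650, 3000)
```

The multiplicative combination is one simple design choice; it means a perfectly appealing item still scores near zero right after a category purchase, which matches the "don't recommend another fridge the day after someone buys one" intuition from the talk.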