At the NIPS conference we will present a novel scheme that enables the use of accelerators such as GPUs or FPGAs, which typically have very limited memory capacity, for training machine learning models. The challenge here is that the training data is usually too large to fit inside the memory of such an accelerator. So if you want to take advantage of the superior compute power of the accelerator, you need to be selective about which data you train on. What we prove in our work is that if you're smart about which data you work on, and if you take advantage of the non-uniformity of the training data, you can speed up the training process.

There are many fields where very large amounts of data are being collected continuously, every day: social media, IoT, sensor data, mobile applications. In these fields, the technology we developed in this paper can allow us to train models very quickly on these ever-growing data sets. Because we can train so fast, this can even allow us to retrain models frequently, which lets us adapt to events as they happen, in real time. This has the potential to benefit all data science practitioners working in academia or industry.

I would say the biggest challenge was to take this theoretical idea and put it into practice in the context of limited-memory accelerators. The main difficulty was to come up with an implementation that effectively uses all the available compute resources. What we developed in the end is a generic, reusable component that can be used for training a large class of machine learning models in heterogeneous compute environments.

The faster training our technology offers is a significant benefit in the cloud, and the reason is that in a cloud environment you normally pay for resources by the hour. If you use a single GPU, you pay according to how many hours you use it. So with our technology, if we can train in one hour rather than ten, that translates into a significant cost saving.
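To make the idea of selective training concrete, here is a minimal sketch of the general pattern described above: score all examples on the host, keep only the subset that fits into accelerator memory, train on that subset, then refresh. This is not the algorithm from the paper; the importance measure (absolute residual of a linear model), the function names (importance_scores, sgd_epoch, train_limited_memory), the memory budget expressed as a number of examples, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def importance_scores(w, X, y):
    # Placeholder importance measure: per-example absolute residual of a
    # linear model. The actual selection criterion in the paper may differ.
    return np.abs(X @ w - y)

def sgd_epoch(w, X_sub, y_sub, lr=0.01):
    # One pass of stochastic gradient descent over the in-memory subset.
    # In a real system this is the part that would run on the GPU/FPGA.
    for i in np.random.permutation(len(y_sub)):
        grad = (X_sub[i] @ w - y_sub[i]) * X_sub[i]
        w = w - lr * grad
    return w

def train_limited_memory(X, y, budget, rounds=20, inner_epochs=5):
    # Alternate between (a) a cheap host-side pass that scores every
    # example and (b) several training epochs on the `budget` most
    # informative examples, i.e. the subset kept in accelerator memory.
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        scores = importance_scores(w, X, y)   # host-side scoring pass
        keep = np.argsort(scores)[-budget:]   # subset that fits in device memory
        for _ in range(inner_epochs):
            w = sgd_epoch(w, X[keep], y[keep])  # accelerator-side work
    return w

if __name__ == "__main__":
    # Tiny usage example on synthetic linear-regression data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 20))
    w_true = rng.normal(size=20)
    y = X @ w_true + 0.01 * rng.normal(size=10_000)
    w_hat = train_limited_memory(X, y, budget=1_000)
    print("parameter error:", np.linalg.norm(w_hat - w_true))
```

The point of the sketch is the structure of the loop, not the specific score: only a fraction of the data (here 1,000 of 10,000 examples) is resident on the accelerator at any time, and the subset is refreshed based on which examples currently look most informative.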