Which machine learning models are heavily affected by the curse of dimensionality?

In high-dimensional space, as we increase the number of dimensions, pairwise distances between points concentrate: the gap between the nearest and the farthest neighbor becomes relatively small. This can have a significant effect on models that rely on distance metrics.

The first example is k-nearest neighbors. When several candidate points sit at nearly the same distance from a query point, it is difficult to tell which one is truly the nearest, because there is no meaningful sense of neighborhood in high dimensions. A similar argument applies to k-means: the distances from a point to the different centroids become hard to distinguish, so cluster assignments grow unstable and the model performs poorly.

Decision trees are also affected by the curse of dimensionality. Because points in high dimensions are sparse and far apart, decision boundaries can be drawn in almost any arbitrary way, so small changes in the data can change the boundary, which can also lead to overfitting.
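The distance-concentration effect described above is easy to see empirically. As a minimal sketch (the point count, dimensions, and uniform sampling are illustrative choices, not from the original discussion), we can sample random points in the unit hypercube and compare the nearest and farthest distances from a query point as the dimensionality grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# For increasing dimensionality, sample random points in the unit
# hypercube and compare the farthest and nearest distances from a
# random query point. As d grows, the max/min ratio shrinks toward 1,
# meaning "nearest" and "farthest" become nearly indistinguishable.
for d in (2, 10, 100, 1000):
    points = rng.random((1000, d))
    query = rng.random(d)
    dists = np.linalg.norm(points - query, axis=1)
    ratio = dists.max() / dists.min()
    print(f"d={d:5d}  max/min distance ratio = {ratio:.2f}")
```

In low dimensions the ratio is large (a clear nearest neighbor exists); in high dimensions it approaches 1, which is exactly why k-nearest neighbors and k-means lose their sense of neighborhood.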