This research proposes a system that fuses human gaze and machine vision to accurately predict the user's intended locomotion mode. The system uses dynamic time warping to align the temporal predictions from the two modalities and then decides when to trigger the locomotion mode transition, allowing flexible control over the transition's lead time. Tested on five participants, the system achieved over 96% accuracy in predicting locomotion intent. These results demonstrate the potential of combining human gaze and machine vision for locomotion intent recognition in lower-limb wearable robots. This article was authored by Min Han Lee, Wuxuan Zhong, Edgar Lobotone, and others.
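The temporal alignment step can be sketched with a generic dynamic time warping routine. This is not the authors' implementation: the local cost (absolute difference) and the per-frame "transition confidence" traces for the two modalities are illustrative assumptions.

```python
import numpy as np

def dtw_align(a, b):
    """Classic DTW: returns the total alignment cost and the warping path
    pairing indices of sequence a with indices of sequence b.
    Local cost is the absolute difference between samples (an assumption)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the optimal warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return D[n, m], path

# Hypothetical per-frame transition-confidence traces from gaze and vision,
# where the vision trace lags the gaze trace by roughly one frame
gaze = [0.1, 0.2, 0.6, 0.9, 0.95]
vision = [0.1, 0.15, 0.2, 0.65, 0.9, 0.95]
cost, path = dtw_align(gaze, vision)
```

After alignment, the warping path shows which vision frame corresponds to each gaze frame, which is the information a downstream rule could use to choose the transition timing.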