The detection of a fallen person, FPD, is a critical task for ensuring individual safety. Deep learning models have shown promise in addressing this challenge, but they are hindered by several issues, including inadequate use of global contextual information, weak feature extraction, and high computational requirements. These shortcomings lead to low detection accuracy, poor generalization, and slow inference. To overcome these difficulties, the present study proposes a new lightweight detection model called Global and Local You Only Look Once Light (GLYOLO Light). This model integrates both global and local contextual information by incorporating transformer and attention modules into the popular object detection framework YOLOv5. A STEM module replaces the original, inefficient Focus module, and Rep modules based on reparameterization technology are introduced. Additionally, a lightweight detection head is developed to reduce the number of redundant channels in the model. Finally, a large-scale, well-structured dataset of fallen persons, FPDD, is created, and the model's performance is evaluated using the BCE loss function. Experimental results demonstrate that GLYOLO Light outperforms existing approaches.

This article was authored by Yuan Dai and Weiming Liu.
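The reparameterization technique behind the Rep modules can be illustrated with a minimal sketch. The core idea (popularized by RepVGG-style blocks, and assumed here as the mechanism behind the Rep modules) is that a multi-branch block used at training time, for example a 3x3 convolution plus a parallel 1x1 convolution, can be fused into a single 3x3 convolution at inference time by embedding the 1x1 kernel in the center of the 3x3 kernel. The single-channel NumPy demonstration below is illustrative only; the actual module layout in GLYOLO Light is not specified in this abstract.

```python
import numpy as np

def conv2d(x, k):
    """Valid cross-correlation of a 2-D input with a square kernel."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)

# Training-time branches: a 3x3 conv and a parallel 1x1 conv.
k3 = rng.standard_normal((3, 3))
k1 = rng.standard_normal((1, 1))

x = rng.standard_normal((6, 6))
xp = np.pad(x, 1)  # pad so both branches produce 6x6 outputs

two_branch = conv2d(xp, k3) + conv2d(x, k1)

# Inference-time reparameterization: embed the 1x1 kernel in the
# centre of the 3x3 kernel and run a single fused convolution.
k_fused = k3.copy()
k_fused[1, 1] += k1[0, 0]
one_branch = conv2d(xp, k_fused)

# The fused conv reproduces the two-branch output exactly.
assert np.allclose(two_branch, one_branch)
```

Because the fused kernel is computed once after training, inference pays for only one convolution per block, which is how reparameterized models keep multi-branch accuracy at single-branch speed.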
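The BCE (binary cross-entropy) loss mentioned above penalizes confident wrong predictions heavily and confident correct ones lightly. As a hedged sketch (the exact per-branch weighting used in GLYOLO Light is not given in this abstract), the plain form of the loss can be written as:

```python
import math

def bce_loss(preds, targets, eps=1e-7):
    """Mean binary cross-entropy over predicted probabilities.

    preds   -- predicted probabilities in [0, 1]
    targets -- ground-truth labels, 0 or 1
    eps     -- clamp value to avoid log(0)
    """
    total = 0.0
    for p, t in zip(preds, targets):
        p = min(max(p, eps), 1.0 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(preds)

# A confident correct prediction costs little; an uncertain one costs more.
low = bce_loss([0.9], [1])   # ~0.105
high = bce_loss([0.5], [1])  # ~0.693
assert low < high
```

In detection frameworks of the YOLO family, a loss of this form is typically applied to the objectness and class-probability outputs, while a separate regression loss handles box coordinates.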