This paper proposes an approach to activity recognition that derives feature vectors from the variables produced during video compression. These features are reduced by collapsing the temporal domain, yielding a single feature vector per video. The motion vectors are likewise reduced to a single feature vector by projecting them onto their principal components. Finally, the resulting data is used to train a classifier, achieving accuracy rates of 68.8% and 74.2%. The main advantages of this method are its simplicity and its reliance on low-dimensional features and simpler machine learning algorithms. This article was authored by Obada Issa and Tamer Shanableh.
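The pipeline described above (per-frame compression features, temporal pooling to one vector per video, PCA projection of motion-vector features, then a simple classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy data, the mean-pooling choice, the number of retained components, and the nearest-centroid classifier are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 20 videos, 30 frames each, 16 motion-vector
# statistics per frame; two classes separated by a mean shift.
labels = np.repeat([0, 1], 10)
videos = np.stack([rng.normal(loc=3.0 * y, size=(30, 16)) for y in labels])

# Step 1: remove the temporal domain -> one feature vector per video.
# (Mean pooling is one simple choice for collapsing the frame axis.)
pooled = videos.mean(axis=1)                       # shape (20, 16)

# Step 2: project onto the top principal components (PCA via SVD).
centered = pooled - pooled.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:4].T                      # shape (20, 4)

# Step 3: a deliberately simple classifier -- nearest class centroid --
# standing in for whatever low-complexity learner the paper uses.
centroids = np.stack([reduced[labels == y].mean(axis=0) for y in (0, 1)])
dists = np.linalg.norm(reduced[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == labels).mean()
print(f"toy accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the pipeline: each video collapses to a single low-dimensional vector before classification, which is what keeps the downstream learner simple.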