The social distancing research project involves over 30,000 cameras and more than 10 terabytes of data; for our recent paper, we analyzed around 11,000 of these cameras. The purpose of the project is twofold: to gauge whether computer vision is ready to analyze network camera data, and to see what that data tells us about social distancing. We knew there were other ways to measure social distancing, such as location-based services like Google's COVID-19 Community Mobility Reports, but we thought network cameras could offer a different view, since the data avoids at least one source of bias: people don't have to opt into any kind of location tracking. Our team is currently working on computer vision for mask detection, so we're collecting higher-resolution data to determine whether people are wearing masks, and we're also expanding our camera network. This study discerns whether social distancing is being followed across several types of locations and geographic regions worldwide, and it could potentially serve as an early indicator of whether another wave of infections is likely to occur soon. I'd say my biggest takeaway from this team has been learning to think as a researcher rather than a student.