This paper proposes a new approach to benchmarking the accuracy of machine learning models used to analyze large volumes of COVID-19 sequencing data. By simulating errors commonly found in sequencing platforms, the authors evaluate the performance of various models under different levels of noise. Their results suggest that certain models are better suited to certain types of noise, which can inform model selection for a given application. This article was authored by Sawan Ali, Krim Soho, Alexander Zelikowski, and others.
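The general idea of such a benchmark can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual pipeline: it assumes a simple uniform-substitution error model (real sequencing platforms also produce insertions and deletions) and uses a toy GC-content rule in place of a trained ML model. All names (`add_sequencing_noise`, `benchmark`, etc.) are invented for this sketch.

```python
import random

def add_sequencing_noise(seq, error_rate, rng):
    # Hypothetical noise model: substitute each base independently with
    # probability error_rate, mimicking platform read errors.
    bases = "ACGT"
    return "".join(
        rng.choice([b for b in bases if b != c]) if rng.random() < error_rate else c
        for c in seq
    )

def gc_content_classifier(seq, threshold=0.5):
    # Toy stand-in for an ML model: label a sequence by its GC content.
    gc = sum(c in "GC" for c in seq) / len(seq)
    return "high" if gc >= threshold else "low"

def benchmark(sequences, labels, error_rates, seed=0):
    # Measure classifier accuracy at each simulated noise level.
    rng = random.Random(seed)
    results = {}
    for rate in error_rates:
        correct = sum(
            gc_content_classifier(add_sequencing_noise(s, rate, rng)) == y
            for s, y in zip(sequences, labels)
        )
        results[rate] = correct / len(sequences)
    return results
```

Sweeping `error_rates` from 0 upward yields an accuracy-vs-noise curve per model; comparing these curves across models is what lets one say a given model tolerates a given noise type better.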