Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research


Many recent improvements in NLP stem from the development and use of large pre-trained language models (PLMs) with billions of parameters. Such large model sizes make computational cost one of the main limiting factors for training and evaluating these models, and have raised severe concerns about the sustainability, reproducibility, and inclusiveness of PLM research. These concerns are often based on personal experiences and observations; however, there has not been any large-scale survey investigating them. In this work, we provide a first attempt to quantify these concerns regarding three topics, namely environmental impact, equity, and impact on peer reviewing. By conducting a survey with 312 participants from the NLP community, we capture existing (dis)parities between and within groups with respect to seniority, academia, and industry, as well as their impact on the peer reviewing process. For each topic, we provide an analysis and devise recommendations to mitigate the disparities we found, some of which have already been successfully implemented. Finally, we discuss additional concerns raised by many participants in free-text responses.


@article{lee2023surveying,
    title = "Surveying (Dis)Parities and Concerns of Compute Hungry {NLP} Research",
    author = {Ji-Ung Lee and Haritz Puerto and Betty van Aken and Yuki Arase and Jessica Zosa Forde and Leon Derczynski and Andreas R{\"u}ckl{\'e} and Iryna Gurevych and Roy Schwartz and Emma Strubell and Jesse Dodge},
    journal = {arXiv},
    year = {2023},
    url = ""
}