Contextualizing gender disparities in online teaching evaluations for professors.
Xiang Zheng, Shreyas Vastrad, Ji-Bo He, Chaoqun Ni. Published in: PLoS ONE (2023)
Student evaluation of teaching (SET) is widely used to assess teaching effectiveness in higher education and can significantly influence professors' career outcomes. Although earlier evidence suggests that SET may suffer from biases related to professors' gender, large-scale examinations of how and why gender disparities occur in SET are lacking. This study addresses this gap by analyzing approximately 9 million SET reviews from RateMyProfessors.com under the theoretical frameworks of role congruity theory and shifting standards theory. Our multiple linear regression analysis of the SET numerical ratings confirms that women professors are generally rated lower than men in many fields. Using the Dunning log-likelihood test, we show that the words used in student comments vary with the gender of professors. We then use BERTopic to extract the most frequent topics from one- and five-star reviews. Our regression analysis based on these topics reveals that the probabilities of specific topics appearing in SET comments are significantly associated with professors' gender, in line with gender role expectations. Furthermore, sentiment analysis indicates that comments on women professors are more positively or negatively polarized than those on men across most extracted topics, suggesting that students' evaluative standards shift with professors' gender. These findings contextualize the gender gap in SET ratings and caution against the use of SET in career-related decision-making, given the potential for systematic bias against women professors.
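The Dunning log-likelihood test mentioned above compares how often a word occurs in two corpora (e.g., comments on women vs. men professors) against its expected rate under the null hypothesis of equal usage. A minimal sketch of the G² statistic, with purely illustrative counts (the function name and example numbers are assumptions, not taken from the paper's data):

```python
import math

def dunning_g2(count_a, count_b, total_a, total_b):
    """Dunning log-likelihood (G2) statistic for one word's frequency in
    corpus A vs. corpus B. A higher G2 indicates stronger evidence that
    the word is used at different rates in the two corpora."""
    # Expected counts under the null hypothesis that the word occurs
    # at the same rate in both corpora.
    combined = count_a + count_b
    expected_a = total_a * combined / (total_a + total_b)
    expected_b = total_b * combined / (total_a + total_b)
    g2 = 0.0
    for observed, expected in ((count_a, expected_a), (count_b, expected_b)):
        if observed > 0:  # the 0 * log(0) term is taken as 0
            g2 += observed * math.log(observed / expected)
    return 2 * g2

# Toy example: a word appearing 100 times in a 10,000-word corpus
# vs. 50 times in another 10,000-word corpus.
print(round(dunning_g2(100, 50, 10_000, 10_000), 2))  # -> 16.99
```

Comparing the resulting G² values against a chi-squared distribution with one degree of freedom gives a significance level for each word's over- or under-use.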