Automated post scoring: evaluating posts with topics and quoted posts in online forum.
Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Jiaxing Shen
Published in: World Wide Web (2022)
Online forum post evaluation is an effective way for instructors to assess students' knowledge understanding and writing mechanics, but manually evaluating massive numbers of posts is time-consuming, so automatically grading online posts could significantly alleviate instructors' burden. Related text assessment tasks such as Automated Text Scoring evaluate the writing quality of independent texts or the relevance between a text and a prompt, while Automatic Short Answer Grading measures the semantic matching of short answers against given problems and reference answers. Different from these existing tasks, we propose a novel task, Automated Post Scoring (APS), which grades each student's online discussion posts in each thread with respect to given topics and quoted posts. APS automatically evaluates not only the writing quality of posts but also their relevance to the topics. To measure relevance, we model the semantic consistency between posts and topics; supporting arguments are also extracted from quoted posts to enhance post evaluation. Specifically, we propose a mixture model comprising a hierarchical text model to measure writing quality, a semantic matching model to capture topic relevance, and a semantic representation model to integrate quoted posts. We also construct a new dataset, the Online Discussion Dataset, containing 2,542 online posts from 694 students of a social science course. The proposed models are evaluated on the dataset with correlation-based and residual-based evaluation metrics. Compared with scoring posts alone, experimental results demonstrate that incorporating topics and quoted posts improves APS performance by a large margin, more than 9 percent on QWK (quadratic weighted kappa).
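The abstract names three components (a hierarchical text model, a semantic matching model, and a semantic representation model for quoted posts) but not their internals. The sketch below is one illustrative way such a mixture could be wired together, assuming GRU encoders, element-wise matching against the topic, and a linear regression head; none of these choices, names, or dimensions come from the paper itself.

```python
import torch
import torch.nn as nn

class APSSketch(nn.Module):
    """Illustrative three-branch post scorer; the encoders, fusion, and
    dimensions are assumptions, not the authors' implementation."""

    def __init__(self, vocab_size=30000, emb_dim=128, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Hierarchical text branch: a word-level GRU builds sentence
        # vectors, a sentence-level GRU builds the post vector.
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.sent_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)
        # Topic branch encodes the discussion topic for semantic matching.
        self.topic_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Quoted-post branch supplies supporting-argument context.
        self.quote_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Regression head over [post, post*topic match, quote] features.
        self.head = nn.Linear(hid_dim * 3, 1)

    def forward(self, post, topic, quote):
        # post: (batch, n_sents, n_words); topic, quote: (batch, n_words)
        b, s, w = post.shape
        words = self.embed(post.reshape(b * s, w))
        _, sent_h = self.word_rnn(words)              # (1, b*s, hid)
        sents = sent_h.squeeze(0).view(b, s, -1)      # sentence vectors
        _, post_h = self.sent_rnn(sents)
        post_vec = post_h.squeeze(0)                  # (b, hid)
        _, topic_h = self.topic_rnn(self.embed(topic))
        topic_vec = topic_h.squeeze(0)
        _, quote_h = self.quote_rnn(self.embed(quote))
        quote_vec = quote_h.squeeze(0)
        # Element-wise product as a simple semantic-consistency signal.
        match = post_vec * topic_vec
        return self.head(torch.cat([post_vec, match, quote_vec], dim=-1))
```

Under this reading, dropping the topic and quote branches reduces the model to scoring posts alone, which is the baseline the abstract compares against.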
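QWK (quadratic weighted kappa) is the standard agreement metric in automated scoring: it measures agreement between predicted and human ratings, penalizing disagreements by the squared distance between rating levels. The abstract reports the gain on QWK without restating the formula; a minimal sketch, assuming integer ratings on a fixed scale of `n_ratings` levels:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_ratings):
    """QWK between two integer rating vectors with values in [0, n_ratings)."""
    y_true = np.asarray(y_true, dtype=int)
    y_pred = np.asarray(y_pred, dtype=int)

    # Observed co-occurrence matrix of (human, predicted) ratings.
    observed = np.zeros((n_ratings, n_ratings))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1

    # Expected matrix if the two rating sources were independent.
    hist_true = observed.sum(axis=1)
    hist_pred = observed.sum(axis=0)
    expected = np.outer(hist_true, hist_pred) / len(y_true)

    # Quadratic disagreement weights: 0 on the diagonal, 1 at max distance.
    idx = np.arange(n_ratings)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_ratings - 1) ** 2

    # 1 = perfect agreement, 0 = chance-level agreement.
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

Because QWK weights errors quadratically, a model that is off by two rating levels is penalized four times as heavily as one off by a single level, which makes it a stricter yardstick than plain accuracy for ordinal scoring tasks like APS.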