Not our kind of crowd! How partisan bias distorts perceptions of political bots on Twitter (now X).

Adrian Lueders, Stefan Reiss, Alejandro Dinkelberg, Pádraig MacCarron, Michael Quayle
Published in: British Journal of Social Psychology (2024)
Social bots, employed to manipulate public opinion, pose a novel threat to digital societies. Existing bot research has emphasized technological aspects while neglecting psychological factors shaping human-bot interactions. This research addresses this gap within the context of the US-American electorate. Two datasets provide evidence that partisanship distorts (a) online users' representation of bots, (b) their ability to identify them, and (c) their intentions to interact with them. Study 1 explores global bot perceptions through survey data from N = 452 Twitter (now X) users. Results suggest that users tend to attribute bot-related dangers to political adversaries, rather than recognizing bots as a shared threat to political discourse. Study 2 (N = 619) evaluates the consequences of such misrepresentations for the quality of online interactions. In an online experiment, participants were asked to differentiate between human and bot profiles. Results indicate that partisan leanings explained systematic judgement errors. The same data suggest that participants aim to avoid interacting with bots; however, biased judgements may undermine this motivation in practice. In sum, the presented findings underscore the importance of interdisciplinary strategies that consider technological and human factors to address the threats posed by bots in a rapidly evolving digital landscape.