
What Loss Functions Do Humans Optimize When They Perform Regression and Classification?

Hansol X. Ryu, Manoj Srinivasan
Published in: bioRxiv: the preprint server for biology (2023)
Understanding how humans perceive patterns in visually presented data is useful for understanding data-based decision making and, possibly, visually mediated sensorimotor control under disturbances and noise. Here, we conducted human subject experiments to examine how humans perform the simplest machine learning or statistical estimation tasks: linear regression and binary classification on data presented visually as 2D scatter plots. We used simple inverse optimization to infer the loss function humans optimize when they perform these tasks. In classical machine learning, common loss functions are mean squared error or summed absolute error for regression, and logistic loss or hinge loss for classification. For the regression task, minimizing the sum of errors raised to the power of 1.7 on average best described human subjects performing regression on sparse data consisting of relatively few data points. Loss functions with lower exponents, which reject outliers more effectively, were better descriptors for regression tasks performed on less sparse data. For the classification task, minimizing a logistic loss function was on average a better descriptor of human choices than an exponential loss function applied only to misclassified data. People changed their strategies as data density increased, such that loss functions with lower exponents described empirical data better. These results represent overall trends across subjects and trials, but there was large inter- and intra-subject variability in human choices. Future work may examine other loss function families and other tasks. Such understanding of human loss functions may inform the design of computer algorithms that interact with humans better and imitate humans more effectively.
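To make the loss families in the abstract concrete, below is a minimal Python sketch (not the authors' code) of the two candidate losses: a regression line fit by minimizing the sum of absolute residuals raised to a power p (p = 2 gives least squares, p = 1 gives least absolute deviations, and the abstract reports p ≈ 1.7 on average for sparse data), and a logistic loss for a linear binary classifier. Function and variable names (fit_power_loss_line, logistic_loss, p) are illustrative assumptions, not from the paper.

```python
# Hedged sketch of the loss families described in the abstract.
# Names and defaults here are illustrative assumptions, not the paper's code.
import numpy as np
from scipy.optimize import minimize

def fit_power_loss_line(x, y, p=1.7):
    """Fit y ~ a*x + b by minimizing sum(|residual|**p).

    p = 2 is ordinary least squares, p = 1 is least absolute deviations;
    the abstract reports p ~ 1.7 on average for sparse regression data.
    """
    def loss(params):
        a, b = params
        return np.sum(np.abs(y - (a * x + b)) ** p)
    # Use the least-squares fit as a starting guess for the optimizer.
    a0, b0 = np.polyfit(x, y, 1)
    result = minimize(loss, x0=[a0, b0], method="Nelder-Mead")
    return result.x  # (slope, intercept)

def logistic_loss(w, X, labels):
    """Logistic loss of a linear classifier X @ w, with labels in {-1, +1}."""
    margins = labels * (X @ w)
    return np.sum(np.log1p(np.exp(-margins)))

# Example usage on synthetic scatter-plot-like data.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 15)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)
slope, intercept = fit_power_loss_line(x, y, p=1.7)
print(slope, intercept)
```

With this kind of parameterization, inverse optimization amounts to finding the exponent p (or the classification loss family) whose minimizer best reproduces the lines or boundaries that subjects actually drew.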