Knowing the Facts but Choosing the Shortcut: Understanding How Large Language Models Compare Entities
Large Language Models (LLMs) are increasingly used for knowledge-based reasoning tasks, yet it remains difficult to tell when they rely on genuine knowledge rather than superficial heuristics. We investigate this question through entity comparison tasks, asking models to compare entities along numerical attributes (e.g., "Which river is longer, the Danube or the Nile?"), which offer clear ground truth for systematic analysis. Despite having sufficient numerical knowledge to answer correctly, LLMs frequently make predictions that contradict that knowledge. We identify three heuristic biases that strongly influence model predictions: entity popularity, mention order, and semantic co-occurrence. For smaller models, a simple logistic regression over these surface cues alone predicts model choices more accurately than the models' own numerical predictions do, suggesting that heuristics largely override principled reasoning. Crucially, we find that larger models (32B parameters) selectively rely on their numerical knowledge when it is more reliable, whereas smaller models (7-8B parameters) show no such discrimination; this explains why larger models outperform smaller ones even when the smaller models possess more accurate knowledge. Chain-of-thought prompting steers models of all sizes towards relying on their numerical knowledge.
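
To illustrate the kind of surface-cue baseline described in the abstract, the following is a minimal sketch (not the authors' code) of fitting a logistic regression on the three heuristic features (entity popularity, mention order, and semantic co-occurrence) to predict which entity an LLM picks. All feature names and values below are hypothetical placeholders, assuming features have already been extracted per prompt.

import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per comparison prompt such as "Which river is longer, A or B?".
# All columns are hypothetical placeholders for the three surface cues:
#   popularity_gap    - popularity score of entity A minus that of entity B
#   a_mentioned_first - 1 if entity A appears first in the prompt, else 0
#   cooccurrence_gap  - difference in semantic co-occurrence with the attribute
X = np.array([
    [ 1.3, 1,  0.6],
    [-0.7, 0, -0.2],
    [ 2.1, 1,  0.4],
    [-1.5, 1, -0.5],
    [ 0.4, 0,  0.3],
    [-0.2, 1, -0.1],
])
# y: 1 if the LLM chose entity A, 0 if it chose entity B.
y = np.array([1, 0, 1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)

# If this heuristic-only classifier agrees with the LLM's choices more often than
# predictions derived from the LLM's own numerical estimates do, the surface cues,
# not the numerical knowledge, are driving the model's answers.
print("agreement with LLM choices:", clf.score(X, y))
print("learned weights per surface cue:", clf.coef_)

In the paper's setting, the score of such a heuristic-only classifier would be compared against the accuracy of predictions derived from the model's own numerical estimates, which is how the authors argue that heuristics override knowledge in smaller models.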
@Article{LLSW25,
author = {Lehmann, Hans Hergen and Lee, Jae Hee and Schockaert, Steven and Wermter, Stefan},
title = {Knowing the Facts but Choosing the Shortcut: Understanding How Large Language Models Compare Entities},
journal = {arXiv:2510.16815},
year = {2025},
month = {Oct},
doi = {10.48550/arXiv.2510.16815},
}