Samuele D’Avenia will talk about one of his recent works, the abstract of which is reported below.
Several recent works have examined the generations produced by large language models (LLMs) on subjective topics such as political opinions and attitudinal questionnaires, with a particular interest in controlling these outputs to align with specific users or perspectives.
This work investigates how irrelevant contextual information can systematically influence the outputs of LLMs when generating opinions on subjective topics. Using the Political Compass Test as a case study, we analyze LLM-generated responses in an open-generation setting and conduct a detailed statistical analysis to quantify shifts in opinions caused by unrelated contextual cues. Our findings reveal that some of these elements can predictably bias model responses, further highlighting challenges in ensuring the robustness and reliability of LLMs when generating opinions on subjective topics.
When: 4/07/2025, 11:30
Where: Sala conferenze – 3rd floor