This is the conclusion reached by computational linguist Myrthe Reuver in her research.
Recognition
Reuver’s research focuses on methods from Natural Language Processing (NLP), in which AI is used to analyse language. Reuver: “We looked, among other things, at stance detection. This involves an AI model determining whether a text contains arguments for or against a particular topic, such as stricter abortion laws or U.S. gun legislation. The results show that large language models are sometimes quite capable of identifying such positions, but they struggle with nuance, complexity, and coherence within specific debate topics.”
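To illustrate the task format described above: a stance-detection system takes a text and a topic and outputs a label such as "pro", "con", or "neutral". The toy sketch below uses a naive keyword heuristic purely to show this input/output shape; it is not Reuver's method, which relies on large language models, and the cue lists are invented for illustration.

```python
# Toy illustration of the stance-detection task format (NOT Reuver's method).
# A real system would use a large language model; this keyword heuristic
# only demonstrates the input (text + topic) and output (stance label).

PRO_CUES = {"support", "favor", "agree", "necessary"}   # hypothetical cue words
CON_CUES = {"oppose", "against", "reject", "harmful"}   # hypothetical cue words

def detect_stance(text: str, topic: str) -> str:
    """Return a coarse stance label for `text` toward `topic`."""
    lowered = text.lower()
    if topic.lower() not in lowered:
        return "neutral"  # the text does not mention the topic at all
    pro = sum(cue in lowered for cue in PRO_CUES)
    con = sum(cue in lowered for cue in CON_CUES)
    if pro > con:
        return "pro"
    if con > pro:
        return "con"
    return "neutral"

print(detect_stance("I strongly oppose stricter gun laws.", "gun laws"))   # con
print(detect_stance("We should support stricter gun laws.", "gun laws"))   # pro
```

Such surface heuristics fail on exactly the nuance, complexity, and topic-specific coherence that Reuver found large language models also struggle with, which is what makes stance detection a hard evaluation task.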
Added value through human–AI collaboration
A striking outcome of Reuver’s research is that collaboration between AI and social scientists leads to better results. Reuver: “We found that, for detecting sexism in text, collaboration between an expert and a large language model worked best—and even better than the expert alone. We saw the same when developing a dataset on climate-related stances. Social science theories were invaluable in helping to explain the nuances of the debate to the AI model.”
Dutch infrastructure
The research is based on large-scale experiments with AI models running on supercomputers via the Dutch SURF infrastructure. According to Reuver, this local computing power is essential—particularly in light of the current debate about dependency on Big Tech for AI applications.
Democracy
Reuver’s research ties into public debates on filter bubbles and polarisation, as well as the responsible use of AI and large language models. Reuver: “My research shows the limitations of AI models in detecting diverse perspectives. At the same time, it demonstrates how large language models can make a positive contribution to public debate and democracy.”