Thursday, October 17, 2024

People distrust AI-generated headlines


AI-generated headlines tend to be considered less credible by humans. (Archive image: Keystone)

People distrust content generated by artificial intelligence (AI). According to a new study, readers are less willing to trust headlines if they are labeled “AI-generated”.

According to study leader Fabrizio Gilardi of the University of Zurich (UZH), such labels should therefore be used with caution. “It would make more sense to distinguish between problematic and unproblematic content than between AI-generated and non-AI-generated content,” Gilardi told the Keystone-SDA news agency on Friday.

According to Gilardi, labeling AI-generated content enjoys broad public support. However, very little research has been done on how such labels affect readers’ perceptions.

For the study, published in the journal “PNAS Nexus”, Gilardi and his UZH colleague Sacha Altay therefore conducted an experiment with almost 5,000 participants from the US and the UK.

False assumptions

The results showed that labeling headlines as “AI-generated” reduced both the perceived accuracy of the headlines and the participants’ intention to share them. This held regardless of whether the headline was actually true and regardless of whether it had in fact been created by an AI. However, according to the study, the effect was three times smaller than for headlines labeled “false”.

According to Gilardi, this skepticism probably stems from the fact that many readers assume headlines labeled this way were written entirely by AI, without human oversight.

In journalism, however, this is rarely the case, Gilardi said. Such labels also reduce readers’ belief in content that is accurate.

SDA
