Linguistic features of AI mis/disinformation and the detection limits of LLMs
Posted: 2025-12-22
- Impact factor:
- 15.7
- DOI:
- 10.1038/s41467-025-67145-1
- Journal:
- Nature Communications
- Keywords:
- large language models; mis/disinformation; computational linguistics; information governance
- Abstract:
- The persuasive capability of large language models (LLMs) in generating mis/disinformation is widely recognized, but the linguistic ambiguity of such content and inconsistent findings on LLM-based detection point to unresolved risks in information governance. To address the lack of Chinese datasets, this study compiles two datasets of Chinese AI mis/disinformation generated by multilingual models, covering both deepfakes and cheapfakes. Psycholinguistic and computational linguistic analyses reveal quality-modulation effects for eight language features (including sentiment, cognition, and personal concerns), along with differences in toxicity scores and syntactic dependency distance. The study further examines key factors that influence how zero-shot LLMs comprehend and detect AI mis/disinformation. The results show that although implicit linguistic distinctions exist, the intrinsic detection capability of LLMs remains limited, and the quality-modulation effects of these linguistic features may cause AI mis/disinformation detectors to fail. These findings highlight major challenges in applying LLMs to information governance. (An illustrative sketch of the dependency-distance measure follows this record.)
- Paper type:
- Journal article
- Discipline:
- Interdisciplinary
- Document type:
- J
- ISSN:
- 2041-1723
- Translated:
- No
- Date of publication:
- 2025-01-01
- Indexed in:
- SCI
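
The abstract refers to differences in syntactic dependency distance between human-written and AI-generated mis/disinformation. As a rough illustration only, and not the authors' pipeline (whose exact feature extraction and tooling are not specified in this record), the sketch below computes the mean dependency distance of a Chinese sentence with spaCy. The `zh_core_web_sm` model and the example sentence are assumptions chosen for demonstration.

```python
# Illustrative sketch: mean syntactic dependency distance (MDD) for Chinese text.
# Assumes spaCy and its Chinese pipeline are installed:
#   pip install spacy && python -m spacy download zh_core_web_sm
import spacy

nlp = spacy.load("zh_core_web_sm")

def mean_dependency_distance(text: str) -> float:
    """Average linear distance between each token and its syntactic head,
    excluding the root token (whose head is itself in spaCy)."""
    doc = nlp(text)
    distances = [abs(tok.i - tok.head.i) for tok in doc if tok.head is not tok]
    return sum(distances) / len(distances) if distances else 0.0

# Hypothetical example sentence, used only to show the call.
print(mean_dependency_distance("大型语言模型生成的虚假信息具有较强的说服力。"))
```

Larger mean distances generally indicate longer-range syntactic dependencies; comparing this statistic across human-written and AI-generated texts is one simple way to probe the kind of syntactic difference the abstract describes.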



