
Zhang Xinsheng (张新生)

Professor   Doctoral Supervisor   Master's Supervisor

Personal Information
  • English name: zhangxinsheng
  • Pinyin name: zhangxinsheng
  • Affiliation: School of Management
  • Education: Graduate (doctoral) degree completed
  • Office: Teaching Building, Room 828
  • Gender: Male
  • Degree: Doctorate
  • Employment status: Active
  • Primary position: Vice Dean, School of Management, Xi'an University of Architecture and Technology
  • Other positions: CNAIS council member; member of the Systems Engineering Society of China; member of the Graphics and Image Committee of the Shaanxi Institute of Electronics; CCF member


Publications


Linguistic features of AI mis/disinformation and the detection limits of LLMs

Posted: 2025-12-22
Impact factor:
15.7
DOI:
10.1038/s41467-025-67145-1
Journal:
Nature Communications
Keywords:
large language models; mis/disinformation; computational linguistics; information governance
Abstract:
The persuasive capability of large language models (LLMs) in generating mis/disinformation is widely recognized, but the linguistic ambiguity of such content and inconsistent findings on LLM-based detection point to unresolved risks in information governance. To address the lack of Chinese datasets, this study compiles two datasets of Chinese AI mis/disinformation generated by multilingual models, covering both deepfakes and cheapfakes. Psycholinguistic and computational linguistic analyses reveal quality-modulation effects for eight language features (including sentiment, cognition, and personal concerns), as well as differences in toxicity scores and syntactic dependency distances. The study then examines the key factors that influence how zero-shot LLMs comprehend and detect AI mis/disinformation. The results show that although implicit linguistic distinctions exist, the intrinsic detection capability of LLMs remains limited, and the quality-modulation effects of these linguistic features can cause AI mis/disinformation detectors to fail. These findings highlight the major challenges of applying LLMs to information governance.
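The syntactic dependency distance feature mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's code; the function name, head-array convention, and toy parse below are assumptions chosen for illustration. Mean dependency distance (MDD) averages the absolute positional distance between each token and its syntactic head, a commonly used proxy for syntactic complexity.

```python
def mean_dependency_distance(heads):
    """Compute mean dependency distance (MDD) from a dependency parse.

    heads[i] is the 1-based index of the head of token i+1;
    0 marks the root token, which is skipped by convention.
    """
    distances = [abs(h - (i + 1)) for i, h in enumerate(heads) if h != 0]
    return sum(distances) / len(distances) if distances else 0.0

# Toy parse of "The model generates fluent text":
#   The -> model (2), model -> generates (3), generates = root (0),
#   fluent -> text (5), text -> generates (3)
print(mean_dependency_distance([2, 3, 0, 5, 3]))  # → 1.25
```

In practice the head indices would come from a dependency parser rather than hand annotation; the feature is then compared across human-written and AI-generated texts.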
Paper type:
Journal article
Discipline:
Interdisciplinary
Document type:
J
ISSN:
2041-1723
Publication date:
2025-01-01
Indexed in:
SCI
Journal link:
https://www.nature.com/articles/s41467-025-67145-1