Profile
>> Inquiries from prospective 2026 Master's and Ph.D. applicants are welcome <<

Zhang Xinsheng (b. 1978), male, Ph.D., Professor (doctoral supervisor), Vice Dean of the School of Management. He received his Ph.D. from Xidian University in December 2009, was promoted to Associate Professor in October 2010, was a visiting scholar at the University of Florida (2013-2014), and was promoted to Professor in December 2016. He now teaches and conducts research at the School of Management, Xi'an University of Architecture and Technology. In recent years he has led one National Natural Science Foundation of China project, one National Social Science Fund late-stage funding project, one Ministry of Education Humanities and Social Sciences planning project, one Shaanxi Province key industrial chain project, three Shaanxi Natural Science Foundation projects, two Shaanxi Social Science Fund projects, and three natural science research projects of the Shaanxi Provincial Department of Education, among others, as well as six industry-commissioned projects, and has participated in many other research programs. His main research interests include: intelligent social governance; intelligent decision-making and optimization in management; and intelligent management and optimization of energy, resources, and the environment...
Zhang Xinsheng
Professor
Paper Publications
An advanced model integrating prompt tuning and dual-channel paradigm for enhancing public opinion sentiment classification
DOI number:
10.1016/j.compeleceng.2024.110047
Journal:
Computers & Electrical Engineering
Abstract:
Sentiment analysis of online comments is crucial for governments seeking to manage public opinion effectively. However, existing sentiment models struggle to balance memory efficiency with predictive accuracy. To address this, we propose PRTB-BERT, a hybrid model that combines prompt tuning with a dual-channel architecture. PRTB-BERT employs a streamlined soft prompt template for efficient training with minimal parameter updates, leveraging BERT to generate word embeddings from the input text. To enhance performance, we integrate improved TextCNN and BiLSTM networks, capturing both local features and contextual semantic information, and introduce a residual self-attention (RSA) mechanism into the TextCNN to improve information extraction. Extensive experiments on four comment datasets evaluate PRTB-BERT's classification performance and memory usage and compare soft and hard prompt templates. The results show that PRTB-BERT improves accuracy while reducing memory consumption, with the optimized soft prompt template outperforming traditional hard prompts in predictive performance.
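The abstract outlines the architecture but not its dimensions. Below is a minimal PyTorch sketch of the dual-channel idea, assuming a HuggingFace `BertModel`; the prompt length, channel widths, the choice of `bert-base-uncased`, the use of `nn.MultiheadAttention` with a skip connection to stand in for the paper's RSA block, and fusion by simple concatenation are all illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class DualChannelSentiment(nn.Module):
    """Sketch: frozen BERT + trainable soft prompt, feeding a TextCNN channel
    (with a residual self-attention step) and a BiLSTM channel in parallel."""

    def __init__(self, num_classes=2, prompt_len=10, hidden=768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():
            p.requires_grad = False  # prompt tuning: freeze the backbone

        # Trainable soft prompt prepended to the word embeddings, so only a
        # small number of parameters is updated during training.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

        # Channel 1: residual self-attention followed by a TextCNN
        # (multi-head attention + skip connection approximates the RSA block).
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, 128, k, padding=k // 2) for k in (3, 4, 5)]
        )

        # Channel 2: BiLSTM for contextual semantic information.
        self.bilstm = nn.LSTM(hidden, 128, batch_first=True, bidirectional=True)

        # Fuse both channels by concatenation: 3*128 (CNN) + 2*128 (BiLSTM).
        self.classifier = nn.Linear(3 * 128 + 2 * 128, num_classes)

    def forward(self, input_ids, attention_mask):
        tok = self.bert.embeddings.word_embeddings(input_ids)      # (B, T, H)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        embeds = torch.cat([prompt, tok], dim=1)                   # prepend prompt
        mask = torch.cat(
            [attention_mask.new_ones(tok.size(0), prompt.size(1)),
             attention_mask], dim=1)
        h = self.bert(inputs_embeds=embeds,
                      attention_mask=mask).last_hidden_state       # (B, P+T, H)

        # Residual self-attention: attend, then add a skip connection.
        a, _ = self.attn(h, h, h, key_padding_mask=mask == 0)
        h_res = h + a

        # TextCNN channel: convolve over time, then global max-pooling.
        c = h_res.transpose(1, 2)                                  # (B, H, P+T)
        cnn_feats = torch.cat(
            [torch.relu(conv(c)).amax(dim=2) for conv in self.convs], dim=1)

        # BiLSTM channel: concatenate final forward/backward hidden states.
        _, (hn, _) = self.bilstm(h)
        lstm_feats = torch.cat([hn[0], hn[1]], dim=1)              # (B, 256)

        return self.classifier(torch.cat([cnn_feats, lstm_feats], dim=1))
```

Note that only the soft prompt, the attention layer, the convolutions, the BiLSTM, and the classifier receive gradients here; the frozen BERT backbone is what gives prompt tuning its memory advantage over full fine-tuning, which is the trade-off the abstract highlights.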
Indexed by:
Journal paper
Volume:
123
ISSN No.:
0045-7906
Translation or Not:
no
Date of Publication:
2024-01-01
Included Journals:
SCI
