Multi-granularity heterogeneous graph attention networks for extractive document summarization.
Neural Netw. 2022 Nov;155:340-347.

Abstract

Extractive document summarization is a fundamental task in natural language processing (NLP). Recently, several Graph Neural Networks (GNNs) have been proposed for this task. However, most existing GNN-based models can neither effectively encode semantic nodes at granularity levels other than sentences nor substantially capture the different cross-sentence meta-paths. To address these issues, we propose MHgatSum, a novel Multi-granularity Heterogeneous Graph ATtention network for extractive document SUMmarization. Specifically, we first build a multi-granularity heterogeneous graph (HetG) for each document, which better represents the document's semantic meaning. The HetG contains not only sentence nodes but also effective semantic units at other granularity levels, including keyphrases and topics. These additional nodes act as intermediaries between sentences, forming the meta-paths that involve sentence nodes (i.e., Sentence-Keyphrase-Sentence and Sentence-Topic-Sentence). We then propose a heterogeneous graph attention network to embed the constructed HetG for extractive summarization, yielding multi-granularity semantic representations. The model is based on a hierarchical attention mechanism with node-level and semantic-level attention: node-level attention learns the importance of a node's meta-path-based neighbors, while semantic-level attention learns the importance of the different meta-paths. Moreover, to better integrate global sentence knowledge, we incorporate the global importance of each sentence node into the local node-level attention. Empirical experiments on two benchmark datasets demonstrate the superiority of MHgatSum over previous state-of-the-art (SOTA) models for extractive summarization.
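
To make the mechanisms in the abstract concrete, below is a minimal NumPy sketch of the two ideas it describes: meta-path adjacency built through keyphrase and topic nodes, and the two-level (node-level, then semantic-level) attention that fuses the per-meta-path representations. All names, shapes, and the plain dot-product scoring here are illustrative assumptions; the paper's attention functions are learned, and its exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sent, n_key, n_topic, d = 6, 4, 3, 8

# Incidence matrices of the heterogeneous graph: entry (i, j) is 1 when
# sentence i contains keyphrase j (S_K) or is assigned topic j (S_T).
S_K = (rng.random((n_sent, n_key)) > 0.6).astype(float)
S_T = (rng.random((n_sent, n_topic)) > 0.5).astype(float)

# Meta-path adjacency: sentences connected through a shared keyphrase
# (Sentence-Keyphrase-Sentence) or a shared topic (Sentence-Topic-Sentence).
# Self-loops keep sentences with no neighbors well-defined under attention.
I = np.eye(n_sent)
A_sks = ((S_K @ S_K.T + I) > 0).astype(float)
A_sts = ((S_T @ S_T.T + I) > 0).astype(float)

H = rng.standard_normal((n_sent, d))  # sentence node features (e.g., encoder outputs)
g = rng.random(n_sent)                # global importance per sentence (assumed given)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def node_level_attention(H, A, g):
    """Attend over meta-path neighbors, biased by global sentence importance."""
    scores = (H @ H.T) / np.sqrt(H.shape[1])  # dot-product compatibility (an assumption)
    scores = scores + g[None, :]              # fold global importance into local attention
    scores = np.where(A > 0, scores, -1e9)    # mask out non-neighbors on this meta-path
    return softmax(scores, axis=1) @ H        # per-meta-path sentence embeddings

# One embedding of the sentences per meta-path.
Z = [node_level_attention(H, A, g) for A in (A_sks, A_sts)]

# Semantic-level attention: weight the meta-paths themselves. Here each
# meta-path is scored by projecting its mean embedding onto a query vector q,
# a stand-in for the learned semantic-level attention described in the paper.
q = rng.standard_normal(d)
beta = softmax(np.array([z.mean(axis=0) @ q for z in Z]))
H_out = sum(b * z for b, z in zip(beta, Z))   # fused multi-granularity representation

print("meta-path weights:", beta)
print("fused sentence embeddings:", H_out.shape)
```

In the paper both attention levels are trained jointly and sentence selection is a classification over the fused embeddings; the sketch only shows how the meta-path structure and the two attention levels compose.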

Authors and Affiliations

Yu Zhao: Fintech Innovation Center, Financial Intelligence and Financial Engineering Key Laboratory, Southwestern University of Finance and Economics (SWUFE), Chengdu, 611130, China.
Leilei Wang: School of Finance, SWUFE, China.
Cui Wang: Fintech Innovation Center, Financial Intelligence and Financial Engineering Key Laboratory, Southwestern University of Finance and Economics (SWUFE), Chengdu, 611130, China.
Huaming Du: School of Business Administration, Faculty of Business Administration, SWUFE, China.
Shaopeng Wei: School of Business Administration, Faculty of Business Administration, SWUFE, China.
Huali Feng: Fintech Innovation Center, Financial Intelligence and Financial Engineering Key Laboratory, Southwestern University of Finance and Economics (SWUFE), Chengdu, 611130, China.
Zongjian Yu: Fintech Innovation Center, Financial Intelligence and Financial Engineering Key Laboratory, Southwestern University of Finance and Economics (SWUFE), Chengdu, 611130, China. Electronic address: yuzj@swufe.edu.cn.
Qing Li: Fintech Innovation Center, Financial Intelligence and Financial Engineering Key Laboratory, Southwestern University of Finance and Economics (SWUFE), Chengdu, 611130, China.

Pub Type(s)

Journal Article

Language

eng

PubMed ID

36113341

Citation

Zhao, Yu, et al. "Multi-granularity Heterogeneous Graph Attention Networks for Extractive Document Summarization." Neural Networks: The Official Journal of the International Neural Network Society, vol. 155, 2022, pp. 340-347.
Zhao Y, Wang L, Wang C, et al. Multi-granularity heterogeneous graph attention networks for extractive document summarization. Neural Netw. 2022;155:340-347.
Zhao, Y., Wang, L., Wang, C., Du, H., Wei, S., Feng, H., Yu, Z., & Li, Q. (2022). Multi-granularity heterogeneous graph attention networks for extractive document summarization. Neural Networks: The Official Journal of the International Neural Network Society, 155, 340-347. https://doi.org/10.1016/j.neunet.2022.08.021
Zhao Y, et al. Multi-granularity Heterogeneous Graph Attention Networks for Extractive Document Summarization. Neural Netw. 2022;155:340-347. PubMed PMID: 36113341.
* Article titles in AMA citation format should be in sentence-case
TY - JOUR
T1 - Multi-granularity heterogeneous graph attention networks for extractive document summarization.
AU - Zhao,Yu
AU - Wang,Leilei
AU - Wang,Cui
AU - Du,Huaming
AU - Wei,Shaopeng
AU - Feng,Huali
AU - Yu,Zongjian
AU - Li,Qing
Y1 - 2022/09/05/
PY - 2022/01/24/received
PY - 2022/07/02/revised
PY - 2022/08/25/accepted
PY - 2022/9/17/pubmed
PY - 2022/10/26/medline
PY - 2022/9/16/entrez
KW - Extractive document summarization
KW - Graph Neural Networks
KW - Multi-granularity heterogeneous graph attention networks
SP - 340
EP - 347
JF - Neural networks : the official journal of the International Neural Network Society
JO - Neural Netw
VL - 155
N2 - Extractive document summarization is a fundamental task in natural language processing (NLP). Recently, several Graph Neural Networks (GNNs) have been proposed for this task. However, most existing GNN-based models can neither effectively encode semantic nodes at granularity levels other than sentences nor substantially capture the different cross-sentence meta-paths. To address these issues, we propose MHgatSum, a novel Multi-granularity Heterogeneous Graph ATtention network for extractive document SUMmarization. Specifically, we first build a multi-granularity heterogeneous graph (HetG) for each document, which better represents the document's semantic meaning. The HetG contains not only sentence nodes but also effective semantic units at other granularity levels, including keyphrases and topics. These additional nodes act as intermediaries between sentences, forming the meta-paths that involve sentence nodes (i.e., Sentence-Keyphrase-Sentence and Sentence-Topic-Sentence). We then propose a heterogeneous graph attention network to embed the constructed HetG for extractive summarization, yielding multi-granularity semantic representations. The model is based on a hierarchical attention mechanism with node-level and semantic-level attention: node-level attention learns the importance of a node's meta-path-based neighbors, while semantic-level attention learns the importance of the different meta-paths. Moreover, to better integrate global sentence knowledge, we incorporate the global importance of each sentence node into the local node-level attention. Empirical experiments on two benchmark datasets demonstrate the superiority of MHgatSum over previous state-of-the-art (SOTA) models for extractive summarization.
SN - 1879-2782
UR - https://www.unboundmedicine.com/medline/citation/36113341/Multi_granularity_heterogeneous_graph_attention_networks_for_extractive_document_summarization_
DB - PRIME
DP - Unbound Medicine
ER -