This paper proposes a new approach to abstractive summarization that combines a pre-trained language model (PLM) with graph-based information. It builds on BART, one of the strongest sequence-to-sequence PLMs, and adds multi-source transformer modules that process both textual and graph inputs. These modules let the model extract knowledge-graph information and structured semantics from the input documents; their output is then used as an enhanced encoding for the decoder, yielding more accurate and informative summaries. The approach was evaluated on the WikiSum dataset, which consists of Wikipedia articles, where it significantly outperforms the baselines in ROUGE F1 scores. The paper is authored by Tong Chen, Xiu Wei Wang, Tian Wei Yu, and others.
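The paper's exact architecture is not reproduced here, but as a rough sketch of the multi-source idea described above, the PyTorch snippet below implements a decoder layer that cross-attends separately to the textual encoder states and to a graph encoding, then fuses the two contexts. All module names, dimensions, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiSourceDecoderLayer(nn.Module):
    """Decoder layer attending to two encoder memories: textual
    encoder states (as from BART) and a graph encoding.
    Names, sizes, and the fusion strategy are assumptions made
    for illustration, not the paper's exact design."""

    def __init__(self, d_model=768, n_heads=12, d_ff=3072, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads,
                                               dropout=dropout, batch_first=True)
        self.text_attn = nn.MultiheadAttention(d_model, n_heads,
                                               dropout=dropout, batch_first=True)
        self.graph_attn = nn.MultiheadAttention(d_model, n_heads,
                                                dropout=dropout, batch_first=True)
        # Merge the two cross-attention outputs back to d_model (assumed fusion).
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, tgt, text_mem, graph_mem, tgt_mask=None):
        # Masked self-attention over the partially generated summary.
        h, _ = self.self_attn(tgt, tgt, tgt, attn_mask=tgt_mask)
        tgt = self.norm1(tgt + self.dropout(h))
        # Cross-attend to the textual and graph encoder states in parallel.
        t, _ = self.text_attn(tgt, text_mem, text_mem)
        g, _ = self.graph_attn(tgt, graph_mem, graph_mem)
        # Fuse both sources into one enhanced decoding context.
        h = self.fuse(torch.cat([t, g], dim=-1))
        tgt = self.norm2(tgt + self.dropout(h))
        # Position-wise feed-forward network.
        tgt = self.norm3(tgt + self.dropout(self.ffn(tgt)))
        return tgt

# Usage: batch of 2 summaries (20 tokens), 100 source tokens, 30 graph nodes.
layer = MultiSourceDecoderLayer()
tgt = torch.randn(2, 20, 768)
text_mem = torch.randn(2, 100, 768)
graph_mem = torch.randn(2, 30, 768)
out = layer(tgt, text_mem, graph_mem)
print(out.shape)  # torch.Size([2, 20, 768])
```

Concatenating the two cross-attention outputs and projecting back to the model dimension is just one simple fusion choice; gating or sequential cross-attention over the two memories would be equally plausible readings of "multi-source."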