AMRs Assemble! Learning to Ensemble with Autoregressive Models for AMR Parsing (2306.10786v1)

Published 19 Jun 2023 in cs.CL and cs.AI

Abstract: In this paper, we examine the current state-of-the-art in AMR parsing, which relies on ensemble strategies by merging multiple graph predictions. Our analysis reveals that the present models often violate AMR structural constraints. To address this issue, we develop a validation method, and show how ensemble models can exploit SMATCH metric weaknesses to obtain higher scores, but sometimes result in corrupted graphs. Additionally, we highlight the demanding need to compute the SMATCH score among all possible predictions. To overcome these challenges, we propose two novel ensemble strategies based on Transformer models, improving robustness to structural constraints, while also reducing the computational time. Our methods provide new insights for enhancing AMR parsers and metrics. Our code is available at github.com/babelscape/AMRs-Assemble.
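The "demanding need to compute the SMATCH score among all possible predictions" refers to graph-merging ensembles that compare every candidate parse against every other, an O(n^2) pattern in the number of candidates. The sketch below illustrates that selection scheme, not the paper's proposed Transformer-based method; `triple_f1`, a plain F1 over triple sets, is a simplified stand-in for the real Smatch score, which additionally aligns graph variables via hill-climbing.

```python
# Hypothetical sketch: pick the candidate AMR whose average pairwise
# similarity to the other candidates is highest. triple_f1 is a toy
# substitute for Smatch (no variable alignment is performed).

def triple_f1(pred, gold):
    """F1 over AMR triples; a simplified stand-in for the Smatch score."""
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    p = overlap / len(pred)
    r = overlap / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def select_by_average_similarity(candidates):
    """Return the candidate with the highest mean pairwise score.

    This all-pairs comparison is the quadratic bottleneck the paper
    identifies in Smatch-based graph-merging ensembles.
    """
    best, best_score = None, -1.0
    for i, cand in enumerate(candidates):
        others = [c for j, c in enumerate(candidates) if j != i]
        score = sum(triple_f1(cand, o) for o in others) / len(others)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

# Toy candidates: each parse is a set of (source, relation, target) triples.
cands = [
    {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "boy")},
    {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "girl")},
    {("w", "instance", "want-01"), ("w", "ARG1", "b"), ("b", "instance", "boy")},
]
winner, avg = select_by_average_similarity(cands)
```

Note that because the selected graph is always one of the raw candidates, this scheme cannot produce a graph that violates AMR well-formedness unless a candidate parser already did; the corruption risk discussed in the paper arises when ensembles *merge* triples from different candidates instead.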

