Measuring the Measuring Tools: An Automatic Evaluation of Semantic Metrics for Text Corpora (2211.16259v1)
Abstract: The ability to compare the semantic similarity between text corpora is important in a variety of natural language processing applications. However, standard methods for evaluating these metrics have yet to be established. We propose a set of automatic and interpretable measures for assessing the characteristics of corpus-level semantic similarity metrics, allowing sensible comparison of their behavior. We demonstrate the effectiveness of our evaluation measures in capturing fundamental characteristics by applying them to a collection of classical and state-of-the-art metrics. Our measures reveal that recently developed metrics are better at identifying semantic distributional mismatch, while classical metrics are more sensitive to perturbations at the surface text level.
- George Kour
- Samuel Ackerman
- Orna Raz
- Eitan Farchi
- Boaz Carmeli
- Ateret Anaby-Tavor