ReadMe++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment

Published in Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), 2024

Abstract: We present a comprehensive evaluation of large language models for multilingual readability assessment. Existing evaluation resources lack domain and language diversity, limiting the ability to conduct cross-domain and cross-lingual analyses. This paper introduces ReadMe++, a multilingual, multi-domain dataset with human annotations of 9,757 sentences in Arabic, English, French, Hindi, and Russian, collected from 112 different data sources. This benchmark will encourage research on developing robust multilingual readability assessment methods. Using ReadMe++, we benchmark multilingual and monolingual language models in the supervised, unsupervised, and few-shot prompting settings. The domain and language diversity in ReadMe++ enables us to test more effective few-shot prompting and to identify shortcomings in state-of-the-art unsupervised methods. Our experiments also show that models trained on ReadMe++ achieve superior domain generalization and enhanced cross-lingual transfer capabilities.
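For readers unfamiliar with the few-shot prompting setting mentioned in the abstract, the sketch below shows one common way to frame sentence-level readability rating as a prompt: a handful of labeled example sentences followed by the target sentence. The 1-6 scale, the example sentences, and the `call_model` hook are illustrative assumptions for this sketch, not the exact prompt, rubric, or models used in the paper.

```python
# Minimal sketch of few-shot prompting for sentence-level readability rating.
# The 1-6 scale, the example sentences, and call_model() are assumptions made
# for illustration; they are not the exact setup used in the ReadMe++ paper.

FEW_SHOT_EXAMPLES = [
    ("The cat sat on the mat.", 1),
    ("The committee postponed its decision pending further review.", 4),
    ("Notwithstanding the aforementioned stipulations, the indemnification clause remains enforceable.", 6),
]

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot prompt asking for a readability rating on a 1-6 scale."""
    lines = ["Rate the readability of each sentence on a scale from 1 (easiest) to 6 (hardest)."]
    for example, rating in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {example}\nRating: {rating}")
    lines.append(f"Sentence: {sentence}\nRating:")
    return "\n\n".join(lines)

def rate_sentence(sentence: str, call_model) -> int:
    """call_model is any function that sends a prompt string to an LLM and returns its text output."""
    response = call_model(build_prompt(sentence))
    return int(response.strip().split()[0])  # parse the leading integer rating

if __name__ == "__main__":
    # Print the assembled prompt for a sample sentence; plug in your own model client via call_model.
    print(build_prompt("Quantum decoherence limits the scalability of current qubit architectures."))
```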

Recommended Citation: Naous, T., Ryan, M. J., Lavrouk, A., Chandra, M., & Xu, W. (2024). ReadMe++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024).

https://arxiv.org/abs/2305.14463
