ZCU-NLP at MADAR 2019: Recognizing Arabic Dialects


Pavel Přibáň and Stephen Taylor
Proceedings of the Fourth Arabic Natural Language Processing Workshop (2019)


Abstract

In this paper, we present our systems for the MADAR Shared Task: Arabic Fine-Grained Dialect Identification. The shared task consists of two subtasks. The goal of Subtask-1 (S-1) is to identify the Arabic city dialect of a given text, and the goal of Subtask-2 (S-2) is to predict the country of origin of a Twitter user from the tweets posted by that user. In S-1, our proposed systems are based on language modelling: we use language models to extract features that are then fed as input to other machine learning algorithms. We also experimented with recurrent neural networks (RNNs), but those experiments showed that simpler machine learning algorithms were more successful. Our system achieves a macro F1-score of 0.658 in S-1, ranking 6th out of 19 teams, and a macro F1-score of 0.475 in S-2, ranking 7th.
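
To illustrate the general idea of using language-model scores as classifier features, the following is a minimal sketch, not the authors' exact pipeline: per-dialect character bigram models with add-one smoothing produce one log-probability score per dialect, and those scores become the feature vector for a scikit-learn LogisticRegression classifier. The dialect labels, transliterated example sentences, and the choice of bigrams and logistic regression here are illustrative assumptions; the paper uses the MADAR corpus and its own model selection.

```python
# Sketch: per-dialect character LM scores as features for a classifier.
from collections import defaultdict
import math

from sklearn.linear_model import LogisticRegression


def train_char_lm(sentences, n=2):
    """Count character n-grams for a small corpus (add-one smoothing at scoring time)."""
    counts = defaultdict(int)
    context_counts = defaultdict(int)
    vocab = set()
    for s in sentences:
        s = " " * (n - 1) + s
        for i in range(n - 1, len(s)):
            context, ch = s[i - n + 1:i], s[i]
            counts[(context, ch)] += 1
            context_counts[context] += 1
            vocab.add(ch)
    return counts, context_counts, vocab


def score(sentence, lm, n=2):
    """Average log-probability of a sentence under a character n-gram LM."""
    counts, context_counts, vocab = lm
    s = " " * (n - 1) + sentence
    total = 0.0
    for i in range(n - 1, len(s)):
        context, ch = s[i - n + 1:i], s[i]
        p = (counts[(context, ch)] + 1) / (context_counts[context] + len(vocab) + 1)
        total += math.log(p)
    return total / max(len(sentence), 1)


# Toy training pairs (sentence, dialect label); real data would be the
# MADAR corpus of city-level dialect sentences.
train = [("marhaba kifak", "BEI"), ("ezayak 3amel eh", "CAI"),
         ("shlonak shakhbarak", "BAG"), ("kifak sho akhbarak", "BEI"),
         ("eh el akhbar", "CAI"), ("shaku maku", "BAG")]

dialects = sorted({label for _, label in train})
lms = {d: train_char_lm([s for s, lab in train if lab == d]) for d in dialects}

# Each sentence becomes a vector of LM scores, one per dialect.
X = [[score(s, lms[d]) for d in dialects] for s, _ in train]
y = [lab for _, lab in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)
test = "kifak sho el akhbar"
print(clf.predict([[score(test, lms[d]) for d in dialects]]))
```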


BibTeX

@inproceedings{priban-taylor-2019-zcu,
    title = "{ZCU}-{NLP} at {MADAR} 2019: Recognizing {A}rabic Dialects",
    author = "P{\v{r}}ib{\'a}{\v{n}}, Pavel and Taylor, Stephen",
    booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
    month = aug,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W19-4623",
    doi = "10.18653/v1/W19-4623",
    pages = "208--213",
    abstract = "In this paper, we present our systems for the MADAR Shared Task: Arabic Fine-Grained Dialect Identification. The shared task consists of two subtasks. The goal of Subtask{--}1 (S-1) is to detect an Arabic city dialect in a given text and the goal of Subtask{--}2 (S-2) is to predict the country of origin of a Twitter user by using tweets posted by the user. In S-1, our proposed systems are based on language modelling. We use language models to extract features that are later used as an input for other machine learning algorithms. We also experiment with recurrent neural networks (RNN), but these experiments showed that simpler machine learning algorithms are more successful. Our system achieves 0.658 macro F1-score and our rank is 6th out of 19 teams in S-1 and 7th in S-2 with 0.475 macro F1-score."
}