UWBa at SemEval-2025 Task 7: Multilingual and Crosslingual Fact-Checked Claim Retrieval


Ladislav Lenc and Daniel Cífka and Jiří Martínek and Jakub Šmíd and Pavel Král
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025) (2025)


Abstract

This paper presents a zero-shot system for fact-checked claim retrieval. We employed several state-of-the-art large language models to obtain text embeddings and combined the models to obtain the best possible result. Our approach achieved 7th place in the monolingual and 9th place in the cross-lingual subtask. We used only English translations as input to the text embedding models, since multilingual models did not achieve satisfactory results. We identified the most relevant claims for each post by computing cosine similarity between the embeddings. Overall, the best results were obtained with the NVIDIA NV-Embed-v2 model; for some languages, combinations of models (NV-Embed & GPT or Mistral) performed better.
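The retrieval step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual code: the function names and the use of NumPy are assumptions, and the paper's real pipeline uses embeddings from NV-Embed-v2 and other large language models rather than the toy vectors shown here.

```python
import numpy as np

def retrieve_top_k(post_emb, claim_embs, k=3):
    """Rank fact-checked claims for one post by cosine similarity.

    post_emb:   (d,) embedding of the social-media post
    claim_embs: (n, d) embeddings of the claim database
    Returns indices of the k most similar claims, best first.
    """
    # L2-normalize so the dot product equals cosine similarity.
    post = post_emb / np.linalg.norm(post_emb)
    claims = claim_embs / np.linalg.norm(claim_embs, axis=1, keepdims=True)
    sims = claims @ post                 # cosine similarities, shape (n,)
    return np.argsort(-sims)[:k]         # indices sorted by descending similarity

def combine_scores(sims_a, sims_b, k=3):
    """Hypothetical two-model combination: average the per-claim
    cosine similarities from two embedding models before ranking."""
    sims = (sims_a + sims_b) / 2.0
    return np.argsort(-sims)[:k]
```

A combination like `combine_scores` mirrors the idea of merging NV-Embed with GPT or Mistral embeddings, though the exact fusion rule used in the paper may differ.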


BibTex

@inproceedings{lenc-etal-2025-uwba,
    title = "{UWB}a at {S}em{E}val-2025 Task 7: Multilingual and Crosslingual Fact-Checked Claim Retrieval",
    author = "Lenc, Ladislav and C{\'i}fka, Daniel and Martinek, Jiri and {\v{S}}m{\'i}d, Jakub and Kral, Pavel",
    editor = "Rosenthal, Sara and Ros{\'a}, Aiala and Ghosh, Debanjan and Zampieri, Marcos",
    booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.semeval-1.31/",
    pages = "209--215",
    ISBN = "979-8-89176-273-2",
    abstract = "This paper presents a zero-shot system for fact-checked claim retrieval. We employed several state-of-the-art large language models to obtain text embeddings. The models were then combined to obtain the best possible result. Our approach achieved 7th place in monolingual and 9th in cross-lingual subtasks. We used only English translations as an input to the text embedding models since multilingual models did not achieve satisfactory results. We identified the most relevant claims for each post by leveraging the embeddings and measuring cosine similarity. Overall, the best results were obtained by the NVIDIA NV-Embed-v2 model. For some languages, we benefited from model combinations (NV-Embed {\&} GPT or Mistral)."
}