Weak supervision for Question Type Detection with large language models


Jiří Martínek and Christophe Cerisara and Pavel Král and Ladislav Lenc and Josef Baloun
Interspeech (2022)


Abstract

Large pre-trained language models (LLMs) have shown remarkable zero-shot learning performance on many natural language processing tasks. However, designing effective prompts remains difficult for some tasks, in particular for dialogue act recognition. We propose an alternative way to leverage pre-trained LLMs for such tasks that replaces manual prompts with simple rules, which are more intuitive and easier to design for some tasks. We demonstrate this approach on the question type recognition task and show that our zero-shot model achieves performance competitive both with a supervised LSTM trained on the full training corpus and with a supervised model from previously published work on the MRDA corpus. We further analyze the limits of the proposed approach, which cannot be applied to every task but may advantageously complement prompt programming for specific classes.
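
The paper itself ships no code, but the core idea described in the abstract, replacing hand-tuned prompts with simple rules applied around a pre-trained language model, can be illustrated with a minimal sketch. Everything below is a hypothetical reconstruction: the model choice (gpt2), the rule set, and the two question-type labels are illustrative assumptions, not the authors' actual setup. The sketch lets the LM freely continue the question and then applies a surface rule to the generated answer, so no labeled data or prompt engineering is needed.

# Hypothetical sketch of rule-based weak supervision around a pre-trained LM.
# Assumptions (not from the paper): the "gpt2" checkpoint, the yes/no word
# list, and the two output labels are all illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

YES_NO_STARTS = {"yes", "no", "yeah", "nope", "sure", "maybe"}

def question_type(utterance: str) -> str:
    # Let the LM answer the question, then classify the question from the
    # shape of the answer: a yes/no-style answer implies a yes/no question.
    full = generator(utterance, max_new_tokens=5, do_sample=False)[0]["generated_text"]
    continuation = full[len(utterance):].strip()
    words = continuation.split()
    first = words[0].lower().strip(".,!?'\"") if words else ""
    if first in YES_NO_STARTS:
        return "yes/no question"
    return "wh-question"

print(question_type("Do you agree with the proposal?"))

A realistic rule set for the MRDA corpus would need to cover its full question-type inventory; the two labels above merely show how such rules stay simpler to write than a prompt.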


BibTeX

@inproceedings{martinek2022weak,
  title={Weak supervision for Question Type Detection with large language models},
  author={Mart{\'\i}nek, Ji{\v{r}}{\'\i} and Cerisara, Christophe and Kr{\'a}l, Pavel and Lenc, Ladislav and Baloun, Josef},
  booktitle={Interspeech 2022},
  year={2022}
}