Weakly Supervised Parsing with Rules


Christophe Cerisara, A. Lorenzo, and Pavel Král
Interspeech 2013 (2013)


Abstract

This work proposes a new research direction to address the lack of structure in traditional n-gram models. It is based on a weakly supervised dependency parser that can model speech syntax without relying on any annotated training corpus. Labeled data is replaced by a few hand-crafted rules that encode basic syntactic knowledge. Bayesian inference then samples the rules, disambiguating and combining them to create complex tree structures that maximize a discriminative model's posterior on a target unlabeled corpus. This posterior encodes sparse selectional preferences between a head word and its dependents. The model is evaluated on English and Czech newspaper texts, and is then validated on French broadcast news transcriptions.
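To make the idea in the abstract concrete, below is a minimal, hypothetical sketch of the general approach it describes: hand-crafted rules propose candidate heads for each word, and a sampling step disambiguates among them using selectional-preference scores. The rule set (RULES), the functions candidate_heads and sample_tree, and the toy counts are all illustrative assumptions for this sketch; they are not the rules, model, or inference procedure used in the paper.

```python
# Toy illustration only: rule-licensed candidate heads + a Gibbs-style sampler
# that prefers heads with high (smoothed) selectional-preference counts.
import random
from collections import defaultdict

# Hypothetical POS-based attachment rules: dependent POS -> allowed head POS.
RULES = {
    "DET":  {"NOUN"},          # a determiner may attach to a noun
    "ADJ":  {"NOUN"},          # an adjective may attach to a noun
    "NOUN": {"VERB"},          # a noun may attach to a verb
    "ADV":  {"VERB", "ADJ"},   # an adverb may attach to a verb or adjective
}

def candidate_heads(tags, dep_idx):
    """Indices of words whose POS some rule allows as head of word dep_idx."""
    allowed = RULES.get(tags[dep_idx], set())
    return [h for h, tag in enumerate(tags) if h != dep_idx and tag in allowed]

def sample_tree(words, tags, pref_counts, n_iters=50, alpha=0.1):
    """Repeatedly resample each word's head among its rule-licensed candidates,
    weighted by smoothed head-dependent co-occurrence counts (a stand-in for
    the sparse selectional preferences mentioned in the abstract)."""
    heads = {}
    for _ in range(n_iters):
        for d in range(len(words)):
            cands = candidate_heads(tags, d)
            if not cands:
                heads[d] = None  # no rule applies: left unattached (e.g. the root)
                continue
            weights = [pref_counts[(words[h], words[d])] + alpha for h in cands]
            heads[d] = random.choices(cands, weights=weights)[0]
    return heads

# Hypothetical usage on one sentence.
words = ["the", "old", "dog", "barked", "loudly"]
tags  = ["DET", "ADJ", "NOUN", "VERB", "ADV"]
prefs = defaultdict(float, {("dog", "the"): 2.0, ("dog", "old"): 1.5,
                            ("barked", "dog"): 3.0, ("barked", "loudly"): 1.0})
print(sample_tree(words, tags, prefs))
```

In this toy run, "the", "old", and "dog" are attached as the rules dictate, "barked" stays unattached as the root, and the preference counts bias "loudly" toward the verb rather than the adjective, which is the kind of disambiguation the sampled rules are meant to perform.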


BibTeX

@InProceedings{Kral13IS,
  author    = {Cerisara, C. and Lorenzo, A. and Kr\'al, P.},
  title     = {Weakly Supervised Parsing with Rules},
  booktitle = {Interspeech 2013},
  pages     = {2192--2196},
  year      = {2013},
  address   = {Lyon, France},
  month     = {25-29 August},
  publisher = {ISCA},
  isbn      = {978-1-62993-443-3}
}