ServiceNow Research

The Power of Prompt Tuning for Low-Resource Semantic Parsing

Abstract

Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language tasks. In this paper, we investigate prompt tuning for semantic parsing, the task of mapping natural language utterances onto formal meaning representations. For large T5 models, we find (i) that prompt tuning significantly outperforms fine-tuning in the low-data regime and (ii) that canonicalization, i.e. naturalizing the meaning representations, barely improves performance. This last result is surprising, as it suggests that large T5 models can be modulated to generate sequences that are far from the pre-training distribution.
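For readers unfamiliar with the technique, the sketch below illustrates prompt tuning in the sense used here: all pre-trained T5 weights are frozen and only a small set of continuous prompt embeddings, prepended to the encoder input, is trained. The model name, prompt length, class name, and example utterance are illustrative assumptions and do not reproduce the paper's exact setup.

```python
# Minimal soft prompt tuning sketch (assumed setup, not the authors' exact configuration).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5ForConditionalGeneration


class SoftPromptT5(nn.Module):
    def __init__(self, model_name="t5-large", prompt_length=100):
        super().__init__()
        self.t5 = T5ForConditionalGeneration.from_pretrained(model_name)
        for param in self.t5.parameters():
            param.requires_grad = False  # freeze every pre-trained weight
        d_model = self.t5.config.d_model
        # The only trainable parameters: `prompt_length` continuous prompt vectors.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, d_model) * 0.5)

    def forward(self, input_ids, attention_mask, labels=None):
        batch_size = input_ids.size(0)
        # Embed the utterance tokens with the frozen embedding table.
        token_embeds = self.t5.get_input_embeddings()(input_ids)
        # Prepend the learned prompt embeddings to every example in the batch.
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
        prompt_mask = torch.ones(
            batch_size, prompt.size(1),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.t5(inputs_embeds=inputs_embeds,
                       attention_mask=attention_mask,
                       labels=labels)


tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = SoftPromptT5()
batch = tokenizer(["list flights from montreal to amsterdam"], return_tensors="pt")
# A hypothetical target meaning representation, only to show the training signal.
targets = tokenizer(["SELECT flight WHERE from = montreal AND to = amsterdam"],
                    return_tensors="pt").input_ids
loss = model(**batch, labels=targets).loss
loss.backward()  # gradients flow only into model.soft_prompt
```

Because the T5 parameters stay frozen, only the prompt embeddings are stored per task, which is what makes the method attractive in the low-resource setting studied in the paper.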

Publication
Annual Meeting of the Association for Computational Linguistics (ACL)
Siva Reddy
Research Scientist

Research Scientist with Human Machine Interaction Through Language, based in Montreal, QC, Canada.

Harm de Vries
Research Lead

Research Lead with the Large Language Models Lab, based in Amsterdam, the Netherlands.