ServiceNow Research

Picard: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models

Abstract

Large pre-trained language models for textual data have an unconstrained output space; at each decoding step, they can produce any of tens of thousands of sub-word tokens. When fine-tuned to target constrained formal languages like SQL, these models often generate invalid code, rendering it unusable. We propose PICARD (code and trained models available at this https URL), a method for constraining auto-regressive decoders of language models through incremental parsing. PICARD helps find valid output sequences by rejecting inadmissible tokens at each decoding step. On the challenging Spider and CoSQL text-to-SQL translation tasks, we show that PICARD transforms fine-tuned T5 models with passable performance into state-of-the-art solutions.
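The core idea of rejecting inadmissible tokens at each decoding step can be sketched as follows. This is a toy illustration, not the actual PICARD implementation: the mini-grammar (a single `SELECT <col> FROM <table>` template) and the schema sets `COLUMNS` and `TABLES` are hypothetical stand-ins for PICARD's full incremental SQL parser.

```python
# Toy sketch of PICARD-style constrained decoding (hypothetical
# mini-grammar, not the real PICARD parser): at each decoding step,
# a candidate token survives only if the extended prefix can still be
# completed into a valid sequence.

# Hypothetical schema for illustration.
COLUMNS = {"name", "age"}
TABLES = {"users", "orders"}

# Queries of the form: SELECT <col> FROM <table>
TEMPLATE = ["SELECT", None, "FROM", None]  # None marks a schema slot


def is_valid_prefix(tokens):
    """Incremental check: can `tokens` still grow into a valid query?"""
    if len(tokens) > len(TEMPLATE):
        return False
    for i, tok in enumerate(tokens):
        slot = TEMPLATE[i]
        if slot is not None:
            if tok != slot:
                return False
        else:
            vocab = COLUMNS if i == 1 else TABLES
            if tok not in vocab:
                return False
    return True


def admissible_next(prefix, candidates):
    """Filter candidate tokens, mimicking how PICARD rejects
    inadmissible continuations at each decoding step."""
    return [t for t in candidates if is_valid_prefix(prefix + [t])]


# After "SELECT", only schema columns remain admissible.
print(admissible_next(["SELECT"], ["name", "FROM", "users"]))  # ['name']
```

In the real system, this admissibility check runs inside beam search on top of a fine-tuned T5 decoder, masking out sub-word tokens whose addition would make the SQL prefix unparseable.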

Publication
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Torsten Scholak
Applied Research Scientist

Applied Research Scientist, Human Machine Interaction Through Language, Montreal, QC, Canada.

Dzmitry Bahdanau
Research Lead

Research Lead, Human Machine Interaction Through Language, Montreal, QC, Canada.