ServiceNow Research

On Difficulties of Probability Distillation

Abstract

Probability distillation has recently been of interest to deep learning practitioners, as it provides a practical way to sample from autoregressive models when deploying them in real-time applications. We identify a pathological optimization issue with the commonly adopted stochastic minimization of the (reverse) KL divergence, owing to the sparse gradient signal from the teacher model caused by the curse of dimensionality. We also explore alternative principles for distillation, and show that one can achieve qualitatively better results than with KL minimization.
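
The reverse-KL objective referred to above is typically estimated by drawing samples from the student and scoring them under the frozen teacher. The following is a minimal illustrative sketch of such a Monte Carlo estimator, not code from the paper; the names student, teacher, and reverse_kl_loss are hypothetical, and both models are assumed to expose rsample()/log_prob() in the style of torch.distributions.

    # Sketch (assumption, not the paper's implementation) of the reverse-KL
    # distillation objective: the student q proposes samples and is scored
    # under a fixed teacher p.
    import torch

    def reverse_kl_loss(student, teacher, num_samples=64):
        """Monte Carlo estimate of KL(q_student || p_teacher).

        Gradients flow only through the student; the teacher is treated as
        a frozen density evaluator, as in probability distillation.
        """
        x = student.rsample((num_samples,))   # reparameterized student samples
        log_q = student.log_prob(x)           # student log-density
        with torch.no_grad():
            log_p = teacher.log_prob(x)       # frozen teacher log-density
        return (log_q - log_p).mean()         # estimate of E_q[log q - log p]

Because the samples come from the student, regions where the teacher assigns high probability but the student does not are rarely visited, which is one way the sparse gradient signal discussed in the abstract can arise.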

Publication
International Conference on Learning Representations (ICLR)
Alexandre Lacoste
Research Scientist

Research Scientist on the Human Decision Support team, located in Montreal, QC, Canada.