Systematic Generalization with Edge Transformers
Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art …
The Dynamics of Functional Diversity throughout Neural Network Training
Deep ensembles offer consistent performance gains, both in terms of reduced generalization error and improved predictive uncertainty …
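For illustration, a minimal sketch of how a deep ensemble aggregates its members, assuming scikit-learn-style models exposing predict_proba; the function and variable names are illustrative, not from the paper:

    import numpy as np

    def ensemble_predict(models, x):
        # Stack each member's class-probability predictions: (members, samples, classes)
        probs = np.stack([m.predict_proba(x) for m in models])
        # Average over members; the mean distribution is the ensemble's
        # predictive distribution, and its argmax the ensemble's label.
        mean_probs = probs.mean(axis=0)
        return mean_probs.argmax(axis=1), mean_probs
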
Towards Neural Functional Program Evaluation
This paper explores the capabilities of current transformer-based language models for program evaluation of simple functional …
PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models
Large pre-trained language models for textual data have an unconstrained output space; at each decoding step, they can produce any of …
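For illustration, a minimal sketch of constrained greedy decoding in this spirit, assuming a Hugging Face-style causal LM; is_valid_continuation is a hypothetical stand-in for an incremental parser that accepts or rejects each candidate token, not PICARD's actual API:

    import torch

    def constrained_greedy_decode(model, tokenizer, prompt, is_valid_continuation, max_len=64):
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        for _ in range(max_len):
            logits = model(input_ids=ids).logits[0, -1]  # scores for the next token
            # Walk candidates from most to least likely and keep the first one
            # the validity checker accepts, pruning everything it rejects.
            for tok in torch.argsort(logits, descending=True).tolist():
                if is_valid_continuation(ids[0].tolist(), tok):
                    break
            else:
                break  # no valid continuation exists; stop decoding
            ids = torch.cat([ids, torch.tensor([[tok]])], dim=1)
            if tok == tokenizer.eos_token_id:
                break
        return tokenizer.decode(ids[0], skip_special_tokens=True)
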
Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations
Explainability for machine learning models has gained considerable attention within the research community given the importance of …
Generative Compositional Augmentations for Scene Graph Prediction
Inferring objects and their relationships from an image in the form of a scene graph is useful in many applications at the intersection …
Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data
Remote sensing and automatic earth monitoring are key to solving global-scale challenges such as disaster prevention, land use …
RandomSCM: Interpretable Ensembles of Sparse Classifiers Tailored for Omics Data
Recent metabolomics measurement devices, such as mass spectrometers, produce extremely high-dimensional data. Together with small …
DuoRAT: Towards Simpler Text-to-SQL Models
Recent neural text-to-SQL models can effectively translate natural language questions to corresponding SQL queries on unseen databases. …
Understanding by Understanding Not: Modeling Negation in Language Models
Negation is a core construction in natural language. Despite being very successful on many tasks, state-of-the-art pre-trained language …