In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors
We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) …
Linear Mode Connectivity and the Lottery Ticket Hypothesis
We study whether a neural network optimizes to the same, linearly connected minimum under different samples of SGD noise (e.g., random …
Online Learned Continual Compression with Adaptive Quantization Modules
We introduce and study the problem of Online Continual Compression, where one attempts to simultaneously learn to compress and store a …
Knowledge Hypergraphs: Prediction Beyond Binary Relations
Knowledge graphs store facts using relations between two entities. In this work, we address the question of link prediction in …
Fast and Furious Convergence: Stochastic Second Order Methods under Interpolation
We consider stochastic second-order methods for minimizing smooth and strongly-convex functions under an interpolation condition …
RelatIF: Identifying Explanatory Training Examples via Relative Influence
In this work, we focus on the use of influence functions to identify relevant training examples that one might hope …
Stochastic Neural Network with Kronecker Flow
Recent advances in variational inference enable the modelling of highly structured joint distributions, but are limited in their …
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks
Generative adversarial networks have been very successful in generative modeling; however, they remain relatively challenging to train …
A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms
We propose to meta-learn causal structures based on how fast a learner adapts to new distributions arising from sparse distributional …
Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents
As deep reinforcement learning driven by visual perception becomes more widely used, there is a growing need to better understand and …