ServiceNow AI Research
Publications tagged: Trustworthiness
Pruning Neural Networks at Initialization: Why Are We Missing the Mark?
Recent work has explored the possibility of pruning neural networks at initialization. We assess proposals for doing so: SNIP (Lee et …
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
International Conference on Learning Representations (ICLR), 2021.
PDF · Cite
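For context, a minimal sketch of the SNIP criterion (Lee et al., 2019), one of the pruning-at-initialization proposals assessed above: score each weight by |weight × gradient| on a single batch at initialization and keep the top-scoring fraction. This is an illustrative JAX sketch, not the authors' code; `loss_fn`, its signature, and the global top-k thresholding are assumptions.

```python
import jax
import jax.numpy as jnp

def snip_masks(loss_fn, params, batch, sparsity):
    """Return 0/1 masks shaped like `params` (assumes loss_fn(params, batch) -> scalar)."""
    grads = jax.grad(loss_fn)(params, batch)
    # SNIP's "connection sensitivity": |weight * gradient| at initialization
    scores = jax.tree_util.tree_map(lambda w, g: jnp.abs(w * g), params, grads)
    flat = jnp.concatenate([s.ravel() for s in jax.tree_util.tree_leaves(scores)])
    k = max(1, int((1.0 - sparsity) * flat.size))  # number of weights to keep
    threshold = jnp.sort(flat)[-k]                 # k-th largest score
    return jax.tree_util.tree_map(
        lambda s: (s >= threshold).astype(jnp.float32), scores)
```

The masks would then be applied multiplicatively, e.g. `jax.tree_util.tree_map(lambda w, m: w * m, params, masks)`, before training the sparse network.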
On the role of data in PAC-Bayes bounds
The dominant term in PAC-Bayes bounds is often the Kullback–Leibler divergence between the posterior and prior. For so-called …
Gintare Karolina Dziugaite, Kyle Hsu, Waseem Gharbieh, Gabriel Arpino, Daniel M. Roy
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
PDF · Cite · Code
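For context, one standard (McAllester-style) PAC-Bayes bound, showing the Kullback–Leibler term the abstract describes as dominant; this is a textbook form for illustration, not the bound proved in the paper:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for all "posteriors" Q over hypotheses (P is the prior):
\mathbb{E}_{h \sim Q}\!\left[ L(h) \right]
  \;\le\;
\mathbb{E}_{h \sim Q}\!\left[ \hat{L}(h) \right]
  \;+\;
\sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P) + \ln \frac{2\sqrt{n}}{\delta} }{ 2n } }
```

Since the prior P must be fixed before seeing the data used in the bound, shrinking KL(Q‖P) by making P depend on data is exactly the trade-off the paper examines.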
An empirical study of loss landscape geometry and evolution of the data-dependent Neural Tangent Kernel
In suitably initialized wide networks, small learning rates transform deep neural networks (DNNs) into neural tangent kernel (NTK) …
Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli
Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Cite · Video
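A minimal sketch of the empirical, data-dependent NTK the abstract refers to: for a scalar-output network, the kernel is the Gram matrix of per-example parameter gradients. The toy two-layer network below is an assumption for illustration, not the paper's architecture or code.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def f(params, x):
    # toy one-hidden-layer MLP on a single input vector x; scalar output
    return (jnp.tanh(x @ params["W1"]) @ params["W2"])[0]

def empirical_ntk(params, xs):
    def flat_grad(x):
        return ravel_pytree(jax.grad(f)(params, x))[0]  # df/dparams, flattened
    J = jax.vmap(flat_grad)(xs)   # (n, num_params) per-example Jacobian
    return J @ J.T                # (n, n) kernel matrix: Theta(x_i, x_j)

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = {"W1": jax.random.normal(k1, (3, 64)) / jnp.sqrt(3.0),
          "W2": jax.random.normal(k2, (64, 1)) / jnp.sqrt(64.0)}
xs = jax.random.normal(k3, (5, 3))
print(empirical_ntk(params, xs).shape)  # (5, 5)
```

Tracking how this matrix changes over training steps is what "evolution of the data-dependent NTK" refers to; in the infinite-width limit the kernel stays fixed.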
Like A Researcher Stating Broader Impact for the Very First Time
In requiring that a statement of broader impact accompany all submissions for this year’s conference, the NeurIPS program chairs …
Grace Abuhamad, Claudel Rheault
Workshop at the Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Cite
Pruning Neural Networks at Initialization: Why Are We Missing the Mark?
Recent work has explored the possibility of pruning neural networks at initialization. We assess proposals for doing so: SNIP (Lee et …
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
Workshop at the Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Cite
Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy-Gradient Iterative Algorithms
The information-theoretic framework of Russo and J. Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error …
Mahdi Haghifam, Jeffrey Negrea, Ashish Khisti, Daniel M. Roy, Gintare Karolina Dziugaite
Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Cite
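For context, the standard forms of the two bounds the abstract refers to; these are the starting points the paper sharpens, not its results:

```latex
% Xu and Raginsky (2017): for a \sigma-sub-Gaussian loss and an algorithm
% returning hypothesis W from an i.i.d. sample S of size n,
\left| \mathbb{E}\!\left[ \mathrm{gen}(W, S) \right] \right|
  \;\le\; \sqrt{ \frac{2\sigma^2}{n} \, I(W; S) }
% Steinke and Zakynthinou (2020) replace the mutual information I(W; S)
% with conditional mutual information (CMI) relative to a ghost
% "supersample"; for losses bounded in [0, 1],
\left| \mathbb{E}\!\left[ \mathrm{gen}(W, S) \right] \right|
  \;\le\; \sqrt{ \frac{2}{n} \, \mathrm{CMI}_{\mathcal{D}}(A) }
```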
On the Information Complexity of Proper Learners for VC Classes in the Realizable Case
We provide a negative resolution to a conjecture of Steinke and Zakynthinou (2020a), by showing that their bound on the conditional …
Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Daniel M. Roy
arXiv preprint, 2020.
PDF · Cite
Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
To date, there has been no formal study of the statistical cost of interpretability in machine learning. As such, the discourse around …
Gintare Karolina Dziugaite, Shai Ben-David, Daniel M. Roy
arXiv preprint, 2020.
PDF · Cite
In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors
We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) …
Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy
International Conference on Machine Learning (ICML), 2020.
PDF · Cite
Linear Mode Connectivity and the Lottery Ticket Hypothesis
We study whether a neural network optimizes to the same, linearly connected minimum under different samples of SGD noise (e.g., random …
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
International Conference on Machine Learning (ICML), 2020.
PDF · Cite
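A minimal sketch of the measurement behind linear mode connectivity: interpolate between two trained weight vectors and record the loss along the segment; the "error barrier" is how far the path rises above its endpoints. The flattened-weight representation and the toy quadratic `loss_fn` below are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def linear_path_barrier(w0, w1, loss_fn, num_points=25):
    # evaluate the loss at evenly spaced points on the segment from w0 to w1
    alphas = np.linspace(0.0, 1.0, num_points)
    losses = np.array([loss_fn((1.0 - a) * w0 + a * w1) for a in alphas])
    # barrier: worst loss on the path, relative to the worse endpoint
    return losses.max() - max(losses[0], losses[-1])

# Toy usage: w0 and w1 stand in for two flattened trained networks.
loss_fn = lambda w: float(np.sum((w - 1.0) ** 2))
w0, w1 = np.zeros(10), 2.0 * np.ones(10)
print(linear_path_barrier(w0, w1, loss_fn))  # 0.0: no barrier on this path
```

In the paper's instability analysis, the two endpoints come from training the same initialization under different samples of SGD noise (data order, augmentation); a near-zero barrier indicates both runs reach the same linearly connected basin.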