ServiceNow Research
Tag: Trustworthiness
The Dynamics of Functional Diversity throughout Neural Network Training
Deep ensembles offer consistent performance gains, both in terms of reduced generalization error and improved predictive uncertainty …
Lee Zamparo, Marc-Etienne Brunet, Thomas George, Sepideh Kharaghani, Gintare Karolina Dziugaite
Conference on Neural Information Processing Systems (NeurIPS), 2021.
PDF · Citation
Pruning Neural Networks at Initialization: Why Are We Missing the Mark?
Recent work has explored the possibility of pruning neural networks at initialization. We assess proposals for doing so: SNIP (Lee et …
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
International Conference on Learning Representations (ICLR), 2021.
PDF · Citation
On the role of data in PAC-Bayes bounds
The dominant term in PAC-Bayes bounds is often the Kullback–Leibler divergence between the posterior and prior. For so-called …
Gintare Karolina Dziugaite, Kyle Hsu, Waseem Gharbieh, Gabriel Arpino, Daniel M. Roy
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
PDF · Citation · Code
An empirical study of loss landscape geometry and evolution of the data-dependent Neural Tangent Kernel
In suitably initialized wide networks, small learning rates transform deep neural networks (DNNs) into neural tangent kernel (NTK) …
Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli
Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Citation · Video
Like A Researcher Stating Broader Impact for the Very First Time
In requiring that a statement of broader impact accompany all submissions for this year’s conference, the NeurIPS program chairs …
Grace Abuhamad, Claudel Rheault
Workshop at the Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Citation
Pruning Neural Networks at Initialization: Why Are We Missing the Mark?
Recent work has explored the possibility of pruning neural networks at initialization. We assess proposals for doing so: SNIP (Lee et …
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
Workshop at the Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Citation
Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy-Gradient Iterative Algorithms
The information-theoretic framework of Russo and J. Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error …
Mahdi Haghifam, Jeffrey Negrea, Ashish Khisti, Daniel M. Roy, Gintare Karolina Dziugaite
Conference on Neural Information Processing Systems (NeurIPS), 2020.
PDF · Citation
On the Information Complexity of Proper Learners for VC Classes in the Realizable Case
We provide a negative resolution to a conjecture of Steinke and Zakynthinou (2020a), by showing that their bound on the conditional …
Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Daniel M. Roy
arXiv, 2020.
PDF · Citation
Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
To date, there has been no formal study of the statistical cost of interpretability in machine learning. As such, the discourse around …
Gintare Karolina Dziugaite, Shai Ben-David, Daniel M. Roy
arXiv, 2020.
PDF · Citation
In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors
We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) …
Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy
International Conference on Machine Learning (ICML), 2020.
PDF · Citation