ServiceNow Research
Tag: Optimization
Let's Make Block Coordinate Descent Converge Faster: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
Block coordinate descent (BCD) methods are widely used for large-scale numerical optimization because of their cheap iteration costs, …
Julie Nutini, Issam H. Laradji, Mark Schmidt
Journal of Machine Learning Research (JMLR), 2022.
PDF · Citation · Code
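For a flavor of the greedy rules studied here: a Gauss-Southwell-style rule updates the block whose partial gradient is currently largest, rather than cycling or sampling blocks at random. Below is a minimal sketch on a toy quadratic, not the paper's implementation; problem sizes, the step size, and the iteration count are all illustrative.

```python
import numpy as np

# Greedy (Gauss-Southwell-style) block coordinate descent on a convex quadratic
# f(x) = 0.5 * x^T A x - b^T x with A positive definite.
rng = np.random.default_rng(0)
n, block_size = 12, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # positive definite by construction
b = rng.standard_normal(n)
blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]

x = np.zeros(n)
L = np.linalg.eigvalsh(A).max()    # crude global Lipschitz constant for the step size
for it in range(50):
    grad = A @ x - b
    # Greedy rule: pick the block whose partial gradient has the largest norm.
    k = max(range(len(blocks)), key=lambda j: np.linalg.norm(grad[blocks[j]]))
    x[blocks[k]] -= grad[blocks[k]] / L    # gradient step on the chosen block only
print("distance to optimum:", np.linalg.norm(x - np.linalg.solve(A, b)))
```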
Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
We propose a stochastic variant of the classical Polyak step-size (Polyak, 1987) commonly used in the subgradient method. Although …
Nicolas Loizou, Sharan Vaswani, Issam H. Laradji, Simon Lacoste-Julien
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
PDF · Citation · Code · Video
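The stochastic Polyak step-size has a closed form: for the sampled loss f_i it sets eta_k = (f_i(x_k) - f_i*) / (c * ||grad f_i(x_k)||^2), where f_i* = 0 is a natural choice for nonnegative losses under interpolation, and the bounded variant caps the step at some eta_max. A minimal sketch on a consistent least-squares problem (so interpolation holds exactly); the constants c and eta_max are illustrative.

```python
import numpy as np

# Capped stochastic Polyak step-size (SPS) for SGD on least squares:
# eta = min( f_i(x) / (c * ||grad f_i(x)||^2), eta_max ), with f_i* = 0
# because the linear system below is consistent (interpolation holds).
rng = np.random.default_rng(1)
n_samples, dim = 100, 10
A = rng.standard_normal((n_samples, dim))
x_true = rng.standard_normal(dim)
y = A @ x_true                      # consistent system => every f_i* = 0

c, eta_max = 0.5, 10.0              # illustrative constants
x = np.zeros(dim)
for it in range(2000):
    i = rng.integers(n_samples)
    residual = A[i] @ x - y[i]
    f_i = 0.5 * residual ** 2       # per-example loss
    grad = residual * A[i]          # per-example gradient
    g2 = grad @ grad
    if g2 > 1e-12:                  # skip samples that are already fit
        eta = min(f_i / (c * g2), eta_max)
        x -= eta * grad
print("error:", np.linalg.norm(x - x_true))
```

On least squares with c = 0.5 this step reduces to the classical Kaczmarz projection, which is one reason the adaptive step needs no tuning here.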
Learning Data Augmentation with Online Bilevel Optimization for Image Classification
Data augmentation is a key practice in machine learning for improving generalization performance. However, finding the best data …
Saypraseuth Mounsaveng, Issam H. Laradji, Ismail Ben Ayed, David Vazquez, Marco Pedersoli
Winter Conference on Applications of Computer Vision (WACV), 2021.
PDF · Citation · Code
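The general pattern behind learning augmentation parameters online is bilevel: take a model step on the augmented training loss, then adjust the augmentation parameter to reduce the validation loss measured after that step. The sketch below shows this one-step-unrolled scheme on a hypothetical 1-D toy problem, using a finite-difference hypergradient in place of automatic differentiation; the additive-shift "augmentation" and all constants are made-up stand-ins, not the paper's setup.

```python
import numpy as np

# One-step-unrolled bilevel optimization: inner update of the model w on the
# augmented training loss, outer update of the augmentation parameter lam on
# the validation loss after the inner step.
rng = np.random.default_rng(2)
x_tr = rng.standard_normal(50)
y_tr = 2.0 * (x_tr + 0.7)               # train labels come from shifted inputs
x_val = rng.standard_normal(50)
y_val = 2.0 * x_val                     # validation pairs have no shift

alpha, beta, eps = 0.1, 0.05, 1e-4      # inner step, outer step, FD epsilon

def grad_w(w, lam):                     # d/dw of the augmented training loss
    r = w * (x_tr + lam) - y_tr
    return np.mean(2.0 * r * (x_tr + lam))

def inner_step(w, lam):                 # one gradient step of the inner problem
    return w - alpha * grad_w(w, lam)

def val_loss(w):
    return np.mean((w * x_val - y_val) ** 2)

w, lam = 0.0, 0.0
for it in range(500):
    # Hypergradient d val_loss(inner_step(w, lam)) / d lam by central
    # differences, standing in for differentiating through the unrolled step.
    h = (val_loss(inner_step(w, lam + eps))
         - val_loss(inner_step(w, lam - eps))) / (2 * eps)
    lam -= beta * h                     # outer update (augmentation parameter)
    w = inner_step(w, lam)              # inner update (model), done online
print("learned shift:", round(lam, 3), "(should approach the generating shift 0.7)")
```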
AR-DAE: Towards Unbiased Neural Entropy Gradient Estimation
Entropy is ubiquitous in machine learning, but it is in general intractable to compute the entropy of the distribution of an arbitrary …
Jae Hyun Lim, Aaron Courville, Christopher Pal, Chin-Wei Huang
International Conference on Machine Learning (ICML), 2020.
PDF · Citation · Code
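A sketch of the two standard identities this kind of estimator combines, under the assumption that samples come from a reparameterized sampler x = g_phi(z), z ~ p(z): the entropy gradient reduces to an expected score, and the score can be approximated by the residual of a denoising autoencoder r_sigma (the Alain-Bengio identity).

```latex
\nabla_\phi H(q_\phi)
  = -\,\mathbb{E}_{z \sim p(z)}\!\left[
      \left(\frac{\partial g_\phi(z)}{\partial \phi}\right)^{\!\top}
      \nabla_x \log q_\phi(x) \Big|_{x = g_\phi(z)}
    \right],
\qquad
\nabla_x \log q(x) \;\approx\; \frac{r_\sigma(x) - x}{\sigma^2}
\quad (\sigma \to 0).
```

Plugging the denoising-autoencoder approximation into the expectation gives a tractable entropy-gradient estimator; controlling its bias as sigma shrinks is, as the title suggests, the crux.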
Fast and Furious Convergence: Stochastic Second Order Methods under Interpolation
We consider stochastic second-order methods for minimizing smooth and strongly-convex functions under an interpolation condition …
Si Yi Meng, Sharan Vaswani, Issam H. Laradji, Mark Schmidt, Simon Lacoste-Julien
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
PDF · Citation · Code · Slides · Video
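Under interpolation, every per-example loss is minimized at the same point, which is what allows stochastic second-order methods to take large (even unit) steps. A minimal sketch of a regularized subsampled-Newton step on a consistent least-squares problem; the batch size, damping, and iteration count are illustrative, and this is not the paper's exact algorithm.

```python
import numpy as np

# Regularized subsampled Newton under interpolation: mini-batch gradient and
# Hessian on a consistent least-squares problem, with a unit step size.
rng = np.random.default_rng(3)
n, dim, batch = 200, 20, 10
A = rng.standard_normal((n, dim))
x_star = rng.standard_normal(dim)
y = A @ x_star                              # consistent => interpolation holds

damping = 1e-6                              # the batch Hessian is rank-deficient
x = np.zeros(dim)
for it in range(200):
    B = rng.choice(n, size=batch, replace=False)
    Ab, rb = A[B], A[B] @ x - y[B]
    g = Ab.T @ rb / batch                   # mini-batch gradient
    H = Ab.T @ Ab / batch + damping * np.eye(dim)   # regularized mini-batch Hessian
    x -= np.linalg.solve(H, g)              # Newton step with step size 1
print("distance to x*:", np.linalg.norm(x - x_star))
```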
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks
Generative adversarial networks have been very successful in generative modeling, however they remain relatively challenging to train …
Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, Simon Lacoste-Julien
International Conference on Learning Representations (ICLR), 2020.
PDF · Citation · Code
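A useful lens from this line of work: study the game vector field v = (df/dx, -df/dy) of the min-max problem and its Jacobian, where imaginary eigenvalues signal rotational dynamics around an equilibrium. A toy illustration on the bilinear game f(x, y) = x*y, a standard example rather than the paper's GAN experiments:

```python
import numpy as np

# Game vector field of the bilinear min-max game f(x, y) = x * y.
def v(x, y):
    return np.array([y, -x])        # (df/dx, -df/dy)

J = np.array([[0.0, 1.0],           # Jacobian of v at the equilibrium (0, 0)
              [-1.0, 0.0]])
print("eigenvalues:", np.linalg.eigvals(J))   # +/- 1j: purely rotational dynamics

# Simultaneous gradient descent-ascent therefore spirals outward:
z = np.array([1.0, 0.0])
for _ in range(100):
    z = z - 0.1 * v(*z)
print("|z| after 100 GDA steps:", np.linalg.norm(z))   # grows beyond 1
```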
Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
Recent works have shown that stochastic gradient descent (SGD) achieves the fast convergence rates of full-batch gradient descent for …
Sharan Vaswani, Aaron Mishkin, Issam H. Laradji, Mark Schmidt, Gauthier Gidel, Simon Lacoste-Julien
Conference on Neural Information Processing Systems (NeurIPS), 2019.
PDF · Citation
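The line-search here backtracks on the same mini-batch that produced the gradient, shrinking the step until a stochastic Armijo condition holds. A minimal sketch on a consistent least-squares problem (so interpolation holds); the constants and the step-size reset rule are simplified relative to the paper.

```python
import numpy as np

# SGD with a backtracking Armijo line-search on the sampled mini-batch:
# shrink eta until  f_B(x - eta*g) <= f_B(x) - c * eta * ||g||^2  holds for
# the same batch B used to compute g, then take the step.
rng = np.random.default_rng(4)
n, dim, batch = 200, 10, 10
A = rng.standard_normal((n, dim))
x_true = rng.standard_normal(dim)
y = A @ x_true                          # consistent system => interpolation

def loss_grad(x, B):
    r = A[B] @ x - y[B]
    return 0.5 * (r @ r) / len(B), A[B].T @ r / len(B)

c, shrink, eta0 = 0.5, 0.7, 1.0         # illustrative constants
x = np.zeros(dim)
for it in range(300):
    B = rng.choice(n, size=batch, replace=False)
    fx, g = loss_grad(x, B)
    g2 = g @ g
    if g2 < 1e-12:
        continue                        # this batch is already (near-)interpolated
    eta = eta0                          # reset heuristic simplified to a constant
    while loss_grad(x - eta * g, B)[0] > fx - c * eta * g2:
        eta *= shrink                   # backtrack on the same batch
    x -= eta * g
print("error:", np.linalg.norm(x - x_true))
```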
Reducing Noise in GAN Training with Variance Reduced Extragradient
We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can …
Tatjana Chavdarova, Gauthier Gidel, François Fleuret, Simon Lacoste-Julien
Conference on Neural Information Processing Systems (NeurIPS), 2019.
PDF · Citation · Code
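The proposed fix combines SVRG-style variance reduction with the extragradient method. The sketch below shows only the core extragradient update, on the same bilinear toy game as in the GAN-landscape example above, where plain simultaneous gradient steps diverge but extragradient converges; the variance-reduction part is omitted for brevity.

```python
import numpy as np

# Core extragradient update: step to an extrapolated (lookahead) point, then
# update the original point using the gradient evaluated at the lookahead.
def v(z):                           # game vector field (df/dx, -df/dy) of f = x*y
    x, y = z
    return np.array([y, -x])

eta = 0.1
z = np.array([1.0, 0.0])
for _ in range(200):
    z_half = z - eta * v(z)         # extrapolation (lookahead) step
    z = z - eta * v(z_half)         # update from z with the lookahead gradient
print("|z| after extragradient:", np.linalg.norm(z))   # shrinks toward 0
```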
Efficient Deep Gaussian Process Models for Variable-Sized Inputs
Deep Gaussian processes (DGP) have appealing Bayesian properties, can handle variable-sized data, and learn deep features. Their …
Issam H. Laradji, Mark Schmidt, Vladimir Pavlovic, Minyoung Kim
International Joint Conference on Neural Networks (IJCNN), 2019.
PDF · Citation · Code
Improving Optimization Bounds using Machine Learning: Decision Diagrams meet Deep Reinforcement Learning
Finding tight bounds on the optimal solution is a critical element of practical solution methods for discrete optimization problems. In …
Quentin Cappart, Emmanuel Goutierre, David Bergman, Louis-Martin Rousseau
Association for the Advancement of Artificial Intelligence (AAAI), 2019.
PDF · Citation · Code