Friday, November 17th
11:00 (Shannon amphitheatre, building 660) (see location)
Levent Sagun
(IPHT Saclay)
Title: Over-Parametrization in Deep Learning
Abstract:
Stochastic gradient descent (SGD) works surprisingly well at optimizing the loss functions that arise in deep learning, yet it is unclear what makes SGD so special. In this talk, we will discuss the role of over-parametrization in deep learning as an attempt to understand what is special about SGD. In particular, we will see empirical results showing that, in certain regimes, SGD may not be so special at all. We will discuss whether this can be explained by looking at the geometry of the loss surface. To this end, we will examine the Hessian of the loss function and its spectrum, and see how increasing the number of parameters may lead to an easier optimization problem.
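To make the object mentioned in the abstract concrete, here is a minimal, hypothetical sketch (not code from the talk) that computes the Hessian of a small network's loss and its eigenvalue spectrum with PyTorch; the architecture, toy data, and parameter counts are illustrative assumptions only.

```python
# Minimal sketch: Hessian spectrum of a tiny network's loss (illustrative only).
import torch
from torch.autograd.functional import hessian

torch.manual_seed(0)
X = torch.randn(32, 4)      # toy inputs (hypothetical data)
y = torch.randn(32, 1)      # toy targets
n_hidden = 8                # increase this to over-parametrize the model
n_params = 4 * n_hidden + n_hidden  # W1 (4 x h) and W2 (h x 1), flattened

def loss(theta):
    # Unpack a flat parameter vector into a one-hidden-layer network.
    W1 = theta[: 4 * n_hidden].reshape(4, n_hidden)
    W2 = theta[4 * n_hidden:].reshape(n_hidden, 1)
    pred = torch.tanh(X @ W1) @ W2
    return ((pred - y) ** 2).mean()

theta = torch.randn(n_params)
H = hessian(loss, theta)            # (n_params, n_params) Hessian of the loss
eigvals = torch.linalg.eigvalsh(H)  # its spectrum (eigenvalues in ascending order)
print(eigvals[-5:])                 # a few of the largest eigenvalues
```

Rerunning the sketch with a larger `n_hidden` gives one way to observe how the spectrum changes as the number of parameters grows, in the spirit of the empirical questions raised in the abstract.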
Contact: guillaume.charpiat at inria.fr
This presentation is organized by the GT Deep Net, funded by DigiCosme.
All TAU & GT Deep Net seminars: here