TAU Seminar

Friday, 22 November 2019

11:00 am, room R2014, Building 660

Guillaume Charpiat (TAU)

Input similarity from the neural network perspective

We first exhibit a multimodal image registration task for which a neural network
trained on a dataset with noisy labels reaches almost perfect accuracy, far beyond
the noise variance. This surprising auto-denoising phenomenon can be explained as
a noise-averaging effect over the labels of similar input examples. The effect
theoretically grows with the number of similar examples; the question is then how
to define and estimate the similarity of examples.
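
Why the effect grows with the number of similar examples admits a one-line sketch (a standard argument; the notation is ours, not the talk's): if the network cannot dissociate N inputs and therefore effectively fits the average of their labels \(y_i = y^\ast + \varepsilon_i\), with i.i.d. noise of variance \(\sigma^2\), the residual noise variance shrinks as

\[
\operatorname{Var}\Bigl(\tfrac{1}{N}\sum_{i=1}^{N}\varepsilon_i\Bigr) \;=\; \frac{\sigma^2}{N} \;\longrightarrow\; 0 \quad (N \to \infty).
\]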
We give a proper definition of similarity from the neural network perspective,
i.e. we quantify how undissociable two inputs A and B are, from a machine
learning viewpoint: how much would a parameter variation designed to change the
output for A also impact the output for B?
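
To make this question concrete (a first-order sketch in our own notation, consistent with the abstract above): moving the parameters \(\theta\) by a step designed to change the output for A, \(\delta\theta = \varepsilon\,\nabla_\theta f_\theta(A)\), affects the output for B to first order as

\[
f_{\theta+\delta\theta}(B) \;\approx\; f_\theta(B) + \varepsilon\,\langle \nabla_\theta f_\theta(A),\, \nabla_\theta f_\theta(B)\rangle,
\]

so a natural normalized similarity is the cosine between the two parameter gradients,

\[
k_\theta(A,B) \;=\; \frac{\langle \nabla_\theta f_\theta(A),\, \nabla_\theta f_\theta(B)\rangle}{\lVert \nabla_\theta f_\theta(A)\rVert\,\lVert \nabla_\theta f_\theta(B)\rVert},
\]

which reaches 1 when the two gradients are aligned, i.e. when the network sees A and B as undissociable at first order.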
We study the mathematical properties of this similarity measure and show how to
use it on a trained network to estimate sample density at low computational cost,
enabling new types of statistical analysis for neural networks. We also propose
to use it during training, to enforce that examples known to be similar are also
seen as similar by the network, and we observe training speed-ups on certain
datasets.
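
As a concrete illustration, here is a minimal PyTorch sketch of the gradient-cosine similarity above, for a scalar-output network (the toy model and helper names are ours; the reference implementation is in the netsimilarity repository linked below):

    import torch
    import torch.nn as nn

    # Toy scalar-output network; any differentiable model would do.
    net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

    def param_gradient(x):
        """Gradient of the scalar output w.r.t. all parameters, flattened."""
        out = net(x.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, list(net.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])

    def similarity(a, b):
        """Cosine of the parameter gradients: how much a parameter step that
        changes the output for `a` would also move the output for `b`."""
        ga, gb = param_gradient(a), param_gradient(b)
        return torch.dot(ga, gb) / (ga.norm() * gb.norm())

    a, b = torch.randn(2), torch.randn(2)
    print(similarity(a, b).item())  # near 1.0 for inputs the net cannot dissociate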

This is joint work with Loris Felardos (TAU), and Nicolas Girard and Yuliya Tarabalka (Titane team, INRIA Sophia-Antipolis), accepted for publication at NeurIPS 2019.

Reference

https://www.lri.fr/~gcharpia/input_similarity.pdf

Code

https://github.com/Lydorn/netsimilarity



