Tuesday, 11th of June

15h15 (room R2014, 660 building)

2 talks, by Victor Berger & Zhengying Liu (TAU)

First talk

Victor Berger


Ensemblist Variational AutoEncoder: latent representation of semi-structured data


Conditional Generative Models are now acknowledged as an essential tool in
Machine Learning. This paper focuses on their control. While many
approaches aim at disentangling the data through the coordinate-wise
control of their latent representations, another direction is explored
in this paper. The proposed EVAE handles data with a natural
multi-ensemblist structure (i.e. that can naturally be decomposed into
elements). Derived from Bayesian variational principles, EVAE learns a
latent representation leveraging both observational and symbolic
information. A first contribution of the approach is that this latent
representation supports a compositional generative model, amenable to
multi-ensemblist operations (addition or subtraction of elements in the
composition). This compositional ability is enabled by the invariance
and generality of the whole framework w.r.t., respectively, the order and
the number of elements. The second contribution of the paper is a proof
of concept on synthetic 1D and 2D problems, demonstrating the efficiency
of the proposed approach.
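The order- and cardinality-invariance mentioned above can be illustrated with a small sketch. This is not the paper's EVAE; it only shows, under toy assumptions (a fixed deterministic per-element embedding standing in for a learned encoder), why sum-pooling a set of per-element latent codes yields a set representation that is invariant to element order, accepts any number of elements, and supports addition or subtraction of elements as vector addition or subtraction:

```python
import hashlib

DIM = 4  # toy latent dimension

def encode(symbol: str) -> list:
    """Toy deterministic per-element 'encoder' (stand-in for a learned network):
    maps a symbol to a fixed-size latent vector."""
    h = hashlib.sha256(symbol.encode()).digest()
    return [h[i] / 255.0 for i in range(DIM)]

def aggregate(symbols) -> list:
    """Sum-pool the per-element codes: the result does not depend on the order
    of the elements, and any number of elements is accepted."""
    z = [0.0] * DIM
    for s in symbols:
        z = [a + b for a, b in zip(z, encode(s))]
    return z

def remove(z: list, symbol: str) -> list:
    """Subtract one element's contribution from the set code."""
    return [a - b for a, b in zip(z, encode(symbol))]
```

With this convention, `aggregate(["a", "b", "c"])` equals `aggregate(["c", "a", "b"])`, and removing `"b"` from the code of `{"a", "b"}` recovers the code of `{"a"}` (up to floating-point error), which is the kind of multi-ensemblist operation the abstract describes.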

Second talk

Zhengying Liu


AutoCV Challenge Design and Baseline Results


We present the design and beta tests of a new machine learning challenge called AutoCV (for Automated Computer Vision), which is the first event in a series of challenges we are planning on the theme of Automated Deep Learning. We target applications for which Deep Learning methods have had great success in the past few years, with the aim of pushing the state of the art in fully automated methods to design the architecture of neural networks and train them without any human intervention. The tasks are restricted to multi-label image classification problems, from domains including medical, aerial, people, object, and handwriting imaging. Thus the type of images will vary widely in scale, texture, and structure. Raw data are provided (no features extracted), but all datasets are formatted in a uniform tensor manner (although images may have fixed or variable sizes within a dataset). The participants' code will be blind tested on a challenge platform in a controlled manner, with restrictions on training and test time and on memory. The challenge is part of the official selection of IJCNN 2019.
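The blind-test protocol above can be sketched in a few lines. This is not the official AutoCV API (all names here are illustrative assumptions); it only shows the general shape of a controlled evaluation in which the platform calls the participant's training code under a wall-clock budget and then asks for predictions on held-out data:

```python
import time

class ConstantModel:
    """Hypothetical participant model: predicts the most frequent training label.
    (Illustrative only; the real challenge interface may differ.)"""
    def __init__(self):
        self.label = 0

    def train(self, labels, remaining_time_s):
        # A real model would use the remaining time to decide how long to train.
        counts = {}
        for y in labels:
            counts[y] = counts.get(y, 0) + 1
        self.label = max(counts, key=counts.get)

    def test(self, n_examples):
        return [self.label] * n_examples

def blind_test(model, train_labels, n_test, budget_s=1.0):
    """Sketch of the platform side: training must fit in a wall-clock budget,
    after which predictions on unseen test examples are collected."""
    start = time.monotonic()
    model.train(train_labels, budget_s)
    if time.monotonic() - start > budget_s:
        raise TimeoutError("training exceeded the time budget")
    return model.test(n_test)
```

The point of the budget argument is that participants' code never sees the test labels and must manage its own time, which is what makes the evaluation "controlled" in the sense of the abstract.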

Contact: guillaume.charpiat at
All TAU seminars: here

Contributor(s) to this page: guillaume.
Page last modified on Monday, 24 June 2019, 18:53:47 CEST by guillaume.