NIPS Conference 2018 in Montreal

Published in event-world

The thirty-second Conference on Neural Information Processing Systems will take place this year in Montreal, from December 2nd to 8th. This international machine learning, computational neuroscience and artificial intelligence conference is, with more than 8,000 participants in 2017, the largest in the world.

The researchers of the Machine Learning for Big Data Chair will once again be well represented this year, with no fewer than six papers and two workshop communications. In addition, Professor Florence d’Alché-Buc will be a Senior Area Chair for the 2019 NIPS edition, demonstrating once again the quality of the research conducted at Télécom ParisTech.

The program and registration for NIPS 2018 can be found here.

A Structured Prediction Approach for Label Ranking

By Anna Korba, Alexandre Garcia and Florence d'Alché-Buc

We propose to solve a label ranking problem as a structured output regression task. In this view, we adopt a least squares surrogate loss approach that solves the supervised learning problem in two steps: a regression step in a well-chosen feature space and a pre-image (or decoding) step. Read more.
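To give a concrete feel for this two-step surrogate approach, here is a minimal, hypothetical sketch (not the paper's exact embedding or decoder): rankings are mapped to score vectors, a ridge regression predicts those scores, and an argsort-based decoding step returns a permutation.

```python
# Illustrative sketch (not the paper's exact method): least-squares surrogate
# regression for label ranking, with an argsort-based decoding (pre-image) step.
import numpy as np
from sklearn.linear_model import Ridge

def embed_ranking(ranking, n_labels):
    """Map a ranking (array of ranks, 0 = best) to a score vector."""
    return (n_labels - 1 - ranking) / (n_labels - 1)   # higher score = better rank

def decode_scores(scores):
    """Pre-image step: return the ranking closest to the predicted scores."""
    order = np.argsort(-scores)          # labels sorted by decreasing score
    ranking = np.empty_like(order)
    ranking[order] = np.arange(len(scores))
    return ranking

# Toy data: 3 labels, rankings generated from a linear score model.
rng = np.random.default_rng(0)
n, d, K = 200, 5, 3
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, K))
true_scores = X @ W
Y_rank = np.argsort(np.argsort(-true_scores, axis=1), axis=1)   # rank of each label

# Step 1: regression in the (embedded) output space.
Y_emb = np.stack([embed_ranking(y, K) for y in Y_rank])
reg = Ridge(alpha=1.0).fit(X, Y_emb)

# Step 2: decoding back to a permutation.
pred_rank = np.array([decode_scores(s) for s in reg.predict(X[:5])])
print(pred_rank)
```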

On Binary Classification in Extreme Regions

By Hamid Jalalzai, Stephan Clémençon and Anne Sabourin

In pattern recognition, a random label Y is to be predicted based upon observing a random vector X valued in R^d with d > 1, by means of a classification rule with minimum probability of error. In a wide variety of applications, ranging from finance and insurance to environmental sciences and teletraffic data analysis, extreme (i.e. very large) observations X are of crucial importance, while contributing only negligibly to the (empirical) error, simply because of their rarity. Read more.
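As a purely illustrative sketch of why the extreme region may deserve a dedicated classifier (this is not the authors' procedure), one can train a second classifier only on the largest-norm observations, using their angular component:

```python
# Illustrative sketch only: restrict training/evaluation to the "extreme region"
# (largest-norm observations), in the spirit of classification in extreme regions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 5000, 2
X = rng.standard_t(df=2.5, size=(n, d))          # heavy-tailed features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

norms = np.linalg.norm(X, axis=1)
threshold = np.quantile(norms, 0.95)             # keep the 5% largest observations
extreme = norms > threshold

clf_all = LogisticRegression().fit(X, y)
# Dedicated classifier trained on the angular component of extreme points only.
clf_ext = LogisticRegression().fit(X[extreme] / norms[extreme, None], y[extreme])

print("plain classifier, error on extremes  :",
      1 - clf_all.score(X[extreme], y[extreme]))
print("angular classifier, error on extremes:",
      1 - clf_ext.score(X[extreme] / norms[extreme, None], y[extreme]))
```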

Asymptotic optimality of adaptive importance sampling

By François Portier and Bernard Delyon

Adaptive importance sampling (AIS) uses past samples to update the sampling policy qt at each stage t. Each stage t consists of two steps: (i) exploring the space with nt points drawn according to qt and (ii) exploiting the information gathered so far to update the sampling policy. The fundamental question raised in this paper concerns the behavior of empirical sums based on AIS. Read more.
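A minimal sketch of this explore/exploit loop, with a Gaussian policy refitted by weighted moment matching (an illustrative choice, not the paper's analysis):

```python
# Minimal adaptive importance sampling sketch: at each stage t, (i) draw n_t
# points from the current policy q_t, (ii) update q_t from the weighted samples.
import numpy as np

rng = np.random.default_rng(0)

def target_density(x):
    """Unnormalised target: a shifted Gaussian, as a stand-in integrand."""
    return np.exp(-0.5 * np.sum((x - 2.0) ** 2, axis=1))

d, n_t, n_stages = 2, 500, 10
mu, sigma = np.zeros(d), 3.0                     # initial policy q_0 = N(mu, sigma^2 I)

all_x, all_w = [], []
for t in range(n_stages):
    # (i) explore: sample n_t points from the current policy q_t
    x = mu + sigma * rng.normal(size=(n_t, d))
    q = np.exp(-0.5 * np.sum((x - mu) ** 2, axis=1) / sigma**2) / (2 * np.pi * sigma**2) ** (d / 2)
    w = target_density(x) / q                    # importance weights
    all_x.append(x); all_w.append(w)

    # (ii) exploit: refit the policy on the weighted samples (moment matching)
    xs, ws = np.vstack(all_x), np.concatenate(all_w)
    mu = np.average(xs, axis=0, weights=ws)
    sigma = np.sqrt(np.average(np.sum((xs - mu) ** 2, axis=1), weights=ws) / d)

# Self-normalised AIS estimate of the target mean (should be close to [2, 2])
print(np.average(np.vstack(all_x), axis=0, weights=np.concatenate(all_w)))
```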

Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization

By Robert Gower, Filip Hanzely, Peter Richtarik and Sebastian Stich

We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. Read more.
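For intuition, here is a plain, non-accelerated sketch-and-project iteration for a symmetric positive definite system, the kind of update the paper accelerates; the block size and the coordinate sketching distribution are illustrative choices.

```python
# Plain sketch-and-project iteration for solving A x = b with A symmetric
# positive definite; the paper builds an *accelerated* variant of such updates.
import numpy as np

rng = np.random.default_rng(0)
n, block = 100, 10

M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)                  # well-conditioned SPD matrix
b = rng.normal(size=n)
x = np.zeros(n)

for k in range(500):
    idx = rng.choice(n, size=block, replace=False)   # random coordinate sketch S
    r = A[idx] @ x - b[idx]                          # sketched residual S^T (A x - b)
    # Project x onto {z : S^T A z = S^T b} in the A-norm:
    x[idx] -= np.linalg.solve(A[np.ix_(idx, idx)], r)

print("residual norm:", np.linalg.norm(A @ x - b))
```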

Probabilistic Pose Graph Optimization via Bingham Distributions and Tempered Geodesic MCMC

By Tolga Birdal, Umut Simsekli, Mustafa Onur Eken and Slobodan Ilic

We introduce the Tempered Geodesic Markov Chain Monte Carlo (TG-MCMC) algorithm for initializing pose graph optimization problems, arising in various scenarios such as SFM (structure from motion) or SLAM (simultaneous localization and mapping). Read more.

Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals

By Tom Dupré la Tour, Thomas Moreau, Mainak Jas and Alexandre Gramfort

Frequency-specific patterns of neural activity are traditionally interpreted as sustained rhythmic oscillations, and related to cognitive mechanisms such as attention, high-level visual processing or motor control. While alpha waves (8–12 Hz) are known to closely resemble short sinusoids, and thus are revealed by Fourier analysis or wavelet transforms, there is an evolving debate that electromagnetic neural signals are composed of more complex waveforms that cannot be analyzed by linear filters and traditional signal representations. In this paper, we propose to learn dedicated representations of such recordings using a multivariate convolutional sparse coding (CSC) algorithm. Read more.
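To illustrate the convolutional sparse coding model itself (the paper's algorithm additionally learns the atoms and handles multivariate signals with rank-1 spatial patterns), here is a toy ISTA update of the sparse activations for a single channel, with the temporal atoms held fixed:

```python
# Toy convolutional sparse coding step: given fixed temporal atoms, update the
# sparse activations of a 1-D signal by ISTA (proximal gradient on the lasso).
import numpy as np

rng = np.random.default_rng(0)
T, L, K = 1000, 32, 3                      # signal length, atom length, n_atoms

atoms = rng.normal(size=(K, L))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

# Synthetic signal: a few activations convolved with the atoms, plus noise.
z_true = np.zeros((K, T - L + 1))
z_true[rng.integers(K, size=20), rng.integers(T - L + 1, size=20)] = 1.0
signal = sum(np.convolve(z_true[k], atoms[k]) for k in range(K))
signal += 0.05 * rng.normal(size=T)

def reconstruct(z):
    """Sum of convolutions of each activation with its temporal atom."""
    return sum(np.convolve(z[k], atoms[k]) for k in range(K))

# ISTA on 0.5 * ||x - sum_k z_k * d_k||^2 + lam * ||z||_1, atoms held fixed.
z = np.zeros_like(z_true)
lam, step = 0.1, 0.01
for it in range(300):
    residual = reconstruct(z) - signal
    grad = np.stack([np.correlate(residual, atoms[k], mode="valid")
                     for k in range(K)])
    z = z - step * grad
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("reconstruction error:", np.linalg.norm(reconstruct(z) - signal))
```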

Communication: Signal and Noise Detection using Recurrent Autoencoders on Seismic Marine Data
In the Machine Learning for Geophysical & Geochemical Signals Workshop

With Mathieu Chambefort, Nicolas Salaun, Emilie Chautru, Stephan Clémençon and Guillaume Poulain

In the Big Data era, geophysicists are faced with new industrial constraints, such as processing ever more seismic data (more than 10^6 shot points per marine seismic survey) in a more timely, reliable and efficient manner. To address these challenges, the team developed a deep learning approach based on recurrent LSTM (long short-term memory) networks for the processing of seismic time series.
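A minimal recurrent (LSTM) autoencoder over fixed-length windows of seismic traces could look like the PyTorch sketch below; the architecture, window length and training loop are placeholders, not the team's actual model.

```python
# Illustrative LSTM autoencoder for fixed-length windows of seismic traces
# (architecture and hyper-parameters are placeholders, not the team's model).
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, time, n_features)
        _, (h, _) = self.encoder(x)            # h[-1]: summary code per window
        # Repeat the code at every time step and decode back to the trace.
        code = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(code)
        return self.out(dec)

model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch of 32 windows of 256 samples; replace with real seismic windows.
x = torch.randn(32, 256, 1)
for step in range(5):
    recon = model(x)
    loss = loss_fn(recon, x)   # reconstruction error can flag anomalous/noisy windows
    opt.zero_grad(); loss.backward(); opt.step()
    print(step, loss.item())
```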

Communication: Machine Learning for Survival Analysis: Empirical Risk Minimization for Censored Distribution-Free Regression with Applications
In the Machine Learning for Health Workshop (poster session)

With Guillaume Ausset, Stéphan Clémençon and François Portier

In this paper we propose a framework for empirical risk minimization in censored regression, showing that classic machine learning regression methods that make no assumption on any specific distribution can be applied to survival analysis. While providing theoretical guarantees for the results, we focus mainly on describing the algorithm and its supporting theorem. Numerical experiments are included to support the theoretical conclusions.
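As a purely illustrative reduction (not necessarily the paper's estimator), censored regression can be cast as weighted empirical risk minimization via inverse-probability-of-censoring weighting: uncensored points are reweighted by a Kaplan-Meier estimate of the censoring survival function, after which any off-the-shelf regressor that accepts sample weights can be used.

```python
# Illustrative IPCW (inverse probability of censoring weighting) reduction of
# censored regression to weighted ERM; not necessarily the paper's estimator.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
t_true = np.exp(1.0 + X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n))
censor = rng.exponential(scale=np.median(t_true) * 2, size=n)
y = np.minimum(t_true, censor)                 # observed (possibly censored) time
delta = (t_true <= censor).astype(float)       # 1 if the true time is observed

def km_censoring_survival(y, delta):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
    evaluated at each observed time y_i (a censoring event is delta == 0)."""
    n_obs = len(y)
    order = np.argsort(y)
    d_s = delta[order]
    at_risk = n_obs - np.arange(n_obs)
    factors = 1.0 - (1.0 - d_s) / at_risk
    surv = np.empty(n_obs)
    surv[order] = np.cumprod(factors)
    return np.clip(surv, 1e-3, None)

# Weighted ERM: only uncensored points contribute, reweighted by 1 / G(y_i).
G = km_censoring_survival(y, delta)
w = delta / G
reg = GradientBoostingRegressor().fit(X, np.log(y), sample_weight=w)
print("train R^2 on uncensored points:",
      reg.score(X[delta == 1], np.log(t_true[delta == 1])))
```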

Photo by Ichigo / Pixabay

Follow the conference on Twitter