Thursday, September 23, 2010

Call for papers - NIPS Workshop - Low-rank Matrix Approximation for Large-scale Learning

===============================================
CALL FOR PAPERS

Low-rank Matrix Approximation for Large-scale Learning
NIPS 2010 Workshop, Whistler, Canada
December 11, 2010

http://www.eecs.berkeley.edu/~ameet/low-rank-nips10/

Submission Deadline: October 31, 2010
===============================================

OVERVIEW

Today's data-driven society is full of large-scale datasets. In the
context of machine learning, these datasets are often represented as
large matrices that store either a set of real-valued features for
each point or pairwise similarities between points. Hence, modern
learning problems in computer vision, natural language processing,
computational biology, and other areas often face the daunting task of
storing and operating on matrices with thousands to millions of
entries. An attractive solution to this problem involves working with
low-rank approximations of the original matrix. Low-rank approximation
is at the core of widely used algorithms such as Principal Component
Analysis, Multidimensional Scaling, Latent Semantic Indexing, and
manifold learning. Furthermore, low-rank matrices appear in a wide
variety of applications including lossy data compression,
collaborative filtering, image processing, text analysis, matrix
completion, and metric learning.
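
To make this concrete: by the Eckart-Young theorem, the best rank-k
approximation of a matrix (in both the Frobenius and spectral norms)
is obtained by truncating its singular value decomposition. A minimal
NumPy sketch, purely illustrative and not part of the call:

    import numpy as np

    def low_rank_approx(A, k):
        # Best rank-k approximation of A, via the truncated SVD
        # (Eckart-Young theorem).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :k] * s[:k] @ Vt[:k, :]

    # Example: a 500 x 300 matrix of exact rank 50 is recovered
    # (up to floating point) by its rank-50 truncation.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 50)) @ rng.standard_normal((50, 300))
    err = np.linalg.norm(A - low_rank_approx(A, 50)) / np.linalg.norm(A)

For matrices too large to factor exactly, the techniques this workshop
targets aim to approximate this truncation far more cheaply.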

The NIPS workshop on "Low-rank Matrix Approximation for Large-scale
Learning" aims to survey recent work on matrix approximation with an
emphasis on usefulness for practical large-scale machine learning
problems. It will also provide a forum for researchers to discuss several
important questions associated with low-rank approximation techniques.
The workshop will begin with an introductory talk and will include
invited talks by Emmanuel Candes (Stanford), Ken Clarkson (IBM
Almaden), and Petros Drineas (RPI). There will also be several talks
on contributed papers, as well as a poster session.

We encourage submissions exploring the impact of low-rank methods for
large-scale machine learning in the form of new algorithms,
theoretical advances and/or empirical results. We also welcome work on
related topics that motivate additional interesting scenarios for use
of low-rank approximations for learning tasks. Some specific areas of
interest include randomized low-rank approximation techniques, the
effect of data heterogeneity on randomization, the performance of
various low-rank methods for large-scale tasks, and the tradeoff
between numerical precision and time/space efficiency in the context of
machine learning performance, e.g., classification or clustering
accuracy.
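
As one concrete instance of the first topic above, the randomized
range-finder of Halko, Martinsson, and Tropp approximates the top
singular subspace from a single Gaussian sketch. A minimal sketch of
that prototype (the oversampling default p=10, the seed, and the
function name are our illustrative choices):

    import numpy as np

    def randomized_svd(A, k, p=10, seed=0):
        # Prototype randomized SVD: sketch the range of A with a
        # Gaussian test matrix, orthonormalize, then solve the small
        # projected problem exactly.
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], k + p))  # test matrix
        Q, _ = np.linalg.qr(A @ Omega)  # basis for the sampled range
        B = Q.T @ A                     # small (k+p) x n problem
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

The dominant costs are two products with A plus a QR and an SVD on
thin matrices, which illustrates the precision-versus-time/space
tradeoff raised above.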

SUBMISSION GUIDELINES

Submissions should be written as extended abstracts, no longer than 4
pages in the NIPS LaTeX style. Style files and formatting instructions
can be found at http://nips.cc/PaperInformation/StyleFiles.
Submissions must be in PDF format. Authors' names and affiliations
should be included, as the review process will not be double blind.
The extended abstract may be accompanied by an unlimited appendix and
other supplementary material, with the understanding that anything
beyond 4 pages may be ignored by the program committee.

Please send your PDF submission by email to submit.lowrank@gmail.com
by October 31. Notifications will be given on or before November 15.
Work that was recently published or presented elsewhere may be
submitted, provided that the extended abstract mentions this explicitly.

ORGANIZERS

Michael Mahoney (Stanford), Mehryar Mohri (NYU, Google Research),
Ameet Talwalkar (Berkeley)

PROGRAM COMMITTEE

Alexandre d'Aspremont (Princeton), Christos Boutsidis (Rensselaer
Polytechnic Institute), Kamalika Das (NASA Ames Research Center),
Maryam Fazel (Washington), Michael I. Jordan (Berkeley), Sanjiv Kumar
(Google Research), James Kwok (Hong Kong University of Science and
Technology), Gunnar Martinsson (Colorado at Boulder)