Lower Bounds for Log-Concave and Gaussian Sampling
Abstract: Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one. In this work, we establish the following query lower bounds: (1) sampling from strongly log-concave and log-smooth distributions in dimension d \ge 2 requires \Omega(\log \kappa) queries, which is sharp in any constant dimension, and (2) sampling from Gaussians in dimension d (hence also from general log-concave and log-smooth distributions in dimension d) requires \tilde{\Omega}(\min(\sqrt{\kappa} \log d, d)) queries, which is nearly sharp for the class of Gaussians. Here, \kappa denotes the condition number of the target distribution.
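The abstract leaves \kappa implicit; under the usual conventions for strongly log-concave and log-smooth targets (an assumption on my part, not something stated in the abstract), it can be written as follows:

```latex
% Standard convention, assumed rather than quoted from the talk:
% for a target \pi \propto e^{-V} with \mu I \preceq \nabla^2 V \preceq L I,
\kappa = \frac{L}{\mu},
\qquad \text{and for a Gaussian } \mathcal{N}(0, \Sigma):\quad
\kappa = \frac{\lambda_{\max}(\Sigma)}{\lambda_{\min}(\Sigma)}.
```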
In this talk, I will focus on the lower bound for Gaussians. This lower bound is the first polynomial lower bound for general log-concave sampling when \kappa is polynomial in d, as well as the first lower bound showing that the query complexity of general log-concave sampling cannot be dimension-free (i.e., cannot depend only on \kappa). The proof is based on a novel reduction demonstrating that "block Krylov" algorithms are optimal for this problem, which we believe to be of independent interest.
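The reduction itself is not spelled out in the abstract, but to make the "block Krylov" reference concrete, here is a minimal Python sketch of a generic block Krylov method in the matrix-vector query model such algorithms operate in; the function names, parameters, and the eigenvalue-recovery use case are illustrative assumptions, not details from the talk.

```python
import numpy as np

def block_krylov_basis(matvec, dim, block_size, depth, rng):
    """Orthonormal basis for span{B, A B, ..., A^(depth-1) B},
    touching A only through matrix-vector products (the query model)."""
    B = rng.standard_normal((dim, block_size))
    blocks = [B]
    for _ in range(depth - 1):
        B = matvec(B)                       # one block of queries to A
        blocks.append(B)
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormalize the Krylov basis
    return Q

# Illustrative use: approximate the top eigenvalues of an
# ill-conditioned matrix A (think A = Sigma^{-1}) from queries alone.
rng = np.random.default_rng(0)
d, kappa = 50, 100.0
eigs = np.linspace(1.0, kappa, d)
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U @ np.diag(eigs) @ U.T

Q = block_krylov_basis(lambda X: A @ X, dim=d, block_size=5, depth=8, rng=rng)
ritz = np.linalg.eigvalsh(Q.T @ A @ Q)  # Rayleigh-Ritz values of A on the basis
print(ritz[-3:], eigs[-3:])             # top Ritz values vs. true eigenvalues
```

Each call to matvec consumes one block of queries, so block_size * depth sets the query budget; the talk's \tilde{\Omega}(\min(\sqrt{\kappa} \log d, d)) bound says that, for Gaussian sampling, roughly this many queries are unavoidable.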
Bio: Shyam Narayanan is a fourth-year Ph.D. student, advised by Piotr Indyk, in the Theory of Computation group at MIT. His main research interest is in high-dimensional algorithmic statistics and learning. More broadly, his research interests span a wide range of algorithmic areas, including testing and learning distributions, clustering, differential privacy, sublinear algorithms, and streaming algorithms.