Ainesh Bakshi
Email: ainesh (at) mit (dot) edu | Office: 2-174

I am currently a postdoc in the Computer Science and Mathematics departments at MIT, where I collaborate closely with Ankur Moitra, Sam Hopkins, and Piotr Indyk. I am broadly interested in Theoretical Computer Science, and my current research interests lie at the intersection of algorithm design and learning theory. More specifically, I spend time thinking about the sum-of-squares convex programming hierarchy, randomized numerical linear algebra, algorithmic statistics, quantum information theory, and high-dimensional probability.

I recently finished my PhD at Carnegie Mellon University, where I was extremely fortunate to be co-advised by Pravesh Kothari and David Woodruff. During the summer of 2021, I interned with Madhur Tulsiani and Yury Makarychev at the Toyota Technological Institute at Chicago, and during the summer of 2020, I interned with Ken Clarkson at IBM Almaden.

Even earlier, I was an undergrad at Rutgers, New Brunswick, where I had the pleasure of working with Martin Farach-Colton and Pranjal Awasthi.

Algorithms for Learning Latent Models: Establishing Tractability to Approaching Optimality

PhD Thesis [pdf]

Committee: Pravesh Kothari, David Woodruff, Ryan O'Donnell, Boaz Barak.

High-Temperature Gibbs States are Unentangled and Efficiently Preparable

Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang

Preprint [arXiv].

A quasi-polynomial time algorithm for Multi-Dimensional Scaling via LP Hierarchies

Ainesh Bakshi, Vincent Cohen-Addad, Samuel B. Hopkins, Rajesh Jayaram, Silvio Lattanzi

Preprint [arXiv].

Learning Quantum Hamiltonians at any Temperature in Polynomial Time

Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang

QIP 2024 (Plenary Talk and Best Student Paper) and STOC 2024 [arXiv].

An Improved Classical Singular Value Transform for Quantum Machine Learning

Ainesh Bakshi, Ewin Tang

SODA 2024 [arXiv].

A Near-Linear Time Algorithm for the Chamfer Distance

Ainesh Bakshi, Piotr Indyk, Rajesh Jayaram, Sandeep Silwal, Erik Waingarten

NeurIPS 2023 [arXiv].

Krylov Methods are (nearly) Optimal for Low-Rank Approximation

Ainesh Bakshi, Shyam Narayanan

FOCS 2023 [arXiv].

Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems

Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau

ICML 2023 [arXiv].

A New Approach to Learning a Linear Dynamical System

Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau

STOC 2023 [arXiv].

Sub-Quadratic Algorithms for Kernel Matrices via Kernel Density Estimation

Ainesh Bakshi, Piotr Indyk, Praneeth Kacham, Sandeep Silwal, Samson Zhou

ICLR 2023 (Spotlight Talk) [arXiv].

Low-Rank Approximation with 1/epsilon^(1/3) Matrix-Vector Products

Ainesh Bakshi, Ken Clarkson, David Woodruff

STOC 2022 [arXiv].

Robustly Learning Mixtures of k Arbitrary Gaussians

Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel Kane, Pravesh Kothari, Santosh Vempala

STOC 2022 [arXiv].

Robust Linear Regression: Optimal Rates in Polynomial Time

Ainesh Bakshi, Adarsh Prasad

STOC 2021 [arXiv].

Learning a Latent Simplex in Input Sparsity Time

Ainesh Bakshi, Chiranjib Bhattacharyya, Ravi Kannan, David Woodruff, Samson Zhou

ICLR 2021 (Spotlight talk) [arXiv].

List-Decodable Subspace Recovery: Dimension Independent Error in Polynomial Time

Ainesh Bakshi, Pravesh Kothari

SODA 2021 [arXiv].

Testing Positive Semi-Definiteness via Random Submatrices

Ainesh Bakshi, Nadiia Chepurko, Rajesh Jayaram

FOCS 2020 [arXiv].

Outlier-Robust Clustering of Non-spherical Mixtures

Ainesh Bakshi, Pravesh Kothari

FOCS 2020 [arXiv].

Conference version to be merged with this paper.

Robust and Sample Optimal Algorithms for PSD Low Rank Approximation

Ainesh Bakshi, Nadiia Chepurko, David Woodruff

FOCS 2020 [arXiv].

Weighted Maximum Independent Set of Geometric Objects in Turnstile Streams

Ainesh Bakshi, Nadiia Chepurko, David Woodruff

APPROX 2020 [arXiv].

Robust Communication-Optimal Distributed Clustering Algorithms

Pranjal Awasthi, Ainesh Bakshi, Nina Balcan, Colin White, David Woodruff

ICALP 2019 [arXiv].

Learning Two Layer Rectified Neural Networks in Polynomial Time

Ainesh Bakshi, Rajesh Jayaram, David Woodruff

COLT 2019 [arXiv].

Sublinear Time Low-Rank Approximation of Distance Matrices

Ainesh Bakshi, David Woodruff

NeurIPS 2018 (Spotlight Talk) [arXiv].

File Systems fated for Senescence? Nonsense, Says Science!

A. Conway, A. Bakshi, Y. Jiao, Y. Zhan, M. Bender, W. Jannen, R. Johnson, B. Kuszmaul, D. Porter, J. Yuan, M. Farach-Colton

FAST '17 [pdf].

- Program Committee: STOC 2024, COLT 2023, ALT 2023

- TA for CS 15-751: TCS Toolkit, taught by Ryan O'Donnell

- TA for CS 15-451/651: Algorithms, taught by Anupam Gupta and David Woodruff