Center for Social Information Sciences (CSIS) Seminar
Abstract: A burgeoning paradigm in algorithm design is learning-augmented algorithms, or algorithms with predictions, in which methods can take advantage of a (possibly imperfect) prediction about their instance. While past work has focused on using predictions to improve competitive ratios and runtime, this talk addresses a different, salient question: how do we learn the predictions themselves? We introduce an approach for co-designing learning-augmented algorithms with their own custom learning algorithms; the crucial step is optimizing well-behaved surrogate losses that bound the algorithms' costs. This yields improved sample complexity bounds for several learning-augmented graph algorithms and the first learning-theoretic guarantees for page migration with predictions, among other contributions. We also instantiate these ideas in the new direction of learning-augmented private algorithms, where predictions are used to reduce the utility loss incurred by privacy rather than to improve runtime. Our approach yields numerous insights on how to robustly incorporate external information to release better statistics of sensitive datasets, which we verify empirically on the task of multiple quantile release.
Bio: Misha Khodak is a PhD student in computer science at Carnegie Mellon University, advised by Nina Balcan and Ameet Talwalkar. He studies the foundations and applications of machine learning, especially meta-learning and algorithm design. Misha is a recipient of the Facebook PhD Fellowship and has interned at Google Research - New York, Microsoft Research - New England, the Lawrence Livermore National Laboratory, and the Princeton Plasma Physics Laboratory.