Instructor: Yaniv Plan
Office: 1219 Math Annex
Email:  yaniv (at) math (dot) ubc (dot) ca

Lectures: TuTh, 3:30 – 5:00pm, Henn 301.

Office hours: TBD.

Prerequisites: The course assumes knowledge of linear algebra (and some functional analysis), as well as strong probabilistic intuition.  For example, I will assume familiarity with stochastic processes, norms, singular values, and Lipschitz functions.

Overview:  We study the tools and concepts of high-dimensional probability that support the theoretical foundations of compressed sensing; these tools also apply to many other problems in machine learning and data science.
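For orientation, here is a prototypical compressed sensing result of the kind we will develop tools to prove (one standard formulation among several): given a random measurement matrix $A \in \mathbb{R}^{m \times n}$ with $m \ll n$, and measurements $y = Ax$ of an $s$-sparse vector $x$, one can recover $x$ by convex optimization,

$$\hat{x} = \operatorname*{arg\,min}_{z \in \mathbb{R}^n} \|z\|_1 \quad \text{subject to} \quad Az = y,$$

and high-dimensional probability provides the tools to show that $\hat{x} = x$ with high probability once $m \gtrsim s \log(n/s)$ (e.g., for Gaussian $A$).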

Detailed course outline: See here. (We probably won’t cover all of those topics.)

Textbook:  There is no required textbook.  The following references cover some of the material, and they are available online:

  1. R. Vershynin, High-dimensional probability.  This book has the most overlap with our course. (Our course begins by following an earlier course of Vershynin’s on high-dimensional probability.)
  2. T. Tao, Topics in random matrix theory.
  3. D. Chafaï, O. Guédon, G. Lecué, A. Pajor, Interactions between compressed sensing, random matrices and high dimensional geometry, preprint.
  4. R. Vershynin, Lectures in geometric functional analysis.
  5. R. Adler, J. Taylor, Random fields and geometry.
  6. S. Foucart, H. Rauhut, A mathematical introduction to compressive sensing.
  7. J. Lee, a nicely presented proof of the majorizing measures theorem, which gives the matching lower bound for generic chaining.
  8. Theory of deep learning class at UdeM.
  9. Berkeley 2-month program on the theory of deep learning, summer 2019.
  10. An earlier version of this course, which contains a series of notes; we will roughly follow those notes at the beginning of the course.

Grading: Students will complete a class project (in teams), including a presentation in class. The project may be a literature review or a mini-research problem.