Our work centers on three complementary directions: representation, generalization, and efficiency.

Representation

We study how deep models organize high-dimensional data into structured, low-dimensional features, and how these features support the model's behavior.

Representative Papers

Generalization

We aim to explain when deep models learn the underlying data distribution rather than memorizing individual training samples, and how the transition between these two regimes depends on model architecture, optimization, and data.

Representative Papers