Foundations of AI and scientific machine learning directions pursued by the DeepThink Lab.

Our work centers on complementary directions: representation, generalization, efficiency, scientific machine learning, and sensing.

Representation
We study how deep models organize high-dimensional data into structured, low-dimensional features that support their functionality.

Generalization
We aim to explain when deep models learn underlying distributions instead of memorizing samples, and how this transition depends on model architecture, optimization, and data.

Efficiency
We develop low-complexity principles to help deep models train and run efficiently.

Scientific Machine Learning
We develop domain-aware AI methods that accelerate scientific discovery.

Sensing
We design stable reconstruction and generative methods under noise, model mismatch, and uncertainty in sensing systems.