Research Agenda

Expressive, tractable, and trustworthy probabilistic AI.

I develop probabilistic AI systems that combine expressive generative modeling, tractable inference, and trustworthy reasoning. My work centers on probabilistic circuits and related generative models that represent uncertainty explicitly, answer probabilistic queries efficiently, and support reliable decisions under missing, noisy, or conflicting evidence.

Model complex structured distributions.

Infer probabilistic queries efficiently.

Evaluate learning dynamics and reliability.

Deploy with robustness and human guidance.

Research Map

One agenda across modeling, learning, deployment, and reasoning.

The sections are connected by a single question: how can uncertainty-aware models become expressive enough to use, reliable enough to deploy, and structured enough to inspect?

Theme 01 · 7 connected papers

Expressive Tractable Probabilistic Generative Models

Probabilistic Circuits · Normalizing Flows · Voronoi Routing · Representation Learning · Exact Inference
01 · Core Question

How can generative models capture complex data distributions while preserving useful probabilistic inference?

02 · Approach

I develop probabilistic generative models that combine the tractable semantics of probabilistic circuits with the representational power of normalizing flows, geometry-aware routing, and learned representations.

03 · Visual Model

Tractable composition with expressive structure

Diagram: a probabilistic circuit in which a sum node mixes four product nodes with weights w1-w4, each product combining univariate factors p1(x)..p4(x) and p1(y)..p4(y).
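As a minimal sketch of how such a circuit evaluates, marginalizes, and conditions in one framework (all parameter values below are illustrative, not learned):

```python
import numpy as np

# A tiny probabilistic circuit matching the diagram: a sum node with
# weights w1..w4 over four product nodes, each multiplying univariate
# Bernoulli leaves p_i(x) and p_i(y).
weights = np.array([0.1, 0.2, 0.3, 0.4])   # sum-node weights, sum to 1
px = np.array([0.9, 0.7, 0.4, 0.2])        # p_i(x = 1) per product node
py = np.array([0.8, 0.3, 0.6, 0.1])        # p_i(y = 1) per product node

def leaf(p, value):
    """Bernoulli leaf values; value=None marginalizes (leaf evaluates to 1)."""
    if value is None:
        return np.ones_like(p)
    return p if value == 1 else 1.0 - p

def circuit(x=None, y=None):
    """One bottom-up pass; None-valued variables are summed out exactly."""
    return float(weights @ (leaf(px, x) * leaf(py, y)))

joint = circuit(x=1, y=1)   # score full evidence
marg = circuit(x=1)         # marginalize y in the same single pass
cond = joint / marg         # condition: p(y = 1 | x = 1)
```

Marginalization costs no more than a likelihood evaluation here, which is the tractability the theme is about.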
04 · Research Problems

Increase expressivity without silently breaking tractable inference.

Align neural transformations and local geometry with circuit factorization.

Learn probabilistic representations that remain useful when evidence is incomplete.

05 · What This Enables

Models that can generate, score, marginalize, condition, and reason about missing or partial evidence in a single probabilistic framework.

06 · Representative Papers

Selected work connected to this theme.

B1 · 2026

Tractable and Expressive Generative Modeling with Probabilistic Flow Circuits

Neurosymbolic AI: Foundations and Applications, pages 183–222. Wiley Online Library, 2026

C12 · 2026

Geometry-Aware Probabilistic Circuits via Voronoi Tessellations

The 43rd International Conference on Machine Learning (ICML), 2026

W5 · 2025

Autoencoding Probabilistic Circuits

The Eighth Workshop on Tractable Probabilistic Modeling (TPM), 2025

Theme 02 · 3 connected papers

Understanding Learning Dynamics in Generative Models

Duality Gap · Sharpness-Aware Learning · Optimization Geometry · Hessian Structure · Score-Based Learning
01 · Core Question

How do generative models learn, converge, overfit, and generalize?

02 · Approach

I study training behavior through measurable quantities such as duality gaps, curvature, sharpness, and tractable second-order structure, connecting diagnostics for adversarial training with the optimization of probabilistic circuits (PCs).
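As a hedged illustration of the duality-gap diagnostic on a toy smooth min-max game (the game and step size are invented for exposition; the papers apply the idea to GAN training):

```python
# Toy game f(u, v) = u^2/2 - v^2/2 + u*v.  Its duality gap,
# DG(u, v) = max_v' f(u, v') - min_u' f(u', v), has the closed form
# u^2 + v^2: nonnegative everywhere and zero exactly at the
# equilibrium (0, 0), so tracking it separates convergence from cycling.

def f(u, v):
    return 0.5 * u * u - 0.5 * v * v + u * v

def duality_gap(u, v):
    # For this game the inner problems solve in closed form:
    # argmax_v' f(u, v') = u and argmin_u' f(u', v) = -v.
    return f(u, u) - f(-v, v)

def gda_step(u, v, lr=0.1):
    """One step of simultaneous gradient descent-ascent."""
    grad_u = u + v           # df/du
    grad_v = u - v           # df/dv
    return u - lr * grad_u, v + lr * grad_v

u, v = 1.0, -0.5
gaps = []
for _ in range(200):
    gaps.append(duality_gap(u, v))
    u, v = gda_step(u, v)
# For this particular game the monitored gap shrinks toward zero,
# signaling convergence rather than cycling.
```

The point of the diagnostic is that the gap is an observable scalar: it can be logged during training without knowing the equilibrium in advance.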

03 · Visual Model

Sharp and flat optima in generative learning

Diagram: sharp vs. flat optima, optimization paths, train-validation loss gaps, sharpness, and generalization behavior.
04 · Research Problems

Diagnose when adversarial generative training is converging or cycling.

Understand when expressive PCs overfit through sharp likelihood landscapes.

Use tractable model structure to design reliable generative learning objectives.

05 · What This Enables

Training procedures whose behavior can be monitored, analyzed, and improved instead of treated as a black box.

06 · Representative Papers

Selected work connected to this theme.

C14 · 2026

Tractable Sharpness-Aware Learning of Probabilistic Circuits

The 40th Annual AAAI Conference on Artificial Intelligence (AAAI), 2026

C2 · 2021

On Duality Gap as a Measure for Monitoring GAN Training

International Joint Conference on Neural Networks (IJCNN), 2021

C1 · 2021

On Characterizing GAN Convergence Through Proximal Duality Gap

Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139, 9660–9670, 2021

Theme 03 · 6 connected papers

Robust and Reliable Deployability

Few-Shot Learning · Task Shift · Robust Evaluation · Multimodal Fusion · Credibility Estimation
01 · Core Question

How can AI systems remain reliable when evidence is limited, shifted, missing, noisy, or conflicting?

02 · Approach

I design evaluation protocols and probabilistic models for deployment stressors: few-shot adaptation, task shift, missing modalities, corrupted sources, and conflicting information.

03 · Visual Model

Reliability under missing, noisy, and conflicting evidence

Robust multimodal fusion

Reliability analysis for late multimodal fusion with probabilistic circuits.
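A minimal sketch of the underlying idea, assuming a naive Bayes-style late-fusion factorization with made-up likelihood tables: a missing modality is marginalized out exactly rather than imputed, so the posterior degrades gracefully to the remaining evidence.

```python
import numpy as np

# Late fusion over two modalities under p(c, a, v) = p(c) p(a|c) p(v|c).
# A missing modality's likelihood factor sums to 1 over its values, so
# dropping it performs exact marginalization.  All numbers are illustrative.

prior = np.array([0.5, 0.5])                 # p(c) for two classes
lik_audio = {0: np.array([0.8, 0.3]),        # p(a = obs | c) per class
             1: np.array([0.2, 0.7])}
lik_video = {0: np.array([0.6, 0.1]),        # p(v = obs | c) per class
             1: np.array([0.4, 0.9])}

def posterior(audio=None, video=None):
    """p(c | observed modalities); None marginalizes that modality."""
    score = prior.copy()
    if audio is not None:
        score = score * lik_audio[audio]
    if video is not None:
        score = score * lik_video[video]
    return score / score.sum()

both = posterior(audio=0, video=0)   # agreeing evidence sharpens the posterior
audio_only = posterior(audio=0)      # video missing: fall back, stay calibrated
```

The same mechanism is what makes circuit-based fusion auditable: the effect of removing a source is an exact query, not a heuristic.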

04

Research Problems

Stress-test adaptation when tasks differ from the training regime.

Model reliability under missing, corrupted, or contradictory modalities.

Expose deployment failures that average benchmark accuracy can hide.

05 · What This Enables

Systems that know when adaptation helps, when evidence is unreliable, and how to make decisions under realistic uncertainty.

06 · Representative Papers

Selected work connected to this theme.

C13 · 2026

Context-Specific Credibility-Aware Multimodal Fusion with Conditional Probabilistic Circuits

The 29th International Conference on Information Fusion (FUSION), 2026

C6 · 2024

On the Robustness and Reliability of Late Multi-Modal Fusion using Probabilistic Circuits

The 27th International Conference on Information Fusion (FUSION), 2024

J3 · 2023

Leveraging Task Variability in Meta-Learning

SN Computer Science, 4(5), 539. Springer, 2023

Theme 04 · 6 connected papers

Human-Allied Learning and Reasoning

Human-Allied Learning · Domain Constraints · Neuro-Symbolic AI · Knowledge Graphs · Probabilistic Reasoning
01 · Core Question

How can probabilistic models incorporate human guidance, domain knowledge, constraints, and credibility-aware reasoning?

02 · Approach

I use tractable probabilistic inference as a substrate for constraint-aware learning, human feedback, knowledge-guided modeling, and auditable reasoning about source credibility.
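One way this can be sketched (an illustrative toy, not the exact formulation in the papers): because a tractable circuit answers marginal queries exactly, the probability it assigns to violating a domain constraint is itself a closed-form quantity that can be penalized during learning.

```python
import math
import numpy as np

# Toy circuit: mixture of four product nodes over binary x and y,
# with illustrative (not learned) parameters.
weights = np.array([0.25, 0.25, 0.25, 0.25])
px = np.array([0.9, 0.7, 0.4, 0.2])   # p_i(x = 1) per component
py = np.array([0.8, 0.3, 0.6, 0.1])   # p_i(y = 1) per component

def prob(x=None, y=None):
    """Exact marginal query; None sums out that variable."""
    fx = np.ones(4) if x is None else (px if x == 1 else 1.0 - px)
    fy = np.ones(4) if y is None else (py if y == 1 else 1.0 - py)
    return float(weights @ (fx * fy))

def penalized_nll(data, lam=10.0):
    """Negative log-likelihood plus a penalty on the forbidden event
    "x = 1 and y = 0", whose probability the circuit computes exactly."""
    nll = -sum(math.log(prob(x, y)) for x, y in data)
    return nll + lam * prob(1, 0)
```

Since the penalty is an exact inference result rather than a sampled estimate, the same query can later be used to audit how well a trained model respects the constraint.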

03 · Visual Model

Human guidance and probabilistic reasoning

Human-allied probabilistic circuits

A unified interface for incorporating feedback and constraints into PC learning.

04 · Research Problems

Express domain knowledge as probabilistic constraints during learning.

Support human feedback without losing calibrated uncertainty.

Use inference queries to make credibility and reasoning inspectable.

05 · What This Enables

AI systems that collaborate with human expertise, respect structured knowledge, and expose the probabilistic assumptions behind their decisions.

06 · Representative Papers

Selected work connected to this theme.

C11 · 2025

Human-Allied Relational Reinforcement Learning

The Twelfth Annual Conference on Advances in Cognitive Systems (ACS), 2025

C10 · 2025

Scalable Knowledge Graph Construction from Unstructured Text: A Case Study on Artisanal and Small-Scale Gold Mining

The 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2025

C9 · 2025

A Unified Framework for Human-Allied Learning of Probabilistic Circuits

The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), 2025

Cross-Cutting Methods

Common tools and evaluation lenses that connect the research program.

Probabilistic Circuits · Normalizing Flows · Tractable Inference · Generative Modeling · Optimization Geometry · Credibility Modeling · Domain Constraints · Human Guidance · Robust Evaluation · Multimodal Fusion