arxiv:2510.12691

DiffEM: Learning from Corrupted Data with Diffusion Models via Expectation Maximization

Published on Dec 20, 2025
Authors:

Abstract

AI-generated summary: Diffusion models are trained from corrupted data via Expectation-Maximization, alternating between reconstructing clean data in the E-step and refining the model in the M-step, with theoretical convergence guarantees.

Diffusion models have emerged as powerful generative priors for high-dimensional inverse problems, yet learning them when only corrupted or noisy observations are available remains challenging. In this work, we propose a new method for training diffusion models with Expectation-Maximization (EM) from corrupted data. Our proposed method, DiffEM, uses conditional diffusion models to reconstruct clean data from observations in the E-step, and then uses the reconstructed data to refine the conditional diffusion model in the M-step. Theoretically, we provide monotonic convergence guarantees for the DiffEM iteration under appropriate statistical conditions. We demonstrate the effectiveness of our approach through experiments on various image reconstruction tasks.
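To make the alternation concrete, below is a minimal, schematic sketch of a DiffEM-style loop. It is not the paper's implementation: it assumes 1-D toy data with additive Gaussian corruption, a small MLP noise-prediction network conditioned on the observation, a standard DDPM-style noise-prediction loss for the M-step, and simplified ancestral sampling conditioned on the observation for the E-step. All names (CondDenoiser, e_step, m_step) and hyperparameters are illustrative.

import torch
import torch.nn as nn

T = 50                                  # number of diffusion steps (illustrative)
betas = torch.linspace(1e-4, 0.02, T)   # simple linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class CondDenoiser(nn.Module):
    """Tiny MLP that predicts the noise in x_t, conditioned on the observation y."""
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, y, t):
        t_emb = t.float().unsqueeze(-1) / T          # crude scalar time embedding
        return self.net(torch.cat([x_t, y, t_emb], dim=-1))

def m_step(model, x_recon, y, opt, n_iters=200):
    """M-step: refine the conditional model on the current reconstructions."""
    for _ in range(n_iters):
        t = torch.randint(0, T, (x_recon.shape[0],))
        eps = torch.randn_like(x_recon)
        ab = alpha_bars[t].unsqueeze(-1)
        x_t = ab.sqrt() * x_recon + (1 - ab).sqrt() * eps   # forward-noised sample
        loss = ((model(x_t, y, t) - eps) ** 2).mean()       # noise-prediction loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

@torch.no_grad()
def e_step(model, y):
    """E-step: reconstruct clean data from observations by ancestral sampling given y."""
    x = torch.randn_like(y)
    for t in reversed(range(T)):
        tt = torch.full((y.shape[0],), t, dtype=torch.long)
        eps_hat = model(x, y, tt)
        a, ab = alphas[t], alpha_bars[t]
        x = (x - (1 - a) / (1 - ab).sqrt() * eps_hat) / a.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x

# Toy setup: clean x ~ N(2, 0.5^2); observations are corrupted by additive Gaussian noise.
x_true = 2.0 + 0.5 * torch.randn(512, 1)
y_obs = x_true + 0.8 * torch.randn_like(x_true)

model = CondDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_hat = y_obs.clone()                   # initialize reconstructions with the raw observations
for em_round in range(5):               # outer EM alternation
    mstep_loss = m_step(model, x_hat, y_obs, opt)
    x_hat = e_step(model, y_obs)
    print(f"EM round {em_round}: M-step loss {mstep_loss:.4f}, recon mean {x_hat.mean().item():.3f}")

In the paper the E-step reconstructions come from the learned conditional diffusion model applied to image observations; this toy only mirrors the E/M alternation structure described in the abstract.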

Get this paper in your agent:

hf papers read 2510.12691
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
