# MMaDA-Parallel-M

We introduce **MMaDA-Parallel** (Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation), a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory.
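Concretely, both modalities are denoised in one shared loop, so each step can condition text tokens on partially generated image tokens and vice versa. Below is a minimal conceptual sketch of such a parallel unmasking loop, assuming a MaskGIT-style confidence schedule over one shared discrete token sequence; the toy denoiser, vocabulary size, sequence lengths, and schedule are hypothetical stand-ins, not the paper's implementation.

```python
# Conceptual sketch of parallel text-image denoising, NOT the authors' code.
# The joint transformer is replaced by a random-logit stand-in; all names
# (MASK_ID, vocab size, lengths, schedule) are hypothetical.
import torch

TEXT_LEN, IMG_LEN = 32, 256   # hypothetical sequence lengths
VOCAB = 8192                  # hypothetical shared token vocabulary
MASK_ID = VOCAB               # reserved [MASK] token id
STEPS = 8                     # number of denoising steps

def toy_denoiser(tokens: torch.Tensor) -> torch.Tensor:
    """Stand-in for the joint transformer: one forward pass returns logits
    over the vocabulary for every position, text and image alike."""
    return torch.randn(tokens.shape[0], tokens.shape[1], VOCAB)

# Start with both modalities fully masked in a single sequence, so every
# denoising step lets text and image tokens attend to each other.
tokens = torch.full((1, TEXT_LEN + IMG_LEN), MASK_ID, dtype=torch.long)

for step in range(STEPS):
    probs = toy_denoiser(tokens).softmax(-1)
    conf, pred = probs.max(-1)
    # Only still-masked positions are candidates for unmasking this step.
    conf = conf.masked_fill(tokens != MASK_ID, -1.0)
    # Commit a growing fraction of positions, most confident first.
    target = ((step + 1) * (TEXT_LEN + IMG_LEN)) // STEPS
    n_unmask = target - (tokens != MASK_ID).sum().item()
    if n_unmask > 0:
        idx = conf.topk(n_unmask, dim=-1).indices
        tokens.scatter_(1, idx, pred.gather(1, idx))

text_tokens, image_tokens = tokens[:, :TEXT_LEN], tokens[:, TEXT_LEN:]
```

In the real model, `toy_denoiser` would be the joint diffusion transformer and the image span would be decoded back to pixels by the image tokenizer.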

This variant is built on MagVITv2 and trained from MMaDA.

[Paper](https://arxiv.org/abs/2511.09611) | Code
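The checkpoint is published as Safetensors on the Hugging Face Hub and can be fetched with `huggingface_hub`; a minimal sketch follows, assuming the repo id matches this card's title (verify the exact id on the model page).

```python
# Minimal sketch: download the MMaDA-Parallel-M checkpoint from the Hub.
from huggingface_hub import snapshot_download

# Hypothetical repo id inferred from this card's title; confirm before running.
local_dir = snapshot_download(repo_id="Gen-Verse/MMaDA-Parallel-M")
print(f"Checkpoint files downloaded to {local_dir}")
```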

## Citation

```bibtex
@article{tian2025mmadaparallel,
  title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
  author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
  journal={arXiv preprint arXiv:2511.09611},
  year={2025}
}
```
## Model Details

- Format: Safetensors
- Parameters: 8B
- Tensor type: BF16