[ECCV2024] MambaIR: A Simple Baseline for Image Restoration with State-Space Model
[CVPR2025] MambaIRv2: Attentive State Space Restoration
Introduction
Mamba-based image restoration backbones have recently demonstrated significant potential in balancing global receptive fields and computational efficiency. However, the inherent causal modeling of Mamba, where each token depends solely on its predecessors in the scanned sequence, restricts the full utilization of pixels across the whole image and thus poses new challenges for image restoration. In this work, we propose MambaIRv2, which equips Mamba with non-causal modeling ability similar to ViTs, yielding an attentive state-space restoration model. Specifically, the proposed attentive state-space equation allows attending beyond the scanned sequence and facilitates image unfolding with a single scan. Moreover, we introduce a semantic-guided neighboring mechanism to encourage interaction between distant but similar pixels. Extensive experiments show that MambaIRv2 outperforms SRFormer by up to 0.35dB PSNR on lightweight SR with 9.3% fewer parameters, and surpasses HAT on classic SR by up to 0.29dB.
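For background, the causal recurrence referred to above is the standard discretized state-space scan, h_t = A_bar * h_{t-1} + B_bar * x_t and y_t = C * h_t, so each output y_t only sees tokens up to position t. The short sketch below illustrates this plain causal scan on a flattened feature map; it is only an illustration of the causality constraint, not MambaIRv2's attentive state-space equation (see the paper for that), and all names and shapes are illustrative.

import numpy as np

def causal_ssm_scan(x, A_bar, B_bar, C):
    """Plain causal state-space scan over a flattened token sequence:
        h_t = A_bar * h_{t-1} + B_bar * x_t
        y_t = sum_n C * h_t
    Each y_t depends only on x_1..x_t, which is the causal limitation
    discussed in the abstract above.
    x:     (L, d) flattened image tokens in scan order
    A_bar: (d, n) per-channel state decay (diagonal SSM, illustrative)
    B_bar: (d, n) input projection
    C:     (d, n) output projection
    """
    L, d = x.shape
    h = np.zeros_like(A_bar)
    y = np.zeros((L, d))
    for t in range(L):  # strictly left-to-right: no access to x[t+1:]
        h = A_bar * h + B_bar * x[t][:, None]
        y[t] = (C * h).sum(axis=-1)
    return y

# toy usage: 16 tokens of a flattened 4x4 feature map, 8 channels, state size 4
rng = np.random.default_rng(0)
out = causal_ssm_scan(rng.standard_normal((16, 8)),
                      rng.uniform(0.5, 0.99, (8, 4)),
                      0.1 * rng.standard_normal((8, 4)),
                      0.1 * rng.standard_normal((8, 4)))
print(out.shape)  # (16, 8)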
Note
This repo is used for hosting MambaIR's checkpoints. For more details, please refer to https://github.com/csguoh/MambaIR.

Citation
If our work assists your research, feel free to give us a star ⭐ or cite us using:
@inproceedings{guo2025mambair,
title={MambaIR: A simple baseline for image restoration with state-space model},
author={Guo, Hang and Li, Jinmin and Dai, Tao and Ouyang, Zhihao and Ren, Xudong and Xia, Shu-Tao},
booktitle={European Conference on Computer Vision},
pages={222--241},
year={2024},
organization={Springer}
}
@article{guo2024mambairv2,
title={MambaIRv2: Attentive State Space Restoration},
author={Guo, Hang and Guo, Yong and Zha, Yaohua and Zhang, Yulun and Li, Wenbo and Dai, Tao and Xia, Shu-Tao and Li, Yawei},
journal={arXiv preprint arXiv:2411.15269},
year={2024}
}
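Since this repo only hosts checkpoint files (training and inference code live in the main repo linked in the Note above), a quick way to sanity-check a downloaded checkpoint is to inspect its state dict. This is a minimal sketch only: the filename below is a placeholder, and the wrapper keys ("params" / "state_dict") are assumptions that may differ across checkpoints.

import torch

# "mambairv2_checkpoint.pth" is a placeholder; use the file you downloaded from this repo
ckpt = torch.load("mambairv2_checkpoint.pth", map_location="cpu")

# Released restoration checkpoints often wrap weights under a key such as
# "params" or "state_dict"; fall back to the raw dict if neither is present.
state = ckpt.get("params", ckpt.get("state_dict", ckpt)) if isinstance(ckpt, dict) else ckpt

n_params = sum(v.numel() for v in state.values() if torch.is_tensor(v))
print(f"{len(state)} tensors, {n_params / 1e6:.2f}M parameters")
for name in list(state)[:5]:  # peek at the first few layer names and shapes
    print(name, tuple(state[name].shape))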