ISALux: Illumination & Semantics Aware Transformer with Mixture of Experts
Authors:
Raul Balmez, Alexandru Brateanu, Ciprian Orhei, Codruta Ancuti, Cosmin Ancuti
Abstract
We introduce ISALux, a novel transformer-based approach for Low-Light Image Enhancement (LLIE) that integrates both illumination and semantic priors.
Key contributions:
- HISA-MSA: a new self-attention block that fuses illumination and semantic-segmentation priors.
- Mixture of Experts (MoE): improves contextual learning through conditional expert activation.
- LoRA-enhanced self-attention: mitigates overfitting across diverse lighting conditions.
Extensive experiments on multiple benchmarks demonstrate state-of-the-art performance.
Ablation studies highlight the role of each proposed component.
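To make the MoE contribution above concrete, here is a generic top-k sparse-gating sketch in NumPy. It is only an illustration of conditional expert activation; the router, expert count, top-k value, and the use of plain linear experts are assumptions for this sketch, not ISALux's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tokens, d_model, n_experts, top_k = 6, 16, 4, 2

x = rng.standard_normal((n_tokens, d_model))

# Each "expert" here is a simple linear map; real experts are typically small MLPs.
experts = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_experts)]
W_gate = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe(x):
    logits = x @ W_gate                            # (tokens, experts) router scores
    idx = np.argsort(logits, axis=-1)[:, -top_k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = idx[t]
        weights = softmax(logits[t, chosen])       # renormalize over chosen experts
        for w, e in zip(weights, chosen):
            out[t] += w * (x[t] @ experts[e])      # only top-k experts are evaluated
    return out

y = moe(x)
print(y.shape)  # (6, 16)
```

The conditional part is the `argsort`/`top_k` selection: each token pays for only `top_k` of the `n_experts` forward passes, which is what lets MoE layers scale capacity without a proportional compute cost.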
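Similarly, the LoRA-enhanced self-attention can be illustrated by its core mechanism: a frozen projection weight plus a trainable low-rank residual. This is a minimal sketch of the general LoRA idea, not the authors' implementation; the dimensions, scaling, and the single query projection shown are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, rank, alpha = 64, 4, 8.0

# Frozen pretrained query-projection weight (stands in for one attention projection).
W_q = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

# LoRA factors: only these small matrices would be trained.
A = rng.standard_normal((rank, d_model)) * 0.01  # down-projection
B = np.zeros((d_model, rank))                    # up-projection, zero-initialized

def lora_query(x):
    """Query projection with a low-rank residual: x @ (W_q + (alpha/rank) * B @ A).T"""
    return x @ W_q.T + (alpha / rank) * (x @ A.T) @ B.T

x = rng.standard_normal((10, d_model))
# With B zero-initialized, the adapted projection starts identical to the frozen one,
# so training begins from the pretrained behavior.
print(np.allclose(lora_query(x), x @ W_q.T))  # True
```

Because only `A` and `B` (2 * rank * d_model parameters) are updated, the adaptation adds far fewer trainable parameters than fine-tuning `W_q` itself, which is why it helps against overfitting on small or heterogeneous low-light datasets.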
Updates
- 29.07.2025: Our paper ISALux is live on arXiv! Dive in to explore methods, results, and ablations.
Citation
@misc{balmez2025isaluxilluminationsegmentationaware,
title={ISALux: Illumination and Segmentation Aware Transformer Employing Mixture of Experts for Low Light Image Enhancement},
author={Raul Balmez and Alexandru Brateanu and Ciprian Orhei and Codruta Ancuti and Cosmin Ancuti},
year={2025},
eprint={2508.17885},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.17885},
}
Evaluation results (self-reported)
- LOL-v1: PSNR 27.630 / SSIM 0.881
- LOL-v2-Real: PSNR 29.760 / SSIM 0.908
- LOL-v2-Synthetic: PSNR 30.780 / SSIM 0.956
- SDSD-indoor: PSNR 30.670 / SSIM 0.909
- SDSD-outdoor: PSNR 31.580 / SSIM 0.895