🌌 ISALux: Illumination & Semantics Aware Transformer with Mixture of Experts

πŸ‘©β€πŸ’» Authors:
Raul Balmez, Alexandru Brateanu, Ciprian Orhei, Codruta Ancuti, Cosmin Ancuti

📄 arXiv: https://arxiv.org/abs/2508.17885


🔎 Abstract

We introduce ISALux, a novel transformer-based approach for Low-Light Image Enhancement (LLIE) that integrates both illumination and semantic priors.

✨ Key contributions:

  • HISA-MSA: A new self-attention block that fuses illumination and semantic-segmentation priors.
  • Mixture of Experts (MoE): Improves contextual learning by activating experts conditionally on the input.
  • LoRA-enhanced self-attention: Mitigates overfitting across diverse lighting conditions.
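The paper's implementation is not included in this card, but two of the listed mechanisms can be sketched generically. The snippet below is an illustrative NumPy toy (not the authors' code): a LoRA-style low-rank update added to a frozen projection weight, and top-1 Mixture-of-Experts routing through a softmax gate. All shapes, names, and the zero-initialisation of `B` are standard LoRA/MoE conventions, assumed here rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_projection(x, W, A, B, alpha=1.0):
    """y = x W^T + alpha * x (B A)^T; W stays frozen, only A (r x d) and B (d x r) train."""
    return x @ W.T + alpha * (x @ A.T) @ B.T

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(x, experts, W_gate, top_k=1):
    """Route each token to its top-k experts; each expert is a callable vector -> vector."""
    gate = softmax(x @ W_gate)                   # (tokens, n_experts) routing weights
    top = np.argsort(gate, axis=-1)[:, -top_k:]  # indices of the chosen experts per token
    y = np.zeros_like(x)
    for t in range(x.shape[0]):
        weights = gate[t, top[t]]
        weights = weights / weights.sum()        # renormalise over the chosen experts
        for w, e_idx in zip(weights, top[t]):
            y[t] += w * experts[e_idx](x[t])
    return y

d, r, n_experts, tokens = 8, 2, 4, 5
x = rng.standard_normal((tokens, d))
W = rng.standard_normal((d, d))
A = rng.standard_normal((r, d))
B = np.zeros((d, r))  # zero-init B: the LoRA update contributes nothing at step 0

y0 = lora_projection(x, W, A, B)
assert np.allclose(y0, x @ W.T)  # with B = 0 the frozen projection is unchanged

# toy experts: simple per-token scalings, just to exercise the router
experts = [(lambda s: (lambda v: v * s))(s) for s in (0.5, 1.0, 1.5, 2.0)]
W_gate = rng.standard_normal((d, n_experts))
y = moe_layer(x, experts, W_gate, top_k=1)
print(y.shape)  # (5, 8)
```

The conditional activation in the bullet above corresponds to the top-k selection: only the chosen experts run for each token, which is what keeps MoE compute sub-linear in the number of experts.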

Extensive experiments on multiple benchmarks demonstrate state-of-the-art performance.
Ablation studies highlight the role of each proposed component.


🆕 Updates

  • 29.07.2025 🎉 Our paper ISALux is live on arXiv!
    Dive in to explore methods, results, and ablations. 🚀


📚 Citation

@misc{balmez2025isaluxilluminationsegmentationaware,
  title={ISALux: Illumination and Segmentation Aware Transformer Employing Mixture of Experts for Low Light Image Enhancement}, 
  author={Raul Balmez and Alexandru Brateanu and Ciprian Orhei and Codruta Ancuti and Cosmin Ancuti},
  year={2025},
  eprint={2508.17885},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.17885}, 
}