Mimic before Reconstruct: Enhancing Masked Autoencoders with Feature Mimicking
Abstract
Masked Autoencoders (MAE) have become a popular paradigm for large-scale vision representation pre-training. However, MAE solely reconstructs low-level RGB signals after the decoder and provides no supervision on high-level semantics for the encoder, thus suffering from sub-optimal learned representations and long pre-training schedules. To alleviate this, previous methods simply replace the pixel reconstruction targets of the 75% masked tokens with encoded features from pre-trained image-image (DINO) or image-language (CLIP) contrastive learning. Different from those efforts, we propose to Mimic before Reconstruct for Masked Autoencoders, named MR-MAE, which jointly learns high-level and low-level representations without interference during pre-training. For high-level semantics, MR-MAE employs a mimic loss over the 25% visible tokens from the encoder to capture the pre-trained patterns encoded in CLIP and DINO. For low-level structures, we inherit the reconstruction loss of MAE to predict RGB pixel values for the 75% masked tokens after the decoder. As MR-MAE applies the high-level and low-level targets to different token partitions, the learning conflicts between them are naturally overcome, contributing to superior visual representations for various downstream tasks. On ImageNet-1K, the MR-MAE base pre-trained for only 400 epochs achieves 85.8% top-1 accuracy after fine-tuning, surpassing the 1600-epoch MAE base by +2.2% and the previous state-of-the-art BEiT V2 base by +0.3%. Code and pre-trained models will be released at https://github.com/Alpha-VL/ConvMAE.
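The abstract describes a two-part objective: a feature-mimicking loss on the visible tokens after the encoder and the original MAE pixel-reconstruction loss on the masked tokens after the decoder. The sketch below (PyTorch) illustrates how such a combined loss could be computed; the function name, tensor shapes, choice of cosine similarity for mimicking, and the loss weighting are assumptions for illustration and are not specified by the abstract — the official implementation is at the repository linked above.

```python
import torch
import torch.nn.functional as F


def mr_mae_loss(encoder_visible_feats, teacher_visible_feats,
                decoder_pred_pixels, target_pixels, mask,
                mimic_weight=1.0):
    """Illustrative MR-MAE-style objective (hypothetical sketch).

    encoder_visible_feats: (B, N_vis, D) encoder outputs for the 25% visible tokens
    teacher_visible_feats: (B, N_vis, D) frozen CLIP/DINO features for the same patches
    decoder_pred_pixels:   (B, N, P)     decoder predictions for all patches (P = pixels per patch)
    target_pixels:         (B, N, P)     ground-truth patch pixels (optionally normalized)
    mask:                  (B, N)        1 for masked tokens, 0 for visible tokens
    """
    # High-level target: mimic the pre-trained teacher features on visible tokens
    # (negative cosine similarity is one plausible choice; an L2 loss would also work).
    mimic_loss = (1.0 - F.cosine_similarity(
        encoder_visible_feats, teacher_visible_feats, dim=-1)).mean()

    # Low-level target: MAE-style MSE reconstruction, averaged over masked tokens only.
    per_patch_mse = ((decoder_pred_pixels - target_pixels) ** 2).mean(dim=-1)  # (B, N)
    recon_loss = (per_patch_mse * mask).sum() / mask.sum()

    # The two targets act on disjoint token partitions, so they do not interfere.
    return mimic_weight * mimic_loss + recon_loss
```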