arxiv:2212.06385

TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities

Published on Dec 13, 2022
Abstract

Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. Since almost all common modules are provided in each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
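
The abstract describes composing a pre-training model from five interchangeable components. The sketch below illustrates that idea in Python; the class and argument names are hypothetical and do not reflect TencentPretrain's actual API, only the composition pattern the paper describes.

```python
# Hypothetical sketch of the five-component composition described in the abstract;
# names are illustrative, not TencentPretrain's real interfaces.
import torch.nn as nn


class PretrainModel(nn.Module):
    """Compose a pre-training model from five interchangeable components."""

    def __init__(self, embedding, encoder, tgt_embedding=None, decoder=None, target=None):
        super().__init__()
        self.embedding = embedding          # maps tokens / patches / audio frames to vectors
        self.encoder = encoder              # e.g. a Transformer encoder stack
        self.tgt_embedding = tgt_embedding  # embedding for decoder inputs (optional)
        self.decoder = decoder              # e.g. a Transformer decoder (optional)
        self.target = target                # pre-training objective, e.g. an MLM head

    def forward(self, src, tgt=None):
        hidden = self.encoder(self.embedding(src))
        if self.decoder is not None and tgt is not None:
            hidden = self.decoder(self.tgt_embedding(tgt), hidden)
        return self.target(hidden, tgt)
```

Under this pattern, an encoder-only model (BERT-style) omits the target embedding and decoder, while an encoder-decoder model (T5-style) supplies all five components; swapping the embedding and target modules adapts the same skeleton to vision or audio pre-training.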
