SEGA: Structural Entropy Guided Anchor View for Graph Contrastive Learning
Abstract
In contrastive learning, the choice of "view" controls the information that the representation captures and influences the performance of the model. However, leading graph contrastive learning methods generally produce views via random corruption or learning, which may lose essential information and alter semantic information. An anchor view that maintains the essential information of input graphs for contrastive learning has hardly been investigated. In this paper, based on the theory of the graph information bottleneck, we deduce the definition of this anchor view; put differently, the anchor view retaining the essential information of the input graph should have minimal structural uncertainty. Furthermore, guided by structural entropy, we implement this anchor view, termed SEGA, for graph contrastive learning. We extensively validate the proposed anchor view on various graph classification benchmarks under unsupervised, semi-supervised, and transfer learning settings, and achieve significant performance gains over state-of-the-art methods.
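As background for the "minimal structural uncertainty" criterion, the sketch below illustrates the notion of structural entropy that guides SEGA. It computes only the standard one-dimensional structural entropy of a graph (degree-distribution entropy); the paper's anchor view is built from more elaborate structural-entropy machinery (e.g., coding trees), which this example does not implement. The function name and the use of networkx are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: one-dimensional structural entropy of a graph,
#   H1(G) = - sum_v (d_v / 2m) * log2(d_v / 2m),
# where d_v is the degree of node v and m is the number of edges.
# This is background for SEGA's criterion, not the paper's implementation.
import math
import networkx as nx

def one_dim_structural_entropy(G: nx.Graph) -> float:
    """Return the one-dimensional structural entropy of an unweighted graph."""
    two_m = 2 * G.number_of_edges()
    if two_m == 0:
        return 0.0
    entropy = 0.0
    for _, deg in G.degree():
        if deg > 0:
            p = deg / two_m          # stationary probability of node under random walk
            entropy -= p * math.log2(p)
    return entropy

# Example usage on a small random graph
G = nx.erdos_renyi_graph(n=20, p=0.2, seed=0)
print(f"H1(G) = {one_dim_structural_entropy(G):.3f} bits")
```

Intuitively, a view whose structure is more "certain" (lower structural entropy) discards less essential structural information, which is the property SEGA seeks in an anchor view.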