
by AK and the research community

Mar 11

Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics

We address the problem of video representation learning without human-annotated labels. While previous efforts address the problem by designing novel self-supervised tasks using video data, the features they learn are merely frame-by-frame and are not applicable to the many video analytics tasks where spatio-temporal features prevail. In this paper we propose a novel self-supervised approach to learning spatio-temporal features for video representation. Inspired by the success of two-stream approaches in video classification, we propose to learn visual features by regressing both motion and appearance statistics along the spatial and temporal dimensions, given only the input video data. Specifically, we extract statistical concepts (the fast-motion region and its dominant direction, spatio-temporal color diversity, dominant color, etc.) from simple patterns in both the spatial and temporal domains. Unlike prior puzzle-style tasks that can be hard even for humans to solve, the proposed approach is consistent with inherent human visual habits and is therefore easy to answer. We conduct extensive experiments with C3D to validate the effectiveness of the proposed approach. The experiments show that our approach can significantly improve the performance of C3D when applied to video classification tasks. Code is available at https://github.com/laura-wang/video_repres_mas.
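
The abstract describes regressing motion and appearance statistics (fast-motion regions, dominant color, spatio-temporal color diversity) as self-supervised targets. Below is a minimal NumPy sketch of how such statistics could be computed from raw frames; the function name and the specific proxies (mean absolute temporal difference for motion, per-pixel temporal standard deviation for color diversity) are illustrative simplifications, not the paper's exact definitions.

```python
import numpy as np

def clip_statistics(clip):
    """Compute simplified motion and appearance statistics for a video clip.

    clip: float array of shape (T, H, W, 3), values in [0, 1].
    Returns:
      motion    (H, W): mean absolute temporal difference per pixel,
                        a crude proxy for motion magnitude,
      dom_color (3,):   the clip's dominant (here: mean) RGB color,
      diversity (H, W): per-pixel standard deviation over time,
                        a crude proxy for spatio-temporal color diversity.
    """
    diffs = np.abs(np.diff(clip, axis=0))        # (T-1, H, W, 3) frame changes
    motion = diffs.mean(axis=(0, 3))             # average change over time and channels
    dom_color = clip.mean(axis=(0, 1, 2))        # average RGB over the whole clip
    diversity = clip.std(axis=0).mean(axis=-1)   # temporal color variation per pixel
    return motion, dom_color, diversity
```

In the self-supervised setting, statistics like these would serve as free regression labels: the network sees only the frames and is trained to predict the statistics.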

Self-supervised Video Representation Learning by Uncovering Spatio-temporal Statistics

This paper proposes a novel pretext task to address the self-supervised video representation learning problem. Specifically, given an unlabeled video clip, we compute a series of spatio-temporal statistical summaries, such as the spatial location and dominant direction of the largest motion, the spatial location and dominant color of the largest color diversity along the temporal axis, etc. A neural network is then built and trained to yield the statistical summaries given the video frames as inputs. To alleviate the learning difficulty, we employ several spatial partitioning patterns to encode rough spatial locations instead of exact spatial Cartesian coordinates. Our approach is inspired by the observation that the human visual system is sensitive to rapidly changing content in the visual field and needs only impressions of rough spatial locations to understand visual content. To validate the effectiveness of the proposed approach, we conduct extensive experiments with four 3D backbone networks, i.e., C3D, 3D-ResNet, R(2+1)D and S3D-G. The results show that our approach outperforms existing approaches across these backbone networks on four downstream video analysis tasks: action recognition, video retrieval, dynamic scene recognition, and action similarity labeling. The source code is publicly available at: https://github.com/laura-wang/video_repres_sts.
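
A key idea in the abstract is encoding the location of the largest motion as a rough region label rather than exact Cartesian coordinates. The sketch below, assuming a single regular grid partition (the paper uses several partitioning patterns), shows how that turns localization into a small classification problem; the function name and grid size are illustrative.

```python
import numpy as np

def largest_motion_block(clip, grid=(4, 4)):
    """Index of the grid cell containing the largest accumulated motion.

    clip: greyscale frames of shape (T, H, W), with H and W divisible
    by the grid dimensions. The frame is partitioned into rows x cols
    blocks; temporal change is summed within each block, and the argmax
    block is returned as a single class label in [0, rows*cols) -- a
    rough location rather than an exact coordinate pair.
    """
    T, H, W = clip.shape
    rows, cols = grid
    motion = np.abs(np.diff(clip, axis=0)).sum(axis=0)  # (H, W) accumulated change
    block_sums = motion.reshape(rows, H // rows, cols, W // cols).sum(axis=(1, 3))
    return int(np.argmax(block_sums))                   # row-major block index
```

Predicting one of 16 block labels is a far easier target than regressing pixel coordinates, which is the stated motivation for the partitioning patterns.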

Quantifying the Poor Purity and Completeness of Morphological Samples Selected by Galaxy Colour

The galaxy population is strongly bimodal in both colour and morphology, and the two measures correlate strongly, with most blue galaxies being late-types (spirals) and most early-types, typically ellipticals, being red. This observation has led to the use of colour as a convenient selection criterion for constructing samples that are then labelled by morphology. Such use of colour as a proxy for morphology results in necessarily impure and incomplete samples. In this paper, we make use of the morphological labels produced by Galaxy Zoo to measure how incomplete and impure such samples are, considering optical (ugriz), NUV and NIR (JHK) bands. The best single-colour optical selection is found using a threshold of g-r = 0.742, but this still results in a sample where only 56% of red galaxies are smooth and 56% of smooth galaxies are red. Use of the NUV gives some improvement over purely optical bands, particularly for late-types, but still results in low purity/completeness for early-types. No significant improvement is found by adding NIR bands. With any two bands, including NUV, a sample of early-types with greater than two-thirds purity cannot be constructed. Advances in quantitative galaxy morphologies have made colour-morphology proxy selections largely unnecessary going forward; where such assumptions are still required, we recommend studies carefully consider the implications of sample incompleteness/impurity.
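
The purity and completeness figures quoted above (56% each at the g-r = 0.742 cut) follow from a standard contingency-table calculation. The sketch below shows that calculation for a red-selected early-type sample; the function name and the toy arrays in the usage note are illustrative, not the paper's data.

```python
import numpy as np

def purity_completeness(g_r, is_smooth, threshold=0.742):
    """Purity and completeness of a red (g-r > threshold) early-type sample.

    g_r:       array of g-r colours.
    is_smooth: boolean array, True for morphologically smooth (early-type)
               galaxies, e.g. from Galaxy Zoo labels.
    Purity       = fraction of colour-selected (red) galaxies that are smooth.
    Completeness = fraction of smooth galaxies recovered by the colour cut.
    """
    red = g_r > threshold
    true_pos = (red & is_smooth).sum()
    purity = true_pos / red.sum()
    completeness = true_pos / is_smooth.sum()
    return purity, completeness
```

For example, a sample with two red galaxies of which one is smooth, and two smooth galaxies of which one is red, yields purity = completeness = 0.5, mirroring the structure of the 56%/56% result in the abstract.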