hyf015 committed on
Commit 7f7af01 · verified · 1 Parent(s): dadbe32

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -14,6 +14,8 @@ size_categories:
- n<1K
---

+ NOTE: Videos in huggingface are unprocessed, full-size videos. For benchmark and gaze alignment, we use processed 25fps videos. For processed data and code for benchmark, please visit the [github page](https://github.com/OpenGVLab/EgoExoLearn).
+
# EgoExoLearn
This repository contains the video data of the following paper:
> **EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World**<br>
@@ -21,5 +23,3 @@ This repository contains the video data of the following paper:
> IEEE/CVF Conference on Computer Vision and Pattern Recognition (**CVPR**), 2024<br>

EgoExoLearn is a dataset that emulates the human demonstration following process, in which individuals record egocentric videos as they execute tasks guided by exocentric-view demonstration videos. Focusing on the potential applications in daily assistance and professional support, EgoExoLearn contains egocentric and demonstration video data spanning 120 hours captured in daily life scenarios and specialized laboratories. Along with the videos we record high-quality gaze data and provide detailed multimodal annotations, formulating a playground for modeling the human ability to bridge asynchronous procedural actions from different viewpoints.
-
- Videos in huggingface are unprocessed, full-size videos. For benchmark and gaze alignment, we use processed 25fps videos. For processed data and code for benchmark, please visit the [github page](https://github.com/OpenGVLab/EgoExoLearn).
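
As a rough illustration of the 25fps preprocessing mentioned in the NOTE above, the sketch below re-encodes the raw full-size videos to a constant 25 fps with ffmpeg. The directory names are hypothetical placeholders, and this is not the official pipeline; the actual processed data and preprocessing code are on the [github page](https://github.com/OpenGVLab/EgoExoLearn).

```python
# Minimal sketch: convert the unprocessed full-size videos to constant 25 fps.
# Assumes ffmpeg is on PATH; directory names are hypothetical placeholders.
import subprocess
from pathlib import Path

RAW_DIR = Path("raw_videos")        # unprocessed videos downloaded from this repo
OUT_DIR = Path("processed_25fps")   # where the 25 fps copies are written
OUT_DIR.mkdir(parents=True, exist_ok=True)

for src in sorted(RAW_DIR.glob("*.mp4")):
    dst = OUT_DIR / src.name
    # fps=25 resamples the video stream; the audio stream is copied unchanged.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src), "-filter:v", "fps=25", "-c:a", "copy", str(dst)],
        check=True,
    )
```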