sjauhri committed · Commit cd8d9e4 · verified · 1 Parent(s): 8dc42b7

Update README.md

Added ICCV 2025 acceptance

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
  <!-- ## Dataset Details -->
  2HANDS is the 2-Handed Affordance + Narration DataSet, consisting of a large number of unimanual and bimanual object affordance segmentation masks and task narrations as affordance class-labels.
 
- - **Project Site** https://sites.google.com/view/2handedafforder
+ - **Project Site** https://sites.google.com/view/2handedafforder (ICCV 2025)
  - **Paper:** https://arxiv.org/abs/2503.09320
  - **Repository:** Coming soon
 
@@ -22,7 +22,7 @@ Egocentric images and narrations/verb classes are derived from the EPIC-KITCHENS
 
  ## Citation
  You may cite our work as:
- Heidinger, M.\*, Jauhri, S.\*, Prasad, V., & Chalvatzaki, G. (2025). 2handedafforder: Learning precise actionable bimanual affordances from human videos. arXiv preprint arXiv:2503.09320.
+ Heidinger, M.\*, Jauhri, S.\*, Prasad, V., & Chalvatzaki, G. (2025). 2handedafforder: Learning precise actionable bimanual affordances from human videos. ICCV 2025
 
  **BibTeX:**
  @misc{heidinger20252handedafforderlearningpreciseactionable,