HUJI EgoSeg Dataset

Dataset

This dataset contains 122 videos captured with an egocentric (first-person) camera. The first 44 videos (filenames Huji_*) were shot by us using a head-mounted GoPro Hero3+. The remaining videos were curated from YouTube to form 7 additional camera-wearer activity categories. We also used videos from the First-Person Social Interactions [1] and GTEA Gaze+ [2] datasets.

Please check out our EgoSeg project page.

Videos

Subject                 Sequence Name                   Size             Link
Chetan*                 Huji_Chetan_1                   452MB            [MP4]
                        Huji_Chetan_2                   1701MB           [MP4]
                        Huji_Chetan_3_Passenger_part1   3727MB           [MP4]
                        Huji_Chetan_3_Passenger_part2   897MB            [MP4]
                        Huji_Chetan_4_Dinner_Part1      623MB            [MP4]
                        Huji_Chetan_4_Dinner_Part2      621MB            [MP4]
                        Huji_Chetan_4_Dinner_Part3      616MB            [MP4]
Yair*                   Huji_Yair_1_part1               3727MB           [MP4]
                        Huji_Yair_1_part2               1332MB           [MP4]
                        Huji_Yair_2                     1332MB           [MP4]
                        Huji_Yair_3                     972MB            [MP4]
                        Huji_Yair_4                     945MB            [MP4]
                        Huji_Yair_5                     636MB            [MP4]
                        Huji_Yair_6                     623MB            [MP4]
                        Huji_Yair_7_part1               3727MB           [MP4]
                        Huji_Yair_7_part2               1740MB           [MP4]
                        Huji_Yair_8_part1               3726MB           [MP4]
                        Huji_Yair_8_part2               289MB            [MP4]
                        Huji_Yair_9_part1               3727MB           [MP4]
                        Huji_Yair_9_part2               949MB            [MP4]
                        Huji_Yair_10_Lighttrain         1029MB           [MP4]
                        Huji_Yair_11_Lighttrain         970MB            [MP4]
                        Huji_Yair_Sitting_Eating1       3672MB           [MP4]
                        Huji_Yair_Standing_Grass1       3051MB           [MP4]
Ariel                   Huji_Ariel_1                    3934MB           [MP4]
                        Huji_Ariel_2                    3934MB           [MP4]
                        Huji_Ariel_3                    967MB            [MP4]
                        Huji_Ariel_4                    3934MB           [MP4]
                        Huji_Ariel_5                    3934MB           [MP4]
                        Huji_Ariel_6                    1858MB           [MP4]
                        Huji_Ariel_7                    709MB            [MP4]
                        Huji_Ariel_8                    3934MB           [MP4]
                        Huji_Ariel_9                    1020MB           [MP4]
                        Huji_Ariel_10                   3934MB           [MP4]
                        Huji_Ariel_11                   2941MB           [MP4]
                        Huji_Ariel_12                   3149MB           [MP4]
                        Huji_Ariel_13                   1435MB           [MP4]
                        Huji_Ariel_14                   3934MB           [MP4]
                        Huji_Ariel_15                   2468MB           [MP4]
                        Huji_Ariel_16                   579MB            [MP4]
                        Huji_Ariel_17                   504MB            [MP4]
                        Huji_Ariel_18                   599MB            [MP4]
                        Huji_Ariel_19                   976MB            [MP4]
                        Huji_Ariel_20                   574MB            [MP4]
Youtube                 Biking Playlist*                5 Vids           [Youtube]
                        Sailing Playlist                15 Vids          [Youtube]
                        Horseback Playlist              9 Vids           [Youtube]
                        Skiing Playlist                 9 Vids           [Youtube]
                        Stair Climbing Playlist         9 Vids           [Youtube]
                        Running Playlist                7 Vids           [Youtube]
                        Boxing Playlist                 24 Vids          [Youtube]
GTEA Gaze+ [2]          Cooking Dataset                 30 Vids          [Link]
Chetan*, Yair*, Ariel   Huji_*                          44 Vids (83GB)   [tar]
* marks the videos that comprised the dataset of our CVPR'14 work.

Annotations

Ground-truth annotations providing a temporal segmentation of all videos into 14 different camera-wearer activities are available here.
We used ELAN for annotation. To process the annotations in MATLAB, we developed helper functions that both read and write .EAF files; see here.
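ELAN stores annotations as XML in its .EAF format, where a TIME_ORDER block maps symbolic time-slot ids to milliseconds and each tier lists annotations that reference two time slots. As an illustration only, here is a minimal Python sketch of reading such segments; it assumes time-aligned annotations with explicit TIME_VALUEs, and the function name is ours. The MATLAB helpers linked above remain the intended tooling for this dataset.

```python
import xml.etree.ElementTree as ET

def read_eaf_segments(eaf_path):
    """Return sorted (start_ms, end_ms, label) tuples from an ELAN .EAF file.

    Assumes time-aligned annotations (ALIGNABLE_ANNOTATION) whose time-slot
    references carry explicit TIME_VALUEs, the usual case for
    temporal-segmentation ground truth.
    """
    root = ET.parse(eaf_path).getroot()

    # TIME_ORDER maps symbolic time-slot ids (e.g. "ts1") to milliseconds.
    slots = {
        ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
        for ts in root.findall("./TIME_ORDER/TIME_SLOT")
        if ts.get("TIME_VALUE") is not None
    }

    segments = []
    for ann in root.iter("ALIGNABLE_ANNOTATION"):
        start = slots[ann.get("TIME_SLOT_REF1")]
        end = slots[ann.get("TIME_SLOT_REF2")]
        label = ann.findtext("ANNOTATION_VALUE", default="")
        segments.append((start, end, label))
    return sorted(segments)
```

Each returned tuple covers one activity interval, so consecutive segments on an activity tier give the wearer-activity timeline for a video.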

Publications

Please cite the following papers if you use this dataset:
@inproceedings{poleg_wacv16_compactcnn,
  title     = {Compact CNN for Indexing Egocentric Videos},
  author    = {Yair Poleg and Ariel Ephrat and Shmuel Peleg and Chetan Arora},
  year      = {2016},
  booktitle = {WACV}
}
@inproceedings{poleg_cvpr14_egoseg,
  title     = {Temporal Segmentation of Egocentric Videos},
  author    = {Yair Poleg and Chetan Arora and Shmuel Peleg},
  year      = {2014},
  booktitle = {CVPR}
}

References

[1] Alireza Fathi, Jessica K. Hodgins, James M. Rehg, Social Interactions: A First-Person Perspective, CVPR 2012.
[2] Alireza Fathi, Yin Li, James M. Rehg, Learning to Recognize Daily Actions Using Gaze, ECCV 2012.