EgoSampling dataset and output

 

EgoSampling: Fast-Forward and Stereo for Egocentric Videos

Yair Poleg1      Tavi Halperin1      Chetan Arora2      Shmuel Peleg1

1Hebrew University of Jerusalem     2IIIT Delhi

 

Input Videos for Fast-Forward Experiments

The table below lists the sequences used in our experiments. For each input sequence, we provide the result of a naive fast-forward (uniform sampling) and of our method, with both 1st- and 2nd-order smoothness terms. All results are given both before stabilization ('Raw') and after stabilization with YouTube's stabilizer ('Stabilized').
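For reference, the naive baseline is plain uniform sampling: keep one frame in every ten. Below is a minimal Python/OpenCV sketch of that baseline; the file names and the helper function are hypothetical, and this is not the code used to produce the results in the table.

import cv2

def naive_fast_forward(src, dst, start, end, speedup=10):
    """Hypothetical helper: keep every `speedup`-th frame of src in [start, end)."""
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if the container reports no FPS
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start)   # seek to the first frame of the range
    for idx in range(start, end):
        ok, frame = cap.read()
        if not ok:
            break
        if (idx - start) % speedup == 0:      # uniform sampling: one frame in every `speedup`
            out.write(frame)
    cap.release()
    out.release()

# e.g. the Bike1 range from the table (hypothetical file names):
# naive_fast_forward("bike1_source.mp4", "bike1_naive_x10.mp4", 350, 11136)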

Input Sequence                   | Start/End Frames | x10 Naive Fast-Forward | Our Output (1st Order) | Our Output (2nd Order)
Bike1 [1]                        | 350→11136        | [Raw] [Stabilized]     | [Raw] [Stabilized]     | [Raw] [Stabilized]
Bike2 [1] (same file as Bike1)   | 16150→23199      | [Raw] [Stabilized]     | [Raw] [Stabilized]     | [Raw] [Stabilized]
Bike3 [1]                        | 5800→29500       | [Raw] [Stabilized]     | [Raw] [Stabilized]     | [Raw] [Stabilized]
Walking1 [1]                     | 2800→20049       | [Raw] [Stabilized]     | [Raw] [Stabilized]     | [Raw] [Stabilized]
Walking2 [2]                     | 700→7600         | [Raw] [Stabilized]     | [Raw] [Stabilized]     | [Raw] [Stabilized]
Running1                         | 2100→15000       | [Raw] [Stabilized]     | [Raw] [Stabilized]     | [Raw] [Stabilized]
Driving2                         | 1800→12000       | [Raw] [Stabilized]     | [Raw] [Stabilized]     | [Raw] [Stabilized]
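For context on the '1st Order' columns: our method selects the output frames as a lowest-cost path through a graph whose nodes are the input frames, so that the chosen skips trade off frame-to-frame stability against the desired speedup. The sketch below illustrates that selection scheme under a placeholder per-edge cost; `stability_cost`, `target_skip`, and `lam` are hypothetical names, and the actual cost terms are defined in the paper.

def sample_frames(n_frames, stability_cost, target_skip=10, max_skip=20, lam=1.0):
    """Hypothetical sketch: pick frame indices via a shortest path over skips."""
    INF = float("inf")
    best = [INF] * n_frames          # best[j]: cheapest path cost ending at frame j
    prev = [-1] * n_frames           # back-pointers for path recovery
    best[0] = 0.0
    for j in range(1, n_frames):
        for i in range(max(0, j - max_skip), j):
            # 1st-order smoothness: penalize deviation from the target skip.
            cost = best[i] + stability_cost(i, j) + lam * (j - i - target_skip) ** 2
            if cost < best[j]:
                best[j], prev[j] = cost, i
    # Recover the selected frame indices by walking the back-pointers.
    path, j = [], n_frames - 1
    while j != -1:
        path.append(j)
        j = prev[j]
    return path[::-1]

# Toy usage: with a zero stability cost this degenerates to uniform x10 sampling.
# frames = sample_frames(200, lambda i, j: 0.0, target_skip=10)

With a real stability cost the path deviates from uniform skips wherever a nearby frame gives a steadier transition, which is the behavior the '1st Order' results above exhibit relative to the naive baseline.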

References:

[1] Johannes Kopf, Michael Cohen, Richard Szeliski. First-person Hyperlapse Videos - Supplemental Material. http://research.microsoft.com/en-us/um/redmond/projects/hyperlapse/supplementary/index.html

[2] Alireza Fathi, Jessica K. Hodgins, James M. Rehg. Social Interactions: A First-Person Perspective. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2012.

 
Back to project page

