G4_01140.mp4 Apr 2026

The video file g4_01140.mp4 is a sample from the GTEA Gaze+ (Georgia Tech Egocentric Activities) dataset. This dataset is widely used in computer vision research for egocentric (first-person) action recognition and hand-object interaction analysis.

The primary research paper associated with this dataset is:

Title: "Reaping the Benefits of Global Context for Action Recognition"
Authors: Yin Li, Zhefan Ye, and James M. Rehg
Venue: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015

Key Dataset Details

Content: Activities of daily living (cooking) recorded with a head-mounted camera and an eye-tracker.
Usage: It is often used to benchmark models that predict where a person is looking and what action they are performing simultaneously.
Naming: The naming convention g4_01140.mp4 typically identifies the subject or session (e.g., "g4" for group/subject 4) and the specific activity or sequence number.

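If you need to handle such clip names programmatically, the naming convention described above can be parsed with a small helper. This is an illustrative sketch, not an official dataset tool: the field names (`group`, `subject`, `sequence`) and the exact split between the letter tag and the subject number are assumptions based on the convention as described.

```python
import re

def parse_clip_name(filename: str) -> dict:
    """Split a GTEA Gaze+ style clip name into its parts.

    Assumes the convention described above: a subject/session tag
    (e.g. "g4") followed by an activity/sequence number, separated
    by an underscore. Field names are illustrative, not official
    dataset terminology.
    """
    match = re.fullmatch(r"([a-zA-Z]+)(\d+)_(\d+)\.mp4", filename)
    if match is None:
        raise ValueError(f"unexpected clip name: {filename!r}")
    tag, subject, sequence = match.groups()
    return {
        "group": tag,             # e.g. "g" (group/subject letter tag)
        "subject": int(subject),  # e.g. 4
        "sequence": sequence,     # e.g. "01140", kept as a string to preserve zero-padding
    }

print(parse_clip_name("g4_01140.mp4"))
# {'group': 'g', 'subject': 4, 'sequence': '01140'}
```

Keeping the sequence as a zero-padded string makes it easy to reconstruct the original filename or sort clips lexicographically.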