MYFix: Automated Fixation Annotation of Eye-Tracking Videos (Python Code and Sample Data)
Description
How to use the code?
To create the environment execute the following command:
`conda env create -f environment.yml`
Dependencies (which might require extra attention):
- torch: `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`
- YOLO: `conda install -c conda-forge ultralytics`
- transformers: `conda install -c conda-forge transformers`
To avoid version conflicts between torch and numpy, install torch first (it pulls in a compatible numpy as a dependency).
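After installation, a quick sanity check such as the following can confirm that the key packages import correctly (a minimal sketch; the CUDA check is only meaningful if you installed the cu118 build of torch):
```python
# Sanity check: confirm that torch, numpy, ultralytics and transformers import,
# and whether torch can see a CUDA device.
import torch
import numpy as np
import transformers
from ultralytics import YOLO

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("numpy:", np.__version__)
print("transformers:", transformers.__version__)
```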
To annotate your eye-tracking data automatically, run the `annotate.ipynb` notebook. Set the variable `base_path` to your data folder, `gaze_file` to the name of the gaze data file, and `video_file` to the name of the video file. The following folder structure is mandatory:
```bash
.base_path
├── gaze_file
└── video_file
```
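For illustration, the configuration cell in `annotate.ipynb` could be filled in as follows (a sketch only; the file names are placeholders and `base_path` is assumed to be a plain string):
```python
import os

base_path = "/path/to/recording"   # folder that contains the gaze and video files
gaze_file = "gaze_positions.csv"   # placeholder name of your gaze data file
video_file = "world.mp4"           # placeholder name of your scene video

# Check that both input files are present before running the rest of the notebook.
for name in (gaze_file, video_file):
    assert os.path.exists(os.path.join(base_path, name)), f"{name} not found in {base_path}"
```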
Running the script creates files in the following folder structure:
```bash
.base_path
├── extracted_frames
│ ├── frame_0.jpg
│ ├── frame_3.jpg
│ └── ...
│
├── outputs
│ ├── saved_frames_semSeg_yolo
│ │ ├── frame_0.jpg
│ │ ├── frame_3.jpg
│ │ └── ...
│ │
│ ├── confusion_matrix.png
│ ├── labeled_data_semSeg_yolo.csv
│ └── stitched_video.mp4
│
├── video_file
├── fixation_gaze_positions.csv
├── saccades_gaze_positions.csv
└── gaze_file
```
To obtain evaluation results, a manual label for each frame is required; if no manual labels are provided, the script will return an error. Enter these labels in the column `manual` of the file `labeled_data_semSeg_yolo.csv` (in the `outputs` folder).
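As an illustration, the following sketch checks how many frames still lack a manual label (it assumes the CSV produced by the script contains a `manual` column as described above, and uses pandas, which is not listed among the dependencies):
```python
import os
import pandas as pd

base_path = "/path/to/recording"  # same folder as used in annotate.ipynb
csv_path = os.path.join(base_path, "outputs", "labeled_data_semSeg_yolo.csv")

df = pd.read_csv(csv_path)
missing = df["manual"].isna().sum()
print(f"{missing} of {len(df)} frames still need a manual label")

# After filling in the 'manual' column (e.g., in a spreadsheet editor), write it back:
# df.to_csv(csv_path, index=False)
```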
Files
submission.zip
Additional details
Dates
- Submitted: 2024-02-27