Published February 27, 2024 | Version 1.0.0

MYFix: Automated Fixation Annotation of Eye-Tracking Videos (Python Code and Sample Data)

  • TU Wien

Description

To create the environment, execute the following command:

`conda env create -f environment.yml`

Dependencies (which might require extra attention):

- torch: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

- YOLO: conda install -c conda-forge ultralytics

- transformers: conda install -c conda-forge transformers

 

To avoid version conflicts between torch and numpy, install torch first; its installation pulls in a compatible numpy version anyway.

To annotate your eye-tracking data automatically, run the notebook `annotate.ipynb`: set the path of your data folder in the variable `base_path`, the name of the gaze-data file in `gaze_file`, and the name of the video file in `video_file` (see the configuration sketch below the tree). The following folder structure is mandatory:

```bash
.base_path
├── gaze_file
└── video_file
```
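
For orientation, setting these variables in the first cell of `annotate.ipynb` might look like the following; this is a minimal sketch with placeholder file names, not the notebook's actual cell:

```python
# Configuration for annotate.ipynb (the names below are example values).
base_path = "/path/to/your/recording"   # folder laid out as shown above
gaze_file = "gaze_positions.csv"        # gaze-data file inside base_path
video_file = "world.mp4"                # scene-video file inside base_path
```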

Running the notebook creates files in the following folder structure:

```bash
.base_path
├── extracted_frames
│   ├── frame_0.jpg
│   ├── frame_3.jpg
│   └── ...
├── outputs
│   ├── saved_frames_semSeg_yolo
│   │   ├── frame_0.jpg
│   │   ├── frame_3.jpg
│   │   └── ...
│   ├── confusion_matrix.png
│   ├── labeled_data_semSeg_yolo.csv
│   └── stitched_video.mp4
├── video_file
├── fixation_gaze_positions.csv
├── saccades_gaze_positions.csv
└── gaze_file
```
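
After a successful run, you can sanity-check that the expected artifacts exist, for example with a small script like this (a sketch; `base_path` is the folder you configured in the notebook):

```python
from pathlib import Path

base_path = Path("/path/to/your/recording")  # same folder as in annotate.ipynb
expected = [
    base_path / "extracted_frames",
    base_path / "outputs" / "saved_frames_semSeg_yolo",
    base_path / "outputs" / "confusion_matrix.png",  # may appear only after evaluation
    base_path / "outputs" / "labeled_data_semSeg_yolo.csv",
    base_path / "outputs" / "stitched_video.mp4",
    base_path / "fixation_gaze_positions.csv",
    base_path / "saccades_gaze_positions.csv",
]
for path in expected:
    status = "ok" if path.exists() else "MISSING"
    print(f"{status:8} {path}")
```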

To obtain evaluation results, a manual label for each frame is required; if no manual labels are provided, the script returns an error. Enter these labels in the column `manual` of the file `labeled_data_semSeg_yolo.csv` (one way to fill the column programmatically is sketched below).
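
For instance, assuming the CSV can be read with pandas (pandas is not listed among the dependencies above, so this is only a suggestion), the `manual` column could be filled like this; the label values are hypothetical and depend on your own annotation scheme:

```python
import pandas as pd

csv_path = "/path/to/your/recording/outputs/labeled_data_semSeg_yolo.csv"
df = pd.read_csv(csv_path)

# Example: fill in ground-truth labels frame by frame (hypothetical values).
manual_labels = ["door", "window", "door"]
df.loc[: len(manual_labels) - 1, "manual"] = manual_labels

df.to_csv(csv_path, index=False)
```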

Files

submission.zip (683.7 MiB, md5:1369e8ff9dec2fd61e181148846d35c1)
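
To verify the integrity of the download, compare the archive's checksum against the published md5, for example:

```python
# Compute the md5 of submission.zip in chunks (the archive is ~683 MiB).
import hashlib

md5 = hashlib.md5()
with open("submission.zip", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

print(md5.hexdigest())  # expected: 1369e8ff9dec2fd61e181148846d35c1
```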

Additional details

Created:
February 27, 2024
Modified:
April 23, 2024