Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics
Authors/Creators
- Weinreb, Caleb¹
- Pearl, Jonah E.¹
- Lin, Sherry¹
- Osman, Mohammed Abdal Monium¹
- Zhang, Libby²
- Annapragada, Sidharth¹
- Conlin, Eli¹
- Hoffman, Red¹
- Makowska, Sofia¹
- Gillis, Winthrop F.¹
- Jay, Maya¹
- Ye, Shaokai³
- Mathis, Alexander³
- Mathis, Mackenzie Weygandt³
- Pereira, Talmo⁴
- Linderman, Scott W.²
- Datta, Sandeep Robert¹
Description
Raw data for the paper Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics.
open_field_2D.zip
2D keypoints from open field recordings, used in Fig 1, Fig 2, and Fig 3a-g. The data is formatted as if it were the output of DeepLabCut so that it can be used with the keypoint-MoSeq tutorial.
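For reference, DeepLabCut-style keypoint files store a table with a three-row column header (scorer, bodypart, coordinate), where the coordinates are x, y, and likelihood. Below is a minimal loading sketch in Python, assuming the archive unpacks to DeepLabCut .csv files; the file name is hypothetical, and if the files are DeepLabCut .h5 tables instead, pandas.read_hdf yields the same structure.

```python
import numpy as np
import pandas as pd

# Hypothetical file name; substitute a file extracted from open_field_2D.zip.
df = pd.read_csv("open_field_recording_1.csv", header=[0, 1, 2], index_col=0)

# Column levels: 0 = DLC scorer, 1 = bodypart, 2 = coordinate (x, y, likelihood).
bodyparts = list(dict.fromkeys(df.columns.get_level_values(1)))

# Assemble (n_frames, n_keypoints, 2) coordinates and (n_frames, n_keypoints) confidences.
coords = np.stack(
    [df.loc[:, (slice(None), bp, ["x", "y"])].to_numpy() for bp in bodyparts], axis=1
)
confidences = np.stack(
    [df.loc[:, (slice(None), bp, "likelihood")].to_numpy().squeeze() for bp in bodyparts], axis=1
)
```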
open_field_3D.h5
3D keypoints from open field recordings, used in Fig 5g-l. The data is formatted as an h5 file with one dataset per recording. Each dataset is an array with shape (n_frames, n_keypoints, 3).
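A minimal sketch for reading the 3D keypoints with h5py, following the structure described above (one dataset per recording, each of shape (n_frames, n_keypoints, 3)):

```python
import h5py

# Load every recording in the file into a dict of numpy arrays.
coordinates = {}
with h5py.File("open_field_3D.h5", "r") as f:
    for recording_name, dataset in f.items():
        coordinates[recording_name] = dataset[()]  # (n_frames, n_keypoints, 3)

for name, coords in coordinates.items():
    n_frames, n_keypoints, _ = coords.shape
    print(f"{name}: {n_frames} frames, {n_keypoints} keypoints")
```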
accelerometry_and_keypoints.h5
2D keypoints and inertial measurement unit (IMU) readings, used in Fig 3h-i. The keypoints and IMU data can be aligned using their respective timestamps.
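One way to align the two streams is nearest-neighbor matching of timestamps. The sketch below assumes each recording stores keypoints and IMU readings together with their own timestamp vectors; the dataset paths and array shapes are hypothetical, so inspect the file for the actual layout.

```python
import h5py
import numpy as np

# Hypothetical dataset paths; check the group/dataset names in the file.
with h5py.File("accelerometry_and_keypoints.h5", "r") as f:
    keypoints = f["recording_1/keypoints"][()]               # (n_frames, n_keypoints, 2)
    keypoint_ts = f["recording_1/keypoint_timestamps"][()]   # (n_frames,)
    imu = f["recording_1/imu"][()]                           # (n_samples, n_channels)
    imu_ts = f["recording_1/imu_timestamps"][()]             # (n_samples,)

# For each video frame, take the IMU sample whose timestamp is closest.
idx = np.searchsorted(imu_ts, keypoint_ts)
idx = np.clip(idx, 1, len(imu_ts) - 1)
idx -= (keypoint_ts - imu_ts[idx - 1]) < (imu_ts[idx] - keypoint_ts)
imu_per_frame = imu[idx]  # (n_frames, n_channels), aligned to the keypoints
```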
dopamine_and_keypoints.h5
2D keypoints and striatal dopamine signals (measured using dLight), used in Fig 4. The dopamine signal is already synced to the keypoints.
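Because the dLight signal is already frame-synced, each dopamine sample can simply be indexed alongside the corresponding keypoint frame. A brief sketch with hypothetical dataset names:

```python
import h5py

# Hypothetical dataset paths; check the group/dataset names in the file.
with h5py.File("dopamine_and_keypoints.h5", "r") as f:
    keypoints = f["recording_1/keypoints"][()]  # (n_frames, n_keypoints, 2)
    dlight = f["recording_1/dlight"][()]        # (n_frames,), frame-synced to the keypoints

assert len(dlight) == len(keypoints)  # one dopamine sample per video frame
```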