Multi-Camera Action Dataset (MCAD)
- 1. Tianjin University
- 2. National University of Singapore
Description
Action recognition has received increasing attention from the computer vision and machine learning communities over the last decades. During this time, the recognition task has evolved from single-view recordings in controlled laboratory environments to unconstrained environments (e.g., surveillance footage or user-generated videos). Recent work has also focused on other aspects of the action recognition problem, such as cross-view classification, cross-domain learning, multi-modality learning, and action localization. Despite this large variety of studies, few works explore the open-set and open-view classification problem, even though it is an inherent property of action recognition: a well-designed algorithm should robustly identify an unfamiliar action as "unknown" and achieve similar performance across sensors with similar fields of view. The Multi-Camera Action Dataset (MCAD) is designed to evaluate the open-view classification problem in a surveillance environment.
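To make the open-set requirement concrete, a classifier should only commit to a known action label when it is sufficiently confident, and otherwise answer "unknown". The sketch below is purely illustrative: the threshold value, label names, and function are our own and are not part of MCAD's evaluation protocol (see protocol.json for the actual protocol).

```python
import numpy as np

def open_set_predict(class_probs, known_labels, threshold=0.5):
    """Return the most likely known action, or "unknown" when the
    classifier's confidence falls below a (hypothetical) threshold."""
    best = int(np.argmax(class_probs))
    return known_labels[best] if class_probs[best] >= threshold else "unknown"

labels = ["walk", "run", "jump"]
print(open_set_predict(np.array([0.8, 0.1, 0.1]), labels))  # confident -> "walk"
print(open_set_predict(np.array([0.4, 0.3, 0.3]), labels))  # ambiguous -> "unknown"
```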
Unlike common action datasets, MCAD uses a total of five cameras of two types (Static and PTZ) to record actions. Specifically, there are three static cameras (Cam04, Cam05, and Cam06) with a fisheye effect and two Pan-Tilt-Zoom (PTZ) cameras (PTZ04 and PTZ06). The static cameras have a resolution of 1280×960 pixels, while the PTZ cameras have a resolution of 704×576 pixels and a smaller field of view. Moreover, the illumination is not controlled: recordings were made under two contrasting conditions (daytime and nighttime), which makes the dataset more challenging than datasets captured under strongly controlled illumination. The spatial distribution of the cameras is illustrated on the dataset webpage.
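For quick reference, the camera setup can be captured as a small lookup table. This is a minimal sketch: the dict layout and field names are our own, while the camera names, types, and resolutions come from the description above.

```python
# Camera metadata as described in the text (layout is our own choice).
CAMERAS = {
    "Cam04": {"type": "Static", "resolution": (1280, 960), "fisheye": True},
    "Cam05": {"type": "Static", "resolution": (1280, 960), "fisheye": True},
    "Cam06": {"type": "Static", "resolution": (1280, 960), "fisheye": True},
    "PTZ04": {"type": "PTZ", "resolution": (704, 576), "fisheye": False},
    "PTZ06": {"type": "PTZ", "resolution": (704, 576), "fisheye": False},
}
```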
We identified 18 single-person daily actions, with and without objects, inherited from datasets such as KTH, IXMAS, and TRECVID. The list and definitions of the actions are given on the dataset webpage. The actions fall into four groups: micro actions without an object (action IDs 01, 02, 05), micro actions with an object (action IDs 10, 11, 12, 13), intense actions without an object (action IDs 03, 04, 06, 07, 08, 09), and intense actions with an object (action IDs 14, 15, 16, 17, 18). We recruited a total of 20 human subjects. Each subject repeated each action 8 times (4 times during the day and 4 times at night) under one camera, and the five cameras were used to record each action sample separately. During recording, subjects were told only the action name and could perform the action freely in their own style, provided they stayed within the field of view of the current camera. This brings the dataset much closer to real-world conditions, and as a result there is high intra-class variation among the action samples.
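These figures imply a nominal sample count, sketched below. This is only the total implied by the description (an assumption on our part); the released dataset may differ slightly if any recordings are missing.

```python
# Nominal number of video samples implied by the description:
# 18 actions x 20 subjects x 8 repetitions x 5 cameras.
actions, subjects, repetitions, cameras = 18, 20, 8, 5
print(actions * subjects * repetitions * cameras)  # 14400
```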
URL: http://mmas.comp.nus.edu.sg/MCAD/MCAD.html
Resources:
- IDXXXX.mp4.tar.gz contains the video data for each individual
- boundingbox.tar.gz contains person bounding boxes for all videos
- protocol.json contains the evaluation protocol
- img_list.txt contains the download URLs for the image version of the video data
- idt_list.txt contains the download URLs for the improved Dense Trajectory (IDT) features
- stip_list.txt contains the download URLs for the STIP features (see the download sketch after this list)
- Manually annotated 2D joints for selected camera views and action classes (available via http://zju-capg.org/heightmap/)
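A minimal sketch for fetching data via the *_list.txt files referenced above, assuming each file holds one download URL per line (the exact format is not documented here, so treat this as an assumption):

```python
import urllib.request
from pathlib import Path

def download_from_list(list_file, out_dir):
    """Download every URL listed (one per line) in list_file into out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for line in Path(list_file).read_text().splitlines():
        url = line.strip()
        if not url:
            continue  # skip blank lines
        target = out / url.rsplit("/", 1)[-1]  # name the file after the URL tail
        print(f"Fetching {url} -> {target}")
        urllib.request.urlretrieve(url, target)

# Example usage:
# download_from_list("img_list.txt", "MCAD_images")
```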
How to Cite:
Please cite the following paper if you use the MCAD dataset in your work (papers, articles, reports, books, software, etc.):
- Wenhui Liu, Yongkang Wong, An-An Liu, Yang Li, Yu-Ting Su, Mohan Kankanhalli
Multi-Camera Action Dataset for Cross-Camera Action Recognition Benchmarking
IEEE Winter Conference on Applications of Computer Vision (WACV), 2017.
http://doi.org/10.1109/WACV.2017.28
Files (5.9 GB)

| Size | MD5 checksum |
|---|---|
| 9.8 MB | c74a3bb4fbfbad480731884331cca76e |
| 333.5 MB | 113f2075ac65af82a01a2a60b86cc75b |
| 316.9 MB | 545ca35c4ffed7c592838a40ec76ef2d |
| 323.9 MB | a2bb2a1cc7bf21a4b189c3c524d20c4e |
| 258.0 MB | 5bd3a4bab77850d064f04e90058af778 |
| 308.5 MB | add54ded6f9f65a3ac2ffe8f83ec3101 |
| 309.5 MB | 37893aa835f5f53804fb9e97a8a13075 |
| 280.0 MB | 72c6691249dd0e42c8d3bac097d2cc05 |
| 326.1 MB | cec99d4cdbfa2d31829b9b18a6666215 |
| 278.0 MB | e2c0f1a6e2585f9277f5afca893de993 |
| 267.5 MB | 19f9191cd1dc23f04f3962860c6a7bda |
| 313.1 MB | 358915569b63ae33aaaac73c2137f00d |
| 297.6 MB | 27abb64bae233af907034873cc1f7d78 |
| 259.2 MB | 933be11610f76a8f232aa95d3874b92c |
| 257.6 MB | 7aa55e4465a52557e1dc0cffeb9363d9 |
| 255.4 MB | 45d52e24cb9f6b694be4c5ffc8ded77c |
| 340.5 MB | 39f6864844120b5bef65d51394aea183 |
| 291.2 MB | a90e7671c209d1a558d2b0d678c07277 |
| 290.3 MB | 186be9695a6d9d92a7303fa85e38af16 |
| 301.8 MB | 74d7ce4410e6b549ae99f0ec0e5b1465 |
| 264.6 MB | 175e0f09ae747a6b1ef9d4c629e183cc |
| 1.1 kB | ccfe39e6f947d34c5b4db553180cf1d0 |
| 1.1 kB | da636508925592de9afc25173af23a27 |
| 2.6 kB | 63dde899f9e21107e8ecdb7713d858ee |
| 1.1 kB | f25b51b10e296bb238ca45d3df406ce8 |
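Each download can be verified against the MD5 checksums above. A minimal sketch; the archive name in the example is hypothetical:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (hypothetical file name):
# assert md5sum("ID0001.mp4.tar.gz") == "c74a3bb4fbfbad480731884331cca76e"
```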
Additional details
Related works
- Is part of: 10.1109/WACV.2017.28 (DOI)