We offer a benchmark suite together with an evaluation server, so that authors can upload their results and receive a ranking. The dataset contains more than 25,000 images: 15,403 for the training set, 5,000 for the validation set, and 5,000 for the test set. If you would like to submit your results, please follow the instructions on our submission page.
Note: We only display results that are accompanied by a reasonably detailed method description.
Following MPII, we use mAP (%) as the evaluation measure.
All teams with successful submissions have a placeholder in the leaderboard, and the results of all teams will be released on 10 June. The winner of the challenge is the team with the highest mAP score.
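For readers unfamiliar with the metric, the sketch below illustrates the general idea behind a PCKh-style accuracy averaged over joints, which underlies the MPII-style mAP. It is a simplified illustration only, not the official evaluation code; the array shapes, the `head_size` normalisation, and the function names are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch only: the official protocol follows MPII (PCKh-based
# AP per joint, averaged into mAP). Shapes and normalisation are assumptions.

def pckh_correct(pred, gt, head_size, thresh=0.5):
    """Mark keypoints whose prediction lies within thresh * head_size
    of the ground truth (the PCKh criterion)."""
    dist = np.linalg.norm(pred - gt, axis=-1)          # (num_people, num_joints)
    return dist <= thresh * head_size[:, None]

def mean_ap(pred, gt, head_size, visible):
    """Average per-joint accuracy over visible keypoints, then over joints.
    A simplified stand-in for the MPII-style mAP shown on the leaderboard."""
    correct = pckh_correct(pred, gt, head_size)
    per_joint = []
    for j in range(gt.shape[1]):
        mask = visible[:, j]
        if mask.any():
            per_joint.append(correct[mask, j].mean())
    return 100.0 * float(np.mean(per_joint))

# Toy usage with random data (2 people, 16 joints, 2-D coordinates).
rng = np.random.default_rng(0)
gt = rng.uniform(0, 100, size=(2, 16, 2))
pred = gt + rng.normal(0, 5, size=gt.shape)
head_size = np.array([30.0, 25.0])
visible = np.ones((2, 16), dtype=bool)
print(f"mAP (toy example): {mean_ap(pred, gt, head_size, visible):.1f}%")
```

The full MPII multi-person evaluation additionally matches predicted poses to ground-truth poses before scoring; the sketch omits that step for brevity.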
Method | mAP (%) | Details | Abbreviation | Submission Time |
---|---|---|---|---|
Baseline | 55.8 | Details | Abbreviation | 2018-04-12 11:00:00 |
JDAI-Human | 72.2 | Details | Abbreviation | 2018-06-10 12:21:00 |
OSU-Human | 59.2 | Details | Abbreviation | 2018-05-30 09:31:00 |
RNG | 57.8 | Details | Abbreviation | 2018-05-31 22:57:00 |
MJDG | 69.9 | Details | Abbreviation | 2018-06-10 19:25:00 |