Note: This is a Hugging Face dataset. Learn how to load datasets from the Hub in the Hugging Face integration docs.
Dataset Card for DensePose-COCO
DensePose-COCO is a large-scale ground-truth dataset with image-to-surface correspondences manually annotated on COCO images.

This is a FiftyOne dataset with 33929 samples.
Installation
If you haven’t already, install FiftyOne:
pip install -U fiftyone
Usage
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/DensePose-COCO")
# Launch the App
session = fo.launch_app(dataset)
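To work with a smaller subset, you can pass additional arguments to load_from_hub(). The snippet below is a minimal sketch that assumes only the max_samples argument mentioned in the comment above; adjust it to your needs.

import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load only the first 1000 samples for a quick look
small_dataset = fouh.load_from_hub(
    "Voxel51/DensePose-COCO",
    max_samples=1000,  # limit how many samples are downloaded
)

# Launch the App on the subset
session = fo.launch_app(small_dataset)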
Dataset Details
Dataset Description
Curated by: Rıza Alp Güler, Natalia Neverova, Iasonas Kokkinos
Language(s) (NLP): en
License: cc-by-nc-2.0
Dataset Sources
Repository: https://github.com/facebookresearch/Densepose
Paper: https://arxiv.org/abs/1802.00434
Homepage: http://densepose.org/
Uses
Dense human pose estimation
Dataset Structure
Name:        DensePoseCOCO
Media type:  image
Num samples: 33929
Persistent:  False
Tags:        []
Sample fields:
    id:            fiftyone.core.fields.ObjectIdField
    filepath:      fiftyone.core.fields.StringField
    tags:          fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)
    metadata:      fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)
    detections:    fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    segmentations: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
    keypoints:     fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Keypoints)
The dataset has 2 splits: “train” and “val”. Samples are tagged with their split.
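As a minimal sketch of how these fields and split tags can be used (standard FiftyOne API; the view and variable names are illustrative), you can build per-split views and inspect a sample's label fields:

import fiftyone as fo
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/DensePose-COCO")

# Build a view for each split from the split tags
train_view = dataset.match_tags("train")
val_view = dataset.match_tags("val")
print(len(train_view), len(val_view))

# Inspect the label fields of a single sample
sample = dataset.first()
print(sample.detections)     # bounding box detections
print(sample.segmentations)  # segmentation masks (stored as Detections)
print(sample.keypoints)      # keypoint annotations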
Dataset Creation
Curation Rationale
Please refer to the homepage and the paper for the curation rationale.
Annotation Process
Please refer to the GitHub repository for the annotation process.
Citation
BibTeX:
@InProceedings{Guler2018DensePose,
  title={DensePose: Dense Human Pose Estimation In The Wild},
  author={R{\i}za Alp G\"uler and Natalia Neverova and Iasonas Kokkinos},
  booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}