Note: This is a Hugging Face dataset. Learn how to load datasets from the Hub in the Hugging Face integration docs.
Dataset Card for EMNIST-Letters-10k
A random subset of the train and test splits from the letters portion of EMNIST.

This is a FiftyOne dataset with 10,000 samples.
Installation
If you haven’t already, install FiftyOne:
pip install -U fiftyone
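Loading datasets from the Hugging Face Hub also relies on the huggingface_hub client library. A minimal sketch, assuming it is not already installed in your environment (this step is not stated in the original card):
pip install -U huggingface_hub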
Usage
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load the dataset
# Note: other available arguments include 'max_samples', etc.
dataset = load_from_hub("Voxel51/emnist-letters-tiny")
# Launch the App
session = fo.launch_app(dataset)
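If you only need a quick look at the data, you can cap how much is loaded with the max_samples argument mentioned in the comment above. A minimal sketch; the sample count of 500 is an arbitrary illustrative choice, not a value from the card:
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub
# Load only a small slice of the dataset for quick inspection
# (max_samples caps the number of samples that are loaded)
small_dataset = load_from_hub(
    "Voxel51/emnist-letters-tiny",
    max_samples=500,
)
# Browse the subset in the App
session = fo.launch_app(small_dataset)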
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): en
License: [More Information Needed]
Dataset Sources
Homepage: https://www.nist.gov/itl/products-and-services/emnist-dataset
Paper: https://arxiv.org/abs/1702.05373v1
Citation
BibTeX:
@misc{cohen2017emnistextensionmnisthandwritten,
      title={EMNIST: an extension of MNIST to handwritten letters},
      author={Gregory Cohen and Saeed Afshar and Jonathan Tapson and André van Schaik},
      year={2017},
      eprint={1702.05373},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/1702.05373},
}