Note

This is a community plugin, an external project maintained by its respective author. Community plugins are not part of FiftyOne core and may change independently. Please review each plugin’s documentation and license before use.

FiftyOne VLM Efficient#

A toolkit for improving Vision-Language Model (VLM) training data quality in FiftyOne using state-of-the-art research techniques.

Project Vision#

In the rapidly evolving field of Vision-Language Models (VLMs), data quality has emerged as a critical factor in model performance. This project aims to make advanced data quality improvement techniques accessible to anyone working with VLMs, integrating proven methods from recent research projects such as NVIDIA's VILA and LLaVA.

Our goal is to serve everyone, from individual developers and hobbyists to large companies. We provide easy-to-use plugins that improve training data efficiency and quality for vision-language models, without requiring deep expertise in the field.

Key Features#

1. Dataset Pruning#

  • SIEVE Method - State-of-the-art multimodal dataset pruning using image captioning models (a minimal sketch of the idea follows)
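
To make the approach concrete, here is a minimal sketch of the SIEVE idea built from the libraries listed in the Installation section below. It illustrates the technique only and is not the plugin's actual API; the dataset name, the caption field, the BLIP and MiniLM model choices, and the 0.5 threshold are all assumptions.

import fiftyone as fo
from fiftyone import ViewField as F
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# Pretrained captioning model and text encoder (illustrative choices)
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
encoder = SentenceTransformer("all-MiniLM-L6-v2")

dataset = fo.load_dataset("my-vlm-dataset")  # hypothetical dataset name

for sample in dataset.iter_samples(autosave=True):
    # Generate a caption for the image with the pretrained model
    generated = captioner(sample.filepath)[0]["generated_text"]

    # Score alignment between the generated caption and the web caption
    embs = encoder.encode([generated, sample["caption"]], convert_to_tensor=True)
    sample["sieve_score"] = float(util.cos_sim(embs[0], embs[1]))

# Prune low-alignment pairs (the 0.5 threshold is illustrative)
pruned_view = dataset.match(F("sieve_score") > 0.5)
print(f"Kept {len(pruned_view)} of {len(dataset)} samples")

Pairs whose generated caption disagrees with their web caption receive low scores, and pruning them is the core signal SIEVE exploits.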

Installation#

First, install the base requirements:

pip install -U fiftyone transformers accelerate einops timm torch sentence-transformers numpy Pillow
pip install git+https://github.com/openai/CLIP.git

Then, install the VLM plugin:

fiftyone plugins download https://github.com/AdonaiVera/fiftyone-vlm-efficient
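
You can verify that the plugin is available with:

fiftyone plugins list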

Research-Backed Techniques#

This toolkit currently integrates the following approach from recent research:

  1. SIEVE Method (CVPR 2024) - prunes noisy image-text pairs by scoring web captions against captions generated by a pretrained image captioning model

Future Work#

We plan to integrate additional research-backed techniques:

  1. Pruning Methods

  2. Advanced Pre-processing

  3. Prompt Engineering

  4. Knowledge Distillation

Usage Examples#

This example demonstrates how to load and process the LAION GPT4Vision captions dataset:

import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset from the Hugging Face Hub
dataset = load_from_hub(
    "laion/220k-GPT4Vision-captions-from-LIVIS",
    format="parquet",
    filepath="url",
    max_samples=10,
)

# Launch the FiftyOne app for visualization
session = fo.launch_app(dataset)

# Keep the app running until manually closed
session.wait()

The LAION dataset contains high-quality GPT4Vision-generated captions (see the snippet after this list for how to inspect them) that can be used to:

  • Train and evaluate VLM models

  • Benchmark caption quality

  • Generate reference captions for quality assessment

  • Fine-tune captioning models
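
For example, after loading the dataset you can inspect and filter these captions directly in FiftyOne. The "caption" field name below is an assumption; print dataset.first() to see which parquet columns were mapped onto the samples.

from fiftyone import ViewField as F

# Inspect which fields load_from_hub created on each sample
print(dataset.first())

# Assuming a "caption" field: keep samples with reasonably detailed captions
detailed = dataset.match(F("caption").strlen() > 50)
print(f"{len(detailed)} of {len(dataset)} samples have detailed captions")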

Citations#

@inproceedings{mahmoud2024sieve,
  title={Sieve: Multimodal Dataset Pruning Using Image Captioning Models},
  author={Mahmoud, Anas and Elhoushi, Mostafa and Abbas, Amro and Yang, Yu and Ardalani, Newsha and Leather, Hugh and Morcos, Ari},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}

Contributing#

We welcome contributions! If you’re working on VLM training and have techniques that could improve data quality, please reach out.