fiftyone.utils.clip.tokenizer#

CLIP text tokenizer from openai/CLIP.

Copyright 2017-2025, Voxel51, Inc.

Functions:

default_bpe()

Returns the path to the default BPE vocabulary file.

bytes_to_unicode()

Returns a mapping between utf-8 bytes and corresponding unicode strings.

get_pairs(word)

Returns the set of symbol pairs in a word.

basic_clean(text)

Fixes text encoding issues and unescapes HTML entities.

whitespace_clean(text)

Collapses runs of whitespace into single spaces.

Classes:

SimpleTokenizer([bpe_path])

CLIP's byte-pair encoding (BPE) text tokenizer.

fiftyone.utils.clip.tokenizer.default_bpe()#

Returns the path to the default BPE vocabulary file bundled with the package.
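For reference, a minimal sketch of how to inspect the default vocabulary path (the exact location depends on your installation):

from fiftyone.utils.clip.tokenizer import default_bpe

# Path to the gzipped BPE vocabulary shipped alongside this module
print(default_bpe())  # .../fiftyone/utils/clip/bpe_simple_vocab_16e6.txt.gz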
fiftyone.utils.clip.tokenizer.bytes_to_unicode()#

Returns a mapping between utf-8 bytes and corresponding unicode strings.

The reversible BPE codes work on unicode strings, which means you need a large number of unicode characters in your vocabulary if you want to avoid UNKs.

At the scale of a ~10B token dataset, you end up needing around 5K unicode characters for decent coverage, which is a significant fraction of a typical 32K BPE vocabulary. To avoid that, this function provides lookup tables between utf-8 bytes and unicode strings.

The mapping also avoids whitespace/control characters, which the BPE code cannot handle.
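A minimal usage sketch; the round trip below relies only on the reversibility of the mapping described above:

from fiftyone.utils.clip.tokenizer import bytes_to_unicode

# Forward table: byte value -> printable unicode character
byte_encoder = bytes_to_unicode()

# Printable ASCII bytes map to themselves
assert byte_encoder[ord("a")] == "a"

# The mapping is reversible, so raw bytes can be recovered exactly
byte_decoder = {v: k for k, v in byte_encoder.items()}
text = "hello world"
encoded = "".join(byte_encoder[b] for b in text.encode("utf-8"))
decoded = bytes(byte_decoder[c] for c in encoded).decode("utf-8")
assert decoded == text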

fiftyone.utils.clip.tokenizer.get_pairs(word)#

Returns the set of symbol pairs in a word.

word is represented as a tuple of symbols (symbols being variable-length strings).
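For example, treating a word as a tuple of symbols (which may span multiple characters after earlier merges):

from fiftyone.utils.clip.tokenizer import get_pairs

word = ("l", "o", "w", "er")  # symbols may be multi-character strings
print(get_pairs(word))
# {('l', 'o'), ('o', 'w'), ('w', 'er')}  (a set, so order may vary)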

fiftyone.utils.clip.tokenizer.basic_clean(text)#

Fixes text encoding issues and unescapes HTML entities in text.

fiftyone.utils.clip.tokenizer.whitespace_clean(text)#

Collapses runs of whitespace in text into single spaces.
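A short sketch of the two cleaning helpers, assuming the standard ftfy/HTML-unescape and whitespace-collapsing behavior of CLIP's tokenizer:

from fiftyone.utils.clip.tokenizer import basic_clean, whitespace_clean

# basic_clean() repairs text encoding problems and unescapes HTML entities
print(basic_clean("caf&eacute;"))  # 'café'

# whitespace_clean() collapses all whitespace runs into single spaces
print(whitespace_clean("a  photo\n\tof a   dog "))  # 'a photo of a dog'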
class fiftyone.utils.clip.tokenizer.SimpleTokenizer(bpe_path: str = default_bpe())#

Bases: object

CLIP's byte-pair encoding (BPE) text tokenizer.

Methods:

bpe(token)

Applies BPE merges to a single token.

encode(text)

Encodes text into a list of BPE token IDs.

decode(tokens)

Decodes a list of BPE token IDs back into text.

bpe(token)#

Applies BPE merges to a single token and returns the merged subword units as a space-separated string.

encode(text)#

Encodes text into a list of BPE token IDs.

decode(tokens)#

Decodes a list of BPE token IDs back into text.
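A minimal usage sketch of the encode/decode round trip. Exact token IDs depend on the bundled vocabulary, so none are shown here:

from fiftyone.utils.clip.tokenizer import SimpleTokenizer

tokenizer = SimpleTokenizer()  # loads the bundled BPE vocabulary

# encode() cleans and lowercases the text, then maps BPE subwords to IDs
token_ids = tokenizer.encode("A photo of a dog")
print(token_ids)  # a list of ints; values depend on the vocabulary

# decode() inverts the mapping, restoring word boundaries from '</w>' markers
print(tokenizer.decode(token_ids))  # 'a photo of a dog '

Note that encode() returns only the text tokens; in OpenAI's CLIP, start/end-of-text markers and padding to the model's context length are added by the calling code.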