fiftyone.utils.clip.tokenizer
CLIP text tokenizer from https://github.com/openai/CLIP.
Functions:

- default_bpe()
- bytes_to_unicode(): Returns a list of utf-8 bytes and a corresponding list of unicode strings.
- get_pairs(word): Returns the set of symbol pairs in a word.
- basic_clean(text)
- whitespace_clean(text)
Classes:
fiftyone.utils.clip.tokenizer.default_bpe()
fiftyone.utils.clip.tokenizer.bytes_to_unicode()

Returns a list of utf-8 bytes and a corresponding list of unicode strings.

The reversible BPE codes work on unicode strings, so you need a large number of unicode characters in your vocab if you want to avoid UNKs. At something like a 10B-token dataset you end up needing around 5K characters for decent coverage, which is a significant fraction of a typical 32K BPE vocab. To avoid that, this function provides lookup tables between utf-8 bytes and unicode strings, and it avoids mapping to whitespace/control characters that the BPE code barfs on.
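As a minimal sketch of such a table, following the approach used in the reference CLIP implementation (the exact byte ranges here are an assumption based on that code): printable bytes map to themselves, and every remaining byte is shifted up past 255 so that no entry is a whitespace or control character.

```python
def bytes_to_unicode():
    """Build a reversible byte -> unicode-character lookup table.

    Bytes that are already "nice" printable characters map to themselves;
    all other bytes (whitespace, control characters, etc.) are remapped to
    code points 256 and above so BPE never sees problematic characters.
    """
    # Printable ASCII plus two printable Latin-1 ranges map to themselves
    bs = (
        list(range(ord("!"), ord("~") + 1))
        + list(range(ord("¡"), ord("¬") + 1))
        + list(range(ord("®"), ord("ÿ") + 1))
    )
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            # Remap every remaining byte to an unused code point >= 256
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))
```

Because the mapping is a bijection over all 256 byte values, inverting the dict recovers the original bytes exactly.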
fiftyone.utils.clip.tokenizer.get_pairs(word)

Returns the set of symbol pairs in a word.

word is represented as a tuple of symbols (symbols being variable-length strings).
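A sketch of what this helper computes, assuming the standard BPE formulation: it walks the symbol tuple once and collects every adjacent pair, which the merge loop then scores to pick the next pair to merge.

```python
def get_pairs(word):
    """Return the set of adjacent symbol pairs in ``word``.

    ``word`` is a tuple of variable-length string symbols, e.g. a
    partially merged token like ("h", "e", "ll", "o").
    """
    pairs = set()
    prev_symbol = word[0]
    for symbol in word[1:]:
        pairs.add((prev_symbol, symbol))
        prev_symbol = symbol
    return pairs

# The three adjacent pairs of a partially merged word
pairs = get_pairs(("h", "e", "ll", "o"))
```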
fiftyone.utils.clip.tokenizer.basic_clean(text)
fiftyone.utils.clip.tokenizer.whitespace_clean(text)
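Neither cleaning helper carries a docstring above. In the reference CLIP repository, basic_clean applies ftfy text fixing plus HTML unescaping, while whitespace_clean normalizes whitespace; a minimal equivalent of the latter (a sketch, not necessarily the exact upstream code) is:

```python
import re


def whitespace_clean(text):
    """Collapse any run of whitespace into a single space and trim the ends."""
    text = re.sub(r"\s+", " ", text)
    return text.strip()
```

This ensures tabs, newlines, and repeated spaces in the input caption cannot produce distinct tokenizations of otherwise identical text.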