GPT-3.5-turbo-16k Tokenizer
A 🤗-compatible version of the GPT-3.5-turbo-16k tokenizer (adapted from openai/tiktoken). This means it can be used with Hugging Face libraries including Transformers, Tokenizers, and Transformers.js.
Example usage:
Transformers/Tokenizers
from transformers import GPT2TokenizerFast

# Load the tokenizer from the Hugging Face Hub
tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/gpt-3.5-turbo-16k')
assert tokenizer.encode('hello world') == [15339, 1917]
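Decoding reverses the mapping, so a round trip recovers the original string. A minimal sketch, reusing the tokenizer loaded above:

tokens = tokenizer.encode('hello world')  # [15339, 1917]
# decode() maps the token IDs back to text
assert tokenizer.decode(tokens) == 'hello world'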
Transformers.js
import { AutoTokenizer } from '@xenova/transformers';
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt-3.5-turbo-16k');
const tokens = tokenizer.encode('hello world'); // [15339, 1917]