# GPT-2 Fine-Tuned Model
This is a fine-tuned version of GPT-2 for text generation, tuned to produce more coherent and contextually relevant text.
## Model Details
- Model Name: GPT-2 Fine-Tuned
- Base Model: gpt2
- Architecture: GPT2LMHeadModel
- Tokenization: Supported
- `pad_token_id`: 50256
- `bos_token_id`: 50256
- `eos_token_id`: 50256
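All three special-token IDs are 50256 because GPT-2 defines a single special token, `<|endoftext|>`, which doubles as BOS, EOS, and pad. One practical consequence: padding cannot be inferred from the token IDs alone, so an attention mask should be built explicitly when batching. A minimal sketch (the `pad_batch` helper is illustrative, not part of any library):

```python
PAD_ID = BOS_ID = EOS_ID = 50256  # GPT-2's single <|endoftext|> token

def pad_batch(sequences, pad_id=PAD_ID):
    """Right-pad a batch of token-id lists to equal length and build the
    attention mask explicitly -- necessary here because the pad id equals
    the eos id, so padding cannot be distinguished from a real eos token."""
    max_len = max(len(s) for s in sequences)
    input_ids, attention_mask = [], []
    for s in sequences:
        n_pad = max_len - len(s)
        input_ids.append(s + [pad_id] * n_pad)
        attention_mask.append([1] * len(s) + [0] * n_pad)
    return input_ids, attention_mask
```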
## Supported Tasks
This model supports the following task:
- Text Generation
## Configuration
### Model Configuration (`config.json`)
- Hidden Size: 768
- Number of Layers: 12
- Number of Attention Heads: 12
- Vocab Size: 50257
- Token Type IDs: Not used
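The config values above pin down the model size. As a sanity check, the parameter count can be derived by hand; this sketch assumes the standard GPT-2 defaults not listed in the card (context length of 1024, MLP inner size of 4× the hidden size, and the LM head tied to the token embedding):

```python
# Approximate parameter count from the config values above.
hidden = 768
layers = 12
vocab = 50257
n_positions = 1024  # assumed GPT-2 default context length (not in the card)

embeddings = vocab * hidden + n_positions * hidden  # token + position tables
per_layer = (
    2 * hidden                          # ln_1 (weight + bias)
    + hidden * 3 * hidden + 3 * hidden  # fused q/k/v projection
    + hidden * hidden + hidden          # attention output projection
    + 2 * hidden                        # ln_2
    + hidden * 4 * hidden + 4 * hidden  # MLP up-projection
    + 4 * hidden * hidden + hidden      # MLP down-projection
)
final_ln = 2 * hidden
total = embeddings + layers * per_layer + final_ln
print(f"{total:,}")  # 124,439,808 -- the standard GPT-2 small count
```

The tied LM head adds no extra parameters, which is why the embedding table is counted only once.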
### Generation Configuration (`generation_config.json`)
- Sampling Temperature: 0.7
- Top-p (nucleus sampling): 0.9
- Pad Token ID: 50256
- Bos Token ID: 50256
- Eos Token ID: 50256
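The two sampling knobs interact as follows: temperature rescales the logits before the softmax (values below 1.0 sharpen the distribution), and top-p then restricts sampling to the smallest set of tokens whose cumulative probability reaches 0.9. A self-contained sketch of both steps (helper names are our own, for illustration):

```python
import math

def apply_temperature(logits, temperature=0.7):
    """Scale logits by 1/temperature and softmax; temperatures below 1.0
    sharpen the distribution toward the most likely tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p, then renormalize. `probs` maps token -> probability."""
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for tok, p in items:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}
```

For example, with probabilities `{"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}` and `top_p=0.9`, the filter keeps `a`, `b`, and `c` (cumulative 0.95) and drops the tail token `d`.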
## Usage
To query this model for text generation through the Hugging Face Inference API, you can use the following Python snippet:
```python
import requests

api_url = "https://api-inference.huggingface.co/models/rahul77/gpt-2-finetune"
headers = {
    "Authorization": "Bearer YOUR_API_TOKEN",  # Replace with your Hugging Face API token
    "Content-Type": "application/json",
}

data = {
    "inputs": "What is a large language model?",
    "parameters": {
        "max_length": 50
    },
}

response = requests.post(api_url, headers=headers, json=data)

if response.status_code == 200:
    print(response.json())
else:
    print(f"Error: {response.status_code}")
    print(response.json())
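The hosted endpoint can return HTTP 503 while a model is being loaded, so a one-shot request like the one above may fail transiently. A small retry helper can smooth this over; this is a sketch — the `query_with_retry` name is our own, and the `post` callable is injected (e.g. `requests.post`) so the retry logic can be exercised without network access:

```python
import time

def query_with_retry(post, url, headers, payload, retries=3, backoff=1.0):
    """Call the Inference API, retrying on HTTP 503 (model still loading).

    `post` is the HTTP function to use (e.g. requests.post); injecting it
    keeps this helper testable offline. Backoff grows linearly per attempt.
    """
    last = None
    for attempt in range(max(1, retries)):
        last = post(url, headers=headers, json=payload)
        if last.status_code != 503:
            return last
        if attempt < retries - 1:
            time.sleep(backoff * (attempt + 1))
    return last
```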
## Inference Providers
This model is not currently available via any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.
## Model Tree for rahul77/gpt-2-finetune
- Base model: openai-community/gpt2