# TeamAiko-GPT-Neo-1.3B

TeamAiko-GPT-Neo-1.3B is a customized version of the EleutherAI/gpt-neo-1.3B model, branded and configured for use by Team Aiko. Note: this is the base version of the model and has not been fine-tuned on any additional datasets.
## Model Details
- Model Name: TeamAiko-GPT-Neo-1.3B
- Base Model: EleutherAI/gpt-neo-1.3B
- Architecture: GPT-Neo
- Parameters: 1.3 billion
- Tokenizer: GPT-2 BPE (loaded via AutoTokenizer)
- Framework: PyTorch
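
These details can be confirmed directly from the Hub. The following is a minimal sketch (assuming the repo id used in the Usage section below resolves on the Hub) that reads the configuration and counts parameters:

```python
from transformers import AutoConfig, AutoModelForCausalLM

repo_id = "Team-Aiko/TeamAiko-GPT-Neo-1.3B"

# Inspect the configuration without downloading the weights
config = AutoConfig.from_pretrained(repo_id)
print(config.model_type)   # "gpt_neo"
print(config.hidden_size)  # 2048 for the 1.3B variant
print(config.num_layers)   # 24 for the 1.3B variant

# Loading the weights lets us count parameters directly (~1.3 billion)
model = AutoModelForCausalLM.from_pretrained(repo_id)
print(sum(p.numel() for p in model.parameters()))
```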
## Usage

To use this model, load it with the transformers library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Path to the model on the Hugging Face Hub
model_path = "Team-Aiko/TeamAiko-GPT-Neo-1.3B"

# Run on CPU and limit the number of PyTorch threads to keep RAM/CPU usage modest
device = torch.device("cpu")
torch.set_num_threads(4)

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
model.eval()

# Test the model with a sample input
input_text = "Once upon a time in a land far, far away"
inputs = tokenizer(input_text, return_tensors="pt").to(device)
outputs = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=50,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    pad_token_id=tokenizer.eos_token_id,  # GPT-Neo has no pad token; reuse EOS to silence the warning
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated text:", generated_text)
```