Model Card for LockinGPT

LockinGPT is a fine-tuned language model based on distilgpt2, optimized for generating conversational questions and creative prompts related to blockchain topics, especially focusing on Solana-based ecosystems.

Model Details

Model Description

LockinGPT is specifically fine-tuned for generating yes/no questions and other conversational content related to the Solana blockchain and $LOCKIN token ecosystem. It is designed to aid developers, investors, and enthusiasts in generating useful blockchain-related queries. The model was fine-tuned using a curated dataset of Solana-related content to ensure relevance and accuracy.

  • Developed by: Jonathan Gan
  • Funded by [optional]: Self-funded
  • Shared by [optional]: Jonathan Gan
  • Model type: Causal Language Model
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model [optional]: distilbert/distilgpt2

Model Sources [optional]

  • Repository: Private repository (contact Jonathan Gan for details)
  • Paper [optional]: N/A
  • Demo [optional]: N/A

Uses

Direct Use

  • Generating blockchain-related questions for interactive use.
  • Conversational tasks related to the Solana ecosystem.

Downstream Use [optional]

  • Fine-tuned for specific blockchain or crypto-related chatbot applications.
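Further fine-tuning for a domain-specific chatbot typically starts from a plain-text corpus of prompt/response pairs. Below is a minimal sketch of preparing such a corpus for a causal-LM trainer; the file name, separator convention, and example pairs are hypothetical and are not part of the (private) training data used for LockinGPT:

```python
from pathlib import Path

# Hypothetical prompt/response pairs; the real training corpus is private.
PAIRS = [
    ("Is $LOCKIN deployed on Solana?", "Yes."),
    ("Does Solana use proof of work?", "No, it uses proof of history with proof of stake."),
]

def build_corpus(pairs, path):
    """Write one training example per line: prompt and response joined
    by the end-of-text token used by GPT-2-style tokenizers."""
    lines = [f"{q}<|endoftext|>{a}" for q, a in pairs]
    Path(path).write_text("\n".join(lines), encoding="utf-8")
    return lines

lines = build_corpus(PAIRS, "lockin_corpus.txt")
print(len(lines))  # 2
```

A file in this shape can then be tokenized and passed to a standard causal-language-modeling training loop (e.g. the `transformers` Trainer).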

Out-of-Scope Use

  • Non-English conversational tasks.
  • Topics unrelated to blockchain or cryptocurrency, where outputs are likely to be incoherent.
  • Sensitive or adversarial applications.

Bias, Risks, and Limitations

  • The model is fine-tuned on Solana-related content and may not generalize well outside this domain.
  • It may reflect biases present in the training data (e.g., promotion of specific blockchain technologies over others).

Recommendations

Users should verify generated content for factual accuracy, especially in contexts requiring precision (e.g., financial advice or technical implementation).

How to Get Started with the Model

Use the code below to get started with LockinGPT:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned checkpoint from a local directory.
tokenizer = AutoTokenizer.from_pretrained("./lockin_model")
model = AutoModelForCausalLM.from_pretrained("./lockin_model")

prompt = "Generate a yes/no question about the $LOCKIN token"
inputs = tokenizer(prompt, return_tensors="pt")

# temperature > 1.0 flattens the distribution for more varied questions;
# top_p restricts sampling to the smallest set of most probable tokens.
output = model.generate(
    inputs["input_ids"], attention_mask=inputs["attention_mask"],
    max_new_tokens=50, do_sample=True, top_p=0.9, temperature=1.3,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 tokenizers define no pad token
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
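The `temperature` and `top_p` arguments control sampling: logits are divided by the temperature (values above 1.0, such as 1.3 here, flatten the distribution for more diverse questions), and nucleus sampling then keeps only the smallest set of tokens whose cumulative probability reaches `top_p`. The sketch below illustrates that filtering step in pure Python; it is for intuition only, since `transformers` implements this internally:

```python
import math

def top_p_filter(logits, temperature=1.3, top_p=0.9):
    """Return the (token_index, probability) pairs kept by nucleus sampling."""
    # Temperature scaling, then a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    ranked = sorted(((p / total, i) for i, p in enumerate(exps)), reverse=True)
    kept, cumulative = [], 0.0
    for p, i in ranked:
        kept.append((i, p))
        cumulative += p
        if cumulative >= top_p:
            break  # smallest set whose probability mass reaches top_p
    return kept

# A peaked 4-token distribution: only the two dominant tokens survive.
kept = top_p_filter([8.0, 6.0, 1.0, 0.5])
print(kept)
```

With `top_p=0.9` the two low-probability tokens are discarded before sampling, which is why the generated questions stay relatively on-topic even at a high temperature.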
Model size: 81.9M parameters (F32, Safetensors)