# Model Card for priyanynaru/LLaMa-3.2-3b-Instruct-Finetuned-Syntheticdata
## Model Details

### Model Description
Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by: Krishna Priya Nynaru
- Model type: Text generation
- Language(s) (NLP): [More Information Needed]
- License: Llama 3.2 Community License
- Finetuned from model: meta-llama/Llama-3.2-3B-Instruct
### Model Sources

- Repository: [More Information Needed]
- Paper: [More Information Needed]
- Demo: [More Information Needed]
## Uses

[More Information Needed]

### Downstream Use

[More Information Needed]

## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model

### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your installation via `pip install --upgrade transformers`.

Use the code below to get started with the model.
```python
import torch
from transformers import pipeline

model_id = "priyanynaru/LLaMa-3.2-3b-Instruct-Finetuned-Syntheticdata"

# Load the model in bfloat16 and shard it across available devices
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Review the input prompt properly and recommend Job for the candidate from companies list"},
    {"role": "user", "content": "--- paste your resume details here -----"},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)

# The pipeline returns the whole conversation; the last entry is the assistant's reply
print(outputs[0]["generated_text"][-1])
```
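Alternatively, the same inference can be run with the Auto classes and `generate()` directly. A minimal sketch, with illustrative generation settings rather than tuned values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "priyanynaru/LLaMa-3.2-3b-Instruct-Finetuned-Syntheticdata"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Review the input prompt properly and recommend Job for the candidate from companies list"},
    {"role": "user", "content": "--- paste your resume details here -----"},
]

# Render the chat template into input ids for the model
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens (the assistant's reply)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```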
## Training Details

### Training Data
The dataset used for fine-tuning is [priyanynaru/Synthetic_dataset_for_jobrecommendations](https://huggingface.co/datasets/priyanynaru/Synthetic_dataset_for_jobrecommendations). This dataset contains fields such as:

- Name
- Location
- Skills
- Experience
- Job Recommendations
The dataset is provided in CSV format and is used to fine-tune the Llama 3.2 3B Instruct model. Fine-tuning uses the `SFTTrainer` from the `trl` library, which is designed for supervised fine-tuning of large language models; a sketch of this setup is shown below.
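A minimal sketch of that setup, assuming each row is flattened into a single `text` field (the column names follow the field list above, and the hyperparameters are illustrative, not the values used for this checkpoint):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the synthetic job-recommendation dataset from the Hub
# (the "train" split name is an assumption)
dataset = load_dataset("priyanynaru/Synthetic_dataset_for_jobrecommendations", split="train")

# Flatten each row into a single "text" string for supervised fine-tuning.
# The exact prompt format used for this checkpoint is not documented.
def format_example(row):
    return {
        "text": (
            f"Name: {row['Name']}\n"
            f"Location: {row['Location']}\n"
            f"Skills: {row['Skills']}\n"
            f"Experience: {row['Experience']}\n"
            f"Job Recommendations: {row['Job Recommendations']}"
        )
    }

dataset = dataset.map(format_example)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="llama-3.2-3b-job-recommendations",  # hypothetical output path
        per_device_train_batch_size=4,                  # illustrative hyperparameters,
        num_train_epochs=1,                             # not the values used here
        learning_rate=2e-5,
    ),
)
trainer.train()
```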
### Training Procedure

#### Preprocessing

[More Information Needed]

#### Training Hyperparameters

- Training regime: [More Information Needed]

#### Speeds, Sizes, Times

[More Information Needed]
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination

[More Information Needed]
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- Hardware Type: A100 SXM
- Hours used: 3
- Cloud Provider: [More Information Needed]
- Compute Region: US
- Carbon Emitted: 0.52 kg CO2eq
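As a rough sanity check (assumed figures, not taken from training logs): an A100 SXM draws on the order of 400 W, so 3 hours of use is about 0.4 kW × 3 h ≈ 1.2 kWh; at a US-average grid intensity of roughly 0.43 kg CO2eq/kWh, that gives 1.2 × 0.43 ≈ 0.52 kg CO2eq, consistent with the figure above.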
## Technical Specifications

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]