# Model Card: T5-Base Fine-Tuned for Recipe Direction Generation (FP16)

**Model Overview**

- **Model Name:** t5-base-recipe-finetuned-fp16
- **Model Type:** Sequence-to-Sequence Transformer
- **Base Model:** google/t5-base (220M parameters)
- **Quantization:** FP16 (half-precision floating point)
- **Task:** Generate cooking directions from a list of ingredients
**Intended Use**

This model generates step-by-step cooking directions given a list of ingredients. It is intended for:

- Recipe creation assistance
- Educational purposes in culinary AI research
- Exploration of text-to-text generation in domain-specific tasks

**Primary Users:** Home cooks, recipe developers, AI researchers.

# Model Details

- **Architecture:** T5 (Text-to-Text Transfer Transformer), an encoder-decoder Transformer with 12 encoder and 12 decoder layers, hidden size 768, and 12 attention heads.
- **Input:** Text string in the format "generate recipe directions from ingredients: <ingredient1> <ingredient2> ...".
- **Output:** Text string containing cooking directions.
- **Quantization:** Converted to FP16 for reduced memory usage (~425 MB vs. ~850 MB in FP32) and faster inference on GPU.
- **Hardware:** Fine-tuned and tested on a 12 GB NVIDIA GPU with CUDA.
# Training Data

- **Dataset:** RecipeNLG
- **Source:** Publicly available recipe dataset (downloaded as CSV)
- **Size:** 2,231,142 examples in the original dataset; a 178,491-example subset (10% of the train split) was used for fine-tuning
- **Splits:**
  - **Train:** 178,491 examples (10% subset of the full train split)
  - **Validation:** 223,114 examples
  - **Test:** 223,115 examples
- **Attributes:** ingredients (list of ingredients), directions (list of steps)
- **Preprocessing:** Converted stringified lists to plain text; each input is prefixed with "generate recipe directions from ingredients: ", as sketched below.
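For reference, a minimal preprocessing sketch. It assumes the CSV stores the `ingredients` and `directions` attributes as stringified Python lists (as in RecipeNLG); the file name is illustrative.

```python
# Illustrative preprocessing: parse stringified lists and add the task prefix.
import ast

import pandas as pd

PREFIX = "generate recipe directions from ingredients: "

def to_example(row: pd.Series) -> dict:
    # Parse strings such as '["1 lb chicken breast", "2 cups rice"]'.
    ingredients = ast.literal_eval(row["ingredients"])
    directions = ast.literal_eval(row["directions"])
    return {
        "input_text": PREFIX + " ".join(ingredients),  # model input
        "target_text": " ".join(directions),           # model target
    }

df = pd.read_csv("recipenlg.csv")  # illustrative file name
examples = [to_example(row) for _, row in df.iterrows()]
```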
# Training Procedure

**Framework:** Hugging Face Transformers

**Hyperparameters** (see the sketch below):

- Epochs: 1 (subset training; full training planned for 3 epochs)
- Effective batch size: 32 (8 per device, 4 gradient accumulation steps)
- Learning rate: 2e-5
- Optimizer: AdamW (the Trainer default)
- Mixed precision: FP16 (fp16=True)

**Training Time:** ~2.3 hours for the 1-epoch subset run; at that rate, the full dataset would take ~23 hours per epoch, or ~68 hours for the planned 3 epochs, without further optimization.

**Compute:** Single 12 GB NVIDIA GPU (CUDA-enabled).
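A minimal sketch of the fine-tuning setup with the hyperparameters above, building on the `examples` list from the preprocessing sketch. The dataset handling and sequence lengths are illustrative, and a recent version of transformers is assumed; the actual training script may differ.

```python
# Sketch of the fine-tuning setup; `examples` comes from the preprocessing
# sketch above, and all variable names are illustrative.
from datasets import Dataset
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
    T5Tokenizer,
)

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def tokenize(batch):
    # Tokenize inputs and targets; lengths here are assumptions.
    model_inputs = tokenizer(batch["input_text"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["target_text"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_ds = Dataset.from_list(examples).map(
    tokenize, batched=True, remove_columns=["input_text", "target_text"]
)

args = Seq2SeqTrainingArguments(
    output_dir="./t5_recipe_finetuned_fp16",
    num_train_epochs=1,                # subset run; 3 epochs planned for full data
    per_device_train_batch_size=8,     # effective batch size 32 with
    gradient_accumulation_steps=4,     # 4 gradient accumulation steps
    learning_rate=2e-5,
    fp16=True,                         # mixed-precision training
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()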
# Evaluation

- **Metrics:** Loss (to be filled in post-training)
- **Validation Loss:** [TBD after training]
- **Test Loss:** [TBD after evaluation]
- **Method:** Evaluated with Trainer.evaluate() on the validation and test splits (see the snippet below).
- **Qualitative:** Generated directions are checked for coherence with the input ingredients (e.g., a chicken-and-rice input should yield relevant steps).
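A short snippet of the evaluation call, assuming the `trainer` from the training sketch and a tokenized validation set `val_ds` built the same way as `train_ds` (names are illustrative):

```python
# Compute loss on a held-out split with the Trainer API.
val_metrics = trainer.evaluate(eval_dataset=val_ds)
print(f"Validation loss: {val_metrics['eval_loss']:.4f}")
```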
# Performance

- **Results:** [TBD; e.g., "Validation Loss: X.XX, Test Loss: Y.YY after 1 epoch on the subset"]
- **Strengths:** Expected to generate plausible directions for common ingredient combinations.
- **Limitations:**
  - Training on only a subset may reduce generalization.
  - Sporadic data mismatches may affect output quality.
  - FP16 quantization might slightly alter precision vs. FP32.
# Usage

**Installation**

```bash
pip install transformers torch datasets
```
**Inference Example**

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch

# Load the fine-tuned model and run it in half precision on the GPU.
model_path = "./t5_recipe_finetuned_fp16"
tokenizer = T5Tokenizer.from_pretrained(model_path)
model = T5ForConditionalGeneration.from_pretrained(model_path).to("cuda").half()

# Build the prompt in the same format used during training.
ingredients = ["1 lb chicken breast", "2 cups rice", "1 onion", "2 tbsp soy sauce"]
input_text = "generate recipe directions from ingredients: " + " ".join(ingredients)
input_ids = tokenizer(input_text, return_tensors="pt", max_length=128, truncation=True).input_ids.to("cuda")

# Generate directions with beam search; no_repeat_ngram_size curbs repetition.
model.eval()
with torch.no_grad():
    output_ids = model.generate(input_ids, max_length=256, num_beams=4, early_stopping=True, no_repeat_ngram_size=2)
directions = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(directions)
```
**Saved Model**

- **Location:** ./t5_recipe_finetuned_fp16
- **Size:** ~425 MB (FP16 weights)
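For reference, a minimal sketch of how an FP16 checkpoint like this can be produced, assuming the `model` and `tokenizer` from the training sketch above:

```python
# Convert the trained weights to half precision and save them with the
# tokenizer (assumes `model`/`tokenizer` from the training sketch).
model = model.half()
model.save_pretrained("./t5_recipe_finetuned_fp16")
tokenizer.save_pretrained("./t5_recipe_finetuned_fp16")
```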
# Limitations and Biases

- **Data Quality:** Some RecipeNLG entries have mismatched ingredients and directions, potentially leading to nonsensical outputs.
- **Scope:** Trained only on English recipes; may not handle non-English inputs or less common cuisines well.
- **Bias:** Reflects biases in RecipeNLG (e.g., the dominance of Western cuisine).
- **Quantization:** FP16 may introduce minor numerical differences vs. FP32, though this is mitigated by training in FP16.
# Ethical Considerations

- **Use:** Should not replace professional culinary expertise without validation.
- **Safety:** Generated directions are not guaranteed to be safe or accurate (e.g., cooking times and temperatures should be verified).
# Contact

- **Author:** [Your Name/Group Name]
- **Support:** [Your Email/GitHub, if applicable]
# Citation

If you use this model, please cite:

- RecipeNLG dataset: Bień et al., "RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation" (INLG 2020)
- T5: Raffel et al., "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" (JMLR 2020)