adjust README

Files changed:
- README.md (+12 -119)
- inverse_cooking.md (+119 -0)

README.md (CHANGED)
The previous contents of README.md (the Inverse Cooking documentation: citation, installation, pretrained model, demo, data, training, evaluation, and license sections) were removed here. They now live in inverse_cooking.md and are reproduced in full below.
The new README.md holds only the Hugging Face Space configuration front matter, followed by a pointer to the configuration reference:

```yaml
---
title: Lunchpad
emoji: 👨🍳
colorFrom: orange
colorTo: blue
sdk: gradio
sdk_version: 4.26.0
app_file: app.py
pinned: false
---
```

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
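The ```app_file: app.py``` entry points the Space at a Gradio application that is not part of this diff. As a rough illustration of what a Gradio 4.x entry point for such a Space might look like (the function name, components, and recipe stub below are assumptions, not the repository's actual code):

```python
# Illustrative sketch only: the Space's real app.py is not shown in this diff.
import gradio as gr

def generate_recipe(image):
    # Placeholder: the real app presumably runs the inverse-cooking model here.
    return "Generated recipe (title, ingredients, instructions) would appear here."

demo = gr.Interface(
    fn=generate_recipe,
    inputs=gr.Image(type="pil", label="Food photo"),
    outputs=gr.Textbox(label="Recipe"),
    title="Lunchpad",
)

if __name__ == "__main__":
    demo.launch()
```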
inverse_cooking.md (ADDED)
## Inverse Cooking: Recipe Generation from Food Images

Code supporting the paper:

*Amaia Salvador, Michal Drozdzal, Xavier Giro-i-Nieto, Adriana Romero.
[Inverse Cooking: Recipe Generation from Food Images](https://arxiv.org/abs/1812.06164).
CVPR 2019*

If you find this code useful in your research, please consider citing using the following BibTeX entry:

```
@InProceedings{Salvador2019inversecooking,
  author    = {Salvador, Amaia and Drozdzal, Michal and Giro-i-Nieto, Xavier and Romero, Adriana},
  title     = {Inverse Cooking: Recipe Generation From Food Images},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
```
### Installation

This code uses Python 3.6 and PyTorch 0.4.1 with CUDA 9.0.

- Install PyTorch:
```bash
$ conda install pytorch=0.4.1 cuda90 -c pytorch
```

- Install the remaining dependencies:
```bash
$ pip install -r requirements.txt
```
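To verify the environment before going further, a quick sanity check (not part of the original instructions, just standard PyTorch introspection) is:

```bash
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expect 0.4.1; True indicates a usable CUDA device was found.
```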
### Pretrained model

- Download ingredient and instruction vocabularies [here](https://dl.fbaipublicfiles.com/inversecooking/ingr_vocab.pkl) and [here](https://dl.fbaipublicfiles.com/inversecooking/instr_vocab.pkl), respectively.
- Download the pretrained model [here](https://dl.fbaipublicfiles.com/inversecooking/modelbest.ckpt).
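For convenience, the three files can also be fetched from the command line into the ```data``` directory that the demo below expects (```wget``` is just one way to do this; adjust the destination to your checkout layout):

```bash
# Run from the repository root.
$ wget -P data https://dl.fbaipublicfiles.com/inversecooking/ingr_vocab.pkl
$ wget -P data https://dl.fbaipublicfiles.com/inversecooking/instr_vocab.pkl
$ wget -P data https://dl.fbaipublicfiles.com/inversecooking/modelbest.ckpt
```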
### Demo

You can use our pretrained model to get recipes for your images.

Download the required files (listed above), place them under the ```data``` directory, and try our demo notebook ```src/demo.ipynb```.

Note: the demo will run on the GPU if a CUDA device is found; otherwise it falls back to the CPU.
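The device selection mentioned in the note is the standard PyTorch pattern; a minimal sketch of what the notebook presumably does (variable names here are illustrative, not copied from ```src/demo.ipynb```):

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The model and input tensors are then moved to that device before inference, e.g.:
# model.to(device)
# image_tensor = image_tensor.to(device)
```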
### Data

- Download [Recipe1M](http://im2recipe.csail.mit.edu/dataset/download) (registration required).
- Extract the files somewhere (we refer to this path as ```path_to_dataset```).
- The contents of ```path_to_dataset``` should be the following:

```
det_ingrs.json
layer1.json
layer2.json
images/
images/train
images/val
images/test
```

*Note: all python calls below must be run from ```./src```.*
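Before building the vocabularies, it can help to confirm that ```path_to_dataset``` matches the listing above; a small check along these lines (this script is not part of the repository, purely an illustration):

```python
import os
import sys

path_to_dataset = sys.argv[1]  # e.g. /data/recipe1m

expected = [
    "det_ingrs.json", "layer1.json", "layer2.json",
    "images/train", "images/val", "images/test",
]
missing = [p for p in expected if not os.path.exists(os.path.join(path_to_dataset, p))]
print("missing entries:", missing if missing else "none")
```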
### Build vocabularies

```bash
$ python build_vocab.py --recipe1m_path path_to_dataset
```
### Images to LMDB (Optional, but recommended)

For fast loading during training:

```bash
$ python utils/ims2file.py --recipe1m_path path_to_dataset
```

If you decide not to create this file, use the flag ```--load_jpeg``` when training the model.
### Training

Create a directory to store checkpoints for all the models you train (e.g. ```../checkpoints```) and point ```--save_dir``` to it.
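For example, from ```./src``` (this command is not in the original instructions; it is simply the straightforward way to create that directory):

```bash
$ mkdir -p ../checkpoints
```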
We train our model in two stages:

1. Ingredient prediction from images

```bash
python train.py --model_name im2ingr --batch_size 150 --finetune_after 0 --ingrs_only \
    --es_metric iou_sample --loss_weight 0 1000.0 1.0 1.0 \
    --learning_rate 1e-4 --scale_learning_rate_cnn 1.0 \
    --save_dir ../checkpoints --recipe1m_dir path_to_dataset
```

2. Recipe generation from images and ingredients (loading the weights from stage 1)

```bash
python train.py --model_name model --batch_size 256 --recipe_only --transfer_from im2ingr \
    --save_dir ../checkpoints --recipe1m_dir path_to_dataset
```

Check training progress with Tensorboard from ```../checkpoints```:

```bash
$ tensorboard --logdir='../tb_logs' --port=6006
```
### Evaluation

- Save generated recipes to disk with ```python sample.py --model_name model --save_dir ../checkpoints --recipe1m_dir path_to_dataset --greedy --eval_split test```.
- This script also reports ingredient metrics (F1 and IoU); a sketch of these set metrics is given below for reference.
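Ingredient F1 and IoU are set-overlap metrics between the predicted and ground-truth ingredient lists; a minimal sketch of the standard definitions (this is not the repository's evaluation code):

```python
def ingredient_metrics(predicted, ground_truth):
    """Return (IoU, F1) between two collections of ingredient names."""
    pred, gt = set(predicted), set(ground_truth)
    overlap = len(pred & gt)
    iou = overlap / len(pred | gt) if (pred | gt) else 0.0
    precision = overlap / len(pred) if pred else 0.0
    recall = overlap / len(gt) if gt else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return iou, f1

# Example: ingredient_metrics(["flour", "sugar", "egg"], ["flour", "egg", "butter"]) -> (0.5, 0.667)
```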
### License

inversecooking is released under the MIT license; see [LICENSE](LICENSE.md) for details.