lol

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
- .gitattributes +0 -31
- README.md +136 -12
- app.py +30 -0
- callbacks.py +360 -0
- configs/pretrain_itervm.yaml +60 -0
- configs/pretrain_language_model.yaml +45 -0
- configs/pretrain_vm.yaml +51 -0
- configs/template.yaml +67 -0
- configs/train_iternet.yaml +65 -0
- dataset.py +278 -0
- demo.py +109 -0
- figures/demo/096314.png +0 -0
- figures/demo/096314_.png +0 -0
- figures/demo/096314__.png +0 -0
- figures/demo/096314___.png +0 -0
- figures/demo/327806.png +0 -0
- figures/demo/327806_.png +0 -0
- figures/demo/327806__.png +0 -0
- figures/demo/327806___.png +0 -0
- figures/demo/365560__.png +0 -0
- figures/demo/365560___.png +0 -0
- figures/demo/418760__.png +0 -0
- figures/demo/418760___.png +0 -0
- figures/demo/444825.png +0 -0
- figures/demo/444825_.png +0 -0
- figures/demo/444825__.png +0 -0
- figures/demo/444825___.png +0 -0
- figures/demo/451010.png +0 -0
- figures/demo/451010_.png +0 -0
- figures/demo/451010__.png +0 -0
- figures/demo/451010___.png +0 -0
- figures/demo/502850.png +0 -0
- figures/demo/502850_.png +0 -0
- figures/demo/502850__.png +0 -0
- figures/demo/502850___.png +0 -0
- figures/demo/534995.png +0 -0
- figures/demo/534995_.png +0 -0
- figures/demo/534995__.png +0 -0
- figures/demo/534995___.png +0 -0
- figures/demo/542200__.png +0 -0
- figures/demo/542200___.png +0 -0
- figures/demo/605564.png +0 -0
- figures/demo/605564_.png +0 -0
- figures/demo/605564__.png +0 -0
- figures/demo/605564___.png +0 -0
- figures/demo/614224.png +0 -0
- figures/demo/614224_.png +0 -0
- figures/demo/614224__.png +0 -0
- figures/demo/614224___.png +0 -0
- figures/demo/684224.png +0 -0
.gitattributes
DELETED
@@ -1,31 +0,0 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,12 +1,136 @@
# IterVM: Iterative Vision Modeling Module for Scene Text Recognition

The official code of [IterNet](https://arxiv.org/abs/2204.02630).

We propose IterVM, an iterative approach to visual feature extraction that can significantly improve scene text recognition accuracy.
IterVM repeatedly uses the high-level visual feature extracted at the previous iteration to enhance the multi-level features extracted at the subsequent iteration.

## Runtime Environment
```
pip install -r requirements.txt
```
Note: `fastai==1.0.60` is required.

## Datasets
<details>
<summary>Training datasets (Click to expand)</summary>

1. [MJSynth](http://www.robots.ox.ac.uk/~vgg/data/text/) (MJ):
   - Use `tools/create_lmdb_dataset.py` to convert the images into an LMDB dataset
   - [LMDB dataset BaiduNetdisk(passwd:n23k)](https://pan.baidu.com/s/1mgnTiyoR8f6Cm655rFI4HQ)
2. [SynthText](http://www.robots.ox.ac.uk/~vgg/data/scenetext/) (ST):
   - Use `tools/crop_by_word_bb.py` to crop images from the original [SynthText](http://www.robots.ox.ac.uk/~vgg/data/scenetext/) dataset, then convert the crops into an LMDB dataset with `tools/create_lmdb_dataset.py`
   - [LMDB dataset BaiduNetdisk(passwd:n23k)](https://pan.baidu.com/s/1mgnTiyoR8f6Cm655rFI4HQ)
3. [WikiText103](https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip), which is only used for pre-training the language model:
   - Use `notebooks/prepare_wikitext103.ipynb` to convert the text into CSV format.
   - [CSV dataset BaiduNetdisk(passwd:dk01)](https://pan.baidu.com/s/1yabtnPYDKqhBb_Ie9PGFXA)
</details>

<details>
<summary>Evaluation datasets (Click to expand)</summary>

- The evaluation datasets (LMDB format) can be downloaded from [BaiduNetdisk(passwd:1dbv)](https://pan.baidu.com/s/1RUg3Akwp7n8kZYJ55rU5LQ) or [GoogleDrive](https://drive.google.com/file/d/1dTI0ipu14Q1uuK4s4z32DqbqF3dJPdkk/view?usp=sharing):
  1. ICDAR 2013 (IC13)
  2. ICDAR 2015 (IC15)
  3. IIIT5K Words (IIIT)
  4. Street View Text (SVT)
  5. Street View Text-Perspective (SVTP)
  6. CUTE80 (CUTE)
</details>

<details>
<summary>The structure of the `data` directory (Click to expand)</summary>

- The structure of the `data` directory is
```
data
├── charset_36.txt
├── evaluation
│   ├── CUTE80
│   ├── IC13_857
│   ├── IC15_1811
│   ├── IIIT5k_3000
│   ├── SVT
│   └── SVTP
├── training
│   ├── MJ
│   │   ├── MJ_test
│   │   ├── MJ_train
│   │   └── MJ_valid
│   └── ST
├── WikiText-103.csv
└── WikiText-103_eval_d1.csv
```
</details>

## Pretrained Models

Get the pretrained models from [GoogleDrive](https://drive.google.com/drive/folders/1C8NMI8Od8mQUMlsnkHNLkYj73kbAQ7Bl?usp=sharing). Performance of the pretrained models is summarized as follows:

|Model|IC13|SVT|IIIT|IC15|SVTP|CUTE|AVG|
|-|-|-|-|-|-|-|-|
|IterNet|97.9|95.1|96.9|87.7|90.9|91.3|93.8|

## Training

1. Pre-train the vision model
```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python main.py --config=configs/pretrain_vm.yaml
```
2. Pre-train the language model
```
CUDA_VISIBLE_DEVICES=0,1,2,3 python main.py --config=configs/pretrain_language_model.yaml
```
3. Train IterNet
```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python main.py --config=configs/train_iternet.yaml
```
Note:
- You can set the `checkpoint` path for the vision model (vm) and the language model separately to load specific pretrained weights, or set it to `None` to train from scratch (see the config sketch after this file).

## Evaluation

```
CUDA_VISIBLE_DEVICES=0 python main.py --config=configs/train_iternet.yaml --phase test --image_only
```
Additional flags:
- `--checkpoint /path/to/checkpoint` sets the path of the model to evaluate
- `--test_root /path/to/dataset` sets the path of the evaluation dataset
- `--model_eval [alignment|vision]` selects which sub-model to evaluate
- `--image_only` disables dumping visualizations of attention masks

## Run Demo
<a href="https://colab.research.google.com/drive/1XmZGJzFF95uafmARtJMudPLLKBO2eXLv?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>

```
python demo.py --config=configs/train_iternet.yaml --input=figures/demo
```
Additional flags:
- `--config /path/to/config` sets the path of the configuration file
- `--input /path/to/image-directory` sets the path of an image directory or a wildcard path, e.g., `--input='figs/test/*.png'`
- `--checkpoint /path/to/checkpoint` sets the path of the trained model
- `--cuda [-1|0|1|2|3...]` sets the CUDA device id; the default of -1 stands for CPU
- `--model_eval [alignment|vision]` selects which sub-model to use
- `--image_only` disables dumping visualizations of attention masks

## Citation
If you find our method useful for your research, please cite

```bibtex
@article{chu2022itervm,
  title={IterVM: Iterative Vision Modeling Module for Scene Text Recognition},
  author={Chu, Xiaojie and Wang, Yongtao},
  journal={arXiv preprint arXiv:2204.02630},
  year={2022}
}
```

## License
This project is free for academic research purposes only; commercial use requires authorization. For commercial permission, please contact [email protected].

## Acknowledgements
This project is based on [ABINet](https://github.com/FangShancheng/ABINet.git).
Thanks for their great work.
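To make the `checkpoint` note above concrete, here is a hedged sketch of the relevant config fields; the paths are illustrative and depend on your work directory and run names (compare `configs/train_iternet.yaml` later in this diff):

```yaml
# Hypothetical override in a training config: start from pretrained sub-models.
model:
  vision:   { checkpoint: workdir/pretrain-itervm/best-pretrain-itervm.pth }
  language: { checkpoint: workdir/pretrain-language-model/best-pretrain-language-model.pth }
  # checkpoint: ~   # '~' (None) trains from scratch
```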
app.py
ADDED
@@ -0,0 +1,30 @@
import glob
import gradio as gr
from demo import get_model, preprocess, postprocess, load
from utils import Config, Logger, CharsetMapper

config = Config('configs/train_iternet.yaml')
config.model_vision_checkpoint = None
model = get_model(config)
model = load(model, 'workdir/train-iternet/best-train-iternet.pth')
charset = CharsetMapper(filename=config.dataset_charset_path, max_length=config.dataset_max_length + 1)

def process_image(image):
    img = image.convert('RGB')
    img = preprocess(img, config.dataset_image_width, config.dataset_image_height)
    res = model(img)
    return postprocess(res, charset, 'alignment')[0][0]

title = "Interactive demo: ABINet"
description = "Demo for ABINet. ABINet uses a vision model and an explicit language model, trained end-to-end, to recognize text in the wild. The language model (BCN) achieves bidirectional language representation by simulating a cloze test, and additionally uses an iterative correction strategy. To use it, simply upload a (single text line) image or use one of the example images below and click 'submit'. Results will show up in a few seconds."
article = "<p style='text-align: center'><a href='https://arxiv.org/pdf/2103.06495.pdf'>Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition</a> | <a href='https://github.com/FangShancheng/ABINet'>Github Repo</a></p>"

iface = gr.Interface(fn=process_image,
                     inputs=gr.inputs.Image(type="pil"),
                     outputs=gr.outputs.Textbox(),
                     title=title,
                     description=description,
                     article=article,
                     examples=glob.glob('figs/test/*.png'))

iface.launch(debug=True, share=True, enable_queue=True)
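For reference, the Space above can also be launched locally; a minimal sketch, assuming `requirements.txt` and the checkpoint `workdir/train-iternet/best-train-iternet.pth` referenced in the code are present:

```
pip install -r requirements.txt
python app.py
```

`share=True` in `iface.launch` additionally requests a temporary public Gradio link alongside the local server.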
callbacks.py
ADDED
@@ -0,0 +1,360 @@
import logging
import shutil
import time

import editdistance as ed
import torchvision.utils as vutils
from fastai.callbacks.tensorboard import (LearnerTensorboardWriter,
                                          SummaryWriter, TBWriteRequest,
                                          asyncTBWriter)
from fastai.vision import *
from torch.nn.parallel import DistributedDataParallel
from torchvision import transforms

import dataset
from utils import CharsetMapper, Timer, blend_mask


class IterationCallback(LearnerTensorboardWriter):
    "A `TrackerCallback` that monitors at each iteration."
    def __init__(self, learn:Learner, name:str='model', checkpoint_keep_num=5,
                 show_iters:int=50, eval_iters:int=1000, save_iters:int=20000,
                 start_iters:int=0, stats_iters=20000):
        #if self.learn.rank is not None: time.sleep(self.learn.rank)  # keep all event files
        super().__init__(learn, base_dir='.', name=learn.path, loss_iters=show_iters,
                         stats_iters=stats_iters, hist_iters=stats_iters)
        self.name, self.bestname = Path(name).name, f'best-{Path(name).name}'
        self.show_iters = show_iters
        self.eval_iters = eval_iters
        self.save_iters = save_iters
        self.start_iters = start_iters
        self.checkpoint_keep_num = checkpoint_keep_num
        self.metrics_root = 'metrics/'  # rewrite
        self.timer = Timer()
        self.host = self.learn.rank is None or self.learn.rank == 0

    def _write_metrics(self, iteration:int, names:List[str], last_metrics:MetricsList)->None:
        "Writes training metrics to Tensorboard."
        for i, name in enumerate(names):
            if last_metrics is None or len(last_metrics) < i+1: return
            scalar_value = last_metrics[i]
            self._write_scalar(name=name, scalar_value=scalar_value, iteration=iteration)

    def _write_sub_loss(self, iteration:int, last_losses:dict)->None:
        "Writes sub losses to Tensorboard."
        for name, loss in last_losses.items():
            scalar_value = to_np(loss)
            tag = self.metrics_root + name
            self.tbwriter.add_scalar(tag=tag, scalar_value=scalar_value, global_step=iteration)

    def _save(self, name):
        if isinstance(self.learn.model, DistributedDataParallel):
            tmp = self.learn.model
            self.learn.model = self.learn.model.module
            self.learn.save(name)
            self.learn.model = tmp
        else: self.learn.save(name)

    def _validate(self, dl=None, callbacks=None, metrics=None, keeped_items=False):
        "Validate on `dl` with potential `callbacks` and `metrics`."
        dl = ifnone(dl, self.learn.data.valid_dl)
        metrics = ifnone(metrics, self.learn.metrics)
        cb_handler = CallbackHandler(ifnone(callbacks, []), metrics)
        cb_handler.on_train_begin(1, None, metrics); cb_handler.on_epoch_begin()
        if keeped_items: cb_handler.state_dict.update(dict(keeped_items=[]))
        val_metrics = validate(self.learn.model, dl, self.loss_func, cb_handler)
        cb_handler.on_epoch_end(val_metrics)
        if keeped_items: return cb_handler.state_dict['keeped_items']
        else: return cb_handler.state_dict['last_metrics']

    def jump_to_epoch_iter(self, epoch:int, iteration:int)->None:
        try:
            self.learn.load(f'{self.name}_{epoch}_{iteration}', purge=False)
            logging.info(f'Loaded {self.name}_{epoch}_{iteration}')
        except: logging.info(f'Model {self.name}_{epoch}_{iteration} not found.')

    def on_train_begin(self, n_epochs, **kwargs):
        # TODO: can not write graph here
        # super().on_train_begin(**kwargs)
        self.best = -float('inf')
        self.timer.tic()
        if self.host:
            checkpoint_path = self.learn.path/'checkpoint.yaml'
            if checkpoint_path.exists():
                os.remove(checkpoint_path)
            open(checkpoint_path, 'w').close()
        return {'skip_validate': True, 'iteration':self.start_iters}  # disable default validate

    def on_batch_begin(self, **kwargs:Any)->None:
        self.timer.toc_data()
        super().on_batch_begin(**kwargs)

    def on_batch_end(self, iteration, epoch, last_loss, smooth_loss, train, **kwargs):
        super().on_batch_end(last_loss, iteration, train, **kwargs)
        if iteration == 0: return

        if iteration % self.loss_iters == 0:
            last_losses = self.learn.loss_func.last_losses
            self._write_sub_loss(iteration=iteration, last_losses=last_losses)
            self.tbwriter.add_scalar(tag=self.metrics_root + 'lr',
                                     scalar_value=self.opt.lr, global_step=iteration)

        if iteration % self.show_iters == 0:
            log_str = f'epoch {epoch} iter {iteration}: loss = {last_loss:6.4f}, ' \
                      f'smooth loss = {smooth_loss:6.4f}'
            logging.info(log_str)
            # log_str = f'data time = {self.timer.data_diff:.4f}s, running time = {self.timer.running_diff:.4f}s'
            # logging.info(log_str)

        if iteration % self.eval_iters == 0:
            # TODO: or move timing to on_epoch_end
            # 1. Record time
            log_str = f'average data time = {self.timer.average_data_time():.4f}s, ' \
                      f'average running time = {self.timer.average_running_time():.4f}s'
            logging.info(log_str)

            # 2. Call validate
            last_metrics = self._validate()
            self.learn.model.train()
            log_str = f'epoch {epoch} iter {iteration}: eval loss = {last_metrics[0]:6.4f}, ' \
                      f'ccr = {last_metrics[1]:6.4f}, cwr = {last_metrics[2]:6.4f}, ' \
                      f'ted = {last_metrics[3]:6.4f}, ned = {last_metrics[4]:6.4f}, ' \
                      f'ted/w = {last_metrics[5]:6.4f}, '
            logging.info(log_str)
            names = ['eval_loss', 'ccr', 'cwr', 'ted', 'ned', 'ted/w']
            self._write_metrics(iteration, names, last_metrics)

            # 3. Save best model
            current = last_metrics[2]
            if current is not None and current > self.best:
                logging.info(f'Better model found at epoch {epoch}, '\
                             f'iter {iteration} with accuracy value: {current:6.4f}.')
                self.best = current
                self._save(f'{self.bestname}')

        if iteration % self.save_iters == 0 and self.host:
            logging.info(f'Save model {self.name}_{epoch}_{iteration}')
            filename = f'{self.name}_{epoch}_{iteration}'
            self._save(filename)

            checkpoint_path = self.learn.path/'checkpoint.yaml'
            if not checkpoint_path.exists():
                open(checkpoint_path, 'w').close()
            with open(checkpoint_path, 'r') as file:
                checkpoints = yaml.load(file, Loader=yaml.FullLoader) or dict()
            checkpoints['all_checkpoints'] = (
                checkpoints.get('all_checkpoints') or list())
            checkpoints['all_checkpoints'].insert(0, filename)
            if len(checkpoints['all_checkpoints']) > self.checkpoint_keep_num:
                removed_checkpoint = checkpoints['all_checkpoints'].pop()
                removed_checkpoint = self.learn.path/self.learn.model_dir/f'{removed_checkpoint}.pth'
                os.remove(removed_checkpoint)
            checkpoints['current_checkpoint'] = filename
            with open(checkpoint_path, 'w') as file:
                yaml.dump(checkpoints, file)

        self.timer.toc_running()

    def on_train_end(self, **kwargs):
        #self.learn.load(f'{self.bestname}', purge=False)
        pass

    def on_epoch_end(self, last_metrics:MetricsList, iteration:int, **kwargs)->None:
        self._write_embedding(iteration=iteration)


class TextAccuracy(Callback):
    _names = ['ccr', 'cwr', 'ted', 'ned', 'ted/w']
    def __init__(self, charset_path, max_length, case_sensitive, model_eval):
        self.charset_path = charset_path
        self.max_length = max_length
        self.case_sensitive = case_sensitive
        self.charset = CharsetMapper(charset_path, self.max_length)
        self.names = self._names

        self.model_eval = model_eval or 'alignment'
        assert self.model_eval in ['vision', 'language', 'alignment']

    def on_epoch_begin(self, **kwargs):
        self.total_num_char = 0.
        self.total_num_word = 0.
        self.correct_num_char = 0.
        self.correct_num_word = 0.
        self.total_ed = 0.
        self.total_ned = 0.

    def _get_output(self, last_output):
        if isinstance(last_output, (tuple, list)):
            for res in last_output:
                if res['name'] == self.model_eval: output = res
        else: output = last_output
        return output

    def _update_output(self, last_output, items):
        if isinstance(last_output, (tuple, list)):
            for res in last_output:
                if res['name'] == self.model_eval: res.update(items)
        else: last_output.update(items)
        return last_output

    def on_batch_end(self, last_output, last_target, **kwargs):
        output = self._get_output(last_output)
        logits, pt_lengths = output['logits'], output['pt_lengths']
        pt_text, pt_scores, pt_lengths_ = self.decode(logits)
        assert (pt_lengths == pt_lengths_).all(), f'{pt_lengths} != {pt_lengths_} for {pt_text}'
        last_output = self._update_output(last_output, {'pt_text':pt_text, 'pt_scores':pt_scores})

        pt_text = [self.charset.trim(t) for t in pt_text]
        label = last_target[0]
        if label.dim() == 3: label = label.argmax(dim=-1)  # one-hot label
        gt_text = [self.charset.get_text(l, trim=True) for l in label]

        for i in range(len(gt_text)):
            if not self.case_sensitive:
                gt_text[i], pt_text[i] = gt_text[i].lower(), pt_text[i].lower()
            distance = ed.eval(gt_text[i], pt_text[i])
            self.total_ed += distance
            self.total_ned += float(distance) / max(len(gt_text[i]), 1)

            if gt_text[i] == pt_text[i]:
                self.correct_num_word += 1
            self.total_num_word += 1

            for j in range(min(len(gt_text[i]), len(pt_text[i]))):
                if gt_text[i][j] == pt_text[i][j]:
                    self.correct_num_char += 1
            self.total_num_char += len(gt_text[i])

        return {'last_output': last_output}

    def on_epoch_end(self, last_metrics, **kwargs):
        mets = [self.correct_num_char / self.total_num_char,
                self.correct_num_word / self.total_num_word,
                self.total_ed,
                self.total_ned,
                self.total_ed / self.total_num_word]
        return add_metrics(last_metrics, mets)

    def decode(self, logit):
        """ Greedy decode """
        # TODO: test running time and decode on GPU
        out = F.softmax(logit, dim=2)
        pt_text, pt_scores, pt_lengths = [], [], []
        for o in out:
            text = self.charset.get_text(o.argmax(dim=1), padding=False, trim=False)
            text = text.split(self.charset.null_char)[0]  # end at end-token
            pt_text.append(text)
            pt_scores.append(o.max(dim=1)[0])
            pt_lengths.append(min(len(text) + 1, self.max_length))  # one for end-token
        pt_scores = torch.stack(pt_scores)
        pt_lengths = pt_scores.new_tensor(pt_lengths, dtype=torch.long)
        return pt_text, pt_scores, pt_lengths


class TopKTextAccuracy(TextAccuracy):
    _names = ['ccr', 'cwr']
    def __init__(self, k, charset_path, max_length, case_sensitive, model_eval):
        self.k = k
        self.charset_path = charset_path
        self.max_length = max_length
        self.case_sensitive = case_sensitive
        self.charset = CharsetMapper(charset_path, self.max_length)
        self.names = self._names

    def on_epoch_begin(self, **kwargs):
        self.total_num_char = 0.
        self.total_num_word = 0.
        self.correct_num_char = 0.
        self.correct_num_word = 0.

    def on_batch_end(self, last_output, last_target, **kwargs):
        logits, pt_lengths = last_output['logits'], last_output['pt_lengths']
        gt_labels, gt_lengths = last_target[:]

        for logit, pt_length, label, length in zip(logits, pt_lengths, gt_labels, gt_lengths):
            word_flag = True
            for i in range(length):
                char_logit = logit[i].topk(self.k)[1]
                char_label = label[i].argmax(-1)
                if char_label in char_logit: self.correct_num_char += 1
                else: word_flag = False
                self.total_num_char += 1
            if pt_length == length and word_flag:
                self.correct_num_word += 1
            self.total_num_word += 1

    def on_epoch_end(self, last_metrics, **kwargs):
        mets = [self.correct_num_char / self.total_num_char,
                self.correct_num_word / self.total_num_word,
                0., 0., 0.]
        return add_metrics(last_metrics, mets)


class DumpPrediction(LearnerCallback):

    def __init__(self, learn, dataset, charset_path, model_eval, image_only=False, debug=False):
        super().__init__(learn=learn)
        self.debug = debug
        self.model_eval = model_eval or 'alignment'
        self.image_only = image_only
        assert self.model_eval in ['vision', 'language', 'alignment']

        self.dataset, self.root = dataset, Path(self.learn.path)/f'{dataset}-{self.model_eval}'
        self.attn_root = self.root/'attn'
        self.charset = CharsetMapper(charset_path)
        if self.root.exists(): shutil.rmtree(self.root)
        self.root.mkdir(), self.attn_root.mkdir()

        self.pil = transforms.ToPILImage()
        self.tensor = transforms.ToTensor()
        size = self.learn.data.img_h, self.learn.data.img_w
        self.resize = transforms.Resize(size=size, interpolation=0)
        self.c = 0

    def on_batch_end(self, last_input, last_output, last_target, **kwargs):
        if isinstance(last_output, (tuple, list)):
            for res in last_output:
                if res['name'] == self.model_eval: pt_text = res['pt_text']
                if res['name'] == 'vision': attn_scores = res['attn_scores'].detach().cpu()
                if res['name'] == self.model_eval: logits = res['logits']
        else:
            pt_text = last_output['pt_text']
            attn_scores = last_output['attn_scores'].detach().cpu()
            logits = last_output['logits']

        images = last_input[0] if isinstance(last_input, (tuple, list)) else last_input
        images = images.detach().cpu()
        pt_text = [self.charset.trim(t) for t in pt_text]
        gt_label = last_target[0]
        if gt_label.dim() == 3: gt_label = gt_label.argmax(dim=-1)  # one-hot label
        gt_text = [self.charset.get_text(l, trim=True) for l in gt_label]

        prediction, false_prediction = [], []
        for gt, pt, image, attn, logit in zip(gt_text, pt_text, images, attn_scores, logits):
            prediction.append(f'{gt}\t{pt}\n')
            if gt != pt:
                if self.debug:
                    scores = torch.softmax(logit, dim=-1)[:max(len(pt), len(gt)) + 1]
                    logging.info(f'{self.c} gt {gt}, pt {pt}, logit {logit.shape}, scores {scores.topk(5, dim=-1)}')
                false_prediction.append(f'{gt}\t{pt}\n')

            image = self.learn.data.denorm(image)
            if not self.image_only:
                image_np = np.array(self.pil(image))
                attn_pil = [self.pil(a) for a in attn[:, None, :, :]]
                attn = [self.tensor(self.resize(a)).repeat(3, 1, 1) for a in attn_pil]
                attn_sum = np.array([np.array(a) for a in attn_pil[:len(pt)]]).sum(axis=0)
                blended_sum = self.tensor(blend_mask(image_np, attn_sum))
                blended = [self.tensor(blend_mask(image_np, np.array(a))) for a in attn_pil]
                save_image = torch.stack([image] + attn + [blended_sum] + blended)
                save_image = save_image.view(2, -1, *save_image.shape[1:])
                save_image = save_image.permute(1, 0, 2, 3, 4).flatten(0, 1)
                vutils.save_image(save_image, self.attn_root/f'{self.c}_{gt}_{pt}.jpg',
                                  nrow=2, normalize=True, scale_each=True)
            else:
                self.pil(image).save(self.attn_root/f'{self.c}_{gt}_{pt}.jpg')
            self.c += 1

        with open(self.root/f'{self.model_eval}.txt', 'a') as f: f.writelines(prediction)
        with open(self.root/f'{self.model_eval}-false.txt', 'a') as f: f.writelines(false_prediction)
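For context, `IterationCallback` above maintains a rolling `checkpoint.yaml` in the work directory (created in `on_train_begin`, rewritten every `save_iters` via `yaml.dump`). Based on the keys the code writes, the file would look roughly like this; the filenames are illustrative:

```yaml
# Illustrative workdir/<name>/checkpoint.yaml after two periodic saves
all_checkpoints:            # newest first, trimmed to checkpoint_keep_num entries
- train-iternet_1_40000     # filename format: {name}_{epoch}_{iteration}
- train-iternet_0_20000
current_checkpoint: train-iternet_1_40000
```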
configs/pretrain_itervm.yaml
ADDED
@@ -0,0 +1,60 @@
global:
  name: pretrain-itervm
  phase: train
  stage: pretrain-vision
  workdir: workdir
  seed: ~

dataset:
  train: {
    roots: ['data/training/MJ/MJ_train/',
            'data/training/MJ/MJ_test/',
            'data/training/MJ/MJ_valid/',
            'data/training/ST'],
    batch_size: 384
  }
  test: {
    roots: ['data/evaluation/IIIT5k_3000',
            'data/evaluation/SVT',
            'data/evaluation/SVTP',
            'data/evaluation/IC13_857',
            'data/evaluation/IC15_1811',
            'data/evaluation/CUTE80'],
    batch_size: 384
  }
  data_aug: True
  multiscales: False
  num_workers: 14

training:
  epochs: 8
  show_iters: 50
  eval_iters: 3000
  save_iters: 3000

optimizer:
  type: Adam
  true_wd: False
  wd: 0.0
  bn_wd: False
  clip_grad: 20
  lr: 0.0001
  args: {
    betas: !!python/tuple [0.9, 0.999], # for default Adam
  }
  scheduler: {
    periods: [6, 2],
    gamma: 0.1,
  }

model:
  name: 'modules.model_vision.BaseIterVision'
  checkpoint: ~
  vision: {
    loss_weight: 1.,
    attention: 'position',
    backbone: 'transformer',
    backbone_ln: 3,
    iter_size: 3,
    backbone_alpha_d: 0.5,
  }
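If the scheduler here follows the usual multi-step convention, `periods: [6, 2]` with `gamma: 0.1` would keep the learning rate at 1e-4 for the first six epochs and drop it to 1e-5 for the last two; this reading is an assumption, since the scheduler implementation itself is not part of this diff.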
configs/pretrain_language_model.yaml
ADDED
@@ -0,0 +1,45 @@
global:
  name: pretrain-language-model
  phase: train
  stage: pretrain-language
  workdir: workdir
  seed: ~

dataset:
  train: {
    roots: ['data/WikiText-103.csv'],
    batch_size: 4096
  }
  test: {
    roots: ['data/WikiText-103_eval_d1.csv'],
    batch_size: 4096
  }

training:
  epochs: 80
  show_iters: 50
  eval_iters: 6000
  save_iters: 3000

optimizer:
  type: Adam
  true_wd: False
  wd: 0.0
  bn_wd: False
  clip_grad: 20
  lr: 0.0001
  args: {
    betas: !!python/tuple [0.9, 0.999], # for default Adam
  }
  scheduler: {
    periods: [70, 10],
    gamma: 0.1,
  }

model:
  name: 'modules.model_language.BCNLanguage'
  language: {
    num_layers: 4,
    loss_weight: 1.,
    use_self_attn: False
  }
configs/pretrain_vm.yaml
ADDED
@@ -0,0 +1,51 @@
global:
  name: pretrain-vm
  phase: train
  stage: pretrain-vision
  workdir: workdir
  seed: ~

dataset:
  train: {
    roots: ['output_tbell_dataset/'],
    batch_size: 20
  }
  test: {
    roots: ['output_tbell_dataset/'],
    batch_size: 20
  }
  data_aug: True
  multiscales: False
  num_workers: 1

training:
  epochs: 8
  show_iters: 50
  eval_iters: 50
  # save_iters: 3000

optimizer:
  type: Adam
  true_wd: False
  wd: 0.0
  bn_wd: False
  clip_grad: 20
  lr: 0.0001
  args: {
    betas: !!python/tuple [0.9, 0.999], # for default Adam
  }
  scheduler: {
    periods: [6, 2],
    gamma: 0.1,
  }

model:
  name: 'modules.model_vision.BaseVision'
  checkpoint: ~
  vision: {
    loss_weight: 1.,
    attention: 'position',
    backbone: 'transformer',
    backbone_ln: 3,
    backbone_alpha_d: 0.5,
  }
configs/template.yaml
ADDED
@@ -0,0 +1,67 @@
global:
  name: exp
  phase: train
  stage: pretrain-vision
  workdir: /tmp/workdir
  seed: ~

dataset:
  train: {
    roots: ['data/training/MJ/MJ_train/',
            'data/training/MJ/MJ_test/',
            'data/training/MJ/MJ_valid/',
            'data/training/ST'],
    batch_size: 128
  }
  test: {
    roots: ['data/evaluation/IIIT5k_3000',
            'data/evaluation/SVT',
            'data/evaluation/SVTP',
            'data/evaluation/IC13_857',
            'data/evaluation/IC15_1811',
            'data/evaluation/CUTE80'],
    batch_size: 128
  }
  charset_path: data/charset_36.txt
  num_workers: 4
  max_length: 25 # 30
  image_height: 32
  image_width: 128
  case_sensitive: False
  eval_case_sensitive: False
  data_aug: True
  multiscales: False
  pin_memory: True
  smooth_label: False
  smooth_factor: 0.1
  one_hot_y: True
  use_sm: False

training:
  epochs: 6
  show_iters: 50
  eval_iters: 3000
  save_iters: 20000
  start_iters: 0
  stats_iters: 100000

optimizer:
  type: Adadelta # Adadelta, Adam
  true_wd: False
  wd: 0. # 0.001
  bn_wd: False
  args: {
    # betas: !!python/tuple [0.9, 0.99], # betas=(0.9,0.99) for AdamW
    # betas: !!python/tuple [0.9, 0.999], # for default Adam
  }
  clip_grad: 20
  lr: [1.0, 1.0, 1.0] # lr: [0.005, 0.005, 0.005]
  scheduler: {
    periods: [3, 2, 1],
    gamma: 0.1,
  }

model:
  name: 'modules.model_iternet.IterNet'
  checkpoint: ~
  strict: True
configs/train_iternet.yaml
ADDED
@@ -0,0 +1,65 @@
global:
  name: train-iternet
  phase: train
  stage: train-super
  workdir: workdir
  seed: ~

dataset:
  train: {
    roots: ['output_pixelplanet_dataset/'],
    batch_size: 20
  }
  test: {
    roots: ['output_pixelplanet_dataset/'],
    batch_size: 20
  }
  data_aug: True
  multiscales: False
  num_workers: 8

training:
  epochs: 1000
  show_iters: 500
  eval_iters: 500
  # save_iters: 1

optimizer:
  type: Adam
  true_wd: False
  wd: 0.0
  bn_wd: False
  clip_grad: 20
  lr: 0.0001
  args: {
    betas: !!python/tuple [0.9, 0.999], # for default Adam
  }
  scheduler: {
    periods: [6, 4],
    gamma: 0.1,
  }

model:
  name: 'modules.model_iternet.IterNet'
  iter_size: 3
  ensemble: ''
  use_vision: False
  vision: {
    checkpoint: workdir/train-iternet/best-train-iternet.pth,
    loss_weight: 1.,
    attention: 'position',
    backbone: 'transformer',
    backbone_ln: 3,
    iter_size: 3,
    backbone_alpha_d: 0.5,
  }
  # language: {
  #   checkpoint: workdir/pretrain-language-model/pretrain-language-model.pth,
  #   num_layers: 4,
  #   loss_weight: 1.,
  #   detach: True,
  #   use_self_attn: False
  # }
  alignment: {
    loss_weight: 1.,
  }
dataset.py
ADDED
@@ -0,0 +1,278 @@
import logging
import re

import cv2
import lmdb
import six
from fastai.vision import *
from torchvision import transforms

from transforms import CVColorJitter, CVDeterioration, CVGeometry
from utils import CharsetMapper, onehot


class ImageDataset(Dataset):
    "`ImageDataset` reads data from an LMDB database."

    def __init__(self,
                 path:PathOrStr,
                 is_training:bool=True,
                 img_h:int=32,
                 img_w:int=100,
                 max_length:int=25,
                 check_length:bool=True,
                 case_sensitive:bool=False,
                 charset_path:str='data/charset_36.txt',
                 convert_mode:str='RGB',
                 data_aug:bool=True,
                 deteriorate_ratio:float=0.,
                 multiscales:bool=True,
                 one_hot_y:bool=True,
                 return_idx:bool=False,
                 return_raw:bool=False,
                 **kwargs):
        self.path, self.name = Path(path), Path(path).name
        assert self.path.is_dir() and self.path.exists(), f"{path} is not a valid directory."
        self.convert_mode, self.check_length = convert_mode, check_length
        self.img_h, self.img_w = img_h, img_w
        self.max_length, self.one_hot_y = max_length, one_hot_y
        self.return_idx, self.return_raw = return_idx, return_raw
        self.case_sensitive, self.is_training = case_sensitive, is_training
        self.data_aug, self.multiscales = data_aug, multiscales
        self.charset = CharsetMapper(charset_path, max_length=max_length+1)
        self.c = self.charset.num_classes

        self.env = lmdb.open(str(path), readonly=True, lock=False, readahead=False, meminit=False)
        assert self.env, f'Cannot open LMDB dataset from {path}.'
        with self.env.begin(write=False) as txn:
            self.length = int(txn.get('num-samples'.encode()))

        if self.is_training and self.data_aug:
            self.augment_tfs = transforms.Compose([
                CVGeometry(degrees=45, translate=(0.0, 0.0), scale=(0.5, 2.), shear=(45, 15), distortion=0.5, p=0.5),
                CVDeterioration(var=20, degrees=6, factor=4, p=0.25),
                CVColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1, p=0.25)
            ])
        self.totensor = transforms.ToTensor()

    def __len__(self): return self.length

    def _next_image(self, index):
        next_index = random.randint(0, len(self) - 1)
        return self.get(next_index)

    def _check_image(self, x, pixels=6):
        if x.size[0] <= pixels or x.size[1] <= pixels: return False
        else: return True

    def resize_multiscales(self, img, borderType=cv2.BORDER_CONSTANT):
        def _resize_ratio(img, ratio, fix_h=True):
            if ratio * self.img_w < self.img_h:
                if fix_h: trg_h = self.img_h
                else: trg_h = int(ratio * self.img_w)
                trg_w = self.img_w
            else: trg_h, trg_w = self.img_h, int(self.img_h / ratio)
            img = cv2.resize(img, (trg_w, trg_h))
            pad_h, pad_w = (self.img_h - trg_h) / 2, (self.img_w - trg_w) / 2
            top, bottom = math.ceil(pad_h), math.floor(pad_h)
            left, right = math.ceil(pad_w), math.floor(pad_w)
            img = cv2.copyMakeBorder(img, top, bottom, left, right, borderType)
            return img

        if self.is_training:
            if random.random() < 0.5:
                base, maxh, maxw = self.img_h, self.img_h, self.img_w
                h, w = random.randint(base, maxh), random.randint(base, maxw)
                return _resize_ratio(img, h/w)
            else: return _resize_ratio(img, img.shape[0] / img.shape[1])  # keep aspect ratio
        else: return _resize_ratio(img, img.shape[0] / img.shape[1])  # keep aspect ratio

    def resize(self, img):
        if self.multiscales: return self.resize_multiscales(img, cv2.BORDER_REPLICATE)
        else: return cv2.resize(img, (self.img_w, self.img_h))

    def get(self, idx):
        with self.env.begin(write=False) as txn:
            image_key, label_key = f'image-{idx+1:09d}', f'label-{idx+1:09d}'
            try:
                label = str(txn.get(label_key.encode()), 'utf-8')  # label
                label = re.sub('[^0-9a-zA-Z]+', '', label)
                if self.check_length and self.max_length > 0:
                    if len(label) > self.max_length or len(label) <= 0:
                        #logging.info(f'Long or short text image is found: {self.name}, {idx}, {label}, {len(label)}')
                        return self._next_image(idx)
                label = label[:self.max_length]

                imgbuf = txn.get(image_key.encode())  # image
                buf = six.BytesIO()
                buf.write(imgbuf)
                buf.seek(0)
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore", UserWarning)  # EXIF warning from TiffPlugin
                    image = PIL.Image.open(buf).convert(self.convert_mode)
                if self.is_training and not self._check_image(image):
                    #logging.info(f'Invalid image is found: {self.name}, {idx}, {label}, {len(label)}')
                    return self._next_image(idx)
            except:
                import traceback
                traceback.print_exc()
                logging.info(f'Corrupted image is found: {self.name}, {idx}, {label}, {len(label)}')
                return self._next_image(idx)
            return image, label, idx

    def _process_training(self, image):
        if self.data_aug: image = self.augment_tfs(image)
        image = self.resize(np.array(image))
        return image

    def _process_test(self, image):
        return self.resize(np.array(image))  # TODO: move is_training to here

    def __getitem__(self, idx):
        image, text, idx_new = self.get(idx)
        if not self.is_training: assert idx == idx_new, f'idx {idx} != idx_new {idx_new} during testing.'

        if self.is_training: image = self._process_training(image)
        else: image = self._process_test(image)
        if self.return_raw: return image, text
        image = self.totensor(image)

        length = tensor(len(text) + 1).to(dtype=torch.long)  # one for end token
        label = self.charset.get_labels(text, case_sensitive=self.case_sensitive)
        label = tensor(label).to(dtype=torch.long)
        if self.one_hot_y: label = onehot(label, self.charset.num_classes)

        if self.return_idx: y = [label, length, idx_new]
        else: y = [label, length]
        return image, y


class TextDataset(Dataset):
    def __init__(self,
                 path:PathOrStr,
                 delimiter:str='\t',
                 max_length:int=25,
                 charset_path:str='data/charset_36.txt',
                 case_sensitive=False,
                 one_hot_x=True,
                 one_hot_y=True,
                 is_training=True,
                 smooth_label=False,
                 smooth_factor=0.2,
                 use_sm=False,
                 **kwargs):
        self.path = Path(path)
        self.case_sensitive, self.use_sm = case_sensitive, use_sm
        self.smooth_factor, self.smooth_label = smooth_factor, smooth_label
        self.charset = CharsetMapper(charset_path, max_length=max_length+1)
        self.one_hot_x, self.one_hot_y, self.is_training = one_hot_x, one_hot_y, is_training
        if self.is_training and self.use_sm: self.sm = SpellingMutation(charset=self.charset)

        dtype = {'inp': str, 'gt': str}
        self.df = pd.read_csv(self.path, dtype=dtype, delimiter=delimiter, na_filter=False)
        self.inp_col, self.gt_col = 0, 1

    def __len__(self): return len(self.df)

    def __getitem__(self, idx):
        text_x = self.df.iloc[idx, self.inp_col]
        text_x = re.sub('[^0-9a-zA-Z]+', '', text_x)
        if not self.case_sensitive: text_x = text_x.lower()
        if self.is_training and self.use_sm: text_x = self.sm(text_x)

        length_x = tensor(len(text_x) + 1).to(dtype=torch.long)  # one for end token
        label_x = self.charset.get_labels(text_x, case_sensitive=self.case_sensitive)
        label_x = tensor(label_x)
        if self.one_hot_x:
            label_x = onehot(label_x, self.charset.num_classes)
            if self.is_training and self.smooth_label:
                label_x = torch.stack([self.prob_smooth_label(l) for l in label_x])
        x = [label_x, length_x]

        text_y = self.df.iloc[idx, self.gt_col]
        text_y = re.sub('[^0-9a-zA-Z]+', '', text_y)
        if not self.case_sensitive: text_y = text_y.lower()
        length_y = tensor(len(text_y) + 1).to(dtype=torch.long)  # one for end token
        label_y = self.charset.get_labels(text_y, case_sensitive=self.case_sensitive)
        label_y = tensor(label_y)
        if self.one_hot_y: label_y = onehot(label_y, self.charset.num_classes)
        y = [label_y, length_y]

        return x, y

    def prob_smooth_label(self, one_hot):
        one_hot = one_hot.float()
        delta = torch.rand([]) * self.smooth_factor
        num_classes = len(one_hot)
        noise = torch.rand(num_classes)
        noise = noise / noise.sum() * delta
        one_hot = one_hot * (1 - delta) + noise
        return one_hot


class SpellingMutation(object):
    def __init__(self, pn0=0.7, pn1=0.85, pn2=0.95, pt0=0.7, pt1=0.85, charset=None):
        """
        Args:
            pn0: the prob of modifying no character is (pn0)
            pn1: the prob of modifying one character is (pn1 - pn0)
            pn2: the prob of modifying two characters is (pn2 - pn1),
                and three (1 - pn2)
            pt0: the prob of the replacing operation is pt0,
            pt1: the prob of the inserting operation is (pt1 - pt0),
                and the deleting operation is (1 - pt1)
        """
        super().__init__()
        self.pn0, self.pn1, self.pn2 = pn0, pn1, pn2
        self.pt0, self.pt1 = pt0, pt1
        self.charset = charset
        logging.info(f'the probs: pn0={self.pn0}, pn1={self.pn1} ' +
                     f'pn2={self.pn2}, pt0={self.pt0}, pt1={self.pt1}')

    def is_digit(self, text, ratio=0.5):
        length = max(len(text), 1)
        digit_num = sum([t in self.charset.digits for t in text])
        if digit_num / length < ratio: return False
        return True

    def is_unk_char(self, char):
        # return char == self.charset.unk_char
        return (char not in self.charset.digits) and (char not in self.charset.alphabets)

    def get_num_to_modify(self, length):
        prob = random.random()
        if prob < self.pn0: num_to_modify = 0
        elif prob < self.pn1: num_to_modify = 1
        elif prob < self.pn2: num_to_modify = 2
        else: num_to_modify = 3

        if length <= 1: num_to_modify = 0
        elif length >= 2 and length <= 4: num_to_modify = min(num_to_modify, 1)
        else: num_to_modify = min(num_to_modify, length // 2)  # smaller than length // 2
        return num_to_modify

    def __call__(self, text, debug=False):
        if self.is_digit(text): return text
        length = len(text)
        num_to_modify = self.get_num_to_modify(length)
        if num_to_modify <= 0: return text

        chars = []
        index = np.arange(0, length)
        random.shuffle(index)
        index = index[: num_to_modify]
        if debug: self.index = index
        for i, t in enumerate(text):
            if i not in index: chars.append(t)
            elif self.is_unk_char(t): chars.append(t)
            else:
                prob = random.random()
                if prob < self.pt0:  # replace
                    chars.append(random.choice(self.charset.alphabets))
                elif prob < self.pt1:  # insert
                    chars.append(random.choice(self.charset.alphabets))
                    chars.append(t)
                else:  # delete
                    continue
        new_text = ''.join(chars[: self.charset.max_length-1])
        return new_text if len(new_text) >= 1 else text
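As a usage illustration for the dataset above, a minimal sketch (the LMDB path is illustrative, and the exact label shape depends on `CharsetMapper`):

```python
# Hedged usage sketch: read one sample from an LMDB folder with ImageDataset.
from dataset import ImageDataset

ds = ImageDataset('data/training/MJ/MJ_valid', is_training=False, multiscales=False)
image, (label, length) = ds[0]   # image: C x img_h x img_w tensor; label: one-hot by default
print(len(ds), image.shape, length)
```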
demo.py
ADDED
@@ -0,0 +1,109 @@
import argparse
import logging
import os
import glob
import tqdm
import torch
import PIL
import cv2
import numpy as np
import torch.nn.functional as F
from torchvision import transforms
from utils import Config, Logger, CharsetMapper

def get_model(config):
    import importlib
    names = config.model_name.split('.')
    module_name, class_name = '.'.join(names[:-1]), names[-1]
    cls = getattr(importlib.import_module(module_name), class_name)
    model = cls(config)
    logging.info(model)
    model = model.eval()
    return model

def preprocess(img, width, height):
    img = cv2.resize(np.array(img), (width, height))
    img = transforms.ToTensor()(img).unsqueeze(0)
    mean = torch.tensor([0.485, 0.456, 0.406])  # ImageNet mean
    std = torch.tensor([0.229, 0.224, 0.225])   # ImageNet std
    return (img-mean[...,None,None]) / std[...,None,None]

def postprocess(output, charset, model_eval):
    def _get_output(last_output, model_eval):
        if isinstance(last_output, (tuple, list)):
            for res in last_output:
                if res['name'] == model_eval: output = res
        else: output = last_output
        return output

    def _decode(logit):
        """ Greedy decode """
        out = F.softmax(logit, dim=2)
        pt_text, pt_scores, pt_lengths = [], [], []
        for o in out:
            text = charset.get_text(o.argmax(dim=1), padding=False, trim=False)
            text = text.split(charset.null_char)[0]  # end at end-token
            pt_text.append(text)
            pt_scores.append(o.max(dim=1)[0])
            pt_lengths.append(min(len(text) + 1, charset.max_length))  # one for end-token
        return pt_text, pt_scores, pt_lengths

    output = _get_output(output, model_eval)
    logits, pt_lengths = output['logits'], output['pt_lengths']
    pt_text, pt_scores, pt_lengths_ = _decode(logits)

    return pt_text, pt_scores, pt_lengths_

def load(model, file, device=None, strict=True):
    if device is None: device = 'cpu'
    elif isinstance(device, int): device = torch.device('cuda', device)
    assert os.path.isfile(file)
    state = torch.load(file, map_location=device)
    if set(state.keys()) == {'model', 'opt'}:
        state = state['model']
    model.load_state_dict(state, strict=strict)
    return model

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--config', type=str, default='configs/train_iternet.yaml',
                        help='path to config file')
    parser.add_argument('--input', type=str, default='figures/demo')
    parser.add_argument('--cuda', type=int, default=-1)
    parser.add_argument('--checkpoint', type=str, default='workdir/train-iternet/best-train-iternet.pth')
    parser.add_argument('--model_eval', type=str, default='alignment',
                        choices=['alignment', 'vision', 'language'])
    args = parser.parse_args()
    config = Config(args.config)
    if args.checkpoint is not None: config.model_checkpoint = args.checkpoint
    if args.model_eval is not None: config.model_eval = args.model_eval
    config.global_phase = 'test'
    config.model_vision_checkpoint, config.model_language_checkpoint = None, None
    device = 'cpu' if args.cuda < 0 else f'cuda:{args.cuda}'

    Logger.init(config.global_workdir, config.global_name, config.global_phase)
    Logger.enable_file()
    logging.info(config)

    logging.info('Construct model.')
    model = get_model(config).to(device)
    model = load(model, config.model_checkpoint, device=device)
    charset = CharsetMapper(filename=config.dataset_charset_path,
                            max_length=config.dataset_max_length + 1)

    if os.path.isdir(args.input):
        paths = [os.path.join(args.input, fname) for fname in os.listdir(args.input)]
    else:
        paths = glob.glob(os.path.expanduser(args.input))
        assert paths, "The input path(s) were not found"
    paths = sorted(paths)
    for path in tqdm.tqdm(paths):
        img = PIL.Image.open(path).convert('RGB')
        img = preprocess(img, config.dataset_image_width, config.dataset_image_height)
        img = img.to(device)
        res = model(img)
        pt_text, _, __ = postprocess(res, charset, config.model_eval)
        logging.info(f'{path}: {pt_text[0]}')

if __name__ == '__main__':
    main()
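For instance, the script above can be pointed at a wildcard of images and a GPU (a usage sketch; the flags match the argparse definitions in `main`):

```
python demo.py --config=configs/train_iternet.yaml --input='figures/demo/*.png' \
    --cuda 0 --checkpoint workdir/train-iternet/best-train-iternet.pth --model_eval alignment
```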
figures/demo/096314.png
ADDED
figures/demo/096314_.png
ADDED
figures/demo/096314__.png
ADDED
figures/demo/096314___.png
ADDED
figures/demo/327806.png
ADDED
figures/demo/327806_.png
ADDED
figures/demo/327806__.png
ADDED
figures/demo/327806___.png
ADDED
figures/demo/365560__.png
ADDED
figures/demo/365560___.png
ADDED
figures/demo/418760__.png
ADDED
figures/demo/418760___.png
ADDED
figures/demo/444825.png
ADDED
figures/demo/444825_.png
ADDED
figures/demo/444825__.png
ADDED
figures/demo/444825___.png
ADDED
figures/demo/451010.png
ADDED
figures/demo/451010_.png
ADDED
figures/demo/451010__.png
ADDED
figures/demo/451010___.png
ADDED
figures/demo/502850.png
ADDED
figures/demo/502850_.png
ADDED
figures/demo/502850__.png
ADDED
figures/demo/502850___.png
ADDED
figures/demo/534995.png
ADDED
figures/demo/534995_.png
ADDED
figures/demo/534995__.png
ADDED
figures/demo/534995___.png
ADDED
figures/demo/542200__.png
ADDED
figures/demo/542200___.png
ADDED
figures/demo/605564.png
ADDED
figures/demo/605564_.png
ADDED
figures/demo/605564__.png
ADDED
figures/demo/605564___.png
ADDED
figures/demo/614224.png
ADDED
figures/demo/614224_.png
ADDED
figures/demo/614224__.png
ADDED
figures/demo/614224___.png
ADDED
figures/demo/684224.png
ADDED