Add usage example

#3
by merve HF Staff - opened
Files changed (1)
  1. README.md +58 -0
README.md CHANGED
@@ -8,6 +8,7 @@ base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
  library_name: transformers
  new_version: allenai/olmOCR-7B-0825
+ pipeline_tag: image-text-to-text
  ---
 
  <img alt="olmOCR Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmocr/olmocr.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
@@ -34,6 +35,63 @@ This model expects as input a single document image, rendered such that the long
  The prompt must then contain the additional metadata from the document, and the easiest way to generate this
  is to use the methods provided by the [olmOCR toolkit](https://github.com/allenai/olmocr).
 
+ A simple way to run inference with transformers is as follows:
+
+ ```python
+ import torch
+ from transformers import AutoModelForImageTextToText, AutoProcessor
+
+ model_id = "allenai/olmOCR-7B-0725"
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda").eval()
+
+ # olmOCR prompt template; {base_text} is a placeholder for the raw text previously
+ # extracted for the page (e.g. with the olmOCR toolkit), including position metadata.
+ PROMPT = """
+ Below is the image of one page of a PDF document, as well as some raw textual content that
+ was previously extracted for it that includes position information for each image and
+ block of text (the origin [0x0] of the coordinates is in the lower left corner of the
+ image).
+ Just return the plain text representation of this document as if you were reading it
+ naturally.
+ Turn equations into a LaTeX representation, and tables into markdown format. Remove the
+ headers and footers, but keep references and footnotes.
+ Read any natural handwriting.
+ This is likely one page out of several in the document, so be sure to preserve any sentences
+ that come from the previous page, or continue onto the next page, exactly as they are.
+ If there is no text at all that you think you should read, you can output null.
+ Do not hallucinate.
+ RAW_TEXT_START
+ {base_text}
+ RAW_TEXT_END
+ """
+
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {
+                 "type": "image",
+                 "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm_table.png",
+             },
+             {"type": "text", "text": PROMPT},
+         ],
+     }
+ ]
+
+ # Apply the chat template and tokenize the prompt together with the image.
+ inputs = processor.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     tokenize=True,
+     return_dict=True,
+     return_tensors="pt",
+ ).to(model.device)
+
+ output_ids = model.generate(**inputs, max_new_tokens=1000)
+ # Strip the prompt tokens so only the newly generated text is decoded.
+ generated_ids = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, output_ids)]
+ output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
+ print(output_text)
+ ```
 
  ## License and use
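
For reference, the `{base_text}` placeholder in the prompt above is the page's raw "anchor" text with position metadata, which the README says is easiest to generate with the [olmOCR toolkit](https://github.com/allenai/olmocr). Below is a minimal sketch of wiring that in; it assumes the toolkit's `render_pdf_to_base64png`, `get_anchor_text`, and `build_finetuning_prompt` helpers with the signatures shown, and the PDF path and page number are placeholders:

```python
import base64
from io import BytesIO

from PIL import Image

# olmOCR toolkit helpers (pip install olmocr); names and signatures are assumed
# from the toolkit's documentation and may differ in newer releases.
from olmocr.data.renderpdf import render_pdf_to_base64png
from olmocr.prompts import build_finetuning_prompt
from olmocr.prompts.anchor import get_anchor_text

pdf_path = "paper.pdf"  # placeholder: any local PDF
page_number = 1

# Render the page to an image (longest dimension capped) for the "image" message part.
image_base64 = render_pdf_to_base64png(pdf_path, page_number, target_longest_image_dim=1024)
page_image = Image.open(BytesIO(base64.b64decode(image_base64)))

# Extract the positional anchor text and build the full prompt, i.e. the content
# that fills the {base_text} placeholder in the hand-written PROMPT above.
anchor_text = get_anchor_text(pdf_path, page_number, pdf_engine="pdfreport", target_length=4000)
prompt = build_finetuning_prompt(anchor_text)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": page_image},
            {"type": "text", "text": prompt},
        ],
    }
]
```

The resulting `messages` list can then go through the same `processor.apply_chat_template(...)` and `model.generate(...)` steps as in the example added by this PR.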