Add usage example

#1
by merve HF Staff - opened
Files changed (1)
  1. README.md +57 -0
README.md CHANGED
@@ -7,6 +7,7 @@ datasets:
  base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
  library_name: transformers
+ pipeline_tag: image-text-to-text
  ---

  <img alt="olmOCR Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmocr/olmocr.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
@@ -32,7 +33,63 @@ This model expects as input a single document image, rendered such that the long

  The prompt must then contain the additional metadata from the document, and the easiest way to generate this
  is to use the methods provided by the [olmOCR toolkit](https://github.com/allenai/olmocr).
+ A simple way to run inference with Transformers is as follows:
+
+ ```python
+ import torch
+ from transformers import AutoModelForImageTextToText, AutoProcessor
+
+ # Load the olmOCR checkpoint and its processor
+ model_id = "allenai/olmOCR-7B-0825"
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda").eval()
+
+ # Prompt used by olmOCR; {base_text} is a placeholder for the raw text extracted
+ # from the document (see the anchor-text sketch after the diff)
+ PROMPT = """
+ Below is the image of one page of a PDF document, as well as some raw textual content that
+ was previously extracted for it that includes position information for each image and
+ block of text (The origin [0x0] of the coordinates is in the lower left corner of the
+ image).
+ Just return the plain text representation of this document as if you were reading it
+ naturally.
+ Turn equations into a LaTeX representation, and tables into markdown format. Remove the
+ headers and footers, but keep references and footnotes.
+ Read any natural handwriting.
+ This is likely one page out of several in the document, so be sure to preserve any sentences
+ that come from the previous page, or continue onto the next page, exactly as they are.
+ If there is no text at all that you think you should read, you can output null.
+ Do not hallucinate.
+ RAW_TEXT_START
+ {base_text}
+ RAW_TEXT_END
+ """
+
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {
+                 "type": "image",
+                 "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm_table.png",
+             },
+             {"type": "text", "text": PROMPT},
+         ],
+     }
+ ]
+
+ inputs = processor.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     tokenize=True,
+     return_dict=True,
+     return_tensors="pt",
+ ).to(model.device)
+
+ output_ids = model.generate(**inputs, max_new_tokens=1000)
+ # Keep only the newly generated tokens, dropping the echoed prompt
+ generated_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
+ output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
+ print(output_text)
+ ```
 
  ## License and use
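
The snippet in the diff leaves `{base_text}` as a literal placeholder and prompts on a plain image URL. As the card notes, the prompt is meant to carry the raw text and layout metadata extracted from the document, which the olmOCR toolkit can generate. A minimal sketch of that step, assuming the toolkit's documented `render_pdf_to_base64png`, `get_anchor_text`, and `build_finetuning_prompt` helpers (the file path and target image dimension below are illustrative assumptions):

```python
# Sketch only: build the olmOCR prompt from a local PDF with the olmOCR toolkit
# (pip install olmocr). Helper names follow the toolkit's documentation; the file
# path and target image dimension are illustrative assumptions.
import base64
from io import BytesIO

from PIL import Image
from olmocr.data.renderpdf import render_pdf_to_base64png
from olmocr.prompts import build_finetuning_prompt
from olmocr.prompts.anchor import get_anchor_text

pdf_path = "paper.pdf"  # hypothetical local PDF
page_num = 1

# Render the page so its longest dimension matches what the model expects
image_base64 = render_pdf_to_base64png(pdf_path, page_num, target_longest_image_dim=1024)
page_image = Image.open(BytesIO(base64.b64decode(image_base64)))

# Extract the raw text with layout information ("anchor text") and build the prompt,
# which fills the {base_text} slot shown in the PROMPT string above
anchor_text = get_anchor_text(pdf_path, page_num, pdf_engine="pdfreport", target_length=4000)
prompt = build_finetuning_prompt(anchor_text)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": page_image},
            {"type": "text", "text": prompt},
        ],
    }
]
```

The resulting `messages` can then go through the same `apply_chat_template` / `generate` / `batch_decode` flow as in the diff above.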
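The `pipeline_tag: image-text-to-text` added to the metadata also lets the checkpoint be driven through the high-level `pipeline` API. A rough sketch, assuming the `image-text-to-text` pipeline of recent Transformers releases (argument names follow the pipeline docs; the short instruction here stands in for the full olmOCR prompt):

```python
# Sketch only: high-level usage that the image-text-to-text pipeline tag enables.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="allenai/olmOCR-7B-0825",
    torch_dtype=torch.float16,
    device=0,  # assumes a single CUDA device
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm_table.png"},
            {"type": "text", "text": "Return the plain text representation of this document as if you were reading it naturally."},
        ],
    }
]

outputs = pipe(text=messages, max_new_tokens=1000, return_full_text=False)
print(outputs[0]["generated_text"])
```

For real documents, the full prompt built with the toolkit (previous sketch) should replace the short instruction.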