merve (HF Staff) committed
Commit 61e6832 · verified · 1 Parent(s): 03c562f

Add usage example

Files changed (1)
  1. README.md +58 -0
README.md CHANGED
@@ -34,6 +34,64 @@ This model expects as input a single document image, rendered such that the long
 The prompt must then contain the additional metadata from the document, and the easiest way to generate this
 is to use the methods provided by the [olmOCR toolkit](https://github.com/allenai/olmocr).
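A minimal sketch of that preparation step, assuming the `render_pdf_to_base64png`, `get_anchor_text`, and `build_finetuning_prompt` helpers shown in the olmOCR repository's usage example (names, defaults, and signatures may differ across toolkit versions):

```python
# Sketch: prepare one PDF page for the model using the olmOCR toolkit.
# Helper names follow the olmOCR repository's usage example; verify them
# against the toolkit version you have installed.
from olmocr.data.renderpdf import render_pdf_to_base64png
from olmocr.prompts import build_finetuning_prompt
from olmocr.prompts.anchor import get_anchor_text

pdf_path = "paper.pdf"  # hypothetical input file
page_num = 1

# Render the page to a base64 PNG at the resolution the model card recommends.
image_b64 = render_pdf_to_base64png(pdf_path, page_num, target_longest_image_dim=1024)

# Extract the raw text plus position metadata ("anchor text") for the page,
# then wrap it in the prompt template the model was trained with.
anchor_text = get_anchor_text(pdf_path, page_num, pdf_engine="pdfreport", target_length=4000)
prompt = build_finetuning_prompt(anchor_text)
```

The resulting `prompt` (together with the rendered page image) is what would go into the chat message in the example below.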
+ A simple way to run inference with Transformers is as follows:
+
+ ```python
+ import torch
+ from transformers import AutoModelForImageTextToText, AutoProcessor
+
+ model_id = "allenai/olmOCR-7B-0725"
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda").eval()
+
+ PROMPT = """
+ Below is the image of one page of a PDF document, as well as some raw textual content that
+ was previously extracted for it that includes position information for each image and
+ block of text (The origin [0x0] of the coordinates is in the lower left corner of the
+ image).
+ Just return the plain text representation of this document as if you were reading it
+ naturally.
+ Turn equations into a LaTeX representation, and tables into markdown format. Remove the
+ headers and footers, but keep references and footnotes.
+ Read any natural handwriting.
+ This is likely one page out of several in the document, so be sure to preserve any sentences
+ that come from the previous page, or continue onto the next page, exactly as they are.
+ If there is no text at all that you think you should read, you can output null.
+ Do not hallucinate.
+ RAW_TEXT_START
+ {base_text}
+ RAW_TEXT_END
+ """
+
+ # NOTE: with a real document, the text content below should be PROMPT with
+ # {base_text} filled in from the olmOCR toolkit; the short "OCR" prompt is
+ # only a minimal demo on a sample image.
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {
+                 "type": "image",
+                 "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm_table.png",
+             },
+             {"type": "text", "text": "OCR"},
+         ],
+     }
+ ]
+
+ inputs = processor.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     tokenize=True,
+     return_dict=True,
+     return_tensors="pt",
+ ).to(model.device)
+
+ output_ids = model.generate(**inputs, max_new_tokens=1000)
+ # Keep only the newly generated tokens, dropping the echoed prompt tokens.
+ generated_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
+ output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
+ print(output_text)
+ ```
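The slicing over `zip(inputs.input_ids, output_ids)` keeps only the newly generated tokens, so `batch_decode` returns just the model's transcription instead of echoing the chat template and prompt back. For a real document, swap the `"OCR"` demo text for the metadata-bearing prompt built with the olmOCR toolkit.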

 ## License and use