Simple model fails to run with AttributeError: 'super' object has no attribute '_extract_past_from_model_output'
#15
by buckeye17-bah - opened
I'm running the following code:
import os
import time
import warnings
warnings.filterwarnings("ignore")

from huggingface_hub import configure_http_backend
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from tqdm.notebook import tqdm

# load the processor
model_path = "allenai/Molmo-7B-O-0924"
processor = AutoProcessor.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="cpu"
)

# load the model
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="cpu"
)

# prepare image and text prompt, using the appropriate prompt template
cropped_image_folder = './data/output_image_jpg_cropped/'
image_files = [cropped_image_folder + f for f in os.listdir(cropped_image_folder) if f.lower().endswith(('.jpg', '.jpeg'))]

for image_file in tqdm(image_files[:1]):
    # Record the start time
    start_time = time.time()

    # process the image and text
    inputs = processor.process(
        images=[Image.open(image_file)],
        text="Describe this image."
    )

    # move inputs to the correct device and make a batch of size 1
    inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

    # generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
    output = model.generate_from_batch(
        inputs,
        GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
        tokenizer=processor.tokenizer
    )
It produces the following error message:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[24], line 57
54 inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}
56 # generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
---> 57 output = model.generate_from_batch(
58 inputs,
59 GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
60 tokenizer=processor.tokenizer
61 )
77 # process the image and text
78 # inputs = processor.process(
79 # images=[Image.open(image_file)],
(...) 158 # pd.set_option('display.max_colwidth', None)
159 # df
File c:\Users\613186\AppData\Local\anaconda3\envs\vlm_ocr_pipeline\Lib\site-packages\torch\utils\_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File ~\.cache\huggingface\modules\transformers_modules\allenai\Molmo-7B-O-0924\0e727957abd46f3ef741ddbda3452db1df873a6e\modeling_molmo.py:2212, in MolmoForCausalLM.generate_from_batch(self, batch, generation_config, **kwargs)
2209 if attention_mask is not None:
2210 assert attention_mask.shape == (batch_size, mask_len)
...
-> 2275 cache_name, cache = super()._extract_past_from_model_output(outputs)
2276 model_kwargs[cache_name] = cache
2277 model_kwargs["cache_position"] = model_kwargs["cache_position"][-1:] + num_new_tokens
AttributeError: 'super' object has no attribute '_extract_past_from_model_output'
Here's my virtual environment:
Package Version
------------------------- --------------
accelerate 1.4.0
aiohappyeyeballs 2.5.0
aiohttp 3.11.13
aiosignal 1.3.2
anyio 4.8.0
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 3.0.0
async-lru 2.0.4
attrs 25.1.0
babel 2.17.0
beautifulsoup4 4.13.3
bleach 6.2.0
Brotli 1.1.0
certifi 2025.1.31
cffi 1.17.1
charset-normalizer 3.4.1
colorama 0.4.6
comm 0.2.2
datasets 3.3.2
debugpy 1.8.13
decorator 5.2.1
defusedxml 0.7.1
dill 0.3.8
docopt 0.6.2
einops 0.8.1
exceptiongroup 1.2.2
executing 2.1.0
fastjsonschema 2.21.1
filelock 3.17.0
fqdn 1.5.1
frozenlist 1.5.0
fsspec 2024.12.0
h11 0.14.0
h2 4.2.0
hpack 4.1.0
httpcore 1.0.7
httpx 0.28.1
huggingface_hub 0.29.2
hyperframe 6.1.0
idna 3.10
importlib_metadata 8.6.1
ipykernel 6.29.5
ipython 9.0.0
ipython_pygments_lexers 1.1.1
ipywidgets 8.1.5
isoduration 20.11.0
jedi 0.19.2
Jinja2 3.1.6
json5 0.10.0
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jupyter 1.1.1
jupyter_client 8.6.3
jupyter-console 6.6.3
jupyter_core 5.7.2
jupyter-events 0.12.0
jupyter-lsp 2.2.5
jupyter_server 2.15.0
jupyter_server_terminals 0.5.3
jupyterlab 4.3.5
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.13
MarkupSafe 3.0.2
matplotlib-inline 0.1.7
mistune 3.1.2
mpmath 1.3.0
multidict 6.1.0
multiprocess 0.70.16
narwhals 1.30.0
nbclient 0.10.2
nbconvert 7.16.6
nbformat 5.10.4
nest_asyncio 1.6.0
networkx 3.4.2
notebook 7.3.2
notebook_shim 0.2.4
num2words 0.5.14
numpy 2.2.3
overrides 7.7.0
packaging 24.2
pandas 2.2.3
pandocfilters 1.5.1
parso 0.8.4
pickleshare 0.7.5
pillow 11.1.0
pip 25.0.1
platformdirs 4.3.6
plotly 6.0.0
prometheus_client 0.21.1
prompt_toolkit 3.0.50
propcache 0.2.1
psutil 7.0.0
pure_eval 0.2.3
pyarrow 19.0.1
pycparser 2.22
Pygments 2.19.1
PyMuPDF 1.25.3
PySocks 1.7.1
python-dateutil 2.9.0.post0
python-json-logger 3.3.0
pytz 2024.1
pywin32 307
pywinpty 2.0.15
PyYAML 6.0.2
pyzmq 26.2.1
referencing 0.36.2
regex 2024.11.6
requests 2.32.3
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.23.1
safetensors 0.5.3
Send2Trash 1.8.3
setuptools 75.8.2
six 1.17.0
sniffio 1.3.1
soupsieve 2.6
stack_data 0.6.3
sympy 1.13.1
terminado 0.18.1
timm 1.0.15
tinycss2 1.4.0
tokenizers 0.21.0
torch 2.6.0
torchvision 0.21.0
tornado 6.4.2
tqdm 4.67.1
traitlets 5.14.3
transformers 4.50.0.dev0
types-python-dateutil 2.9.0.20241206
typing_extensions 4.12.2
tzdata 2025.1
uri-template 1.3.0
urllib3 2.3.0
wcwidth 0.2.13
webcolors 24.11.1
webencodings 0.5.1
websocket-client 1.8.0
wheel 0.45.1
widgetsnbextension 4.0.13
win_inet_pton 1.1.0
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0
zstandard 0.23.0
I also had this issue, and I think it's a problem with huggingface/transformers. Release 4.49.0 removed _extract_past_from_model_output() from src/transformers/generation/utils.py; see the diff between 4.48.3 and 4.49.0. It looks like this was the specific commit.
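For context, the removed helper was a small method on GenerationMixin that pulled the KV cache out of the model output. In principle you could monkey-patch an equivalent back in before calling generate, though I haven't tested this against Molmo's remote code, and pinning transformers (below) is the more reliable route. A rough sketch of the old 4.48.x behaviour:

from transformers.generation.utils import GenerationMixin

# Re-add the helper removed in transformers 4.49.0, but only if the running
# version no longer provides it.
if not hasattr(GenerationMixin, "_extract_past_from_model_output"):
    def _extract_past_from_model_output(self, outputs):
        # Return the name of the cache attribute on the model output together
        # with the cache object itself, mirroring the pre-4.49 behaviour.
        cache_name, cache = "past_key_values", None
        if "past_key_values" in outputs:
            cache = outputs.past_key_values
        elif "mems" in outputs:
            cache_name, cache = "mems", outputs.mems
        elif "past_buckets_states" in outputs:
            cache_name, cache = "past_buckets_states", outputs.past_buckets_states
        return cache_name, cache

    GenerationMixin._extract_past_from_model_output = _extract_past_from_model_output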
Potential fix
For now, you can fix the issue by locking the package version of huggingface/transformers. To reproduce, we can use the example from the model card:
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests

# load the processor
processor = AutoProcessor.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='cuda:0'
)

# load the model
model = AutoModelForCausalLM.from_pretrained(
    'allenai/Molmo-7B-D-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='cuda:0'
)

# process the image and text
inputs = processor.process(
    images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
    text="Describe this image."
)

# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>", do_sample=False),
    tokenizer=processor.tokenizer
)

# only get generated tokens; decode them to text
generated_tokens = output[0, inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

# print the generated text
print(generated_text)
Broken version
Here's a Pipfile to reproduce the error:
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
torch = "*"
torchvision = "*"
torchaudio = "*"
transformers = "==4.49.0"
einops = "*"
accelerate = "*"

[dev-packages]

[requires]
python_version = "3.10"
Output:
$ python tester.py
Loading checkpoint shards: 100%|████████████████████████████| 7/7 [04:52<00:00, 41.75s/it]
Traceback (most recent call last):
File "/home/kgarg0/projects/testing-molmo-broken/tester.py", line 31, in <module>
output = model.generate_from_batch(
File "/home/kgarg0/.local/share/virtualenvs/testing-molmo-broken-msdNl6Us/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/kgarg0/.cache/huggingface/modules/transformers_modules/allenai/Molmo-7B-D-0924/1721478b71306fb7dc671176d5c204dc7a4d27d7/modeling_molmo.py", line 2212, in generate_from_batch
out = super().generate(
File "/home/kgarg0/.local/share/virtualenvs/testing-molmo-broken-msdNl6Us/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/kgarg0/.local/share/virtualenvs/testing-molmo-broken-msdNl6Us/lib/python3.10/site-packages/transformers/generation/utils.py", line 2223, in generate
result = self._sample(
File "/home/kgarg0/.local/share/virtualenvs/testing-molmo-broken-msdNl6Us/lib/python3.10/site-packages/transformers/generation/utils.py", line 3217, in _sample
model_kwargs = self._update_model_kwargs_for_generation(
File "/home/kgarg0/.cache/huggingface/modules/transformers_modules/allenai/Molmo-7B-D-0924/1721478b71306fb7dc671176d5c204dc7a4d27d7/modeling_molmo.py", line 2275, in _update_model_kwargs_for_generation
cache_name, cache = super()._extract_past_from_model_output(outputs)
AttributeError: 'super' object has no attribute '_extract_past_from_model_output'
Fixed version
To fix, change transformers = "==4.49.0" to transformers = "==4.48.3" in the Pipfile.
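If you're not using Pipenv, the same pin should work with plain pip, e.g. pip install "transformers==4.48.3" (or any 4.48.x release).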
Output:
$ python tester.py
Loading checkpoint shards: 100%|████████████████████████████| 7/7 [04:52<00:00, 41.75s/it]
This image captures a young black Labrador puppy, likely around six months old, sitting on a weathered wooden deck. The puppy's sleek, short fur is entirely black, including its nose, eyes, and ears, which are slightly floppy. The dog is positioned in the center of the frame, looking up directly at the camera with a curious and attentive expression. Its front paws are visible, with one slightly tucked under its body, while its back paws are hidden from view. The wooden deck beneath the puppy is made of light brown planks with visible knots and signs of wear, adding a rustic charm to the scene. The overall composition is simple yet striking, with the puppy's glossy black coat contrasting beautifully against the light wooden background.
@kgarg0 thanks for your help! That fixed my problem!
buckeye17-bah changed discussion status to closed