Traceback (most recent call last):
  File "/tmp/OpenGVLab_InternVL3_5-GPT-OSS-20B-A4B-Preview_00v4b7X.py", line 13, in <module>
    pipe = pipeline("image-text-to-text", model="OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview", trust_remote_code=True)
  File "/tmp/.cache/uv/environments-v2/b5957c1e9479bbc2/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1008, in pipeline
    framework, model = infer_framework_load_model(
                       ~~~~~~~~~~~~~~~~~~~~~~~~~~^
        adapter_path if adapter_path is not None else model,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<5 lines>...
        **model_kwargs,
        ^^^^^^^^^^^^^^^
    )
    ^
  File "/tmp/.cache/uv/environments-v2/b5957c1e9479bbc2/lib/python3.13/site-packages/transformers/pipelines/base.py", line 332, in infer_framework_load_model
    raise ValueError(
        f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
    )
ValueError: Could not load model OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForImageTextToText'>,). See the original errors:

while loading with AutoModelForImageTextToText, an error is thrown:
Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/b5957c1e9479bbc2/lib/python3.13/site-packages/transformers/pipelines/base.py", line 292, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/tmp/.cache/uv/environments-v2/b5957c1e9479bbc2/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 603, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.OpenGVLab.InternVL3_5-GPT-OSS-20B-A4B-Preview.9f42af53b34f5549dfeaae005ebc8f1fcff85638.configuration_internvl_chat.InternVLChatConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/b5957c1e9479bbc2/lib/python3.13/site-packages/transformers/pipelines/base.py", line 310, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "/tmp/.cache/uv/environments-v2/b5957c1e9479bbc2/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 603, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.OpenGVLab.InternVL3_5-GPT-OSS-20B-A4B-Preview.9f42af53b34f5549dfeaae005ebc8f1fcff85638.configuration_internvl_chat.InternVLChatConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.
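A minimal workaround sketch, based on what the traceback shows: the `image-text-to-text` pipeline only accepts config classes in the `AutoModelForImageTextToText` mapping, and the repo's custom `InternVLChatConfig` is not among them. Earlier InternVL releases are loaded directly through `AutoModel` with `trust_remote_code=True` instead of the pipeline; the sketch below assumes this preview repo works the same way. The model id is taken from the traceback; the loading pattern is an assumption, not a verified fix for this repo.

```python
def load_internvl(model_id="OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview"):
    """Load the custom InternVL chat model directly via AutoModel,
    bypassing the AutoModelForImageTextToText mapping that rejects
    InternVLChatConfig in the traceback above.

    Assumption: the repo's remote code registers its model class with
    AutoModel, as earlier InternVL releases do.
    """
    # Import lazily so this sketch can be inspected without transformers
    # installed; the actual load downloads a ~20B-parameter checkpoint.
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(
        model_id,
        trust_remote_code=True,  # required: config/model classes live in the repo
        low_cpu_mem_usage=True,
    ).eval()
    return tokenizer, model
```

If this pattern holds, inference then goes through the repo's own chat interface (e.g. a `model.chat(...)` method defined in its remote code) rather than the generic pipeline call that failed above.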