VC-Inspector-7B

Introduction

VC-Inspector-7B is a lightweight, open-source large multimodal model (LMM) for reference-free evaluation of video captions with a focus on factual accuracy. Unlike existing metrics that suffer from limited context handling, weak factuality assessment, or reliance on proprietary services, VC-Inspector offers a reproducible, fact-aware alternative that aligns closely with human judgments.

This model is fine-tuned from Qwen2.5-VL-7B-Instruct using LoRA on our synthetic dataset ActivityNet-FG-It, which contains 44K video-caption pairs with controlled factual errors and quality annotations.

Key Features

  • Reference-free Evaluation: Evaluates video captions without requiring ground-truth references
  • Factual Grounding: Detects factual errors in objects and actions within captions
  • Interpretable Outputs: Generates quality scores (1-5) with natural language explanations
  • Cross-domain Generalization: Works on both video and image caption evaluation
  • State-of-the-art Performance: Outperforms GPT-4o-based methods on VATEX-Eval

Model Architecture

VC-Inspector-7B is built on Qwen2.5-VL-7B-Instruct with the following modifications:

  • Vision Encoder: Frozen (preserves generalization)
  • Visual-Language Projector: Frozen
  • LLM Component: Fine-tuned with LoRA (rank=32, alpha=32)
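
The released checkpoint already contains the adapted weights; the snippet below is only a reference sketch of the adapter setup using Hugging Face PEFT (the actual training used ms-swift, see Acknowledgements). The target modules are an assumption, listing typical attention projections of the Qwen2.5 LLM:

from peft import LoraConfig, get_peft_model
from transformers import Qwen2_5_VLForConditionalGeneration

# Base model that VC-Inspector-7B was fine-tuned from
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto"
)

lora_cfg = LoraConfig(
    r=32,               # LoRA rank (see Training Details)
    lora_alpha=32,      # LoRA alpha
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections in the LLM
    task_type="CAUSAL_LM",
)

# PEFT freezes all non-adapter parameters by default, which keeps the
# vision encoder and visual-language projector frozen as described above.
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()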

Evaluation Results

Correlation with Human Judgments on VATEX-Eval

Metric                 Type             Kendall's τ_b   Spearman's ρ
EMScore                Reference-free   22.88           29.79
CLIPScore              Reference-free   22.33           29.09
ViCLIPScore            Reference-free   30.92           39.86
G-VEval (GPT-4o)       Reference-free   39.40           -
Qwen2.5-VL-7B (base)   Reference-free   34.70           39.40
VC-Inspector-7B        Reference-free   42.58           45.99

Cross-domain Evaluation on Image Caption Benchmarks

Metric                 Flickr8K-Expert (τ_b)   Flickr8K-CF (τ_b)
CLIPScore (ref-free)   51.10                   34.40
PAC-S (ref-free)       53.90                   36.00
VC-Inspector-7B        63.43                   45.97

Synthetic Dataset Evaluation

Dataset               Kendall's τ_b   Spearman's ρ
ActivityNet-FG-Eval   49.53           62.01
YouCook2-FG-Eval      44.29           55.31
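
The reported numbers are Kendall's τ_b and Spearman's ρ between per-caption metric scores and human ratings (the tables appear to report correlations multiplied by 100). A minimal sketch of the computation with SciPy; the two lists below are placeholders, not actual evaluation data:

from scipy.stats import kendalltau, spearmanr

human_ratings = [4, 2, 5, 3, 1]    # placeholder human judgments
metric_scores = [4, 3, 5, 2, 1]    # placeholder VC-Inspector scores

tau_b, _ = kendalltau(metric_scores, human_ratings)  # SciPy's default variant is tau-b
rho, _ = spearmanr(metric_scores, human_ratings)
print(f"Kendall's tau_b: {100 * tau_b:.2f}, Spearman's rho: {100 * rho:.2f}")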

Requirements

pip install torch transformers accelerate
pip install qwen-vl-utils[decord]==0.0.8
pip install flash-attn --no-build-isolation

Quickstart

Using Transformers

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load model
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "dipta007/VCInspector-7B",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("dipta007/VCInspector-7B")

# Prepare input
caption = "A man is playing guitar in a field"
prompt = f"""<caption>{caption}</caption>

You are given a video and a caption describing the video content. Please rate the helpfulness, relevance, accuracy, level of details of the caption. The overall score should be on a scale of 1 to 5, where a higher score indicates better overall performance. Please first output a single line containing only one integer indicating the score. In the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias. STRICTLY FOLLOW THE FORMAT."""

messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "path/to/video.mp4", "max_pixels": 360 * 420, "fps": 1.0},
            {"type": "text", "text": prompt},
        ],
    }
]

# Process and generate
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
    **video_kwargs,
)
inputs = inputs.to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])

Example Output

4
The caption does not accurately capture the video content. For example, the objects (guitar) are incorrect.
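
Since VC-Inspector also generalizes to image caption evaluation (see the Flickr8K results above), the same pipeline accepts a single image instead of a video. A minimal sketch of the message format with a placeholder path; the rest of the code (chat template, process_vision_info, generate) is unchanged:

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/image.jpg"},  # placeholder path
            {"type": "text", "text": prompt},
        ],
    }
]
# process_vision_info then returns the image under image_inputs (video_inputs will be None).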

Using with ms-swift (vLLM backend)

from swift.llm import VllmEngine, InferRequest, RequestConfig
import os

os.environ["VIDEO_MAX_PIXELS"] = "50176"
os.environ["FPS_MAX_FRAMES"] = "12"

engine = VllmEngine(
    "dipta007/VCInspector-7B",
    max_model_len=32768,
    limit_mm_per_prompt={"image": 32}
)

# Prepare request (`prompt` is the same evaluation prompt used in the Transformers example above)
request = InferRequest(
    messages=[{"role": "user", "content": f"<image>\n{prompt}"}],
    images=["frame1.jpg", "frame2.jpg", ...]  # pre-extracted video frames (see the sketch below)
)
config = RequestConfig(max_tokens=256, temperature=0.0)
response = engine.infer([request], config)
print(response[0].choices[0].message.content)
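
The ms-swift path expects pre-extracted frames rather than a raw video file. Below is a minimal sketch using OpenCV (an assumption; any frame extractor works) that samples roughly one frame per second, capped at 12 frames to match FPS_MAX_FRAMES above, and writes file names matching the placeholder list:

import cv2

def extract_frames(video_path, out_prefix="frame", every_sec=1.0, max_frames=12):
    # Save roughly one frame every `every_sec` seconds as JPEG and return the paths.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(fps * every_sec), 1)
    paths, idx = [], 0
    while len(paths) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            out_path = f"{out_prefix}{len(paths) + 1}.jpg"
            cv2.imwrite(out_path, frame)
            paths.append(out_path)
        idx += 1
    cap.release()
    return paths

frame_paths = extract_frames("path/to/video.mp4")  # -> ["frame1.jpg", "frame2.jpg", ...]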

Output Format

VC-Inspector outputs two components:

  1. Quality Score (Line 1): Integer from 1-5

    • 5: Caption is accurate and comprehensive
    • 4: Minor factual errors
    • 3: Moderate factual errors
    • 2: Significant factual errors
    • 1: Major factual errors or completely incorrect
  2. Explanation (Line 2+): Natural language explanation identifying:

    • Incorrect objects (e.g., "guitar" instead of "violin")
    • Incorrect actions (e.g., "running" instead of "walking")
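
Because the score and explanation are separated by a newline, they can be recovered with a small helper. This is an illustrative sketch (not part of the released code), applied to output_text[0] from the Transformers example above:

def parse_vc_inspector_output(raw):
    # Split the model output into an integer score (line 1) and an explanation (lines 2+).
    lines = raw.strip().splitlines()
    score = int(lines[0].strip())
    explanation = "\n".join(lines[1:]).strip()
    return score, explanation

score, explanation = parse_vc_inspector_output(output_text[0])
print(score)        # e.g. 4
print(explanation)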

Training Details

Hyperparameter       Value
Base Model           Qwen2.5-VL-7B-Instruct
Training Data        ActivityNet-FG-It (44K samples)
Epochs               1
Global Batch Size    128
Learning Rate        1e-4
LR Scheduler         Cosine (min: 1e-5)
LoRA Rank            32
LoRA Alpha           32
LoRA Dropout         0.05
Number of Frames     32
Training Time        ~32 GPU hours (A100)

Limitations

  • Primarily targets object and action correctness; attributes, spatial relationships, and fine-grained temporal ordering are not explicitly modeled
  • Training relies on synthetically generated captions and pseudo-scores
  • Higher computational cost compared to embedding-based metrics (though more lightweight than GPT-4o)

Citation

If you find this work useful, please cite our paper:

@misc{dipta2025advancingreferencefreeevaluationvideo,
      title={Advancing Reference-free Evaluation of Video Captions with Factual Analysis},
      author={Shubhashis Roy Dipta and Tz-Ying Wu and Subarna Tripathi},
      year={2025},
      eprint={2509.16538},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.16538},
}

Acknowledgements

This work builds upon Qwen2.5-VL and uses ms-swift for training.
