The first vision-language model built on openai/gpt-oss-20b just dropped! 🔥
InternVL3.5 comes with 32 models 🤯: pre-trained, fine-tuned, and aligned variants in a range of sizes (OpenGVLab/internvl35-68ac87bd52ebe953485927fb), with gpt-oss or Qwen3 as the LLM backbone ⬇️
We just released TRL v0.20 with major multimodal upgrades!
- VLM support for GRPO (highly requested by the community!)
- New GSPO trainer (from @Qwen, released last week, VLM-ready)
- New MPO trainer (multimodal by design, as in the paper)
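As a rough sketch of how these trainers are selected: GSPO is exposed through the GRPO trainer by switching the importance-sampling ratio from per-token to per-sequence. The `importance_sampling_level` flag name is an assumption based on TRL's GRPO documentation; check it against your installed version.

```python
# Hedged sketch, not a definitive API reference: selecting the new trainers
# in TRL v0.20. Parameter names below are assumptions; verify against the docs.
from trl import GRPOConfig

# GRPO on a VLM: GRPOTrainer accepts a vision-language model just like an LLM.
grpo_args = GRPOConfig(output_dir="grpo-vlm")

# GSPO reuses the GRPO trainer with sequence-level importance sampling
# (assumed flag name and values).
gspo_args = GRPOConfig(
    output_dir="gspo-vlm",
    importance_sampling_level="sequence",  # "token" -> GRPO, "sequence" -> GSPO
)
```

The MPO trainer, in contrast, is built on top of the DPO trainer rather than GRPO (see the recipe below in the thread).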
Yet Another New Multimodal Fine-Tuning Recipe 🥧
🧑‍🍳 In this @HuggingFace Cookbook notebook, we demonstrate how to align a vision-language model (VLM) with Mixed Preference Optimization (MPO) using trl.
💡 This recipe is powered by the new MPO support in trl, enabled through a recent upgrade to the DPO trainer!
We align the multimodal model by combining multiple optimization objectives (losses), guided by a preference dataset of chosen vs. rejected multimodal pairs.
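Combining multiple weighted losses in the DPO trainer might look like the sketch below. The model name is a placeholder, and the specific `loss_type` values and `loss_weights` are assumptions modeled on the MPO setup (a preference loss, a quality loss, and a generation loss); consult the TRL docs for the exact supported values.

```python
# Hedged sketch of an MPO-style setup via trl's DPO trainer.
# Loss names/weights are assumptions, not the notebook's exact configuration.
from trl import DPOConfig, DPOTrainer

training_args = DPOConfig(
    output_dir="vlm-mpo",
    loss_type=["sigmoid", "bco_pair", "sft"],  # preference + quality + generation
    loss_weights=[0.8, 0.2, 1.0],              # relative weight of each objective
)

# trainer = DPOTrainer(
#     model="HuggingFaceTB/SmolVLM-Instruct",  # placeholder VLM checkpoint
#     args=training_args,
#     train_dataset=preference_dataset,        # chosen vs. rejected multimodal pairs
# )
# trainer.train()
```

The key idea is that MPO is not a separate trainer class here: the DPO trainer accepts a list of losses and blends them, which is what makes the multi-objective recipe possible.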