DiffusionVL: Translating Any Autoregressive Models into
Diffusion Vision Language Models

SOTA dVLM Performance with <5% Data & 2.0Γ— Inference Speedup!

Lunbin Zeng1,*, Jingfeng Yao1,*, Bencheng Liao1, Hongyuan Tao1, Wenyu Liu1, Xinggang Wang1, βœ‰οΈ

1Huazhong University of Science and Technology

*equal contribution, βœ‰οΈ corresponding author, [email protected]


πŸ“° News

  • [2025.12.25] πŸŽ„ We have completed our release plan ahead of schedule. DiffusionVL is now fully open-sourced. Merry Christmas to the community!
  • [2025.12.18] πŸŽ‰ Our paper DiffusionVL is released on arXiv! We also release the DiffusionVL models translated from Qwen2.5VL on Hugging Face.

πŸš€ Release Plan

  • Release paper
  • Release DiffusionVL model weights (translated from AR-VLMs)
  • Release DiffusionVL model weights (translated from AR-LMs)
  • Release evaluation code
  • Release training code

πŸ“„ Introduction

The diffusion paradigm has emerged as a promising alternative to autoregressive (AR) models, offering the potential for efficient parallel decoding. However, existing diffusion vision language models (dVLMs) still lag well behind mainstream autoregressive vision language models in performance, primarily due to the capability limitations of their base diffusion language models.

DiffusionVL bridges this gap by answering a fundamental question: Can we directly translate any existing autoregressive model into a powerful diffusion vision language model? We propose a diffusion finetuning framework that "translates" any pretrained AR model into a diffusion vision language model through a simple paradigm shift and modality shift. Unlike prior dVLMs restricted to fixed generation lengths, DiffusionVL introduces a novel block decoding strategy, which allows arbitrary-length generation and KV-cache reuse. With this integrated design, despite training with less than 5% of the data required by previous methods, DiffusionVL translated from AR-VLMs achieves state-of-the-art performance among existing dVLMs and delivers a 2.0Γ— inference speedup.
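To make the block decoding idea concrete, below is a minimal, self-contained sketch of block-wise diffusion decoding with a reused prefix KV cache. The helper names and the model interface (new tokens plus a cache in, per-position logits plus an extended cache out) are illustrative assumptions, not the released DiffusionVL API; see the paper and the training/inference code for the actual implementation.

```python
# Sketch only: block decoding with KV-cache reuse (hypothetical interfaces).
import torch

VOCAB, MASK_ID, EOS_ID = 1000, 0, 2

def toy_model(tokens, cache=None):
    """Stand-in for a translated dVLM: returns random logits and a fake
    cache that just counts how many prefix tokens have been processed."""
    logits = torch.randn(tokens.shape[0], tokens.shape[1], VOCAB)
    new_cache = (0 if cache is None else cache) + tokens.shape[1]
    return logits, new_cache

def denoise_block(model, block, cache, num_steps):
    """Iteratively unmask one block, conditioning on the cached prefix.
    The prefix KV cache is reused across all denoising steps."""
    for _ in range(num_steps):
        masked = block == MASK_ID
        if not masked.any():
            break
        logits, _ = model(block, cache)            # prefix is not recomputed
        conf, pred = logits.softmax(-1).max(-1)
        conf[~masked] = float("-inf")              # only fill masked slots
        k = max(1, int(masked.sum()) // 2)         # simple confidence schedule
        idx = conf.flatten().topk(k).indices
        block.flatten()[idx] = pred.flatten()[idx]
    return block

def generate(model, prompt, block_size=32, max_blocks=8, num_steps=8):
    """Arbitrary-length generation: decode block by block, folding each
    finished block into the KV cache so later blocks condition on it."""
    out = prompt.clone()
    _, cache = model(prompt)                       # cache the prompt once
    for _ in range(max_blocks):
        block = torch.full((1, block_size), MASK_ID, dtype=torch.long)
        block = denoise_block(model, block, cache, num_steps)
        out = torch.cat([out, block], dim=1)
        _, cache = model(block, cache)             # extend cache with the block
        if (block == EOS_ID).any():
            break
    return out

print(generate(toy_model, torch.randint(3, VOCAB, (1, 16))).shape)
```

Because each finished block only extends the cache rather than triggering a full recomputation, and the cache is shared across all denoising steps within a block, generation can run beyond any fixed length while avoiding redundant prefix passes.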

✨ Highlights

  • Universal Translation Framework: Translate any AR model into a dVLM with a simple yet effective approach.

  • Superior Performance: Achieve SOTA dVLM performance using <5% training data (738K vs 16.5M samples).

  • 2.0Γ— Faster Inference: Block decoding strategy enables KV-cache reuse and 2.0Γ— speedup over previous dVLMs.

Figures: benchmark comparison and DiffusionVL framework overview.

πŸš€ Get Started

| Document | Description |
| --- | --- |
| Installation | Environment setup, data and model preparation |
| Training & Evaluation | Train and evaluate DiffusionVL models |
| Inference | Quick inference with pre-trained models (a minimal loading sketch follows below) |
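For a rough idea of how the released checkpoint can be loaded, here is a minimal sketch assuming the Hugging Face transformers Auto classes with `trust_remote_code`; the diffusion-specific generation call is repo-specific, so refer to the Inference document above for the supported entry point.

```python
# Loading sketch only (assumption: the checkpoint loads via transformers
# Auto classes with trust_remote_code; generation is repo-specific, see the
# Inference document).
import torch
from transformers import AutoModel, AutoProcessor

model_id = "hustvl/DiffusionVL-Qwen2.5VL-3B"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval()
```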

❀️ Acknowledgements

This repo is mainly built on Qwen2.5-VL, LLaDA-V, BD3LMs, SDAR, and lmms-eval. We thank the authors for their open-source contributions.

πŸ“ Citation

If you find our work useful, please cite our paper:

@misc{zeng2025diffusionvltranslatingautoregressivemodels,
      title={DiffusionVL: Translating Any Autoregressive Models into Diffusion Vision Language Models},
      author={Lunbin Zeng and Jingfeng Yao and Bencheng Liao and Hongyuan Tao and Wenyu Liu and Xinggang Wang},
      year={2025},
      eprint={2512.15713},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.15713},
}