{ "title": "EVEv2: Improved Baselines for Encoder-Free Vision-Language Models", "authors": [ "Haiwen Diao", "Xiaotong Li", "Yufeng Cui", "Yueze Wang", "Haoge Deng", "Ting Pan", "Wenxuan Wang", "Huchuan Lu", "Xinlong Wang" ], "abstract": "Existing encoder-free vision-language models (VLMs) are rapidly narrowing the\nperformance gap with their encoder-based counterparts, highlighting the\npromising potential for unified multimodal systems with structural simplicity\nand efficient deployment. We systematically clarify the performance gap between\nVLMs using pre-trained vision encoders, discrete tokenizers, and minimalist\nvisual layers from scratch, deeply excavating the under-examined\ncharacteristics of encoder-free VLMs. We develop efficient strategies for\nencoder-free VLMs that rival mainstream encoder-based ones. After an in-depth\ninvestigation, we launch EVEv2.0, a new and improved family of encoder-free\nVLMs. We show that: (i) Properly decomposing and hierarchically associating\nvision and language within a unified model reduces interference between\nmodalities. (ii) A well-designed training strategy enables effective\noptimization for encoder-free VLMs. Through extensive evaluation, our EVEv2.0\nrepresents a thorough study for developing a decoder-only architecture across\nmodalities, demonstrating superior data efficiency and strong vision-reasoning\ncapability. Code is publicly available at: https://github.com/baaivision/EVE.", "pdf_url": "http://arxiv.org/pdf/2502.06788v1", "entry_id": "http://arxiv.org/abs/2502.06788v1", "categories": [ "cs.CV", "cs.AI" ] }