Liar406/Look_Again


🎁 This is the project repository for Qwen Look Again.

🙂 Our code will be released soon.

📣 News

  • [2025/05/29] We will open-source our code as soon as possible. 😊

Abstract

Inference-time scaling drives extended reasoning to enhance the performance of Vision-Language Models (VLMs), forming powerful Vision-Language Reasoning Models (VLRMs). However, long reasoning dilutes visual tokens, causing visual information to receive less attention, which may trigger hallucinations. Although introducing text-only reflection processes shows promise in language models, we demonstrate that it is insufficient to suppress hallucinations in VLMs. To address this issue, we introduce Qwen-LookAgain (Qwen-LA), a novel VLRM designed to mitigate hallucinations by incorporating a vision-text reflection process that guides the model to re-attend to visual information during reasoning. We first propose a reinforcement learning method, Balanced Reflective Policy Optimization (BRPO), which guides the model to decide on its own when to generate vision-text reflections and to balance their number and length. Then, we formally prove that VLRMs lose attention to visual tokens as reasoning progresses, and demonstrate that supplementing visual information during reflection enhances visual attention. Therefore, during training and inference, Visual Token COPY and Visual Token ROUTE are introduced to force the model to re-attend to visual information at the visual level, addressing the limitations of text-only reflection. Experiments on multiple visual QA datasets and hallucination metrics indicate that Qwen-LA achieves leading accuracy while reducing hallucinations.

Figure: The framework of Qwen-LookAgain.
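Until the code is released, the toy sketch below illustrates the attention-dilution claim from the abstract. It is a minimal numeric illustration, not the paper's implementation: a single random attention head whose softmax mass on a fixed set of visual tokens shrinks as the reasoning text grows, followed by a naive duplication of the visual keys as a crude stand-in for Visual Token COPY. All dimensions and names here are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 64          # hypothetical head dimension
n_visual = 32   # hypothetical number of visual tokens

# One attention head: a single query attends over visual + text keys.
visual_keys = rng.normal(size=(n_visual, d))
query = rng.normal(size=d)

# As the reasoning text grows, the softmax mass on the (fixed) visual
# tokens shrinks roughly like n_visual / (n_visual + n_text).
for n_text in (32, 128, 512, 2048):
    text_keys = rng.normal(size=(n_text, d))
    keys = np.vstack([visual_keys, text_keys])
    attn = softmax(keys @ query / np.sqrt(d))
    print(f"text tokens = {n_text:4d} -> visual attention mass = {attn[:n_visual].sum():.3f}")

# Crude stand-in for Visual Token COPY: re-append the visual keys at the
# reflection point, so visual tokens regain a share of the attention mass.
keys_copy = np.vstack([visual_keys, text_keys, visual_keys])
attn = softmax(keys_copy @ query / np.sqrt(d))
visual_mass = attn[:n_visual].sum() + attn[-n_visual:].sum()
print(f"after COPY at n_text = {n_text:4d} -> visual attention mass = {visual_mass:.3f}")
```

Running this prints a visual attention mass of roughly n_visual / (n_visual + n_text), dropping from about 0.5 at 32 text tokens to under 0.02 at 2048, and roughly doubling after the COPY step.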

🚩 Citation

If you find this work helpful, please cite it as:

@article{chu2025qwen,
  title={Qwen Look Again: Guiding Vision-Language Reasoning Models to Re-attention Visual Information},
  author={Chu, Xu and Chen, Xinrong and Wang, Guanyu and Tan, Zhijie and Huang, Kui and Lv, Wenyu and Mo, Tong and Li, Weiping},
  journal={arXiv preprint arXiv:2505.23558},
  year={2025}
}
