Paper link: https://aclanthology.org/2024.acl-short.64/
Large language models (LLMs) have shown remarkable abilities in various tasks, including machine translation. Recent advancements have demonstrated that LLMs can improve translation quality by employing self-reflective methods to refine initial drafts through feedback loops. However, the effectiveness of this self-reflection is often constrained by limited feedback, impacting the continuous improvement of translations.
To tackle this issue, we introduce DUAL-REFLECT, a framework that leverages the duality property of translation tasks to provide effective feedback to LLMs, thereby enhancing their reflective capabilities and improving translation performance. DUAL-REFLECT stands for DUAL learning enhanced auto-REFLECtive Translation and consists of five stages:
- Draft Translation: LLMs generate an initial translation.
- Back Translation: The draft translation is translated back to the source language.
- Process Assessment: An LLM-based agent evaluates whether dual reflection is needed.
- Dual Reflection: LLMs analyze discrepancies between back-translation and the original source to identify biases and propose improvements.
- Auto Revision: LLMs revise the initial translation based on the analysis and suggestions.
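The five stages above can be sketched as a simple control loop. The function below is a hypothetical illustration, not code from this repository: each callable (`translate`, `back_translate`, `needs_reflection`, `reflect`, `revise`) stands in for an LLM-backed stage, and the toy demo replaces LLM calls with deterministic string functions.

```python
def dual_reflect(source, translate, back_translate, needs_reflection,
                 reflect, revise, max_rounds=3):
    """Hypothetical sketch of the DUAL-REFLECT loop.

    Every argument after `source` is a callable standing in for an
    LLM-backed stage; in the real framework these are LLM prompts.
    """
    draft = translate(source)                      # 1. Draft Translation
    for _ in range(max_rounds):
        back = back_translate(draft)               # 2. Back Translation
        if not needs_reflection(source, back):     # 3. Process Assessment
            break                                  #    back-translation agrees: stop
        feedback = reflect(source, back)           # 4. Dual Reflection
        draft = revise(draft, feedback)            # 5. Auto Revision
    return draft


# Toy demo with deterministic stand-ins for the LLM stages:
result = dual_reflect(
    "hello world",
    translate=lambda s: "HELO WORLD",          # initial draft with a "typo"
    back_translate=str.lower,                  # map back to the source language
    needs_reflection=lambda s, b: b != s,      # discrepancy with the source?
    reflect=lambda s, b: s,                    # feedback: the intended meaning
    revise=lambda d, f: f.upper(),             # revise the draft accordingly
)
print(result)  # HELLO WORLD
```

The demo converges in one round: the back-translation of the flawed draft disagrees with the source, so the loop reflects and revises; the revised draft's back-translation then matches, and the assessment stage stops the loop.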
Our experiments show that DUAL-REFLECT significantly enhances translation performance across various languages and benchmarks. It outperforms strong baseline methods and achieves superior results, especially in low-resource translation tasks.
To use DUAL-REFLECT, follow these steps:

- Clone the repository:

  git clone https://github.com/loulianzhang/Dual-Reflect.git

- Navigate to the project directory:

  cd Dual-Reflect

- Run the DUAL-REFLECT method:

  python agent_with_LLM_as_judge.py   # OpenAI models
  python agent_with_QE_as_judge.py    # open-source models

- To evaluate or debug the code:

  python evaluate.py
Citation:

@inproceedings{chen-etal-2024-dual,
title = "{DUAL}-{REFLECT}: Enhancing Large Language Models for Reflective Translation through Dual Learning Feedback Mechanisms",
author = "Chen, Andong and
Lou, Lianzhang and
Chen, Kehai and
Bai, Xuefeng and
Xiang, Yang and
Yang, Muyun and
Zhao, Tiejun and
Zhang, Min",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-short.64",
pages = "693--704",
abstract = "Recently, large language models (LLMs) enhanced by self-reflection have achieved promising performance on machine translation. The key idea is guiding LLMs to generate translation with human-like feedback. However, existing self-reflection methods lack effective feedback information, limiting the translation performance. To address this, we introduce a DUAL-REFLECT framework, leveraging the dual learning of translation tasks to provide effective feedback, thereby enhancing the models{'} self-reflective abilities and improving translation performance. The application of this method across various translation tasks has proven its effectiveness in improving translation accuracy and eliminating ambiguities, especially in translation tasks with low-resource language pairs.",
}