DUAL-REFLECT: Enhanced Translation with Dual-Reflective Learning

Paper: https://aclanthology.org/2024.acl-short.64/

Background

Large language models (LLMs) have shown remarkable abilities across many tasks, including machine translation. Recent work has demonstrated that LLMs can improve translation quality by using self-reflective methods to refine initial drafts through feedback loops. However, the effectiveness of this self-reflection is often constrained by the limited feedback available to the model, which caps how much a translation can improve across rounds.

To tackle this issue, we introduce DUAL-REFLECT, a framework that leverages the duality of translation tasks to provide effective feedback to LLMs, thereby enhancing their reflective capabilities and improving translation performance. DUAL-REFLECT stands for DUAL learning enhanced auto-REFLECtive translation and consists of five stages (a code sketch of the loop follows the list below):

(Figure: overview of the five-stage DUAL-REFLECT pipeline)

  1. Draft Translation: LLMs generate an initial translation.
  2. Back Translation: The draft translation is translated back to the source language.
  3. Process Assessment: An LLM-based agent evaluates whether dual reflection is needed.
  4. Dual Reflection: LLMs analyze discrepancies between back-translation and the original source to identify biases and propose improvements.
  5. Auto Revision: LLMs revise the initial translation based on the analysis and suggestions.

Our experiments show that DUAL-REFLECT significantly improves translation performance across a range of languages and benchmarks, outperforming strong baselines, especially on low-resource translation tasks.

Installation

To use DUAL-REFLECT, follow these steps:

  1. Clone the repository:

    git clone https://github.com/loulianzhang/Dual-Reflect.git
  2. Navigate to the project directory:

    cd Dual-Reflect

Usage

  1. Run the DUAL-REFLECT method with your preferred judge (a sketch of the QE judge follows this list):

    python agent_with_LLM_as_judge.py  # uses an OpenAI model as the judge

    python agent_with_QE_as_judge.py  # uses an open-source QE model as the judge

  2. To debug or evaluate the code:

    python evaluate.py
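
For the QE-judged variant, a reference-free quality-estimation score can act as the stopping signal in the assessment stage. Below is a minimal sketch using Unbabel's COMET-Kiwi via the unbabel-comet package; the checkpoint choice and the 0.80 threshold are assumptions for illustration, not necessarily what agent_with_QE_as_judge.py uses:

    # Hedged sketch: a reference-free QE model as the reflection judge.
    # Checkpoint and threshold are illustrative, not the repo's settings.
    from comet import download_model, load_from_checkpoint

    # wmt22-cometkiwi-da is gated: downloading it needs a Hugging Face login.
    qe_model = load_from_checkpoint(download_model("Unbabel/wmt22-cometkiwi-da"))

    def needs_reflection(src: str, draft: str, threshold: float = 0.80) -> bool:
        """True if the draft's QE score falls below the acceptance threshold."""
        out = qe_model.predict([{"src": src, "mt": draft}], batch_size=1, gpus=0)
        return out.scores[0] < threshold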

Citation

@inproceedings{chen-etal-2024-dual,
    title = "{DUAL}-{REFLECT}: Enhancing Large Language Models for Reflective Translation through Dual Learning Feedback Mechanisms",
    author = "Chen, Andong  and
      Lou, Lianzhang  and
      Chen, Kehai  and
      Bai, Xuefeng  and
      Xiang, Yang  and
      Yang, Muyun  and
      Zhao, Tiejun  and
      Zhang, Min",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-short.64",
    pages = "693--704",
    abstract = "Recently, large language models (LLMs) enhanced by self-reflection have achieved promising performance on machine translation. The key idea is guiding LLMs to generate translation with human-like feedback. However, existing self-reflection methods lack effective feedback information, limiting the translation performance. To address this, we introduce a DUAL-REFLECT framework, leveraging the dual learning of translation tasks to provide effective feedback, thereby enhancing the models{'} self-reflective abilities and improving translation performance. The application of this method across various translation tasks has proven its effectiveness in improving translation accuracy and eliminating ambiguities, especially in translation tasks with low-resource language pairs.",
}
