uanushkatkd/metricx

Steps to run the code:

To fine-tune MetricX-23 using LoRA:

  • Run fine_tune.sh. It takes the dataset path, checkpoint save path, batch size, and gradient accumulation steps as arguments; other hyperparameters are set to defaults.
  • To change the remaining hyperparameters, edit fine_tune_from_scratch.py.
  • Once the final checkpoint is saved, run merge_adapter.py to fold the LoRA adapter into the base model. It takes base_model, checkpoint, and output_dir as arguments (see the sketch below).

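As a rough illustration of the merge step, the sketch below folds a trained LoRA adapter into its base model with PEFT's merge_and_unload and saves the result. The model class, argument parsing, and tokenizer handling are assumptions made for illustration; the repository's actual merge_adapter.py may differ.

```python
# Minimal sketch of merging a LoRA adapter into its base model.
# This is an illustration, not the repository's exact merge_adapter.py;
# the model class (AutoModelForSeq2SeqLM) is an assumption.
import argparse

from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

parser = argparse.ArgumentParser()
parser.add_argument("--base_model", required=True)   # base checkpoint the adapter was trained on
parser.add_argument("--checkpoint", required=True)   # directory holding the LoRA adapter weights
parser.add_argument("--output_dir", required=True)   # where the merged model is written
args = parser.parse_args()

# Load the base weights, attach the trained adapter, then fold the adapter
# weights into the base model so it can be used without PEFT at inference time.
base = AutoModelForSeq2SeqLM.from_pretrained(args.base_model)
model = PeftModel.from_pretrained(base, args.checkpoint)
merged = model.merge_and_unload()

merged.save_pretrained(args.output_dir)
AutoTokenizer.from_pretrained(args.base_model).save_pretrained(args.output_dir)
```
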
To run inference:

  • Run score.sh. It takes the dataset path, checkpoint path, batch size, and output_file as arguments. output_file is where the predicted scores are saved; they can then be used to compute correlations.
  • score.py computes the correlations, but the code is not cleaned up (a minimal sketch follows below).
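
For reference, a minimal sketch of the correlation computation is shown below, assuming the predicted scores and the gold human scores are stored one value per line in row-aligned files. The file names and format are assumptions, not the repository's actual score.py.

```python
# Minimal sketch of computing correlations between predicted and human scores.
# The file format (one float per line, aligned by row) is an assumption.
from scipy.stats import kendalltau, pearsonr, spearmanr

with open("predictions.txt") as f:
    predicted = [float(line) for line in f if line.strip()]
with open("human_scores.txt") as f:
    human = [float(line) for line in f if line.strip()]

print("Pearson:", pearsonr(predicted, human)[0])
print("Spearman:", spearmanr(predicted, human)[0])
print("Kendall tau:", kendalltau(predicted, human)[0])
```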

MetricX-23

This is not an officially supported Google product.

This repository contains the code for running inference on MetricX-23 models, a family of models for automatic evaluation of translations that were proposed in the WMT'23 Metrics Shared Task submission MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task. The models were trained in T5X and then converted for use in PyTorch.

Available Models

There are 6 models available on HuggingFace that vary in the number of parameters and in whether the model is reference-based or reference-free (also known as quality estimation, or QE):

  • google/metricx-23-large-v2p0 (reference-based)
  • google/metricx-23-xl-v2p0 (reference-based)
  • google/metricx-23-xxl-v2p0 (reference-based)
  • google/metricx-23-qe-large-v2p0 (reference-free / QE)
  • google/metricx-23-qe-xl-v2p0 (reference-free / QE)
  • google/metricx-23-qe-xxl-v2p0 (reference-free / QE)

We recommend using the XXL model versions for the best agreement with human judgments of translation quality, the Large versions for best speed, and the XL for an intermediate use case.
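
If a checkpoint needs to be fetched locally before fine-tuning or scoring, something like the snippet below works. The specific model ID is only an example; substitute whichever size or QE variant fits your use case.

```python
# Hypothetical example of downloading one of the MetricX-23 checkpoints from
# Hugging Face; the model ID shown here is an example, not a requirement.
from huggingface_hub import snapshot_download

local_path = snapshot_download("google/metricx-23-xl-v2p0")
print("Model downloaded to:", local_path)
```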
