🔧 TAPS: Tool-Augmented Personalisation via Structured Tagging

TAPS, or Tool-Augmented Personalisation via Structured Tagging, is the first fully automated approach that combines a structured tagging tool with an internal tool-detection mechanism for contextualised tool use in a dialogue setting, evaluated on the NLSI dataset.


⚡ Setup and Usage

Clone the repository and install the requirements:

git clone https://github.com/grill-lab/taps.git
cd taps/
pip install -r requirements.txt

Data

Download the NLSI dataset from HuggingFace.
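
One way to fetch it is with the huggingface-cli tool that ships with huggingface_hub (the dataset repository ID below is a placeholder; substitute the actual NLSI dataset ID from its HuggingFace page):

pip install -U "huggingface_hub[cli]"
# {nlsi_dataset_id} is a placeholder for the dataset's HuggingFace repo ID
huggingface-cli download {nlsi_dataset_id} --repo-type dataset --local-dir data/nlsi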

Optimising Demonstrations

cd src/
python ../scripts/dspy_bootstrap.py \
    --model_name {model_name} \
    --prompt_file {path_to_prompt} \
    --output_dir {output_dir} \
    --max_bootstrapped_demos {max_bootstrapped_demos} \
    --max_labeled_demos {max_labeled_demos} \
    --num_candidate_programs {num_candidate_programs} \
    --num_threads {num_threads} 

To select demonstrations for Tag-S or Tag-and-Generate, add the --tag 'simple' or --tag 'cot' argument, respectively.
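
For example, a hypothetical Tag-S run (the model name, file paths, and numeric values are illustrative placeholders, not values prescribed by the repository):

# all values below are illustrative placeholders; adjust to your setup
cd src/
python ../scripts/dspy_bootstrap.py \
    --model_name meta-llama/Llama-3.1-8B-Instruct \
    --prompt_file ../prompts/bootstrap.txt \
    --output_dir ../outputs/bootstrap \
    --max_bootstrapped_demos 4 \
    --max_labeled_demos 8 \
    --num_candidate_programs 10 \
    --num_threads 4 \
    --tag 'simple'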

NB: To evaluate OpenAI models, set the OPENAI_KEY environment variable.
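
For example, in your shell:

export OPENAI_KEY="your-api-key"  # substitute your own OpenAI API key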

Run TAPS on NLSI

  1. Run evaluation without uncertainty:

    cd src
    python ../scripts/base_experiment.py \
        --model_name {model_name} \
        --use_gpt {use_gpt} \
        --test_dataset {path_to_test_dataset} \
        --n_shots {n_shots} \
        --prompt {prompt_file}

    To evaluate in the Tag-S setting, add --simple_aug 'tag'; to run Tag-and-Generate, add --tag 'cot' instead.
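
    For example, a hypothetical Tag-S invocation (the model name and paths are illustrative placeholders):

    # placeholder values; adjust the model name and paths to your setup
    cd src
    python ../scripts/base_experiment.py \
        --model_name gpt-4o-mini \
        --use_gpt True \
        --test_dataset ../data/nlsi/test.json \
        --n_shots 3 \
        --prompt ../prompts/base.txt \
        --simple_aug 'tag'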

  2. Run evaluation with a tool detector:

    cd src
    python ../scripts/uncertainty_experiment.py \
        --model_name {model_name} \
        --use_gpt {use_gpt} \
        --test_dataset {path_to_test_dataset} \
        --n_shots {n_shots} \
        --prompt {prompt_file} \
        --aug_preds_fpath {aug_preds_fpath}

    NB: To run the uncertainty experiments, first run the experiment with the data augmentation tool on the whole dataset (see step 1), then pass the path to those results via the --aug_preds_fpath argument.
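
    For example, a hypothetical invocation (the model name and paths are illustrative placeholders):

    # placeholder values; adjust the paths to your setup
    cd src
    python ../scripts/uncertainty_experiment.py \
        --model_name gpt-4o-mini \
        --use_gpt True \
        --test_dataset ../data/nlsi/test.json \
        --n_shots 3 \
        --prompt ../prompts/base.txt \
        --aug_preds_fpath ../outputs/base_experiment/predictions.json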

  3. Run experiments in the optimised setting:

    To run both the simple evaluation and the evaluation with a tool detector using the optimised instructions, add the following arguments:

        --setup_fpath {setup_fpath} \
        --use_setup_shots True \
        --use_setup_prompt True

    where setup_fpath is the path to a .json file output by the optimiser.
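
    For example, a hypothetical optimised run of the base experiment (the paths, and the optimiser output filename in particular, are illustrative placeholders):

    # placeholder paths; point --setup_fpath at the .json produced by dspy_bootstrap.py
    cd src
    python ../scripts/base_experiment.py \
        --model_name gpt-4o-mini \
        --use_gpt True \
        --test_dataset ../data/nlsi/test.json \
        --setup_fpath ../outputs/bootstrap/optimised_setup.json \
        --use_setup_shots True \
        --use_setup_prompt True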

🔗 Cite Us

TBD

Licence

Our code is available under the Apache 2.0 license.
