Comparative evaluation slow in basic_demo.ipynb · Issue #9 · PAIR-code/llm-comparator
Open
@simofildi

Description

Hi,
I'm running into an issue while executing the comparative evaluation:

# The comparison.run() function is the primary interface for running a
# Comparative Evaluation. It takes your prepared inputs, a judge, a bulletizer,
# and a clusterer and returns a Python dictionary in the required format for use
# in the LLM Comparator web app. You can inspect this dictionary in Python if
# you like, but it's more useful once written to a file.
#
# The example below is basic, but you can use the judge_opts=, bulletizer_opts=,
# and/or clusterer_opts= parameters (all of which are optional dictionaries that
# are converted to keyword options) to further customize the behaviors. See the
# docstrings for more.
comparison_result = comparison.run(
    llm_judge_inputs,
    judge,
    bulletizer,
    clusterer,
)
INFO:absl:Created 18 inputs for LLM judge.
 11% | 2/18 [02:06<16:54, 63.41s/it]
INFO:absl:Waiting 2s to retry...
INFO:absl:Waiting 4s to retry...
INFO:absl:Waiting 8s to retry...
INFO:absl:Waiting 16s to retry...
INFO:absl:Waiting 32s to retry...
INFO:absl:Waiting 2s to retry...
INFO:absl:Waiting 4s to retry...
INFO:absl:Waiting 8s to retry...
INFO:absl:Waiting 16s to retry...
INFO:absl:Waiting 32s to retry...
INFO:absl:Waiting 2s to retry...
INFO:absl:Waiting 4s to retry...
INFO:absl:Waiting 8s to retry...
INFO:absl:Waiting 16s to retry...
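
The waits of 2s, 4s, 8s, 16s, and 32s in the log look like exponential backoff after failed API calls, so I suspect the judge requests are being rate-limited rather than the evaluation itself being slow. A minimal sketch of that retry pattern, just to illustrate what the library appears to be doing (the function and the exception handling here are placeholders, not the actual llm_comparator internals):

import time

def call_with_backoff(fn, *args, max_retries=5, base_delay_s=2, **kwargs):
    # Retry fn with exponentially growing waits (2s, 4s, 8s, 16s, 32s),
    # matching the absl log lines above.
    for attempt in range(max_retries):
        try:
            return fn(*args, **kwargs)
        except Exception:  # placeholder: real code would catch the API's rate-limit error
            wait_s = base_delay_s * 2 ** attempt
            print(f'Waiting {wait_s}s to retry...')
            time.sleep(wait_s)
    return fn(*args, **kwargs)  # final attempt; let any remaining error propagate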

In the example video this process is extremely fast, but for me it's impossible to complete the run even with the demo JSON provided.
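
If the slowdown is quota-related, one possible workaround is to reduce the number of judge calls via the judge_opts= dictionary mentioned in the code comment above. Note that 'num_repeats' below is my assumption about the judge's keyword options, so check the docstrings for the actual option names:

comparison_result = comparison.run(
    llm_judge_inputs,
    judge,
    bulletizer,
    clusterer,
    # 'num_repeats' is an assumed option name; fewer repeats per input
    # means fewer API calls to the judge model.
    judge_opts={'num_repeats': 2},
)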
