Update evaluator, prompt utils, default config, and eval service #402
base: main
Conversation
```
@@ -29,8 +29,13 @@
# Disable remote eval plugins if the config is set
logger.info(f"setting remote eval plugins to {config.get('disable_remote_eval_plugins')}")
os.environ['PROMPTFOO_DISABLE_REDTEAM_REMOTE_GENERATION'] = str(config.get("disable_remote_eval_plugins"))
model = config.get("llm", {}).get("model") or "gpt-4"
```
Pass `None` if not configured.
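A minimal sketch of that suggestion, assuming the config shape from the hunk above (not the final implementation):

```python
# Sketch only: leave the model unset (None) when the config does not
# provide it, instead of hard-coding "gpt-4" at this call site.
config = {}  # e.g. no "llm" section configured

model = config.get("llm", {}).get("model")  # -> None if not configured
print(model)  # None
```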
```
# Get model from config here
model = eval_config.get("llm", {}).get("model", "gpt-4")
suggested_categories = get_suggested_plugins(purpose, model=model)
```
It should be set only once through `eval_init_config`, like `email` and `plugin_file_path`.
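A rough sketch of what the reviewer describes: resolve the model once at eval initialization alongside `email` and `plugin_file_path`, then reuse it at call sites. The class and key names below are assumptions, not the library's actual API:

```python
# Hypothetical sketch: resolve evaluation settings once, up front,
# instead of re-reading the config at every call site.
class EvalInitConfig:
    def __init__(self, config: dict):
        # Names mirror the review comments; exact keys are assumptions.
        self.email = config.get("email")
        self.plugin_file_path = config.get("plugin_file_path")
        self.eval_llm_model = config.get("eval_llm_model")  # single flat key

config = {"email": "user@example.com", "eval_llm_model": "gpt-4"}
eval_init_config = EvalInitConfig(config)

# Later call sites reuse the already-resolved value instead of reading config again:
# suggested_categories = get_suggested_plugins(purpose, model=eval_init_config.eval_llm_model)
print(eval_init_config.eval_llm_model)
```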
```
@@ -26,6 +26,10 @@ authz:
get_vector_db_details: 60
get_vector_db_policies: 3

llm:
```
Invalid change, remove this.
Also, instead of `llm:` and `model`, use one key: `eval_llm_model`.
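As a sketch, the flat key the reviewer suggests would reduce the nested lookup to a single call. The key name is taken from the comment; the config shapes shown in the comments are assumptions about the final form:

```python
# Instead of a nested block in the default config, e.g.
#
#   llm:
#     model: gpt-4
#
# a single flat key:
#
#   eval_llm_model: gpt-4
#
# which the code then reads with one lookup:
config = {"eval_llm_model": "gpt-4"}
model = config.get("eval_llm_model")
print(model)  # gpt-4
```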
```
@@ -329,12 +329,13 @@ def validate_evaluate_request_params(paig_eval_id, generated_prompts, base_promp
return True, ""


-def suggest_promptfoo_redteam_plugins_with_openai(purpose: str) -> Dict | str:
+def suggest_promptfoo_redteam_plugins_with_openai(purpose: str, model: str = "gpt-4") -> Dict | str:
```
Invalid change. You should take it from the library config, like we are getting the file path and email.
```
response['result'] = []
try:
    # Fetch model from eval_config or use default
    model = eval_config.get("llm", {}).get("model", "gpt-4")
```
Where is this `eval_config` defined? Have you confused it with the PAIG server?
Change Description
Describe your changes
Issue reference
This PR fixes issue #397
Printed the log: it first tries to fetch the value from the config, and if it's not provided, it falls back to the default.

