Update evaluator, prompt utils, default config, and eval service by ashwinivairalkar · Pull Request #402 · privacera/paig · GitHub


Open · wants to merge 2 commits into main
Conversation

ashwinivairalkar (Contributor):
Change Description

Issue reference: this PR fixes issue #397.

Verified by printing the log: it first tries to fetch the value from the config, and if it is not provided, it falls back to the default. Model names exercised during testing: testgpt2, testgpt1.
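The two lookup styles that appear in this PR behave differently when the `model` key is present but unset, which is what the review comments below are about. A minimal sketch (the config shape is taken from the diff; the values are illustrative):

```python
# Config where the llm.model key exists but was left unset.
config = {"llm": {"model": None}}

# Style 1: `or` falls back whenever the value is falsy (None, "", etc.).
model_a = config.get("llm", {}).get("model") or "gpt-4"   # -> "gpt-4"

# Style 2: a .get() default only applies when the key is absent entirely,
# so a key explicitly set to None stays None.
model_b = config.get("llm", {}).get("model", "gpt-4")     # -> None
```

This is why "pass None if not configured" and "`or` vs. `.get(default)`" matter: the two styles disagree exactly when the key is configured but empty.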

```diff
@@ -29,8 +29,13 @@
# Disable remote eval plugins if the config is set
logger.info(f"setting remote eval plugins to {config.get('disable_remote_eval_plugins')}")
os.environ['PROMPTFOO_DISABLE_REDTEAM_REMOTE_GENERATION'] = str(config.get("disable_remote_eval_plugins"))
model = config.get("llm", {}).get("model") or "gpt-4"
```
Collaborator: pass None if not configured.


```python
# Get model from config here
model = eval_config.get("llm", {}).get("model", "gpt-4")
suggested_categories = get_suggested_plugins(purpose, model=model)
```
Collaborator: it should be set only once, through eval_init_config, like email and plugin_file_path.
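A hypothetical sketch of what the reviewer is asking for: configure the eval model once at library-init time (alongside email and plugin_file_path), then read it back where needed, rather than hard-coding a `"gpt-4"` default in each call site. All names below are assumed for illustration, not taken from the paig codebase:

```python
# Module-level store for library settings, populated once at init time.
_eval_init_config = {}

def eval_init_config(email=None, plugin_file_path=None, eval_llm_model=None):
    # Record the settings a single time; later calls read from here.
    _eval_init_config.update(
        email=email,
        plugin_file_path=plugin_file_path,
        eval_llm_model=eval_llm_model,
    )

def get_eval_llm_model():
    # Returns None when not configured (per the review comment),
    # instead of silently substituting "gpt-4" at each call site.
    return _eval_init_config.get("eval_llm_model")
```

With this shape, `suggest_promptfoo_redteam_plugins_with_openai` would call `get_eval_llm_model()` instead of taking a defaulted `model` parameter.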

```diff
@@ -26,6 +26,10 @@ authz:
  get_vector_db_details: 60
  get_vector_db_policies: 3

llm:
```
Collaborator: invalid change, remove this. Also, instead of `llm:` and `model`, use one key, "eval_llm_model".
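A sketch of the config shape the reviewer suggests, contrasted with the one this PR introduces (both dicts here are illustrative, not from the repo):

```python
# Shape introduced by this PR: a nested llm: / model: pair.
nested_config = {"llm": {"model": "gpt-4"}}

# Shape the reviewer asks for: a single flat key.
flat_config = {"eval_llm_model": "gpt-4"}

# The flat key needs no chained lookups or intermediate {} defaults.
model = flat_config.get("eval_llm_model")
assert model == nested_config.get("llm", {}).get("model")
```

A flat key also avoids the `present-but-unset` ambiguity entirely: `flat_config.get("eval_llm_model")` is `None` only when the setting was never provided.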

```diff
@@ -329,12 +329,13 @@ def validate_evaluate_request_params(paig_eval_id, generated_prompts, base_promp
     return True, ""


-def suggest_promptfoo_redteam_plugins_with_openai(purpose: str) -> Dict | str:
+def suggest_promptfoo_redteam_plugins_with_openai(purpose: str, model: str = "gpt-4") -> Dict | str:
```
Collaborator: invalid change. You should take it from the library config, like we are getting the file path and email.

```python
response['result'] = []
try:
    # Fetch model from eval_config or use default
    model = eval_config.get("llm", {}).get("model", "gpt-4")
```
Collaborator: where is this eval_config defined? Have you confused it with the paig server?
