8000 [RLlib; Offline RL] Implement Offline Policy Evaluation (OPE) via Importance Sampling. by simonsays1980 · Pull Request #53702 · ray-project/ray · GitHub

[RLlib; Offline RL] Implement Offline Policy Evaluation (OPE) via Importance Sampling. #53702

Open
wants to merge 18 commits into base: master

Conversation

simonsays1980
Collaborator
@simonsays1980 simonsays1980 commented Jun 10, 2025

Why are these changes needed?

The Offline RL API in the new API stack still lacks offline policy evaluation, although it already offers a validation loss. This PR introduces OPE and implements the following:

  • A new OfflinePolicyEvaluationRunner that derives from our OfflineEvaluationRunner and can be scheduled by our OfflineEvaluationRunnerGroup (users can also implement their own runner class for custom evaluation).
  • A corresponding OfflinePolicyPreEvaluator that preprocesses data for OPE.
  • Two new attributes in the AlgorithmConfig to control offline evaluation:
    • offline_evaluation_type: the evaluation type. One of `"eval_loss"`, `"is"`, or `"pdis"`.
    • offline_eval_runner_class: the runner class to use for offline evaluation. This can be a custom class. If no class is given, the default runner class for the chosen evaluation type is used (see the configuration sketch after this list).
    • Corresponding validation logic for the new attributes.
  • Two OPE methods:
    • "is": Ordinary importance sampling.
    • "pdis": Per-decision importance sampling (which inhibits usually a lower variance than simple IS).

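For reference, a self-contained sketch of the two estimators (not the PR's actual runner code): given target- and behavior-policy action log-probs plus rewards for one episode, ordinary IS weights the full discounted return by the product of ratios over the whole trajectory, while PDIS weights each reward only by the ratios up to its own timestep, which is why its variance is usually lower.

```python
import numpy as np


def is_and_pdis_estimates(logp_target, logp_behavior, rewards, gamma=1.0):
    """Ordinary and per-decision importance-sampling estimates for one episode.

    Args:
        logp_target: log pi(a_t | s_t) under the evaluated (target) policy.
        logp_behavior: log mu(a_t | s_t) under the behavior policy that
            generated the offline data.
        rewards: r_t observed in the episode.
        gamma: discount factor.
    """
    logp_target = np.asarray(logp_target)
    logp_behavior = np.asarray(logp_behavior)
    rewards = np.asarray(rewards)
    T = len(rewards)
    discounts = gamma ** np.arange(T)

    # Cumulative importance ratios rho_{0:t} = prod_{k<=t} pi/mu, computed in
    # log space for numerical stability.
    cum_ratio = np.exp(np.cumsum(logp_target - logp_behavior))

    # Ordinary IS: weight the whole discounted return by the full-trajectory ratio.
    v_is = cum_ratio[-1] * np.sum(discounts * rewards)

    # Per-decision IS: weight each reward only by the ratios up to its timestep.
    v_pdis = np.sum(discounts * cum_ratio * rewards)

    return v_is, v_pdis
```

Averaging these per-episode estimates over the offline dataset yields the IS and PDIS value estimates for the target policy.
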
What is still missing:

  • Example for using OPE with some SingleAgentEpisode data.
  • Using the EnvToModule pipeline inside of the OfflinePolicyPreEvaluator.

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@simonsays1980 simonsays1980 marked this pull request as ready for review June 10, 2025 16:53
@Copilot Copilot AI review requested due to automatic review settings June 10, 2025 16:53
@simonsays1980 simonsays1980 requested a review from a team as a code owner June 10, 2025 16:53
Contributor
@Copilot Copilot AI left a comment

Pull Request Overview

This pull request implements Offline Policy Evaluation (OPE) via Importance Sampling for the Offline RL API. Key changes include the introduction of new runner and pre-evaluator classes for OPE, updates to the evaluation configuration and processing in the algorithm logic, and adjustments to logging and state management for offline evaluation.

Reviewed Changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| rllib/utils/runners/runner_group.py | Added forwarding of kwargs in runner creation. |
| rllib/tuned_examples/bc/cartpole_bc_with_offline_evaluation.py | Configured the offline evaluation type for the CartPole example. |
| rllib/offline/offline_evaluation_runner_group.py | Updated runner class selection logic and introduced pre-learner/evaluator assignment. |
| rllib/offline/offline_evaluation_runner.py | Applied override annotations and removed state updates for deprecated connectors. |
| rllib/env/single_agent_env_runner.py | Renamed metric keys for per-agent and per-module returns. |
| rllib/algorithms/algorithm_config.py | Added and validated the new offline evaluation configuration attributes. |
| rllib/algorithms/algorithm.py | Updated offline evaluation runner setup and return value structure while introducing a local-runner fallback. |
Comments suppressed due to low confidence (2)

rllib/algorithms/algorithm_config.py:2992

  • The attribute name used here ('offline_eval_runner_cls') is inconsistent with the previously defined 'offline_eval_runner_class'. Consider using the same attribute name for consistency.
            self.offline_eval_runner_cls = offline_eval_runner_class

rllib/algorithms/algorithm.py:1159

  • The return structure of 'evaluate_offline' has changed compared to the previous nested format. Please ensure that downstream consumers are updated to handle the new dictionary structure.
        return {OFFLINE_EVAL_RUNNER_RESULTS: eval_results}
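
Regarding the comment above, a minimal sketch of what a downstream consumer of the new flat result structure might look like; the import path for `OFFLINE_EVAL_RUNNER_RESULTS` is an assumption and should be checked against the actual module:

```python
# Assumed import path for the metrics key used in this PR.
from ray.rllib.utils.metrics import OFFLINE_EVAL_RUNNER_RESULTS

# `algo` is a built Algorithm with offline evaluation configured.
results = algo.evaluate_offline()

# New flat structure: the runner results sit directly under this single key
# instead of the previously nested format.
offline_eval_results = results[OFFLINE_EVAL_RUNNER_RESULTS]
```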

@simonsays1980 simonsays1980 requested a review from sven1977 June 10, 2025 17:10
@simonsays1980 simonsays1980 added the rllib (RLlib related issues), rllib-evaluation (Bug affecting policy evaluation with RLlib), and rllib-offline-rl (Offline RL problems) labels Jun 10, 2025
@@ -2829,6 +2833,13 @@ def evaluation(
for parallel evaluation. Setting this to 0 forces sampling to be done in the
local OfflineEvaluationRunner (main process or the Algorithm's actor when
using Tune).
offline_evaluation_type: Type of offline evaluation to run. Either `"eval_loss"`
Contributor

Question: So, if a user provides offline_eval_runner_class, then the value of this field is ignored?
For more explicitness, should we not provide these 3 built-ins ("eval_loss", "is", "pdis") as classes as well and show users where to find them in the repo? Then this config setting would be superfluous. Or do you think it's too complicated to explain?

Collaborator Author

This is a good one. Let me think about this. Both solutions have their advantages.

@@ -1363,6 +1366,38 @@ def _evaluate_with_custom_eval_function(self) -> Tuple[ResultDict, int, int]:

return eval_results, env_steps, agent_steps

def _evaluate_offline_on_local_runner(self):
# if hasattr(env_runner, "input_reader") and env_runner.input_reader is None:
Contributor

remove this comment?

Collaborator Author

Oh yeah! How did this even get in there?

Contributor
@sven1977 sven1977 left a comment

Approved with one question. Thanks @simonsays1980!

@sven1977 sven1977 enabled auto-merge (squash) June 27, 2025 08:53
@github-actions github-actions bot added the go (add ONLY when ready to merge, run all tests) label Jun 27, 2025
@sven1977 sven1977 disabled auto-merge June 27, 2025 08:53
Labels
go (add ONLY when ready to merge, run all tests), rllib (RLlib related issues), rllib-evaluation (Bug affecting policy evaluation with RLlib), rllib-offline-rl (Offline RL problems)