fix: Prevent explain_info overwrite during stream_async by andompesta · Pull Request #1194 · NVIDIA/NeMo-Guardrails · GitHub

fix: Prevent explain_info overwrite during stream_async #1194


Merged

Conversation

@andompesta (Contributor) commented May 14, 2025

Description

This PR addresses an issue where self.explain_info could be overwritten during calls to stream_async, potentially leading to incomplete explanation data.

The root cause was that the generate_async task and the _run_output_rails_in_streaming function (if output streaming rails were active) could independently initialize ExplainInfo objects if the explain_info_var context variable was not set prior to the generate_async task's creation.

The fix ensures that explain_info_var and self.explain_info are initialized at the beginning of the stream_async method. This guarantees that both the generate_async task (via context variable inheritance) and _run_output_rails_in_streaming use the same ExplainInfo instance, preserving data integrity.
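
Below is a minimal, self-contained sketch of the pattern this fix follows, not the actual NeMo Guardrails implementation: the names `explain_info_var`, `ExplainInfo`, `stream_async`, `generate_async`, and `_run_output_rails_in_streaming` come from the description above, while the class layout and the stand-in coroutine bodies are purely illustrative.

```python
import asyncio
import contextvars
from dataclasses import dataclass, field

# Context variable holding the current ExplainInfo, mirroring the
# explain_info_var described in the PR (simplified here).
explain_info_var: contextvars.ContextVar = contextvars.ContextVar(
    "explain_info", default=None
)


@dataclass
class ExplainInfo:
    llm_calls: list = field(default_factory=list)


class Rails:
    async def stream_async(self):
        # The fix: bind a single ExplainInfo to the context variable and to
        # self.explain_info *before* spawning generate_async, so the task
        # inherits it through the context copy made by create_task().
        info = explain_info_var.get()
        if info is None:
            info = ExplainInfo()
            explain_info_var.set(info)
        self.explain_info = info

        generation = asyncio.create_task(self._generate_async())
        await self._run_output_rails_in_streaming()
        await generation
        return self.explain_info

    async def _generate_async(self):
        # Sees the same ExplainInfo via context-variable inheritance.
        explain_info_var.get().llm_calls.append("generation LLM call")

    async def _run_output_rails_in_streaming(self):
        # Reuses the already-bound instance instead of creating a new one.
        explain_info_var.get().llm_calls.append("output rail LLM call")


async def main():
    info = await Rails().stream_async()
    print(info.llm_calls)  # both calls recorded on one shared instance


asyncio.run(main())
```

Because `asyncio.create_task` copies the current context at task creation, binding the context variable before the task is created is what guarantees both code paths record onto the same instance.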

[Same as https://github.com//pull/1193, but with the pre-commit hooks applied.]

Related Issue(s)

#1192

Checklist

  • [x] I've read the CONTRIBUTING guidelines.
  • [ ] I've updated the documentation if applicable.
  • [x] I've added tests if applicable.
  • [x] @mentions of the person or team responsible for reviewing proposed changes.

Sandro Cavallari added 2 commits May 14, 2025 08:14
@Pouyanpi Pouyanpi requested review from Pouyanpi and Copilot May 14, 2025 09:01
@Copilot (Copilot AI) left a comment

Pull Request Overview

This PR fixes the missing LLM call tracking in the explain info and refactors its initialization logic, adding tests to ensure correct behavior; the pre-commit hooks have also been applied.

  • Added asynchronous tests to validate that the explain info correctly records LLM calls (a simplified sketch of such a check follows this list).
  • Refactored llmrails.py to encapsulate explain info initialization in the _ensure_explain_info() method and removed redundant updating functions.
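
As a hedged sketch of the kind of check these tests perform, the snippet below uses the public LLMRails API rather than the repository's internal test helpers: it streams a response and then confirms that rails.explain() still reflects the LLM calls made during streaming. The config directory, the prompt, and the plain assert (instead of the repo's pytest fixtures) are placeholders.

```python
import asyncio

from nemoguardrails import LLMRails, RailsConfig


async def explain_info_survives_streaming():
    # Placeholder config directory; a real run needs a valid rails config
    # (and model credentials) at this path.
    config = RailsConfig.from_path("./config")
    rails = LLMRails(config)

    # Consume the streamed response; with the fix, the ExplainInfo created
    # at the start of stream_async is the one the LLM calls are recorded on.
    async for _chunk in rails.stream_async(
        messages=[{"role": "user", "content": "Hello!"}]
    ):
        pass

    explanation = rails.explain()
    assert explanation.llm_calls, "explain info was overwritten during streaming"


asyncio.run(explain_info_survives_streaming())
```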

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| tests/test_llm_rails_context_variables.py | Introduces tests for LLM call tracking and verifies expected call counts. |
| nemoguardrails/rails/llm/llmrails.py | Refactors and unifies the logic for initializing explain info, removing redundant code. |
Comments suppressed due to low confidence (1)

tests/test_llm_rails_context_variables.py:111

  • The comment above this assertion states that 6 LLM calls are expected, but the assertion checks for 5. Please update either the comment or the expected value to ensure consistency.
assert (len(chat.app.explain().llm_calls) == 5), "number of llm call not as expected. Expected 5, found {}".format(len(chat.app.explain().llm_calls))

@Pouyanpi Pouyanpi changed the title from "Fix/missing llm calls in explain info [new instance with pre-commit hooks]" to "fix: Prevent explain_info overwrite during stream_async" on May 14, 2025
@Pouyanpi (Collaborator) left a comment

LGTM! Thank you @andompesta

@Pouyanpi Pouyanpi added this to the v0.14.0 milestone May 14, 2025
@Pouyanpi Pouyanpi merged commit 343b759 into NVIDIA:develop May 14, 2025
29 checks passed