fix: constructor LLM should not skip loading other config models by jeffreyscarpenter · Pull Request #1221 · NVIDIA/NeMo-Guardrails · GitHub

fix: constructor LLM should not skip loading other config models #1221


Merged: 10 commits merged into NVIDIA:develop on Jun 26, 2025

Conversation

jeffreyscarpenter
Contributor

Description

This PR addresses an issue where the configuration for other models (e.g., content safety, topic control) was not being honored when a main model was provided via the LLMRails (or RunnableRails) constructor. The changes ensure that all model configurations are properly initialized and accessible, even when a main model is injected.
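A minimal sketch of the scenario this PR fixes (the YAML, engine names, and model names below are illustrative, not taken from the PR): before the change, passing llm= to the constructor caused the other models declared in the config, such as a content safety model, to be skipped during initialization.

    # Illustrative sketch; engine/model names are placeholders.
    from langchain_community.llms.fake import FakeListLLM
    from nemoguardrails import LLMRails, RailsConfig

    config = RailsConfig.from_content(
        yaml_content="""
    models:
      - type: main
        engine: openai
        model: gpt-3.5-turbo
      - type: content_safety
        engine: nim
        model: nvidia/llama-3.1-nemoguard-8b-content-safety
    """
    )

    # Inject a main LLM via the constructor.
    main_llm = FakeListLLM(responses=["Hello!"])

    # With this fix, the injected main_llm is used as the main model and the
    # content_safety model from the config is still initialized (previously it
    # was silently skipped). Initializing it may require the corresponding
    # provider package and credentials.
    rails = LLMRails(config, llm=main_llm)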

Related Issue(s)

Fixes #1220

Checklist

  • I've read the CONTRIBUTING guidelines.
  • I've updated the documentation if applicable.
  • I've added tests if applicable.
  • @mentions of the person or team responsible for reviewing proposed changes.

@tgasser-nv
Collaborator

Thanks for providing this PR, @jeffreyscarpenter. It looks good to me; there are a few steps before getting this merged:

  • Could you update the docstring in LLMRails init() to say the llm argument is used as the main LLM, and takes precedence over the main LLM (if it exists) in the config argument?
  • Could you add a warning if LLMRails is initialized with both an llm argument and a main model in the config argument? Include which model will be used and which is ignored.
  • Could you add tests to cover the permutations of main LLM being provided via config and llm in the LLMRails init() function?

In the longer term, it would be cleaner to get rid of the optional llm argument to LLMRails init() entirely and check that exactly one main model is provided in the RailsConfig argument. But that's not required here.
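A minimal sketch of the kind of warning requested above (a hypothetical helper, not the actual NeMo Guardrails implementation; the config.models attribute access is an assumption):

    import logging

    log = logging.getLogger(__name__)

    def warn_on_duplicate_main_llm(config, llm) -> None:
        """Warn when both an injected `llm` and a config `main` model exist."""
        # Assumption for this sketch: RailsConfig.models is a list of model
        # configs, each with a `type` field.
        has_main_in_config = any(model.type == "main" for model in config.models)
        if llm is not None and has_main_in_config:
            log.warning(
                "Both an `llm` constructor argument and a `main` model in the "
                "config were provided; the constructor `llm` takes precedence "
                "and the config `main` model is ignored."
            )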

@tgasser-nv tgasser-nv self-requested a review June 12, 2025 19:53
Collaborator
@tgasser-nv tgasser-nv left a comment


Requested a couple of changes; please also add tests to make sure the priority of the injected main LLM over the one in RailsConfig is covered.

@Pouyanpi
Collaborator

Thank you, @jeffreyscarpenter. Nice work on identifying and fixing this bug! 👍 The original logic was definitely problematic; completely skipping model initialization when a constructor LLM was provided ...

A few things caught my attention though:

  • Regarding the test changes: I noticed you removed a bunch of expected events from test_1 (the "ask if user happy" flow). Is this intentional?
  • Regarding self.main_llm vs. self.llm: I see you introduced self.main_llm, but we're still using self.llm everywhere else.
   # introduces self.main_llm but doesn't use it consistently
   self.main_llm = self.llm  # line 371
   # but later still uses self.llm in most places

Also, it'd be great if you could run the pre-commit hooks here and, once the pipeline has executed, have a look at the coverage report 👍🏻

@Pouyanpi Pouyanpi changed the title Honor other model configuration when main model provided via constructor fix: constructor LLM should not skip loading other config models Jun 19, 2025
@Pouyanpi Pouyanpi added bug Something isn't working status: in review labels Jun 19, 2025
@Pouyanpi Pouyanpi added this to the v0.14.1 milestone Jun 19, 2025
@jeffreyscarpenter
Contributor Author

Thank you, @Pouyanpi. I have tried to address all the feedback; let me know.

  • the test changes, I noticed you removed a bunch of expected events from test_1 (the "ask if user happy" flow). Is this intentional?

These were deleted inadvertently; I have restored them.

  • The self.main_llm vs self.llm thing, I see you introduced self.main_llm but we're still using self.llm everywhere else.

Removed all references to main_llm.

also it'd be great if you can run pre-commits here and once the pipeline is executed have a look at the coverage report 👍🏻

Ran the pre-commit hooks.

Pouyanpi added 4 commits June 25, 2025 16:55
The LlamaGuard tests were passing before because they never actually tried to initialize the LlamaGuard model; the early return prevented it. The tests worked purely with the injected FakeLLM and the manually registered llama_guard_llm.
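For context, a hypothetical test sketch for the precedence permutation requested in the review (the repository's own test helpers, such as the FakeLLM in tests/utils, and the rails.llm attribute name are assumptions here):

    from langchain_community.llms.fake import FakeListLLM
    from nemoguardrails import LLMRails, RailsConfig

    def test_injected_llm_takes_precedence_over_config_main():
        config = RailsConfig.from_content(
            yaml_content="""
    models:
      - type: main
        engine: openai
        model: gpt-3.5-turbo
    """
        )
        injected = FakeListLLM(responses=["ok"])
        rails = LLMRails(config, llm=injected)
        # Assumption: LLMRails keeps the resolved main model on `rails.llm`.
        assert rails.llm is injected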
@codecov-commenter
codecov-commenter commented Jun 25, 2025

Codecov Report

Attention: Patch coverage is 80.00000% with 3 lines in your changes missing coverage. Please review.

Project coverage is 68.98%. Comparing base (5a4733f) to head (2d130f1).
Report is 3 commits behind head on develop.

Files with missing lines Patch % Lines
nemoguardrails/rails/llm/llmrails.py 80.00% 3 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1221      +/-   ##
===========================================
+ Coverage    68.65%   68.98%   +0.33%     
===========================================
  Files          161      161              
  Lines        15978    15985       +7     
===========================================
+ Hits         10969    11027      +58     
+ Misses        5009     4958      -51     
Flag Coverage Δ
python 68.98% <80.00%> (+0.33%) ⬆️

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
nemoguardrails/rails/llm/llmrails.py 87.58% <80.00%> (+0.37%) ⬆️

... and 2 files with indirect coverage changes

@Pouyanpi Pouyanpi self-requested a review June 25, 2025 15:48
Collaborator
@Pouyanpi Pouyanpi left a comment


Thank you, @jeffreyscarpenter. I made some changes; some tests were failing because they never actually tried to initialize their respective model prior to your fix (the early return prevented it). Thanks for the fix. We are good to merge 👍🏻

@Pouyanpi Pouyanpi merged commit 6d3ee5d into NVIDIA:develop Jun 26, 2025
18 checks passed
Labels
bug (Something isn't working), status: in review
Projects
None yet
Development

Successfully merging this pull request may close these issues.

bug: other model configurations ignored when main model provided via constructor
4 participants