fix: constructor LLM should not skip loading other config models #1221
Conversation
Thanks for providing this PR @jeffreyscarpenter. It looks good to me; there are a few steps before getting this merged:
In the longer term it would be cleaner to get rid of the optional
Requested a couple of changes; please also add tests to make sure the priority between the injected LLM and the RailsConfig main LLM is preserved.
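The priority check requested above could be sketched roughly as follows. Note that `FakeLLM` and `resolve_main_llm` are illustrative stand-ins, not the project's actual helpers; the real tests would exercise `LLMRails` itself.

```python
# Hypothetical sketch of the injected-vs-config priority tests.

class FakeLLM:
    """Minimal stand-in for an injected LLM object."""

    def __init__(self, name):
        self.name = name


def resolve_main_llm(injected_llm, config_main_llm):
    # Behavior under test: a constructor-injected LLM takes priority
    # over the main model declared in the RailsConfig.
    return injected_llm if injected_llm is not None else config_main_llm


def test_injected_llm_takes_priority():
    injected = FakeLLM("injected")
    configured = FakeLLM("from-config")
    assert resolve_main_llm(injected, configured) is injected


def test_config_main_llm_used_without_injection():
    configured = FakeLLM("from-config")
    assert resolve_main_llm(None, configured) is configured
```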
…licate main llm configuration
…honoring other model configs
(force-pushed from 13d61c0 to 5bd9b36)
Thank you @jeffreyscarpenter, nice work on identifying and fixing this bug! 👍 The original logic was definitely problematic: completely skipping model initialization when a constructor LLM was provided ... A few things caught my attention though:
# introduces self.main_llm but doesn't use it consistently
self.main_llm = self.llm  # line 371
# but later still uses self.llm in most places
Also, it'd be great if you could run the pre-commits here and, once the pipeline has executed, have a look at the coverage report 👍🏻
Thank you @Pouyanpi, I have tried to address all the feedback; let me know.
Deleted inadvertently, I have restored them.
Removed all references to
Ran the pre-commits
The LlamaGuard tests were passing before because they never actually tried to initialize the LlamaGuard model - the early return prevented it. The tests worked purely with the injected FakeLLM and manually registered llama_guard_llm.
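The early-return pattern discussed in this PR can be sketched like this (the class and attribute names below are simplified stand-ins for the actual nemoguardrails code, not the real implementation):

```python
class RailsSketch:
    """Simplified stand-in for the rails class discussed in this PR."""

    def __init__(self, model_configs, llm=None):
        self.llm = llm  # optional constructor-injected main LLM
        self.models = {}
        self._init_llms(model_configs)

    def _init_llms(self, model_configs):
        # Old, buggy behavior: an `if self.llm is not None: return`
        # at the top of this method skipped the whole loop, so
        # content-safety / topic-control models declared in the config
        # were never initialized.
        for model_type in model_configs:
            if model_type == "main" and self.llm is not None:
                # Fixed behavior: the injected LLM replaces only "main";
                # every other configured model is still initialized.
                self.models["main"] = self.llm
                continue
            self.models[model_type] = f"initialized-from-config:{model_type}"
```

Under the old behavior, tests that injected a FakeLLM never reached the per-model loop at all, which is why they passed without ever constructing the other models.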
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:
@@            Coverage Diff             @@
##           develop    #1221      +/-   ##
===========================================
+ Coverage    68.65%   68.98%    +0.33%
===========================================
  Files          161      161
  Lines        15978    15985        +7
===========================================
+ Hits         10969    11027       +58
+ Misses        5009     4958       -51
Thank you @jeffreyscarpenter, I made some changes. Some tests were failing because they never actually tried to initialize their respective models prior to your fix; the early return prevented it. Thanks for the fix, we are good to merge 👍🏻
Description
This PR addresses an issue where the configuration for other models (e.g., content safety, topic control) was not being honored when a main model was provided via the LLMRails (or RunnableRails) constructor. The changes ensure that all model configurations are properly initialized and accessible, even when a main model is injected.
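For illustration, a config declaring both a main model and a content-safety model might look like the fragment below (the engine and model names are made up for the example). With this fix, passing `llm=` to the `LLMRails` constructor overrides only the `main` entry, while the other models are still initialized from the config:

```yaml
models:
  - type: main
    engine: openai
    model: gpt-4o
  - type: content_safety
    engine: openai
    model: gpt-4o-mini
```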
Related Issue(s)
Fixes #1220
Checklist