Unified Weight Adapter system for better maintainability and future feature of Lora system #7540
Conversation
* LoRA load/calculate_weight
* LoHa/LoKr/GLoRA load

For calculate_weight I implemented a temporary fallback mechanism during development.
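To make the fallback idea concrete, here is a minimal sketch of what a unified adapter interface with a legacy fallback might look like. All names (`WeightAdapter`, `load`, `calculate_weight`, `load_adapter`) are illustrative assumptions, not taken verbatim from the ComfyUI codebase, and numpy stands in for torch:

```python
import numpy as np

class WeightAdapter:
    """Hypothetical base class for a unified weight-adapter system.

    Each adapter format (LoRA, LoHa, LoKr, GLoRA, ...) subclasses this
    and implements detection (load) and weight patching (calculate_weight).
    """
    @classmethod
    def load(cls, key, patch):
        raise NotImplementedError

    def calculate_weight(self, weight, strength):
        raise NotImplementedError

class LoRAAdapter(WeightAdapter):
    def __init__(self, up, down):
        self.up, self.down = up, down  # up: (out, rank), down: (rank, in)

    @classmethod
    def load(cls, key, patch):
        # Recognize the LoRA format by its key names (illustrative keys).
        if "lora_up" in patch and "lora_down" in patch:
            return cls(patch["lora_up"], patch["lora_down"])
        return None  # not this format

    def calculate_weight(self, weight, strength):
        # W' = W + strength * (up @ down)
        return weight + strength * (self.up @ self.down)

ADAPTERS = [LoRAAdapter]  # LoHa/LoKr/GLoRA adapters would register here too

def load_adapter(key, patch):
    """Try each registered adapter; None means: fall back to the old
    lora.py code path (the temporary fallback mentioned above)."""
    for adapter_cls in ADAPTERS:
        adapter = adapter_cls.load(key, patch)
        if adapter is not None:
            return adapter
    return None
```

The point of the design is that new algorithms only add a subclass plus a registry entry, instead of growing branch logic inside one monolithic `calculate_weight`.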
Merge of upstream master (commit list):

* Allow disabling pe in flux code for some other models.
* Initial Hunyuan3Dv2 implementation. Supports the multiview, mini, turbo models and VAEs.
* Fix orientation of hunyuan 3d model.
* A few fixes for the hunyuan3d models.
* Update frontend to 1.13 (comfyanonymous#7331)
* Add backend primitive nodes (comfyanonymous#7328); add control after generate to int primitive.
* Nodes to convert images to YUV and back. Can be used to convert an image to black and white.
* Update frontend to 1.14 (comfyanonymous#7343)
* Native LotusD Implementation (comfyanonymous#7125): draft pass at a native comfy implementation of Lotus-D depth and normal est; fix model_sampling kludges; fix ruff.
* Automatically set the right sampling type for lotus.
* support output normal and lineart once (comfyanonymous#7290)
* [nit] Format error strings (comfyanonymous#7345)
* ComfyUI version v0.3.27
* Fallback to pytorch attention if sage attention fails.
* Add model merging node for WAN 2.1
* Add Hunyuan3D to readme.
* Support more float8 types.
* Add CFGZeroStar node. Works on all models that use a negative prompt but is meant for rectified flow models.
* Support the WAN 2.1 fun control models. Use the new WanFunControlToVideo node.
* Add WanFunInpaintToVideo node for the Wan fun inpaint models.
* Update frontend to 1.14.6 (comfyanonymous#7416); cherry-pick the fix: Comfy-Org/ComfyUI_frontend#3252
* Don't error if wan concat image has extra channels.
* ltxv: fix preprocessing exception when compression is 0. (comfyanonymous#7431)
* Remove useless code.
* Fix latent composite node not working when source has alpha.
* Fix alpha channel mismatch on destination in ImageCompositeMasked
* Add option to store TE in bf16 (comfyanonymous#7461)
* User missing (comfyanonymous#7439): return a 401 error when user data is not found in multi-user context, and when the provided comfy-user does not exist on the server side.
* Fix comment. This function does not support quads.
* MLU memory optimization (comfyanonymous#7470)
* Fix alpha image issue in more nodes.
* Fix problem.
* Disable partial offloading of audio VAE.
* Add activations_shape info in UNet models (comfyanonymous#7482); activations_shape should be a list.
* Support 512 siglip model.
* Show a proper error to the user when a vision model file is invalid.
* Support the wan fun reward loras.

Co-authored-by: comfyanonymous <comfyanonymous@protonmail.com>
Co-authored-by: Chenlei Hu <hcl@comfy.org>
Co-authored-by: thot experiment <94414189+thot-experiment@users.noreply.github.com>
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Co-authored-by: Terry Jia <terryjia88@gmail.com>
Co-authored-by: Michael Kupchick <michael@lightricks.com>
Co-authored-by: BVH <82035780+bvhari@users.noreply.github.com>
Co-authored-by: Laurent Erignoux <lerignoux@gmail.com>
Co-authored-by: BiologicalExplosion <49753622+BiologicalExplosion@users.noreply.github.com>
Co-authored-by: huzhan <huzhan@cambricon.com>
Co-authored-by: Raphael Walker <slickytail.mc@gmail.com>
* LoRA/LoHa/LoKr/GLoRA working well
* Removed TONS of code in lora.py
@Kosinkadink ping
@KohakuBlueleaf Hey, looks great! Would you be able to list out any manual testing you did for this PR, i.e. whether each type of lora was verified to produce the same results before and after? Just for the sake of documenting for review purposes. Thank you!
I tested LyCORIS-trained lora/lokr/loha on SDXL models. Since the calculation of each part is never changed (I just moved it into different modules), I think this testing is enough.
Based on:

* comfyanonymous/ComfyUI#7727
* comfyanonymous/ComfyUI#7725
* comfyanonymous/ComfyUI#7540

Adds backwards-compatible support of wd on output, a unified weight adapter solution, and support for OFT/BOFT.
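For context on the OFT/BOFT mention: OFT patches a weight by multiplying it with a block-diagonal orthogonal matrix built from skew-symmetric blocks via the Cayley transform, rather than adding a low-rank delta as LoRA does. The sketch below is illustrative only; the function name, the `strength` interpolation toward identity, and numpy in place of torch are all assumptions, not the actual ComfyUI/PR code:

```python
import numpy as np

def oft_calculate_weight(weight, q_blocks, strength=1.0):
    """Sketch of an OFT-style update: W' = R @ W, where R is block-diagonal
    and each block R_i = (I + Q_i) @ inv(I - Q_i) (Cayley transform) is
    orthogonal because Q_i is skew-symmetric."""
    out_dim = weight.shape[0]
    n = q_blocks[0].shape[0]
    assert len(q_blocks) * n == out_dim, "blocks must tile the output dim"
    eye = np.eye(n)
    patched = weight.copy()
    for i, q in enumerate(q_blocks):
        q = 0.5 * (q - q.T)                       # enforce skew-symmetry
        r = (eye + q) @ np.linalg.inv(eye - q)    # orthogonal rotation block
        # Blend toward identity by strength (a common merge heuristic;
        # note the blended matrix is only approximately orthogonal).
        r = eye + strength * (r - eye)
        patched[i * n:(i + 1) * n] = r @ weight[i * n:(i + 1) * n]
    return patched
```

With all-zero `q_blocks` the rotation is the identity and the weight is returned unchanged, which is the natural "no-op adapter" baseline.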
As the title says, I propose a unified weight adapter system to replace the current lora implementation, which should benefit future maintenance, the implementation of new algorithms, and other features (such as trainer nodes).
cc @comfyanonymous @yoland68