Some question about training lora for HiDream-I1 · Issue #11777 · huggingface/diffusers · GitHub
Some question about training lora for HiDream-I1 #11777
Open
@yinguoweiOvO

Description

Describe the bug

Has anyone successfully trained HiDream-I1 using the diffusers repository? I trained a LoRA for HiDream-I1 with the default learning rate, and the output images were extremely prone to collapsing, often degenerating into pure noise. This problem did not occur when training with the ai-toolkit repository at the same learning rate. Which settings should I adjust?

Note:
I modified the dataset loading method, but the same loading code causes no such problem when used to train Flux, so this part of the modification should not be the cause.
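
For context, the modified loading path looks roughly like the following. This is a minimal sketch, assuming train_tag.json holds records like {"image": "<path>", "text": "<caption>"} to match the --image_column/--caption_column flags in the reproduction below:

from datasets import load_dataset
from PIL import Image

# Load the caption dataset from a local JSON file (the record layout is an assumption).
dataset = load_dataset("json", data_files="./train_tag.json", split="train")
print(dataset.column_names)  # expected: ['image', 'text']

# Decode one sample the way the training loop would.
sample = dataset[0]
image = Image.open(sample["image"]).convert("RGB")
caption = sample["text"]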

Please help me. Thanks!

Reproduction

export MODEL_NAME="./HiDream/HiDream-E1-Full"
export INSTANCE_DIR="./train_tag.json"
export OUTPUT_DIR="./HiDream-I1/trained-hidream-lora-8"

CUDA_VISIBLE_DEVICES=3 python ./HiDream-I1/train_dreambooth_lora_hidream.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --pretrained_tokenizer_4_name_or_path="./meta-llama/Llama-3.1-8B-Instruct" \
  --pretrained_text_encoder_4_name_or_path="./meta-llama/Llama-3.1-8B-Instruct" \
  --dataset_name=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --image_column="image" \
  --caption_column="text" \
  --mixed_precision="bf16" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --rank=8 \
  --learning_rate=2e-4 \
  --report_to="tensorboard" \
  --lr_scheduler="constant_with_warmup" \
  --lr_warmup_steps=100 \
  --max_train_steps=50000 \
  --checkpointing_steps=1000 \
  --seed="0" \
  --offload
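
For reference, samples from the trained LoRA were generated along these lines. This is a minimal sketch following the diffusers HiDream inference examples; the prompt is hypothetical, and the paths mirror the variables above:

import torch
from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
from diffusers import HiDreamImagePipeline

# Llama-3.1 serves as HiDream's fourth tokenizer/text encoder; paths mirror the command above.
tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("./meta-llama/Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    "./meta-llama/Llama-3.1-8B-Instruct",
    output_hidden_states=True,
    output_attentions=True,
    torch_dtype=torch.bfloat16,
)

pipe = HiDreamImagePipeline.from_pretrained(
    "./HiDream/HiDream-E1-Full",  # $MODEL_NAME from the command above
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load the LoRA produced by the training run.
pipe.load_lora_weights("./HiDream-I1/trained-hidream-lora-8")

image = pipe(
    "a photo of sks dog",  # hypothetical prompt
    num_inference_steps=50,
    guidance_scale=5.0,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("sample.png")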

Logs

System Info

Package Version Editable project location


accelerate 1.3.0
aiohappyeyeballs 2.4.6
aiohttp 3.11.12
aiosignal 1.3.2
annotated-types 0.7.0
async-timeout 5.0.1
attrs 25.1.0
bitsandbytes 0.45.4
blessed 1.20.0
certifi 2024.12.14
charset-normalizer 3.4.1
contourpy 1.3.2
cycler 0.12.1
datasets 3.2.0
deepspeed 0.16.3
diffusers 0.33.0.dev0 /code/diffusers
dill 0.3.8
einops 0.8.0
filelock 3.17.0
fonttools 4.58.4
frozenlist 1.5.0
fsspec 2024.9.0
gpustat 1.1.1
hjson 3.1.0
huggingface-hub 0.27.1
idna 3.10
importlib_metadata 8.6.1
Jinja2 3.1.5
kiwisolver 1.4.8
MarkupSafe 3.0.2
matplotlib 3.10.3
modelscope 1.22.3
mpmath 1.3.0
msgpack 1.1.0
multidict 6.1.0
multiprocess 0.70.16
networkx 3.4.2
ninja 1.11.1.3
numpy 2.2.2
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-ml-py 12.560.30
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
packaging 24.2
pandas 2.2.3
peft 0.15.0
pillow 11.1.0
pip 24.2
prodigyopt 1.1.2
propcache 0.2.1
protobuf 5.29.3
psutil 6.1.1
py-cpuinfo 9.0.0
pyarrow 19.0.0
pydantic 2.10.6
pydantic_core 2.27.2
pyparsing 3.2.3
python-dateutil 2.9.0.post0
pytz 2025.1
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
safetensors 0.5.2
sentencepiece 0.2.0
setuptools 75.1.0
six 1.17.0
sympy 1.13.1
tokenizers 0.21.0
torch 2.5.1
torchvision 0.20.1
tqdm 4.67.1
transformers 4.48.1
triton 3.1.0
typing_extensions 4.12.2
tzdata 2025.1
urllib3 2.3.0
wcwidth 0.2.13
wheel 0.44.0
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0

Who can help?

No response

Labels

bug (Something isn't working)
