Description
(codet) shf@shf-Z790-UD:/media/shf/sda1/code/vlm/EAGLE-main/Eagle2/streamlit_demo$ sh run.sh
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://172.20.4.58:8501
2025-04-24 16:53:27 | INFO | controller | args: Namespace(dispatch_method='shortest_queue', host='0.0.0.0', port=10075)
2025-04-24 16:53:27 | INFO | controller | Init controller
2025-04-24 16:53:27 | ERROR | stderr | INFO: Started server process [91163]
2025-04-24 16:53:27 | ERROR | stderr | INFO: Waiting for application startup.
2025-04-24 16:53:27 | ERROR | stderr | INFO: Application startup complete.
2025-04-24 16:53:27 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:10075 (Press CTRL+C to quit)
args: Namespace(controller_url='http://127.0.0.1:10075', max_image_limit=128, sd_worker_url='http://0.0.0.0:40006')
2025-04-24 16:53:28 | INFO | stdout | INFO: 127.0.0.1:37634 - "POST /refresh_all_workers HTTP/1.1" 200 OK
2025-04-24 16:53:28 | INFO | stdout | INFO: 127.0.0.1:37636 - "POST /list_models HTTP/1.1" 200 OK
models: []
total_image_num: 0 len(uploaded_files): 0 max_image_limit: 128
len(model_list): 0
input_disable_flag: True
input_disable_flag: True
[]
2025-04-24 16:53:28 | INFO | model_worker | args: Namespace(controller_address='http://localhost:10075', device='cuda', host='0.0.0.0', limit_model_concurrency=5, load_8bit=False, model_name='Eagle-9B', model_path='/media/shf/sda1/code/vlm/EAGLE-main/nvidia/Eagle2-9B', port=6212, stream_interval=1, worker_address='http://127.0.0.1:6212')
2025-04-24 16:53:28 | INFO | model_worker | Loading the model Eagle-9B on worker 69ac01 ...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2025-04-24 16:53:28 | INFO | transformers_modules.Eagle2-9B.configuration_eagle_chat | keep_aspect_ratio: False
2025-04-24 16:53:28 | INFO | transformers_modules.Eagle2-9B.configuration_eagle_chat | vision_select_layer: -1
2025-04-24 16:53:28 | INFO | transformers_modules.Eagle2-9B.configuration_eagle_chat | min_dynamic_patch: 1
2025-04-24 16:53:28 | INFO | transformers_modules.Eagle2-9B.configuration_eagle_chat | max_dynamic_patch: 12
2025-04-24 16:53:28 | INFO | stdout | #######: siglip /media/shf/sda1/code/vlm/EAGLE-main/nvidia/Eagle2-9B
2025-04-24 16:53:28 | INFO | transformers_modules.Eagle2-9B.configuration_eagle_chat | keep_aspect_ratio: False
2025-04-24 16:53:28 | INFO | transformers_modules.Eagle2-9B.configuration_eagle_chat | vision_select_layer: -1
2025-04-24 16:53:28 | INFO | transformers_modules.Eagle2-9B.configuration_eagle_chat | min_dynamic_patch: 1
2025-04-24 16:53:28 | INFO | transformers_modules.Eagle2-9B.configuration_eagle_chat | max_dynamic_patch: 12
2025-04-24 16:53:28 | ERROR | stderr | Traceback (most recent call last):
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1364, in _get_module
2025-04-24 16:53:28 | ERROR | stderr | return importlib.import_module("." + module_name, self.name)
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/importlib/__init__.py", line 127, in import_module
2025-04-24 16:53:28 | ERROR | stderr | return _bootstrap._gcd_import(name[level:], package, level)
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap_external>", line 843, in exec_module
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 55, in <module>
2025-04-24 16:53:28 | ERROR | stderr | from flash_attn import flash_attn_func, flash_attn_varlen_func
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/flash_attn/__init__.py", line 3, in <module>
2025-04-24 16:53:28 | ERROR | stderr | from flash_attn.flash_attn_interface import (
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/flash_attn/flash_attn_interface.py", line 10, in <module>
2025-04-24 16:53:28 | ERROR | stderr | import flash_attn_2_cuda as flash_attn_cuda
2025-04-24 16:53:28 | ERROR | stderr | ImportError: /home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/flash_attn_2_cuda.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops5zeros4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalINS2_10ScalarTypeEEENS6_INS2_6LayoutEEENS6_INS2_6DeviceEEENS6_IbEE
2025-04-24 16:53:28 | ERROR | stderr |
2025-04-24 16:53:28 | ERROR | stderr | The above exception was the direct cause of the following exception:
2025-04-24 16:53:28 | ERROR | stderr |
2025-04-24 16:53:28 | ERROR | stderr | Traceback (most recent call last):
2025-04-24 16:53:28 | ERROR | stderr | File "model_worker.py", line 466, in <module>
2025-04-24 16:53:28 | ERROR | stderr | worker = ModelWorker(args.controller_address,
2025-04-24 16:53:28 | ERROR | stderr | File "model_worker.py", line 210, in __init__
2025-04-24 16:53:28 | ERROR | stderr | self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 553, in from_pretrained
2025-04-24 16:53:28 | ERROR | stderr | model_class = get_class_from_dynamic_module(
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 500, in get_class_from_dynamic_module
2025-04-24 16:53:28 | ERROR | stderr | return get_class_in_module(class_name, final_module.replace(".py", ""))
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 200, in get_class_in_module
2025-04-24 16:53:28 | ERROR | stderr | module = importlib.import_module(module_path)
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/importlib/__init__.py", line 127, in import_module
2025-04-24 16:53:28 | ERROR | stderr | return _bootstrap._gcd_import(name[level:], package, level)
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap_external>", line 843, in exec_module
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/.cache/huggingface/modules/transformers_modules/Eagle2-9B/modeling_eagle_chat.py", line 14, in <module>
2025-04-24 16:53:28 | ERROR | stderr | from transformers import (AutoModel, GenerationConfig, LlamaForCausalLM,
2025-04-24 16:53:28 | ERROR | stderr | File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1355, in __getattr__
2025-04-24 16:53:28 | ERROR | stderr | value = getattr(module, name)
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1354, in __getattr__
2025-04-24 16:53:28 | ERROR | stderr | module = self._get_module(self._class_to_module[name])
2025-04-24 16:53:28 | ERROR | stderr | File "/home/shf/miniforge-pypy3/envs/codet/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1366, in _get_module
2025-04-24 16:53:28 | ERROR | stderr | raise RuntimeError(
2025-04-24 16
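For context: the worker dies while importing `flash_attn` (the `flash_attn_2_cuda...so: undefined symbol` line above), which typically means the installed flash-attn wheel was compiled against a different PyTorch ABI than the torch in the `codet` environment. Below is a minimal sketch of how one might verify the versions and reinstall flash-attn so it builds against the currently installed torch; the exact commands and the choice to reinstall from source are assumptions, not taken from this log:

```bash
# Inspect what is actually installed in this environment
python -c "import torch; print(torch.__version__, torch.version.cuda)"
pip show flash-attn

# Reinstall flash-attn so its CUDA extension is built against the torch above
# (alternatively, pin a flash-attn release known to match that torch version)
pip uninstall -y flash-attn
pip install flash-attn --no-build-isolation
```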