Request for the environments · Issue #1 · SongW-SW/CEB · GitHub

Request for the environments #1


Open
Yangjinluan opened this issue Dec 15, 2024 · 1 comment

Comments

@Yangjinluan commented Dec 15, 2024

Hi! This is pioneering work! Could you release the package versions so that we can reproduce the evaluation results? Could you also provide the code for evaluating one model on all of the bias-related datasets? Moreover, could you explain the following bug?

[Screenshot attached: 微信图片_20241215184426]

@Yangjinluan changed the title from "Request for the environmnets" to "Request for the environments" on Dec 15, 2024
@sararous

File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/config.py", line 22, in
from vllm.model_executor.layers.quantization import (QUANTIZATION_METHODS,
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/model_executor/init.py", line 1, in
from vllm.model_executor.parameter import (BasevLLMParameter,
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/model_executor/parameter.py", line 7, in
from vllm.distributed import get_tensor_model_parallel_rank
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/distributed/init.py", line 1, in
from .communication_op import *
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/distributed/communication_op.py", line 6, in
from .parallel_state import get_tp_group
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/distributed/parallel_state.py", line 38, in
import vllm.distributed.kv_transfer.kv_transfer_agent as kv_transfer
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/distributed/kv_transfer/kv_transfer_agent.py", line 15, in
from vllm.distributed.kv_transfer.kv_connector.factory import (
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/distributed/kv_transfer/kv_connector/factory.py", line 3, in
from .base import KVConnectorBase
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/distributed/kv_transfer/kv_connector/base.py", line 14, in
from vllm.sequence import IntermediateTensors
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/sequence.py", line 18, in
from vllm.multimodal import MultiModalDataDict, MultiModalPlaceholderDict
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/multimodal/init.py", line 6, in
from .registry import MultiModalRegistry
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/multimodal/registry.py", line 18, in
from .processing import BaseMultiModalProcessor
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/multimodal/processing.py", line 31, in
class PromptReplacement:
File "/root/miniconda3/envs/CEB/lib/python3.9/site-packages/vllm/multimodal/processing.py", line 38, in PromptReplacement
replacement: Union[Callable[[int], _PromptSeq],
File "/root/miniconda3/envs/CEB/lib/python3.9/typing.py", line 243, in inner
return func(*args, **kwds)
File "/root/miniconda3/envs/CEB/lib/python3.9/typing.py", line 316, in getitem
return self._getitem(self, parameters)
File "/root/miniconda3/envs/CEB/lib/python3.9/typing.py", line 421, in Union
parameters = _remove_dups_flatten(parameters)
File "/root/miniconda3/envs/CEB/lib/python3.9/typing.py", line 215, in _remove_dups_flatten
all_params = set(params)
TypeError: unhashable type: 'list'
I want to know how to resolve this problem. Thank you!
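
The traceback looks consistent with a known CPython issue in early Python 3.9 patch releases (3.9.0/3.9.1), where collections.abc.Callable[[int], str] keeps the raw argument list in __args__; that makes the alias unhashable, and typing.Union then fails while deduplicating its parameters with set(). The sketch below only illustrates that failure mode. It assumes the CEB conda environment is on one of those early 3.9 releases and that the vllm processing.py in question resolves Callable to the PEP 585 collections.abc variant; neither assumption is confirmed by this thread, and the variable name alias is purely illustrative.

    # Illustrative sketch only; the behaviour described in the comments is an
    # assumption about early Python 3.9 patch releases (3.9.0/3.9.1).
    from collections.abc import Callable
    from typing import Union

    # On the affected releases the argument list is stored as-is, so __args__
    # is ([int], str); on later 3.9 releases it is flattened to (int, str).
    alias = Callable[[int], str]
    print(alias.__args__)

    # typing.Union deduplicates its parameters via set(), which hashes each
    # one. Hashing the alias hashes __args__, and the embedded list raises
    # "TypeError: unhashable type: 'list'" on the affected releases; on later
    # releases this line succeeds.
    Union[Callable[[int], str], str]

If that is what is happening here, moving the environment to a later Python 3.9 patch release (or 3.10+) would be the usual workaround, but this is an inference from the traceback rather than a confirmed diagnosis.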
