[serve.llm] LLM serving seems not working with mistral tokenizer. #53873
Open
@kanwang

Description

What happened + What you expected to happen

I tried serving a few Mistral models, like https://huggingface.co/mistralai/Devstral-Small-2505 or https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503. I've installed all the necessary dependencies (vllm==0.8.5.post1), but serving still wasn't working. From the stack trace, the failure appears to come from https://github.com/ray-project/ray/blob/master/python/ray/llm/_internal/serve/deployments/utils/node_initialization_utils.py#L151-L154, where the tokenizer is loaded. This could be because Mistral models require a different tokenizer.
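
For context, the failing call can be reproduced outside of Ray Serve with just transformers; the snippet below is a minimal sketch of the warm-up call that node_initialization_utils.py makes (the exact kwargs Ray passes may differ):

import transformers

# Mirrors the AutoTokenizer warm-up in _initialize_local_node. It fails with
# "TypeError: not a string", presumably because Mistral-format checkpoints
# ship only the Mistral/tekken tokenizer and no standard tokenizer.model.
transformers.AutoTokenizer.from_pretrained("mistralai/Devstral-Small-2505")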

The suggested vLLM args are:

vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
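
For comparison, when tokenizer_mode is "mistral", vLLM resolves the tokenizer through its own loader rather than transformers.AutoTokenizer, which is why the plain vllm serve command works. A rough sketch of that path (module path taken from vllm==0.8.5.post1; not verified against every version):

from vllm.transformers_utils.tokenizer import get_tokenizer

# With tokenizer_mode="mistral", vLLM loads its Mistral tokenizer wrapper
# instead of a Hugging Face tokenizer.
tok = get_tokenizer("mistralai/Devstral-Small-2505", tokenizer_mode="mistral")
print(type(tok))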

Stacktrace:

Traceback (most recent call last):
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/serve/_private/deployment_state.py", line 694, in check_ready
    ) = ray.get(self._ready_obj_ref)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/_private/worker.py", line 2822, in get
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/_private/worker.py", line 930, in get_objects
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(RuntimeError): ray::ServeReplica:llm_app:LLMDeployment:mistralai--devstral-small-2505.initialize_and_get_metadata() (pid=1570, ip=10.112.221.167, actor_id=ab8257849af39417e37e0ae801000000, repr=<ray.serve._private.replica.ServeReplica:llm_app:LLMDeployment:mistralai--devstral-small-2505 object at 0x7a8cba05ab70>)
  File "/home/ray/.local/share/uv/python/cpython-3.12.10-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.local/share/uv/python/cpython-3.12.10-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/serve/_private/replica.py", line 984, in initialize_and_get_metadata
    await self._replica_impl.initialize(deployment_config)
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/serve/_private/replica.py", line 713, in initialize
    raise RuntimeError(traceback.format_exc()) from None
RuntimeError: Traceback (most recent call last):
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/serve/_private/replica.py", line 690, in initialize
    self._user_callable_asgi_app = await asyncio.wrap_future(
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/serve/_private/replica.py", line 1384, in initialize_callable
    await self._call_func_or_gen(
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/serve/_private/replica.py", line 1347, in _call_func_or_gen
    result = await result
             ^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/llm/llm_server.py", line 440, in __init__
    await asyncio.wait_for(self._start_engine(), timeout=ENGINE_START_TIMEOUT_S)
  File "/home/ray/.local/share/uv/python/cpython-3.12.10-linux-x86_64-gnu/lib/python3.12/asyncio/tasks.py", line 520, in wait_for
    return await fut
           ^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/llm/llm_server.py", line 486, in _start_engine
    await self.engine.start()
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/llm/vllm/vllm_engine.py", line 232, in start
    self.engine = await self._start_engine()
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/llm/vllm/vllm_engine.py", line 271, in _start_engine
    return await self._start_engine_v0()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/llm/vllm/vllm_engine.py", line 364, in _start_engine_v0
    ) = await self._prepare_engine_config(use_v1=False)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/llm/vllm/vllm_engine.py", line 287, in _prepare_engine_config
    node_initialization = await self.initialize_node(self.llm_config)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/llm/vllm/vllm_engine.py", line 218, in initialize_node
    return await initialize_node_util(llm_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/utils/node_initialization_utils.py", line 109, in initialize_node
    await _initialize_local_node(
  File "/home/ray/.local/share/uv/python/cpython-3.12.10-linux-x86_64-gnu/lib/python3.12/concurrent/futures/thread.py", line 59, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/ray/llm/_internal/serve/deployments/utils/node_initialization_utils.py", line 155, in _initialize_local_node
    _ = transformers.AutoTokenizer.from_pretrained(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/transformers/models/auto/tokenization_auto.py", line 1032, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2025, in from_pretrained
    return cls._from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2063, in _from_pretrained
    slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2278, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/transformers/models/llama/tokenization_llama.py", line 171, in __init__
    self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/transformers/models/llama/tokenization_llama.py", line 198, in get_spm_processor
    tokenizer.Load(self.vocab_file)
  File "/home/ray/.venv/lib/python3.12/site-packages/sentencepiece/__init__.py", line 961, in Load
    return self.LoadFromFile(model_file)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/.venv/lib/python3.12/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: not a string
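
The final TypeError is sentencepiece rejecting a non-string model path, which happens when the slow LlamaTokenizer is constructed without a vocab_file (likely None here, since the Mistral checkpoints don't ship tokenizer.model). A minimal illustration of that last step:

import sentencepiece as spm

sp = spm.SentencePieceProcessor()
# LlamaTokenizer ends up calling Load(self.vocab_file) with a missing vocab
# file, and sentencepiece raises "TypeError: not a string".
sp.Load(None)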

Versions / Dependencies

  • vllm==0.8.5.post1
  • ray==2.46.0

Reproduction script

Serving config we used:

serveConfigV2:
  http_options:
    host: 0.0.0.0
    port: 8000
    request_timeout_s: 300
    keep_alive_timeout_s: 10
  logging_config:
    encoding: JSON
    log_level: INFO
    logs_dir: null
    enable_access_log: true
  applications:
  - name: llm_app
    args:
      llm_configs:
        - model_loading_config:
            model_id: "mistralai/devstral-small-2505"
            model_source: "/home/ray/llm/mistralai/devstral-small-2505"
          accelerator_type: "L4"
          deployment_config:
            max_ongoing_requests: 16
            autoscaling_config:
              target_ongoing_requests: 10
              min_replicas: 2
              max_replicas: 4
              downscale_delay_s: 1200
          engine_kwargs:
            tensor_parallel_size: 4
            pipeline_parallel_size: 1
            gpu_memory_utilization: 0.95
            max_model_len: 100000
            tokenizer_mode: "mistral"
            config_format: "mistral"
            load_format: "mistral"
            enable_chunked_prefill: true
            enable_prefix_caching: true
    import_path: ray.serve.llm:build_openai_app
    name: llm_app
    route_prefix: "/"
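
One possible direction for a fix, sketched below: skip the transformers.AutoTokenizer warm-up during node initialization when engine_kwargs request the Mistral tokenizer. Function and parameter names here are hypothetical; this is not the actual Ray code, just the shape of the guard:

import transformers

def warm_up_tokenizer(model_source: str, engine_kwargs: dict) -> None:
    # Mistral-format checkpoints have no Hugging Face tokenizer files, so the
    # AutoTokenizer warm-up would fail exactly as in the stacktrace above.
    if engine_kwargs.get("tokenizer_mode") == "mistral":
        return
    transformers.AutoTokenizer.from_pretrained(model_source)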

Issue Severity

Medium: It is a significant difficulty but I can work around it.

Labels

bug (Something that is supposed to be working; but isn't), llm, serve (Ray Serve Related Issue), stability
