8000 OpenAPI is not working for data generation using Ollama http://localhost:11434 · Issue #2287 · instructlab/instructlab · GitHub
Closed as not planned
@aseelert

Description


Describe the bug

ilab model serve --endpoint-url http://localhost:11434

is not working with Ollama. According to the Ollama/WebUI documentation, the curl commands work as expected. I also tried with an API key, but without success. Ollama runs inside the Open WebUI container:

sudo docker run -d \
--network=host \
--gpus=all \
-v ollama:/root/.ollama \
-v open-webui:/app/backend/data \
--name open-webui \
--restart=always \
ghcr.io/open-webui/open-webui:ollama

To Reproduce

ilab data generate --pipeline full --gpus 1 --num-cpus 16 --sdg-scale-factor 500 \
--endpoint-url http://127.0.0.1:11434 --api-key 500sk-0a726e394d28462a8f891c2f83777c4f  \
--enable-serving-output --output-dir ./outputtest --batch-size 0
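The 404 in the traceback below suggests the client is posting to the server root, while Ollama serves its OpenAI-compatible API under the /v1 prefix (an assumption based on Ollama's OpenAI-compatibility documentation, not stated in this issue). A sketch of a sanity check and a retry with the prefix:

```shell
# Verify which paths the Ollama container actually answers on:
curl http://127.0.0.1:11434/api/tags      # native Ollama API
curl http://127.0.0.1:11434/v1/models     # OpenAI-compatible API (note the /v1 prefix)

# Retry generation with the /v1 prefix on the endpoint URL:
ilab data generate --pipeline full --gpus 1 --num-cpus 16 --sdg-scale-factor 500 \
  --endpoint-url http://127.0.0.1:11434/v1 \
  --enable-serving-output --output-dir ./outputtest --batch-size 0
```

These commands assume a running Ollama instance on port 11434, so they are only a sketch of the check, not output from this report.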

Error:

Generating synthetic data using 'full' pipeline, '/home/itzuser/.cache/instructlab/models/mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf' model, '/home/itzuser/.local/share/instructlab/taxonomy' taxonomy, against http://127.0.0.1:11434 server
Traceback (most recent call last):
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/instructlab/sdg/pipeline.py", line 196, in _generate_single
    block = block_type(self.ctx, self, block_name, **block_config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/instructlab/sdg/llmblock.py", line 88, in __init__
    self.server_supports_batched = server_supports_batched(
                                   ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/instructlab/sdg/llmblock.py", line 43, in server_supports_batched
    response = client.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/openai/_utils/_utils.py", line 274, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/openai/resources/completions.py", line 539, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/openai/_base_client.py", line 1260, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/openai/_base_client.py", line 937, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/openai/_base_client.py", line 1041, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: 404 page not found

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/itzuser/instructlab/venv/bin/ilab", line 8, in <module>
    sys.exit(ilab())
             ^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/click/decorators.py", line 33, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/instructlab/clickext.py", line 306, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/instructlab/data/generate.py", line 305, in generate
    generate_data(
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/instructlab/sdg/generate_data.py", line 401, in generate_data
    new_generated_data = pipe.generate(ds, leaf_node_path)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/instructlab/sdg/pipeline.py", line 154, in generate
    return self._generate_single(dataset)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/itzuser/instructlab/venv/lib64/python3.11/site-packages/instructlab/sdg/pipeline.py", line 203, in _generate_single
    raise PipelineBlockError(
instructlab.sdg.pipeline.PipelineBlockError: PipelineBlockError(<class 'instructlab.sdg.llmblock.LLMBlock'>/gen_contexts): 404 page not found
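One plausible reading of the 404 above: the OpenAI client joins its route (here `completions`) onto the configured `base_url`, so a base URL without Ollama's `/v1` prefix produces a path Ollama does not serve. A minimal sketch of that URL construction, where `completions_url` is a hypothetical helper (the real client builds URLs internally):

```python
from urllib.parse import urljoin

def completions_url(base_url: str) -> str:
    """Hypothetical sketch of how an OpenAI-style client joins the
    'completions' route onto its configured base_url."""
    if not base_url.endswith("/"):
        base_url += "/"
    return urljoin(base_url, "completions")

# Without the /v1 prefix, the request targets a path Ollama does not serve:
print(completions_url("http://127.0.0.1:11434"))     # http://127.0.0.1:11434/completions
# With the prefix, it reaches the OpenAI-compatible route:
print(completions_url("http://127.0.0.1:11434/v1"))  # http://127.0.0.1:11434/v1/completions
```

This would explain why the very first probe (`server_supports_batched` calling `client.completions.create`) fails with "404 page not found" before any generation happens.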

Expected behavior

Screenshots

Device Info (please complete the following information):
Python 3.11.2

Additional context
sys.version: 3.11.2 (main, Jul 18 2024, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)]
sys.platform: linux
os.name: posix
platform.release: 5.14.0-284.82.1.el9_2.x86_64
platform.machine: x86_64
platform.node: itzvsi-550001nu41-hsscbt6c
platform.python_version: 3.11.2
os-release.ID: rhel
os-release.VERSION_ID: 9.2
os-release.PRETTY_NAME: Red Hat Enterprise Linux 9.2 (Plow)
instructlab.version: 0.18.4
instructlab-dolomite.version: 0.1.1
instructlab-eval.version: 0.1.2
instructlab-quantize.version: 0.1.0
instructlab-schema.version: 0.3.1
instructlab-sdg.version: 0.2.7
instructlab-training.version: 0.4.2
torch.version: 2.3.1+cu121
torch.backends.cpu.capability: AVX512
torch.version.cuda: 12.1
torch.version.hip: None
torch.cuda.available: True
torch.backends.cuda.is_built: True
torch.backends.mps.is_built: False
torch.backends.mps.is_available: False
torch.cuda.bf16: True
torch.cuda.current.device: 0
torch.cuda.0.name: NVIDIA L4
torch.cuda.0.free: 21.9 GB
torch.cuda.0.total: 22.1 GB
torch.cuda.0.capability: 8.9 (see https://developer.nvidia.com/cuda-gpus#compute)
llama_cpp_python.version: 0.2.90
llama_cpp_python.supports_gpu_offload: True

Metadata

Assignees

No one assigned

    Labels

    SDG (SDG specific issues), bug (Something isn't working), stale
