The general edition cannot load the baichuan2-7b-chat-int4 model · Issue #29 · IEIT-Yuan/YuanChat · GitHub
The general edition cannot load the baichuan2-7b-chat-int4 model #29
Open
@DreamistW

Description


The model list includes a baichuan2-7b-chat-int4 model, and it downloads successfully.
[screenshot of the model list]

However, when the model is loaded, the following error appears and the application reports that loading the model failed:

2024-07-24 09:59:22.098 | INFO     | pkg.server.process.process_model:load_model:103 - load_model model_id:5, type:1
2024-07-24 09:59:22.158 | INFO     | pkg.plugins.chat_model_plugin.baichuan2_hf:load_model:60 - Creat tokenizer...
2024-07-24 09:59:22.385 | INFO     | pkg.plugins.chat_model_plugin.baichuan2_hf:load_model:63 - init model ...
2024-07-24 09:59:22.385 | INFO     | pkg.plugins.chat_model_plugin.baichuan2_hf:load_model:65 - using cpu
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
2024-07-24 09:59:22.432 | ERROR    | pkg.server.process.process_model:load_model_by_model_info:129 - load_model_by_model_info error, model_id:5, err: Needs import model weight init func to run quantize.
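
For reference, a minimal sketch of how the upstream int4 checkpoint (baichuan-inc/Baichuan2-7B-Chat-4bits) is normally loaded, following its Hugging Face model card; the local model path below is a placeholder for wherever YuanChat stores the download. The 4-bit weights are expected to be dispatched to a CUDA device via device_map="auto", so the "using cpu" line in the log above may be what causes the model's bundled quantization code to raise "Needs import model weight init func to run quantize".

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder path: replace with the directory YuanChat downloaded the model into.
    model_path = "baichuan-inc/Baichuan2-7B-Chat-4bits"

    tokenizer = AutoTokenizer.from_pretrained(
        model_path, use_fast=False, trust_remote_code=True
    )
    # Per the upstream model card, the quantized checkpoint is loaded with
    # device_map="auto" so the weights land on a CUDA device rather than the CPU.
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        device_map="auto",
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
    )

If this reading is correct, the failure is not in YuanChat's download step but in trying to run the int4 model on a CPU-only machine.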
