Issues: quic/ai-hub-models
#200: [BUG] [QCS6490] DeeplabV3_plus_mobilenet_Quantized shows significant accuracy drop (bug; opened May 14, 2025 by arunsark-quic)
#199: Device memory size in performance profiling (question; opened May 12, 2025 by saba-er)
#195: [MODEL REQUEST] requesting new model (Qwen3 Series (32B → 4B) for NPU-Optimized Inference with Tools/Function Calling & OpenAI API Compatibility on QAI-Hub) (Feature Request; opened Apr 28, 2025 by zytoh0)
#192: [BUG] genie-t2t-run fails to run exaone 2.4b model on QCS6490 chipset (question; opened Apr 22, 2025 by yunhyejung)
#191: [ERROR] "Unable to load backend. dlerror(): libcdsprpc.so: cannot open shared object file: No such file or directory" (bug; opened Apr 21, 2025 by chenjun2hao)
#189: [BUG] File bug report: cannot use --device "Snapdragon 8 Gen 3 QRD" (bug; opened Apr 18, 2025 by ZJY0516)
#188: qai_hub_models.models.llama_v3_8b_chat_quantized.export --device "Snapdragon X Plus 8-Core CRD" error (bug; opened Apr 15, 2025 by jinwater88)
#187: Why is the 8 Gen 4 slower than the 8 Gen 1 + NPU? (question; opened Apr 14, 2025 by chenjun2hao)
#186: [MODEL REQUEST] Add Qwen2.5-Coder-32B/14B/7B-Instruct to QAI-Hub with Tools/Function Calling & OpenAI API Compatibility (Feature Request; opened Apr 9, 2025 by zytoh0)
#184: [Feature Request] Could I update the quantization code to AIMET version 2? (question; opened Apr 4, 2025 by codereba)
#183: [BUG] Auto BYOM Issue: "Failed to generate a QNN-compatible Genie binary context, preventing model inference in the Android 14 environment." (bug; opened Apr 2, 2025 by mbdibd)
#182: How can I compile my own Qwen model? (question; opened Mar 28, 2025 by azaganidis)
#180: [MODEL REQUEST] DeepSeek coder (Feature Request; opened Mar 18, 2025 by BrickDesignerNL)
#179: [Face attributes Model] Output of sunglasses probability is always 1 (question; opened Mar 17, 2025 by chouxscream)
#178: [MODEL REQUEST] Add Gemma3 models (Feature Request; opened Mar 12, 2025 by BrickDesignerNL)
#176: Stable-Diffusion-v2.1: please provide steps to run on Elite SoC mobiles (Feature Request; opened Mar 9, 2025 by Vinaysukhesh98)
#175: [BUG] File bug report about model [trocr] (bug; opened Mar 6, 2025 by westerfeld44629)
#174: SDV2.1 export failed (assigned, bug; opened Feb 28, 2025 by Vinaysukhesh98)
#172: [Feature Request] Phi-4 multimodal support on Qualcomm X Elite NPU (Feature Request; opened Feb 27, 2025 by BrickDesignerNL)
#171: [BUG] IOT BYOM Issue: Can ChatApp be supported on the QCS6490 platform? Either 1.5B or 0.5B is fine. (assigned; opened Feb 27, 2025 by kaixwangwei)
#169: How to convert PyTorch model to FP16 TFLite? (question; opened Feb 25, 2025 by tumuyan)
#168: genie-t2t-run.exe fails to run llama_v3_2_3b_chat on Windows 11 (Snapdragon® X Elite, 16 GB memory). Why? (question; opened Feb 25, 2025 by mikoaro)
#167: [MODEL REQUEST] requesting new model (Feature Request; opened Feb 24, 2025 by kaixwangwei)
#166: [Feature Request] Add Qualcomm X Elite support to Huggingface optimum (Feature Request; opened Feb 23, 2025 by BrickDesignerNL)