Issues: Blaizzy/mlx-vlm
#375: Model comparison for recognising and describing a cityscape and providing a description and keywords (opened May 23, 2025 by jrp2014)
#354: KeyError: 'image_token_index' when training with LoRA on Qwen2.5-VL-3B-Instruct-bf16 (opened May 16, 2025 by mathav95raj)
#345: Add Support for KV Cache Quantization [enhancement] (opened May 6, 2025 by Blaizzy)
#344: Implement Persistent Prompt Cache to Reduce Time-to-First-Token in Chat Contexts [enhancement] (opened May 6, 2025 by Blaizzy)
#334: Output from generate is a tuple, rather than a string [bug] (opened May 2, 2025 by jrp2014)
#330: Inaccurate Coordinate Outputs for MLX-Quantized UI-TARS-1.5 (4bit/6bit) (opened Apr 28, 2025 by francedot)
#328: InternVL3 Multi-Image Support Broken [bug] (opened Apr 23, 2025 by tmoroney)
#317: Adjusting sample_utils.py for top_k and min_p Parameters [enhancement] (opened Apr 20, 2025 by Blaizzy)
#306: Incorrect Output When Extracting Authors and Affiliations from Image using mlx_vlm.generate (opened Apr 18, 2025 by Huy2002-IT)
#302: Match the output of scipy.ndimage.zoom in multimodality's vision module resize_image() (opened Apr 17, 2025 by Blaizzy)
#284: Issue with Finetuning Mistral (Mistral-Small-3.1-24B-Instruct-2503-4bit) [bug] (opened Mar 30, 2025 by keshavpeswani)
#277: NAMO-R1 Model is Amazing! How to Convert it to MLX? (opened Mar 26, 2025 by yourappleintelligence)