Presumably this is a model configuration issue, rather than an mlx-vlm one?
2025-05-04 22:22:25,344 - INFO - Processing '20250503-170407_DSC03793_DxO.jpg' with model: mlx-community/gemma-3-12b-pt-8bit
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
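The second option from the warning can be applied at the top of the script, before anything imports `tokenizers` (directly or via `transformers`). A minimal sketch:

```python
import os

# Must be set before the first import of tokenizers/transformers,
# otherwise the fork-after-parallelism warning can still fire.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

# ... now safe to import libraries that pull in huggingface tokenizers
```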
Fetching 13 files: 100%|█████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 32092.97it/s]
Fetching 13 files: 100%|██████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 8024.42it/s]
2025-05-04 22:23:04,029 - ERROR - Failed during 'model_load' for model mlx-community/gemma-3-12b-pt-8bit: ValueError: No chat template is set for this processor. Please either set the `chat_template` attribute, or provide a chat template as an argument. See https://huggingface.co/docs/transformers/main/en/chat_templating for more information.
[FAIL] gemma-3-12b-pt-8bit (Stage: model_load)
ERROR: Model gemma-3-12b-pt-8bit failed during 'model_load'.
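That would fit the hypothesis: the `-pt-` (pretrained, not instruction-tuned) checkpoint presumably ships without a `chat_template` in its tokenizer/processor config, so `apply_chat_template()` has nothing to render and raises the `ValueError` above. A chat template is just a Jinja string that turns a message list into a prompt; the sketch below hand-rolls the Gemma-style turn format as an illustration (the `<start_of_turn>`/`<end_of_turn>` token names are assumed from the instruction-tuned Gemma variants):

```python
def format_gemma_turns(messages):
    """Render chat messages into a Gemma-style prompt string.

    Illustrative stand-in for what a chat_template would produce;
    not the official template for this checkpoint.
    """
    out = []
    for m in messages:
        # Gemma uses "model" rather than "assistant" for its own turns.
        role = "model" if m["role"] == "assistant" else m["role"]
        out.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    out.append("<start_of_turn>model\n")  # cue the model to respond
    return "".join(out)

prompt = format_gemma_turns([{"role": "user", "content": "Describe this image."}])
print(prompt)
```

In practice the error message's suggestion should work: pass a Jinja template string explicitly via the `chat_template` argument to `apply_chat_template` (or set the `chat_template` attribute on the processor), rather than relying on the checkpoint's config.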