~$ ramalama version
ramalama version 0.9.1
The model and ramalama-cuda container get pulled correctly; however, we get a conmon error:
~$ ramalama run cogito
Error: container create failed (no logs from conmon): conmon bytes "": readObjectStart: expect { or n, but found , error found in #0 byte of ...||..., bigger context ...||...
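For what it's worth, re-running with ramalama's debug option (assuming 0.9.1 accepts a global --debug flag; I have not verified the exact spelling) should print the podman invocation ramalama constructs, which might show what conmon is failing to parse:
~$ ramalama --debug run cogito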
In a Fedora container, nvidia-smi works correctly:
~$ podman run --rm --device=nvidia.com/gpu=all docker.io/fedora nvidia-smi
Mon Jun 9 20:00:17 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.133.20 Driver Version: 570.133.20 CUDA Version: 12.8 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 1080 Off | 00000000:9E:00.0 Off | N/A |
| 27% 35C P8 10W / 180W | 5MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
Precision-7820-Tower:~$
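For comparison, running the ramalama CUDA image directly with the same CDI device flag (assuming the pulled image is quay.io/ramalama/cuda; adjust the name/tag to whatever ramalama actually pulled) would confirm whether plain podman can create that container at all:
~$ podman run --rm --device=nvidia.com/gpu=all quay.io/ramalama/cuda:latest nvidia-smi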