Album downloads go into queued state & never downloads #55
Try increasing the max concurrent downloads setting.
Oh my, this helped. No more issues. Thanks very much.
Actually, it helped for a bit, but the queue issue still occurs. I have raised max concurrent downloads from 10 to 20 to 100 at the highest; even if I add a single album with a single track, it goes into queue position 1 and just waits there.
Please restart the container, reproduce the bug while monitoring the logs, and share them.
Here I have deleted/cancelled the download for the album that was in the queued state (it's the only album I requested for download). The max concurrent download limit is set to 100. @Xoconoch
I am having this same issue. Spotizerr is running in Kubernetes with a PVC mounted to /app/prgs. I have restarted the containers and set max concurrent downloads up to 100, and there is nothing in the logs that indicates a cause of the problem.
This appears to be related to how long the container has been running. I'm not quite sure, but even after restarting the k8s deployment, it stays stuck. However, after deleting my PVCs and deploying again, it runs fine for a while.
This should be better handled in the newest version. Please test it and share your findings.
@Xoconoch awesome update. I'm already testing it. The issue was with the age of the container, so I'm waiting a day or even two to check whether the issue persists. Will report findings here.
I've updated my k8s deployment and loaded the queue; I will report back in 24-48 hours.
@Xoconoch it made it overnight and downloaded ~24 albums, but still hit this same outcome. The logs are just repeating.
Fuck me
Some resources are leaking. I'm seeing leftover file handles for all the prg files that are never closed. It would require more infrastructure, but have you considered something like Celery with external task workers rather than trying to manage it in threads in Python? You could drop your task-management code and let something else handle it. Another option would be using a database to track progress rather than the file blobs; SQLite is an option with no server required. Let me know if you're interested in something like that. I'm happy to help discuss ideas.
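For illustration, here is a minimal sketch of the SQLite idea, assuming a single progress table keyed by task id; the table and column names are placeholders, not anything from Spotizerr's code:

```python
# Hypothetical sketch: tracking per-download progress in SQLite instead of
# per-download .prg file blobs. Names are illustrative assumptions.
import sqlite3

def init_db(path="progress.db"):
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS download_progress (
               task_id    TEXT PRIMARY KEY,
               status     TEXT NOT NULL,   -- queued / downloading / done / error
               percent    REAL DEFAULT 0.0,
               updated_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    conn.commit()
    return conn

def update_progress(conn, task_id, status, percent):
    # UPSERT keeps a single row per download; no long-lived file handles to leak.
    conn.execute(
        """INSERT INTO download_progress (task_id, status, percent)
           VALUES (?, ?, ?)
           ON CONFLICT(task_id) DO UPDATE SET
               status = excluded.status,
               percent = excluded.percent,
               updated_at = CURRENT_TIMESTAMP""",
        (task_id, status, percent),
    )
    conn.commit()
```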
Sure, I'll check out those options. So far queue management has one requirement:
The queue has to be the same for the whole instance, regardless of when or from where the user accesses it. I am kind of stuck on progress reporting here, because currently the code just creates a new deezspot-library instance for every download and redirects its output to a unique file blob.
Check out Celery - it should fit the bill. There's also RQ, which can be a little simpler. They'll both need some sort of external infrastructure for the queue (Redis, RabbitMQ, etc.). It could be added to the docker-compose file and spun up as a sidecar.
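For reference, a rough sketch of what a Celery-based download task could look like with Redis as the broker; the module, task name, and broker hostname here are assumptions, not Spotizerr's actual code:

```python
# Hypothetical sketch of the Celery approach: a Redis-backed task queue
# with one task per album download.
from celery import Celery

app = Celery(
    "spotizerr",
    broker="redis://redis:6379/0",   # the Redis sidecar from docker-compose
    backend="redis://redis:6379/1",  # lets the API poll task state/progress
)

@app.task(bind=True)
def download_album(self, album_id: str):
    # Placeholder for the real deezspot download call.
    # update_state() exposes progress to any process sharing the backend.
    self.update_state(state="PROGRESS", meta={"album_id": album_id, "percent": 0})
    # ... perform the download, updating meta as tracks finish ...
    return {"album_id": album_id, "status": "done"}

# Enqueue from the API process; a worker picks it up from Redis:
# download_album.delay("some-album-id")
```

Any API process that shares the same result backend can then poll `AsyncResult(task_id)` for state, which would keep the queue consistent for the whole instance regardless of where the user accesses it from.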
Try cooldockerizer93/spotizerr:dev
I can take a look later this week. Looking at the code changes, I'll need a Redis instance. Is the Celery worker still running in the same container as the API, or do I need separate API and worker pods? Any env vars to set besides Redis?
It should all be within the container (Redis, worker management, Celery, and the main app); no additional setup is needed. I made it that way so no extra containers are required, although I don't know if it is optimal. Note that the number of workers is set by the max concurrent downloads config param.
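As a rough sketch of how that mapping could work inside one container, assuming Celery 5, a JSON config file, and a `maxConcurrentDownloads` key (the module, path, and key name are hypothetical):

```python
# Hypothetical sketch: derive Celery worker concurrency from the
# "max concurrent downloads" config setting.
import json
from celery_app import app  # the Celery() instance from the sketch above (assumed module)

def start_worker(config_path="config/main.json"):
    with open(config_path) as f:
        max_concurrent = json.load(f).get("maxConcurrentDownloads", 3)
    # --concurrency caps how many downloads run at once in this worker.
    app.worker_main(argv=["worker", "--loglevel=info",
                          f"--concurrency={max_concurrent}"])

if __name__ == "__main__":
    start_worker()
```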
Best practice would be for each container to do one thing only, running a container per function. That way you're not responsible for installing and updating Redis in your container; you just use the Redis container.
So one container for Redis and another for API + workers would be good?
That looks good to me. What do you think, @andrewthetechie?
I updated my k8s deployment this evening and queued up a set of albums to download with real-time download on. I'll report back in 24-48 hours. See https://github.com/andrewthetechie/home-k8s-pub/tree/main/k8s/apps/spotizerr for anyone who wants to check out how I have it deployed in k8s. I haven't yet split the API and worker containers, but I plan to after the initial testing to see whether Celery has resolved the issue with downloads getting stuck.
Downloaded ~50 albums with only three failures. Moving to Celery appears to have resolved the queuing issues. I added 80 more albums to my queue for further testing and will post an update in a few days.
I've seen enough from my side to say the Celery queues fix the bug I was hitting. Albums have continued to download for the last 48 hours.
Like @andrewthetechie, I don't see any issues on my end either. The queue system works as intended. I'm closing the issue.
As the title says: when I click download for albums, they go into the queued state and show a position number, but they never download, even after a long time.
Docker latest, using Spotify credentials.