Album downloads go into queued state & never downloads · Issue #55 · Xoconoch/spotizerr · GitHub

Album downloads go into queued state & never downloads #55

Closed
tarunkumar519 opened this issue Feb 21, 2025 · 24 comments
Comments

@tarunkumar519

As the title says. When I click download for albums, they go into the queued state and show a queue position number, but they never download, even after a long time.

Docker latest, using Spotify credentials.

@Xoconoch
Owner

Try increasing the max concurrent downloads setting

@tarunkumar519
Author

Try increasing the max concurrent downloads setting

Oh my, this helped. No more issues. Thanks very much.

@tarunkumar519
Author

Actually, it helped for a bit, but the queue issue still occurs.

I have set max concurrent downloads from 10 to 20 to 100 at the highest; even if I add a single album with a single track, it goes into queue position 1 and just waits there.

tarunkumar519 reopened this Feb 21, 2025
@Xoconoch
Owner

Please restart the container, reproduce the bug while monitoring the logs and share them.

docker logs [container name]

@tarunkumar519
Author
tarunkumar519 commented Feb 22, 2025

Here I have deleted/cancelled the download for the album that's in the queued state (it's the only album I requested for download). The max concurrent download limit is set to 100. @Xoconoch

2025-02-22 04:39:47,713 INFO werkzeug waitress-1 : Request: GET /api/album/download/cancel
2025-02-22 04:39:47,714 INFO werkzeug waitress-1 : Response: 200 OK | Duration: 0.91ms
2025-02-22 04:39:48,361 INFO werkzeug waitress-2 : Request: GET /api/prgs/album_1.prg
2025-02-22 04:39:48,362 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 0.6ms
2025-02-22 04:39:53,551 INFO werkzeug waitress-3 : Request: DELETE /api/prgs/delete/album_1.prg
2025-02-22 04:39:53,552 INFO werkzeug waitress-3 : Response: 200 OK | Duration: 0.79ms
2025-02-22 04:40:29,524 INFO werkzeug waitress-0 : Request: GET /api/config
2025-02-22 04:40:29,525 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 0.8ms
2025-02-22 04:40:29,633 INFO werkzeug waitress-1 : Request: POST /api/config
2025-02-22 04:40:29,635 INFO werkzeug waitress-1 : Response: 200 OK | Duration: 1.2ms
2025-02-22 04:40:32,816 INFO werkzeug waitress-2 : Request: GET /api/search
2025-02-22 04:40:33,800 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 983.93ms
2025-02-22 04:40:35,930 INFO werkzeug waitress-3 : Request: GET /static/images/download.svg
2025-02-22 04:40:35,931 INFO werkzeug waitress-3 : Response: 200 OK | Duration: 1.17ms
2025-02-22 04:40:35,934 INFO werkzeug waitress-0 : Request: GET /static/images/view.svg
2025-02-22 04:40:35,935 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 1.1ms
2025-02-22 04:40:42,711 INFO werkzeug waitress-1 : Request: GET /api/search
2025-02-22 04:40:43,514 INFO werkzeug waitress-1 : Response: 200 OK | Duration: 803.13ms
2025-02-22 04:40:51,842 INFO werkzeug waitress-2 : Request: GET /api/config
2025-02-22 04:40:51,842 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 0.37ms
2025-02-22 04:40:51,928 INFO werkzeug waitress-3 : Request: GET /api/album/download
2025-02-22 04:40:51,931 INFO werkzeug waitress-3 : Response: 202 ACCEPTED | Duration: 2.16ms
2025-02-22 04:40:53,206 INFO werkzeug waitress-0 : Request: GET /api/config
2025-02-22 04:40:53,207 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 0.79ms
2025-02-22 04:40:53,290 INFO werkzeug waitress-1 : Request: POST /api/config
2025-02-22 04:40:53,291 INFO werkzeug waitress-1 : Response: 200 OK | Duration: 1.2ms
2025-02-22 04:40:54,027 INFO werkzeug waitress-2 : Request: GET /api/prgs/album_1.prg
2025-02-22 04:40:54,028 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 0.83ms
2025-02-22 04:40:56,496 INFO werkzeug waitress-3 : Request: GET /api/prgs/album_1.prg
2025-02-22 04:40:56,497 INFO werkzeug waitress-3 : Response: 200 OK | Duration: 0.91ms
2025-02-22 04:40:58,183 INFO werkzeug waitress-0 : Request: GET /api/prgs/album_1.prg
2025-02-22 04:40:58,183 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 0.29ms

(screenshot attached)

@andrewthetechie
andrewthetechie commented Feb 23, 2025

I am having this same issue. Spotizerr is running in Kubernetes with a PVC mounted to /app/prgs.

Restarted containers, set max concurrent downloads up to 100; nothing in the logs indicates a cause of the problem.

2025-03-01 09:49:35,958 INFO werkzeug waitress-0 : Request: GET /api/prgs/album_1.prg
2025-03-01 09:49:35,959 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 1.32ms
2025-03-01 09:49:37,956 INFO werkzeug waitress-3 : Request: GET /api/prgs/album_1.prg

This appears to be related to how long the container has been running. I'm not quite sure, but even after restarting the k8s deployment, it still stays stuck. However, after deleting my PVCs and deploying again, it'll run fine for a while.

@Xoconoch
Owner

This should be better handled in the newest version. Please test it and share your findings.

@tarunkumar519
Author

Awesome update, @Xoconoch. I'm already testing it. Since the issue was related to the age of the container, I'm waiting a day or even two to check whether the issue persists. Will report findings here.

@andrewthetechie

I've updated my k8s deployment and loaded the queue; will report back in 24-48 hours.

@andrewthetechie

@Xoconoch It made it through the night and downloaded ~24 albums, but still hit this same outcome.

The logs are just repeating:

2025-03-17 08:48:59,712 INFO werkzeug waitress-2 : Request: GET /api/prgs/album_22.prg
2025-03-17 08:48:59,714 INFO werkzeug waitress-3 : Request: GET /api/prgs/album_23.prg
2025-03-17 08:48:59,715 INFO werkzeug waitress-0 : Request: GET /api/prgs/album_24.prg
2025-03-17 08:48:59,716 WARNING waitress.queue MainThread : Task queue depth is 1
2025-03-17 08:48:59,718 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 5.66ms
2025-03-17 08:48:59,719 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 3.59ms
2025-03-17 08:48:59,720 INFO werkzeug waitress-2 : Request: GET /api/prgs/album_25.prg
2025-03-17 08:48:59,721 INFO werkzeug waitress-3 : Response: 200 OK | Duration: 7.41ms
2025-03-17 08:48:59,723 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 3.17ms
2025-03-17 08:49:01,108 INFO werkzeug waitress-0 : Request: GET /api/prgs/album_26.prg
2025-03-17 08:49:01,110 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 1.26ms
2025-03-17 08:49:01,695 INFO werkzeug waitress-3 : Request: GET /api/prgs/playlist_1.prg
2025-03-17 08:49:01,697 INFO werkzeug waitress-2 : Request: GET /api/prgs/album_19.prg
2025-03-17 08:49:01,698 INFO werkzeug waitress-0 : Request: GET /api/prgs/album_20.prg
2025-03-17 08:49:01,699 WARNING waitress.queue MainThread : Task queue depth is 1
2025-03-17 08:49:01,702 WARNING waitress.queue MainThread : Task queue depth is 2
2025-03-17 08:49:01,702 WARNING waitress.queue MainThread : Task queue depth is 3
2025-03-17 08:49:01,703 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 6.07ms
2025-03-17 08:49:01,704 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 5.55ms
2025-03-17 08:49:01,704 INFO werkzeug waitress-3 : Response: 200 OK | Duration: 8.95ms
2025-03-17 08:49:01,707 INFO werkzeug waitress-2 : Request: GET /api/prgs/album_21.prg
2025-03-17 08:49:01,709 INFO werkzeug waitress-0 : Request: GET /api/prgs/playlist_2.prg
2025-03-17 08:49:01,710 INFO werkzeug waitress-3 : Request: GET /api/prgs/album_22.prg
2025-03-17 08:49:01,712 WARNING waitress.queue MainThread : Task queue depth is 1
2025-03-17 08:49:01,713 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 4.25ms
2025-03-17 08:49:01,714 INFO werkzeug waitress-3 : Response: 200 OK | Duration: 3.82ms
2025-03-17 08:49:01,714 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 7.48ms
2025-03-17 08:49:01,717 WARNING waitress.queue MainThread : Task queue depth is 2
2025-03-17 08:49:01,720 INFO werkzeug waitress-3 : Request: GET /api/prgs/album_23.prg
2025-03-17 08:49:01,720 INFO werkzeug waitress-0 : Request: GET /api/prgs/album_24.prg
2025-03-17 08:49:01,722 INFO werkzeug waitress-2 : Request: GET /api/prgs/album_25.prg
2025-03-17 08:49:01,723 INFO werkzeug waitress-3 : Response: 200 OK | Duration: 3.37ms
2025-03-17 08:49:01,724 INFO werkzeug waitress-2 : Response: 200 OK | Duration: 1.41ms
2025-03-17 08:49:01,724 INFO werkzeug waitress-0 : Response: 200 OK | Duration: 4.09ms
2025-03-17 08:49:03,111 INFO werkzeug waitress-2 : Request: GET /api/prgs/album_26.prg

No amount of tweaking the concurrent workers or deleting albums from the top of the queue gets it going again. 

If I delete the container and all contents of /app/prgs, it proceeds again, but I lose my download queue.

@Xoconoch
Owner

Fuck me

@andrewthetechie
andrewthetechie commented Mar 17, 2025

Some resources are leaking. I'm seeing leftover file handles from the prg files that are never closed.

It would require more infra, but have you checked out something like Celery and external task workers rather than trying to manage it in threads in Python? You could drop your task management code and allow something else to handle it.

Another option would be using a DB to handle the progress rather than the file blobs. SQLite is an option with no server required.

Let me know if you're interested in something like that. I'm happy to help discuss ideas.
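
A minimal sketch of the SQLite idea, assuming a simple per-download row; the table layout and helper below are hypothetical and not taken from the spotizerr codebase:

```python
# Hypothetical sketch: store per-download progress in SQLite instead of
# one .prg file blob per download. All names here are made up for illustration.
import sqlite3

conn = sqlite3.connect("progress.db")  # single file, no server required
conn.execute(
    """CREATE TABLE IF NOT EXISTS downloads (
           id        INTEGER PRIMARY KEY,
           kind      TEXT,   -- 'album' or 'playlist'
           status    TEXT,   -- 'queued', 'downloading', 'done', 'error'
           progress  REAL    -- fraction of tracks completed, 0.0-1.0
       )"""
)
conn.commit()

def update_progress(download_id: int, status: str, progress: float) -> None:
    """Insert or update the current state of one download."""
    with conn:  # commits automatically on success
        conn.execute(
            "INSERT INTO downloads (id, status, progress) VALUES (?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET status = excluded.status, "
            "progress = excluded.progress",
            (download_id, status, progress),
        )
```

The frontend could then poll a single endpoint that reads from this table instead of fetching one .prg file per download.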

@Xoconoch
Owner

Sure, I’ll check out those options. So far, queue management has one requirement:

  • Be consistent across page refreshes and devices

The queue has to be the same for the whole instance, regardless of when or from where the user accesses it.

I am kind of stuck with progress reporting here, because currently the code just creates new deezspot-library instances for every download and redirects their output to a unique file blob.
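
Roughly, the file-blob pattern described above looks like the following sketch; the entry format, directory, and function are illustrative rather than the actual spotizerr code:

```python
# Illustrative only -- not the actual spotizerr code. Each download appends
# progress lines to its own file blob, which the frontend then polls via
# endpoints like GET /api/prgs/album_1.prg (visible in the logs above).
import json
from pathlib import Path

PRGS_DIR = Path("prgs")  # mounted at /app/prgs in the container setups above
PRGS_DIR.mkdir(parents=True, exist_ok=True)

def append_progress(prg_name: str, entry: dict) -> None:
    """Append one progress entry to a download's .prg blob."""
    # Opening the file per write (and closing it via the context manager)
    # avoids the leaked file handles mentioned earlier in the thread.
    with (PRGS_DIR / prg_name).open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

append_progress("album_1.prg", {"status": "queued", "position": 1})
```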

@andrewthetechie

Check out Celery - it should fit the bill. There's also RQ, which can be a little simpler.

They'll both need some sort of external infrastructure for the queue - Redis, RabbitMQ, etc. It could be added to the docker-compose file and spun up as a sidecar.
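
A minimal Celery sketch under that setup, with Redis as both broker and result backend; the module name, task signature, and broker URL are assumptions for illustration, not the spotizerr implementation:

```python
# Minimal Celery sketch with Redis as broker and result backend.
# Module name, task body, and broker URL are assumptions for illustration.
from celery import Celery

app = Celery(
    "spotizerr_tasks",
    broker="redis://redis:6379/0",
    backend="redis://redis:6379/0",
)

@app.task
def download_album(album_id: str) -> str:
    """Download one album; Celery handles queueing, retries, and state."""
    # ... call into the deezspot library here ...
    return f"finished {album_id}"

# In the API handler, enqueue instead of spawning a thread:
#   result = download_album.delay("some_album_id")
#   result.state  # 'PENDING', 'SUCCESS', 'FAILURE', ...
```

Because task state then lives in Redis rather than in per-request threads, the queue stays consistent across page refreshes and devices, which matches the requirement above.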

@Xoconoch
Owner

Try cooldockerizer93/spotizerr:dev

@andrewthetechie

I can take a look later this week.

Looking at the code changes, I'll need a Redis instance. Is the Celery worker still running in the same container as the API, or do I need separate API and worker pods? Any env vars to set besides Redis?

@Xoconoch
Owner

It should all be within the container (Redis, worker management, Celery, and the main app); no additional setup is needed. I made it that way so no extra containers are required, although I don't know if it is optimal.

Note that the number of workers is set by the max concurrent downloads config param.
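
For example, the mapping could look like the sketch below; the config path, key name, and Celery app module are guesses rather than the actual layout:

```python
# Hedged sketch of the described mapping: read the max concurrent downloads
# value from the app config and use it as the Celery worker concurrency.
# The config path, key name, and app module below are assumptions.
import json
import subprocess

with open("config/main.json") as fh:  # hypothetical config location
    max_concurrent = json.load(fh).get("maxConcurrentDownloads", 3)

# Start one worker whose process pool size follows the config value.
subprocess.run([
    "celery", "-A", "spotizerr_tasks", "worker",
    "--concurrency", str(max_concurrent),
])
```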

@andrewthetechie

Best practice would have the container doing one thing only, with a separate container for each function. That way you're not responsible for installing and updating Redis inside your container; you just use the Redis container.

@Xoconoch
Owner

So one container for redis and another for API+workers would be good?

@tarunkumar519
Author

So one container for redis and another for API+workers would be good?

That looks good to me. What do you think, @andrewthetechie?

@andrewthetechie
andrewthetechie commented Mar 22, 2025

I updated my k8s deployment this evening and queued up a set of albums to download with real-time downloading on. I'll report back in 24-48 hours.

https://github.com/andrewthetechie/home-k8s-pub/tree/main/k8s/apps/spotizerr for anyone who wants to check out how I have it deployed in k8s. I haven't yet split the API and worker containers, but I plan to after the initial testing to see if Celery has resolved the issue with downloads getting stuck.

@andrewthetechie

Downloaded ~50 albums with only three failures. Moving to celery appears to have resolved the queuing issues.

I added 80 more albums to my queue for further testing and will update status in a few days.

@andrewthetechie

I've seen enough from my side to say the celery queues fix the bug I was hitting. Albums have continued to download for the last 48 hours.

@tarunkumar519
Author

Like @andrewthetechie, I don't see any issues from my end either. The queue system works as intended. I'm closing the issue.
