
Conversation

Contributor

@semohr semohr commented Aug 22, 2025

Heya o/

Thanks for creating this wrapper. I found it really useful for fetching track metadata. While using it, I quickly ran into a TooManyRequests error. The Tidal API does provide a retry-after header to indicate how long to wait before trying again, but this package currently doesn’t expose it.

This PR adds a retry_after property to the TooManyRequests error, so client code can handle rate limiting more gracefully.
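For illustration, here is a minimal sketch of how client code might consume the new attribute. `TooManyRequests` below is a local stand-in for the package's exception, and `fetch_with_backoff` is a hypothetical helper, not part of the library:

```python
import time


class TooManyRequests(Exception):
    """Stand-in for the library's 429 exception, now carrying retry_after."""

    def __init__(self, retry_after=None):
        super().__init__("429 Too Many Requests")
        self.retry_after = retry_after  # seconds to wait, from the Retry-After header


def fetch_with_backoff(fetch, max_attempts=3, default_wait=1.0):
    """Call fetch(), sleeping for retry_after seconds on a 429.

    Re-raises the exception once max_attempts is exhausted, so the
    caller still sees the error if the rate limit persists.
    """
    for attempt in range(max_attempts):
        try:
            return fetch()
        except TooManyRequests as exc:
            if attempt == max_attempts - 1:
                raise
            wait = exc.retry_after if exc.retry_after is not None else default_wait
            time.sleep(wait)
```

A client would wrap its metadata calls, e.g. `fetch_with_backoff(lambda: session.track(track_id))`, and tune `max_attempts` to taste.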

As a side note, I noticed that the error handling code might benefit from some refactoring; there are quite a few re-raises that don't seem very useful. That might be worth looking into in the future.

Best,
S

@blacklight
Collaborator

This is a very good addition.

I was discussing a similar idea with @tehkillerbee just a few days ago.

However, I only see the exception handling and attribute exposure logic here; I don't see any place where retry_after is used to implement active throttling.

And we also have to make a call on the proper way to handle it: return empty chunks for the responses that couldn't be filled, which the client will hopefully fill up on the next refresh, or implement an active wait that freezes the current API call and retries after the given number of seconds, with the risk of freezing the client too unless they manage the calls asynchronously?

@semohr
Contributor Author

semohr commented Aug 27, 2025

It’s not always desirable to freeze the client. My recommendation would be to simply expose the retry_after information and let users handle these errors themselves. This keeps the API wrapper lightweight and transparent, allowing clients to decide whether to implement retries, backoff, or any other strategy.

If you want to handle throttling automatically across the board, though, you’d likely need to switch to an asynchronous approach, such as using aiohttp, so that retrying won’t block the client. That way, you can implement active waits or backoff strategies without freezing synchronous code.
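To make the trade-off concrete, here is a hedged sketch of what the asynchronous variant could look like. This assumes an async fetch callable and uses `asyncio.sleep`, which yields control to the event loop instead of blocking; `TooManyRequests` and `fetch_with_retry` are hypothetical names for illustration, not the library's API:

```python
import asyncio


class TooManyRequests(Exception):
    """Stand-in for the library's 429 exception with a retry_after attribute."""

    def __init__(self, retry_after=None):
        super().__init__("429 Too Many Requests")
        self.retry_after = retry_after


async def fetch_with_retry(fetch, max_attempts=3, default_wait=1.0):
    """Await fetch(); on a 429, wait retry_after seconds without blocking.

    While this coroutine sleeps, other tasks on the event loop keep
    running, so the client as a whole is not frozen.
    """
    for attempt in range(max_attempts):
        try:
            return await fetch()
        except TooManyRequests as exc:
            if attempt == max_attempts - 1:
                raise
            wait = exc.retry_after if exc.retry_after is not None else default_wait
            await asyncio.sleep(wait)
```

The synchronous version of the same loop would block the whole client for the duration of the sleep, which is exactly the freezing concern raised above.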

@semohr
Contributor Author

semohr commented Aug 27, 2025

As a side note, if you don’t want to add custom retry logic, an alternative is to simply expose the HTTPErrors directly (or extend them as raised by requests). These exceptions already contain the original headers and response, so users can inspect retry_after or any other relevant information themselves and decide how to handle it.
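One wrinkle with reading the header directly: per the HTTP spec, Retry-After may be either a number of seconds or an HTTP-date. A small stand-alone parser (a sketch, not part of this package) covers both forms so client code gets a plain delay in seconds:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime


def parse_retry_after(value):
    """Parse a Retry-After header value into a delay in seconds.

    The header may carry either delay-seconds (e.g. "120") or an
    HTTP-date (e.g. "Wed, 21 Oct 2015 07:28:00 GMT"). Returns None
    if the value is missing or unparseable.
    """
    if value is None:
        return None
    # First form: plain number of seconds.
    try:
        return max(0.0, float(value))
    except ValueError:
        pass
    # Second form: an absolute HTTP-date; convert to a relative delay.
    try:
        when = parsedate_to_datetime(value)
    except (TypeError, ValueError):
        return None
    if when.tzinfo is None:
        when = when.replace(tzinfo=timezone.utc)
    return max(0.0, (when - datetime.now(timezone.utc)).total_seconds())
```

With requests, the value would come from `exc.response.headers.get("Retry-After")` on the caught HTTPError.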

@blacklight blacklight self-requested a review August 27, 2025 12:48
@blacklight
Collaborator

I agree, let's keep it simple and let the client transparently handle the error downstream.

I'll try in the upcoming days to leverage these exceptions in the error handling logic of mopidy-tidal as a next step.

Later on, if we see that there are common patterns around 429 handling in several clients, we may consider providing them as utilities, but without trying to implicitly solve the client's problem.

Collaborator

@tehkillerbee tehkillerbee left a comment


Thanks for your contributions, these are very welcome additions.

Only a short review, as I am AFK, but LGTM 👍

Changed import from session to exceptions
@tehkillerbee tehkillerbee merged commit 19ca091 into EbbLabs:main Aug 31, 2025
1 check passed