Centralized error handling and added retry_after to TooManyRequests error. #366
Conversation
This is a very good addition. I was discussing a similar feature with @tehkillerbee just a few days ago. However, I only see the exception handling and attribute exposure logic here; I don't see any place where the exposed retry_after value is actually consumed. And we also have to make a call on what's the proper way to handle it: return empty chunks for the responses that couldn't be filled, which the client will hopefully fill up on the next refresh, or implement an active wait that freezes the current API call and retries after the given number of seconds, with the risk of freezing the client too unless they manage the calls asynchronously?
It’s not always desirable to freeze the client. My recommendation would be to simply expose the retry_after information and let users handle these errors themselves. This keeps the API wrapper lightweight and transparent, allowing clients to decide whether to implement retries, backoff, or any other strategy. If you want to handle throttling automatically across the board, though, you’d likely need to switch to an asynchronous approach, such as using aiohttp, so that retrying won’t block the client. That way, you can implement active waits or backoff strategies without freezing synchronous code.
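For illustration, here is a minimal sketch of what that downstream handling could look like, assuming TooManyRequests is importable from tidalapi.exceptions (per the import change in this PR) and that retry_after holds a number of seconds; fetch_with_retry and its parameters are hypothetical names:

```python
import time

from tidalapi.exceptions import TooManyRequests  # assumed import path


def fetch_with_retry(fetch, max_attempts=3):
    """Call `fetch`; on a 429, wait the server-suggested interval and retry."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except TooManyRequests as exc:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            # `retry_after` is the property this PR exposes; fall back
            # to a fixed delay in case the header was absent.
            time.sleep(exc.retry_after or 5)
```

Client code could then wrap any throttled call, e.g. something like fetch_with_retry(lambda: session.track(track_id)).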
As a side note, if you don’t want to add custom retry logic, an alternative is to simply expose (or extend) the HTTPError exceptions raised by requests directly. These exceptions already carry the original headers and response, so users can inspect retry_after or any other relevant information themselves and decide how to handle it.
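For example, with plain requests the original response is attached to the raised exception, so the header can be read back directly (a sketch with a placeholder URL):

```python
import requests

try:
    response = requests.get("https://example.com/api/tracks/123")  # placeholder URL
    response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx
except requests.HTTPError as exc:
    if exc.response is not None and exc.response.status_code == 429:
        # The raw header survives on the attached response object.
        retry_after = exc.response.headers.get("Retry-After")
        print(f"Rate limited; server suggests waiting {retry_after} seconds")
```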
I agree, let's keep it simple and let the client transparently handle the error downstream. In the upcoming days I'll try to leverage these exceptions in the error handling logic of mopidy-tidal as a next step. Later on, if we see that there are common patterns around 429 handling in several clients, we may consider providing them as utilities, but without trying to implicitly solve the client's problem.
Thanks for your contributions, these are very welcome additions.
Only a short review as I am AFK, but LGTM 👍
Changed import from session to exceptions
Heya o/
Thanks for creating this wrapper. I found it really useful for fetching track metadata. While using it, I quickly ran into a TooManyRequests error. The Tidal API does provide a Retry-After header to indicate how long to wait before trying again, but this package currently doesn’t expose it.
This PR adds a retry_after property to the TooManyRequests error, so client code can handle rate limiting more gracefully.

As a side note, I noticed that the error handling code might benefit from some refactoring; there are quite a few re-raises that don’t seem very useful. That might be worth looking into in the future.
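For illustration, handling the new property from client code could look something like this (a sketch; the tidalapi.exceptions import path, login_oauth_simple(), and session.track() are my assumptions about the surrounding API, not part of this PR):

```python
import tidalapi
from tidalapi.exceptions import TooManyRequests  # assumed import path

session = tidalapi.Session()
session.login_oauth_simple()  # interactive OAuth login

try:
    track = session.track(123456)  # hypothetical track id
except TooManyRequests as exc:
    # New in this PR: the Retry-After value from the 429 response.
    print(f"Throttled; retry after {exc.retry_after} seconds")
```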
Best,
S