[ENH] Consistent 3D output for single-target point predictions in TimeXer v1. #1936
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
##           main    #1936   +/-   ##
=====================================
  Coverage      ?   87.07%
=====================================
  Files         ?      136
  Lines         ?     8612
  Branches      ?        0
=====================================
  Hits          ?     7499
  Misses        ?     1113
  Partials      ?        0

Flags with carried forward coverage won't be shown.
☔ View full report in Codecov by Sentry.
I am worried that we are removing tests but not adding tests.
If we are removing an API stepout, then we should remove a stepout in the tests to match, no?
It was an unintended removal, it happened by mistake, sorry. The code stays the same.
ok, so it was a bug! Good that we do reviews.
Related question: is there a stepout we can remove now?
With that question in mind, I tried tweaking a few of the tensor shapes, and I was able to reduce the number of stepouts. Now, the default return is a 3d tensor.
You mention you reduced the number of stepouts, but I do not see a change in tests - did you forget to push them?
No, I meant stepouts in the actual model code. We were handling quantile and non-quantile predictions separately inside the model. With this PR, I have unified the handling to a single format for all types of metrics used (quantile or point). The code for the tests ultimately remains the same, since we are testing the output shape of the predict API. If there is a misunderstanding, please let me know.
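For illustration only, here is a minimal sketch of what such unified handling could look like (a hypothetical helper and shapes, not the actual TimeXer code): point predictions gain a trailing singleton dimension so that both point and quantile outputs follow the same 3D path.

```python
import torch


def _to_3d_output(prediction: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: return predictions as (batch_size, prediction_length, n_outputs).

    Point predictions of shape (batch_size, prediction_length) gain a trailing
    singleton dimension; quantile predictions of shape
    (batch_size, prediction_length, num_quantiles) pass through unchanged.
    """
    if prediction.ndim == 2:
        return prediction.unsqueeze(-1)
    return prediction
```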
Yes, I meant tests. I think that one of the following must be true: …
Seems that all the shapes are well managed. Here you are using the …
Repeating my review request:
- if there was a stepout for the forecaster in tests, please remove the stepout
- if there was no stepout, please review the tests to see why there was no stepout and/or why they still passed
I see a possibility of using …
I looked at the code, and I think there is a missing test. Until now we were only testing the output for …
I've added the test for the expected shapes from the forward method of the model, within the …
Hi, I have a minor doubt about the tests for the output shape: pytorch_forecasting/tests/test_all_estimators.py, lines 326 to 357 (at 1956bc8).
They are already testing the output shape. I am confused: why do we need to add these tests again?
Thanks for adding the new test!
- I would make it a separate test in the class, could you split it?
- I think there should be a function that determines the expected output shape. I would do this as follows:
  - give every (non-multitarget) loss a tag `loss_dims: int`, which tells you the third dimension of the tensor
  - in the `_expected_fwd_shape` or similar, call that; and if it is multitarget, then iterate over the components

I would also add the test in a separate PR, to be merged before this.
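A rough sketch of what such a helper could look like, assuming the hypothetical `loss_dims: int` tag and `_expected_fwd_shape` function suggested above (neither exists in the current codebase):

```python
def _expected_fwd_shape(loss, batch_size: int, prediction_length: int):
    """Hypothetical helper: expected forward() output shape(s) for a given loss.

    Assumes every non-multitarget loss carries a ``loss_dims: int`` tag giving
    the size of the third output dimension (1 for point losses, the number of
    quantiles for quantile losses). For a multi-target loss, iterate over its
    component metrics and return one expected shape per target.
    """
    # Assumption: multi-target losses expose their per-target components as ``metrics``.
    components = getattr(loss, "metrics", None)
    if components is not None:
        return [(batch_size, prediction_length, m.loss_dims) for m in components]
    return (batch_size, prediction_length, loss.loss_dims)
```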
I see what you mean. This PR's changes are solely a patch on the existing set of tests specific to …
So the workflow would be: add the missing test for the old version (in a separate PR) -> replace with tests for the new output format for TimeXer (in this PR)?
I would say, add the test for the post-1936 state. Then, hopefully, only …
I was going through this change request and trying to implement it with the workflow you just suggested, but I don't understand which class you are referring to. I am not editing the tests in …
Reference Issues/PRs
What does this implement/fix? Explain your changes.
Small change in the code to provide a 3D output tensor for point predictions, with the 3rd dimension set to 1. With this change, the output contract for TimeXer is:
- (batch_size, predictions, 1) for point predictions, where the 3rd dimension indicates a single target.
- (batch_size, predictions, num_quantiles) for quantile predictions, where the 3rd dimension indicates the number of quantiles for which the output is generated.
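A small illustration of this contract using dummy tensors (example values only; in practice the tensors come from the model's predict output):

```python
import torch

batch_size, prediction_length, num_quantiles = 8, 24, 7

# point prediction: trailing dimension of size 1 marks a single target
point_output = torch.randn(batch_size, prediction_length, 1)
assert point_output.shape == (batch_size, prediction_length, 1)

# quantile prediction: trailing dimension holds one value per quantile
quantile_output = torch.randn(batch_size, prediction_length, num_quantiles)
assert quantile_output.shape == (batch_size, prediction_length, num_quantiles)
```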
What should a reviewer concentrate their feedback on?
PR checklist
Set up pre-commit hooks with pre-commit install. To run hooks independent of commit, execute pre-commit run --all-files.