Issues with pytorch bindings in the case where structural_feasibility=False. #414
base: devel
Conversation
Change the file name, as the cvxpy Python library is used in another file.
Tests now handle infeasible QPs, with and without equality/inequality constraints.
Bindings now work with and without equality/inequality constraints.
The forward pass now works correctly and has tests with and without equalities/inequalities.
There are still some issues with the PR:
Here's a first review pass on this.
 torch.empty(nBatch, ctx.neq, dtype=Q.dtype)
 if ctx.neq > 0
-else torch.empty()
+else torch.tensor([])
Is this to ensure at least a 1st-order tensor? Does torch.empty()
create a zero-dim tensor? Aren't there args for it to set its dimensions?
torch.empty() does not work.
TypeError: empty() received an invalid combination of arguments - got (), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (tuple of ints size, *, torch.memory_format memory_format = None, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
https://docs.pytorch.org/docs/stable/generated/torch.empty.html
That means we just need to pass in arguments, no?
We could do torch.empty(0)
>>> torch.empty(0)
tensor([])
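For reference, a small standalone sketch of the options discussed here; `torch.empty()` without a size fails as shown above, while `torch.empty(0)` and `torch.tensor([])` both give an empty 1-D tensor (the latter always in the default float32 unless a dtype is passed):

```python
import torch

# torch.empty() with no size argument raises the TypeError quoted above.
a = torch.empty(0)                       # tensor([]) with shape (0,)
b = torch.tensor([])                     # tensor([]) with shape (0,), dtype float32
c = torch.empty(0, dtype=torch.float64)  # empty 1-D tensor with an explicit dtype

print(a.shape, b.shape, c.dtype)  # torch.Size([0]) torch.Size([0]) torch.float64
```

So the branch above could arguably use `torch.empty(0, dtype=Q.dtype)` instead of `torch.tensor([])` to keep the dtype consistent with the non-empty case.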
rhs[dim + n_eq + n_in_sol : dim + n_eq + n_in][active_set] = (
    -dl_dnus[i][active_set]
)
Is this a file formatting change? Did it come from pre-commit
and ruff?
It's a formatting change; it comes from running ruff on my computer.
Also, please rebase on the target branch.
This PR addresses the issue raised in #406.
Summary of Changes
I’ve added a test file under test/src/ to verify the correctness of the PyTorch bindings (with and without structural feasibility) by comparing them to the NumPy bindings.
The test only checks that gradients are returned, not that they are numerically correct (yet).
If needed, I can extend the test to compare PyTorch gradients to the NumPy ones.
For now, the test only covers a simple, feasible QP setup; the goal was to verify that both the forward and backward passes work as expected in that case.
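As an illustration of what such a test looks like, here is a minimal sketch of a forward/backward smoke test on a small feasible QP. The import path `proxsuite.torch.qplayer`, the `QPFunction(structural_feasibility=...)` entry point, and the `(Q, p, A, b, G, l, u)` call signature are assumptions about the bindings and may not match the actual test file:

```python
import torch
from proxsuite.torch import qplayer  # assumed import path for the PyTorch bindings


def test_simple_feasible_qp_forward_backward():
    # Small, well-conditioned QP: min 0.5 x'Qx + p'x  s.t.  Ax = b,  l <= Gx <= u.
    torch.manual_seed(0)
    n, n_eq, n_in = 3, 1, 2
    L = torch.randn(n, n, dtype=torch.float64)
    Q = (L @ L.T + torch.eye(n, dtype=torch.float64)).requires_grad_()
    p = torch.randn(n, dtype=torch.float64, requires_grad=True)
    A = torch.randn(n_eq, n, dtype=torch.float64)
    b = torch.randn(n_eq, dtype=torch.float64)
    G = torch.randn(n_in, n, dtype=torch.float64)
    l = -10.0 * torch.ones(n_in, dtype=torch.float64)  # loose bounds -> feasible QP
    u = 10.0 * torch.ones(n_in, dtype=torch.float64)

    # Assumed API: QPFunction returns the primal and dual solutions as tensors.
    x, y, z = qplayer.QPFunction(structural_feasibility=False)(Q, p, A, b, G, l, u)

    # Smoke test only: the backward pass runs and gradients are populated;
    # the values are not (yet) compared against the NumPy bindings.
    x.sum().backward()
    assert Q.grad is not None and p.grad is not None
```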
Note on cvxpy.py Rename
To avoid a conflict with the official cvxpy Python library, I had to rename the file:
test/src/cvxpy.py → test/src/cvxpy_test.py
This is because I use the actual cvxpy library inside the new test file.
If the rename causes a problem for your codebase structure, I can adjust it.
Please do not merge this PR until we confirm the cvxpy.py rename is acceptable.
Known Limitations
This PR does not guarantee the bindings behave correctly in all edge cases.
I suspect there may still be minor bugs in some edge cases.
For now, this PR ensures that forward/backward behavior is sound in the simple case I test.
The PR mainly gives an idea of the kind of issues we encounter in the bindings; these apparently only concern cases where structural_feasibility=False. I will likely continue the PR either next week or at the beginning of September to make sure everything is okay.