Conversation


@oomcth oomcth commented Aug 8, 2025

This PR addresses the issue raised in #406.

Summary of Changes

I’ve added a test file under test/src/ to verify the correctness of the PyTorch bindings (with and without structural feasibility) by comparing them to the NumPy bindings.

The test only checks that gradients are returned, not that they are numerically correct (yet).

If needed, I can extend the test to compare PyTorch gradients to the NumPy ones.

For now, the test only covers the case where:

  • There is exactly one equality and one inequality constraint.
  • The QP problem is feasible.

The goal was to test that both forward and backward passes work as expected in a simple, feasible QP setup.
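For reference, the optimality system of such a simple feasible QP can be sketched in NumPy. This is a toy problem with hypothetical data, not the PR's test code; it assumes the single inequality constraint is inactive at the optimum, so the KKT conditions reduce to one linear system.

```python
import numpy as np

# Toy QP (hypothetical data): min 0.5 x'Qx + q'x  s.t.  Ax = b,
# with the one inequality assumed inactive at the optimum.
Q = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])  # one equality constraint
b = np.array([1.0])

# KKT system: [Q A'; A 0] [x; nu] = [-q; b]
n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-q, b]))
x, nu = sol[:n], sol[n:]
print(x)  # [0.25 0.75]
```

A test along these lines can then compare the solver's primal/dual outputs against the directly solved KKT system.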

Note on cvxpy.py Rename

To avoid a conflict with the official cvxpy Python library, I had to rename the file:

test/src/cvxpy.py → test/src/cvxpy_test.py

This is because I use the actual cvxpy library inside the new test file.
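The underlying problem is standard Python module shadowing: a local file named after an installed package wins on `sys.path` and gets imported instead of the real library. A minimal demo of the effect, using the stdlib `csv` module as a stand-in for `cvxpy`:

```python
import pathlib
import sys
import tempfile

# A local file named like an installed module shadows it on sys.path.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "csv.py").write_text("SHADOWED = True\n")
sys.path.insert(0, str(tmp))
sys.modules.pop("csv", None)  # drop any cached import
import csv  # resolves to tmp/csv.py, not the stdlib module

print(csv.SHADOWED)  # True
sys.path.pop(0)  # clean up
```

The same thing happens when pytest puts `test/src/` on `sys.path`: `import cvxpy` from inside that directory would pick up the local `cvxpy.py` instead of the installed library, hence the rename.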

If the rename causes a problem for your codebase structure, I can:

  • Either remove the use of cvxpy in the test.
  • Or move it to a location that doesn’t interfere.

Please do not merge this PR until we confirm the cvxpy.py rename is acceptable.

Known Limitations

This PR does not guarantee the bindings behave correctly in all edge cases.

I suspect there may still be minor bugs in cases such as:

  • QPs with no equality or no inequality constraints.
  • Higher-dimensional constraints or unusual input combinations.

For now, this PR ensures that forward/backward behavior is sound in the simple case I test.

At this stage, the PR mainly illustrates the kinds of issues we encounter in the bindings; they apparently only concern cases where structural_feasibility=False. I will likely continue the PR next week or at the beginning of September to make sure everything is in order.

@pgraverdy pgraverdy marked this pull request as draft August 25, 2025 09:23
@jorisv jorisv added the pr status wip To not review in weekly meeting label Sep 22, 2025
@oomcth oomcth (Author) commented Sep 29, 2025

Now the forward pass works well and has tests with and without equalities/inequalities.
I'll write tests for the backward pass tomorrow and, if needed, edit the Torch function accordingly.
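A common pattern for such backward-pass tests is `torch.autograd.gradcheck`, which compares analytic gradients against finite differences in double precision. A sketch on a trivial function (not the PR's code):

```python
import torch

# gradcheck perturbs each input entry and compares the resulting
# numerical gradient with the one from autograd's backward pass.
def loss(x):
    return (x ** 2).sum()

x = torch.randn(3, dtype=torch.double, requires_grad=True)
ok = torch.autograd.gradcheck(loss, (x,), eps=1e-6, atol=1e-4)
print(ok)  # True
```

The same call can wrap a custom `torch.autograd.Function` to exercise its hand-written backward.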

@oomcth oomcth (Author) commented Oct 5, 2025

There are still some issues with the PR:

  • The test tolerance is far too loose (1e-2); I still need to tune some solver parameters.
  • Gradients are incorrect for the inequality matrices and vectors in the non-structurally-feasible QP case. I think this is expected, but I believe the tests are not handling the inequality gradients correctly.
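One way to calibrate that tolerance is a central finite-difference cross-check, which shows what atol is actually achievable in float64. A self-contained sketch with a hypothetical quadratic loss (not the PR's test code):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    """Central finite differences, useful for calibrating test tolerances."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

# For loss(x) = 0.5 x'Qx (Q symmetric) the gradient is Qx, and
# float64 central differences easily reach 1e-5 absolute error.
Q = np.array([[4.0, 1.0], [1.0, 2.0]])
x = np.array([0.3, -0.2])
err = np.abs(numerical_grad(lambda v: 0.5 * v @ Q @ v, x) - Q @ x).max()
print(err < 1e-5)  # True
```

If the bindings' gradients only match at 1e-2, that gap points at a solver accuracy setting or a genuine bug rather than finite-difference noise.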

@ManifoldFR ManifoldFR (Member) left a comment

Here's a first review pass on this.

torch.empty(nBatch, ctx.neq, dtype=Q.dtype)
if ctx.neq > 0
- else torch.empty()
+ else torch.tensor([])
Member

Is this to ensure at least a 1st-order tensor? Does torch.empty() create a zero-dim tensor? Aren't there args for it to set its dimensions?

Author

torch.empty() with no arguments does not work:

TypeError: empty() received an invalid combination of arguments - got (), but expected one of:
 * (tuple of ints size, *, tuple of names names, torch.memory_format memory_format = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
 * (tuple of ints size, *, torch.memory_format memory_format = None, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)

Member

https://docs.pytorch.org/docs/stable/generated/torch.empty.html
That means we just need to pass in arguments, no?

Author

We could do torch.empty(0):

>>> torch.empty(0)
tensor([])
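For the record, both spellings are equivalent here: each produces a 1-D tensor with zero elements, not a zero-dimensional tensor.

```python
import torch

# torch.empty() with no size raises a TypeError; torch.empty(0) and
# torch.tensor([]) both give an empty 1-D float tensor.
a = torch.empty(0)
b = torch.tensor([])
print(a.shape, b.shape)  # torch.Size([0]) torch.Size([0])
```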

Comment on lines +530 to +532
rhs[dim + n_eq + n_in_sol : dim + n_eq + n_in][active_set] = (
-dl_dnus[i][active_set]
)
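The quoted lines write through a slice view with a boolean mask. A minimal NumPy sketch of that pattern, with hypothetical sizes chosen only for illustration:

```python
import numpy as np

# Hypothetical dimensions standing in for the PR's variables.
dim, n_eq, n_in_sol, n_in = 3, 1, 2, 4
rhs = np.zeros(dim + n_eq + n_in)
active_set = np.array([True, False])  # mask over the sliced block
dl_dnu = np.array([0.5, 0.7])

# Assigning into rhs[a:b][mask] mutates rhs, because the slice is a
# view and the masked assignment writes through it.
rhs[dim + n_eq + n_in_sol : dim + n_eq + n_in][active_set] = -dl_dnu[active_set]
print(rhs[dim + n_eq + n_in_sol])  # -0.5
```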
Member

Is this a file formatting change? Did it come from pre-commit and ruff?

@oomcth oomcth (Author) commented Oct 7, 2025

It's a formatting change; it comes from running ruff on my computer.

@ManifoldFR (Member)

Also please rebase on the target branch.
