Conversation

MaxenceGollier
Collaborator

@dpo @MohamedLaghdafHABIBOULLAH

We should merge after #199.


codecov bot commented Aug 7, 2025

Codecov Report

❌ Patch coverage is 98.08917% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 86.05%. Comparing base (e0f214d) to head (7df41cc).
⚠️ Report is 140 commits behind head on master.

Files with missing lines | Patch % | Lines
src/LM_alg.jl            | 97.81%  | 3 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           master     #200       +/-   ##
===========================================
+ Coverage   61.53%   86.05%   +24.51%     
===========================================
  Files          11       13        +2     
  Lines        1292     1563      +271     
===========================================
+ Hits          795     1345      +550     
+ Misses        497      218      -279     


σ::T
meta::NLPModelMeta{T, V}
counters::Counters
end
Member

Here too, would it be possible to reuse LLSModels?
@MaxenceGollier
Collaborator Author
Sep 13, 2025

I am not sure how: LLSModels handles $\frac{1}{2} \|Ax-b\|^2_2$, not $\frac{1}{2} \|Ax-b\|^2_2 + \frac{\sigma}{2}\|x\|^2_2$.
Sure, we could write

$$\frac{1}{2} \|Ax-b\|^2_2 + \frac{\sigma}{2}\|x\|^2_2 = \frac{1}{2} \left\| \begin{bmatrix} A \\ \sqrt{\sigma} I \end{bmatrix} x - \begin{bmatrix} b \\ 0 \end{bmatrix} \right\|_2^2,$$

but in my opinion this would be impractical.
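As a side note, the identity above can be checked numerically. The sketch below (not part of the PR; `A`, `b`, and `σ` are made-up illustration values) confirms that the stacked least-squares problem with $[A; \sqrt{\sigma} I]$ and $[b; 0]$ recovers the regularized solution from the normal equations $(A^\top A + \sigma I)x = A^\top b$:

```julia
using LinearAlgebra

# Illustration values, not from the PR
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
b = [1.0, 2.0, 3.0]
σ = 0.1
n = size(A, 2)

# Stacked formulation: min ½‖[A; √σ I] x − [b; 0]‖²
A_aug = [A; sqrt(σ) * Matrix(I, n, n)]
b_aug = [b; zeros(n)]
x_stacked = A_aug \ b_aug

# Regularized normal equations: (AᵀA + σI) x = Aᵀb
x_normal = (A'A + σ * I) \ (A'b)

@assert x_stacked ≈ x_normal
```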

@MaxenceGollier
Collaborator Author

LM Test

Without bounds

subsolver: R2

On my branch,

using LinearAlgebra, NLPModels, NLPModelsModifiers, RegularizedProblems, ProximalOperators, Random, ShiftedProximalOperators
Random.seed!(0)
bpdn = bpdn_model(1)[2]
λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
h = NormL0(λ)

import Pkg; Pkg.add(url = "https://github.com/MaxenceGollier/RegularizedOptimization.jl.git", rev = "LM-JSO")
using RegularizedOptimization
options = ROSolverOptions(β = 1.0, α = 1.0, ϵa = 1e-6, ϵr = 1e-6, verbose = 1)

LM(bpdn, h, options)
[ Info:  outer  inner     f(x)     h(x)  √(ξ1/ν)        ρ        σ      ‖x‖      ‖s‖     ‖J‖² LM 
[ Info:      0     20  1.8e+00  0.0e+00  9.3e-01  1.0e+00  7.4e-04  0.0e+00  3.1e+00  1.0e+00 ↘
[ Info:      1     23  1.1e-02  4.6e-01  2.8e-03  1.0e+00  2.5e-04  3.1e+00  9.0e-03  1.0e+00 ↘
[ Info:      2  10001  1.1e-02  4.6e-01  4.1e-06  1.0e+00  8.2e-05  3.1e+00  1.7e-05  1.0e+00 ↘
[ Info:      3      0  1.1e-02  4.6e-01  5.1e-08  1.0e+00  2.7e-05  3.1e+00  1.9e-08  1.0e+00
[ Info: LM: terminating with √(ξ1/ν) = 5.055168491119672e-8
"Execution stats: first-order stationary"

On master,

using LinearAlgebra, NLPModels, NLPModelsModifiers, RegularizedProblems, ProximalOperators, Random, ShiftedProximalOperators
Random.seed!(0)
bpdn = bpdn_model(1)[2]
λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
h = NormL0(λ)

import Pkg; Pkg.add(url = "https://github.com/MaxenceGollier/RegularizedOptimization.jl.git", rev = "master")
using RegularizedOptimization
options = ROSolverOptions(β = 1.0, α = 1.0, ϵa = 1e-6, ϵr = 1e-6, verbose = 10)

LM(bpdn, h, options)
[ Info:  outer    inner     f(x)     h(x) √(ξ1/ν)      √ξ        ρ       σ     ‖x‖     ‖s‖   ‖Jₖ‖² reg
[ Info:      1       20  1.8e+00  0.0e+00 9.3e-01 1.1e+00  1.0e+00 7.4e-04 0.0e+00 3.1e+00 1.0e+00 ↘
[ Info:      2       23  1.1e-02  4.6e-01 2.8e-03 3.5e-03  1.0e+00 2.5e-04 3.1e+00 9.0e-03 1.0e+00 ↘
[ Info:      3    10001  1.1e-02  4.6e-01 4.1e-06 5.8e-06  1.0e+00 8.2e-05 3.1e+00 1.7e-05 1.0e+00 ↘
[ Info:      4        1  1.1e-02  4.6e-01 5.1e-08 5.1e-08          2.7e-05 3.1e+00 1.9e-08 1.0e+00
[ Info: LM: terminating with √(ξ1/ν) = 5.055168491119672e-8
"Execution stats: first-order stationary"

subsolver: R2DH

On my branch,

using LinearAlgebra, NLPModels, NLPModelsModifiers, RegularizedProblems, ProximalOperators, Random, ShiftedProximalOperators
Random.seed!(0)
bpdn = bpdn_model(1)[2]
λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
h = NormL0(λ)

import Pkg; Pkg.add(url = "https://github.com/MaxenceGollier/RegularizedOptimization.jl.git", rev = "LM-JSO")
using RegularizedOptimization
options = ROSolverOptions(β = 1.0, α = 1.0, ϵa = 1e-6, ϵr = 1e-6, verbose = 1)

LM(bpdn, h, options, subsolver = R2DHSolver)
[ Info:  outer  inner     f(x)     h(x)  √(ξ1/ν)        ρ        σ      ‖x‖      ‖s‖     ‖J‖² LM 
[ Info:      0      6  1.8e+00  0.0e+00  9.3e-01  1.0e+00  7.4e-04  0.0e+00  3.1e+00  1.0e+00 ↘
[ Info:      1      6  1.1e-02  4.6e-01  2.4e-03  1.0e+00  2.5e-04  3.1e+00  7.3e-03  1.0e+00 ↘
[ Info:      2  10001  1.1e-02  4.6e-01  4.1e-06  1.0e+00  8.2e-05  3.1e+00  1.7e-05  1.0e+00 ↘
[ Info:      3      0  1.1e-02  4.6e-01  4.7e-08  1.0e+00  2.7e-05  3.1e+00  1.4e-09  1.0e+00
[ Info: LM: terminating with √(ξ1/ν) = 4.713968924434026e-8
"Execution stats: first-order stationary"

On master,

using LinearAlgebra, NLPModels, NLPModelsModifiers, RegularizedProblems, ProximalOperators, Random, ShiftedProximalOperators
Random.seed!(0)
bpdn = bpdn_model(1)[2]
λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
h = NormL0(λ)

import Pkg; Pkg.add(url = "https://github.com/MaxenceGollier/RegularizedOptimization.jl.git", rev = "master")
using RegularizedOptimization
options = ROSolverOptions(β = 1.0, α = 1.0, ϵa = 1e-6, ϵr = 1e-6, verbose = 10)

LM(bpdn, h, options, subsolver = R2DH)
[ Info:  outer    inner     f(x)     h(x) √(ξ1/ν)      √ξ        ρ       σ     ‖x‖     ‖s‖   ‖Jₖ‖² reg
[ Info:      1        6  1.8e+00  0.0e+00 9.3e-01 1.1e+00  1.0e+00 7.4e-04 0.0e+00 3.1e+00 1.0e+00 ↘
[ Info:      2        6  1.1e-02  4.6e-01 2.4e-03 2.9e-03  1.0e+00 2.5e-04 3.1e+00 7.3e-03 1.0e+00 ↘
[ Info:      3    10001  1.1e-02  4.6e-01 4.1e-06 5.9e-06  1.0e+00 8.2e-05 3.1e+00 1.7e-05 1.0e+00 ↘
[ Info:      4        1  1.1e-02  4.6e-01 4.7e-08 4.7e-08          2.7e-05 3.1e+00 1.4e-09 1.0e+00
[ Info: LM: terminating with √(ξ1/ν) = 4.713968924434026e-8
"Execution stats: first-order stationary"

With bounds

subsolver: R2

On my branch,

using LinearAlgebra, NLPModels, NLPModelsModifiers, RegularizedProblems, ProximalOperators, Random, ShiftedProximalOperators
Random.seed!(0)
bpdn = bpdn_model(1, bounds = true)[2]
λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
h = NormL0(λ)

import Pkg; Pkg.add(url = "https://github.com/MaxenceGollier/RegularizedOptimization.jl.git", rev = "LM-JSO")
using RegularizedOptimization
options = ROSolverOptions(β = 1.0, α = 1.0, ϵa = 1e-6, ϵr = 1e-6, verbose = 1)

LM(bpdn, h, options)
[ Info:  outer  inner     f(x)     h(x)  √(ξ1/ν)        ρ        σ      ‖x‖      ‖s‖     ‖J‖² LM 
[ Info:      0     13  2.2e+00  0.0e+00  1.2e+00  1.0e+00  7.4e-04  0.0e+00  2.8e+00  1.0e+00 ↘
[ Info:      1     15  3.1e-01  4.8e-01  2.5e-03  1.0e+00  2.5e-04  2.8e+00  5.9e-03  1.0e+00 ↘
[ Info:      2  10001  3.1e-01  4.8e-01  3.3e-06  1.0e+00  8.2e-05  2.8e+00  1.4e+00  1.0e+00 ↘
[ Info:      3     22  8.1e-03  6.0e-01  1.2e-04  1.0e+00  2.7e-05  3.2e+00  4.2e-04  1.0e+00 ↘
[ Info:      4      0  8.1e-03  6.0e-01  1.1e-07  1.0e+00  9.1e-06  3.2e+00  1.0e-07  1.0e+00
[ Info: LM: terminating with √(ξ1/ν) = 1.1401465572257465e-7
"Execution stats: first-order stationary"

On master,

using LinearAlgebra, NLPModels, NLPModelsModifiers, RegularizedProblems, ProximalOperators, Random, ShiftedProximalOperators
Random.seed!(0)
bpdn = bpdn_model(1, bounds = true)[2]
λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
h = NormL0(λ)

import Pkg; Pkg.add(url = "https://github.com/MaxenceGollier/RegularizedOptimization.jl.git", rev = "master")
using RegularizedOptimization
options = ROSolverOptions(β = 1.0, α = 1.0, ϵa = 1e-6, ϵr = 1e-6, verbose = 10)

LM(bpdn, h, options)
[ Info:  outer    inner     f(x)     h(x) √(ξ1/ν)      √ξ        ρ       σ     ‖x‖     ‖s‖   ‖Jₖ‖² reg
[ Info:      1       13  2.2e+00  0.0e+00 1.2e+00 1.2e+00  1.0e+00 7.4e-04 0.0e+00 2.8e+00 1.0e+00 ↘
[ Info:      2       15  3.1e-01  4.8e-01 2.5e-03 2.7e-03  1.0e+00 2.5e-04 2.8e+00 5.9e-03 1.0e+00 ↘
[ Info:      3    10001  3.1e-01  4.8e-01 3.3e-06 4.3e-01  1.0e+00 8.2e-05 2.8e+00 1.4e+00 1.0e+00 ↘
[ Info:      4       22  8.1e-03  6.0e-01 1.2e-04 1.6e-04  1.0e+00 2.7e-05 3.2e+00 4.2e-04 1.0e+00 ↘
[ Info:      5        1  8.1e-03  6.0e-01 1.1e-07 1.1e-07          9.1e-06 3.2e+00 1.0e-07 1.0e+00
[ Info: LM: terminating with √(ξ1/ν) = 1.1401465572257465e-7
"Execution stats: first-order stationary"

subsolver: R2DH

On my branch,

using LinearAlgebra, NLPModels, NLPModelsModifiers, RegularizedProblems, ProximalOperators, Random, ShiftedProximalOperators
Random.seed!(0)
bpdn = bpdn_model(1, bounds = true)[2]
λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
h = NormL0(λ)

import Pkg; Pkg.add(url = "https://github.com/MaxenceGollier/RegularizedOptimization.jl.git", rev = "LM-JSO")
using RegularizedOptimization
options = ROSolverOptions(β = 1.0, α = 1.0, ϵa = 1e-6, ϵr = 1e-6, verbose = 1)

LM(bpdn, h, options, subsolver = R2DHSolver)
[ Info:  outer  inner     f(x)     h(x)  √(ξ1/ν)        ρ        σ      ‖x‖      ‖s‖     ‖J‖² LM 
[ Info:      0      6  2.2e+00  0.0e+00  1.2e+00  1.0e+00  7.4e-04  0.0e+00  3.2e+00  1.0e+00 ↘
[ Info:      1      6  8.2e-03  6.0e-01  2.1e-03  1.0e+00  2.5e-04  3.2e+00  5.7e-03  1.0e+00 ↘
[ Info:      2  10001  8.1e-03  6.0e-01  2.6e-06  1.0e+00  8.2e-05  3.2e+00  9.6e-06  1.0e+00 ↘
[ Info:      3      0  8.1e-03  6.0e-01  4.7e-08  1.0e+00  2.7e-05  3.2e+00  7.9e-10  1.0e+00
[ Info: LM: terminating with √(ξ1/ν) = 4.713968924434026e-8
"Execution stats: first-order stationary"

On master,

using LinearAlgebra, NLPModels, NLPModelsModifiers, RegularizedProblems, ProximalOperators, Random, ShiftedProximalOperators
Random.seed!(0)
bpdn = bpdn_model(1, bounds = true)[2]
λ = norm(grad(bpdn, zeros(bpdn.meta.nvar)), Inf) / 10
h = NormL0(λ)

import Pkg; Pkg.add(url = "https://github.com/MaxenceGollier/RegularizedOptimization.jl.git", rev = "master")
using RegularizedOptimization
options = ROSolverOptions(β = 1.0, α = 1.0, ϵa = 1e-6, ϵr = 1e-6, verbose = 10)

LM(bpdn, h, options, subsolver = R2DHSolver)
[ Info:  outer    inner     f(x)     h(x) √(ξ1/ν)      √ξ        ρ       σ     ‖x‖     ‖s‖   ‖Jₖ‖² reg
[ Info:      1        6  2.2e+00  0.0e+00 1.2e+00 1.3e+00  1.0e+00 7.4e-04 0.0e+00 3.2e+00 1.0e+00 ↘
[ Info:      2        6  8.2e-03  6.0e-01 2.1e-03 2.4e-03  1.0e+00 2.5e-04 3.2e+00 5.7e-03 1.0e+00 ↘
[ Info:      3    10001  8.1e-03  6.0e-01 2.6e-06 3.5e-06  1.0e+00 8.2e-05 3.2e+00 9.6e-06 1.0e+00 ↘
[ Info:      4        1  8.1e-03  6.0e-01 4.7e-08 4.7e-08          2.7e-05 3.2e+00 7.9e-10 1.0e+00
[ Info: LM: terminating with √(ξ1/ν) = 4.713968924434026e-8
"Execution stats: first-order stationary"

@MaxenceGollier
Collaborator Author

I think we are ready for this one, @dpo.
