I have two suggestions for R2N that would be very useful for me. I will try to implement them in the near future.
- When I use R2N with the full Hessian, I'd need the Hessian stored in a sparse matrix format and to be able to access it through `R2NModel`. I suggest having two different `R2NModel` structs: one called `R2NModel_op` with the current implementation, i.e., the one that stores the Hessian as a linear operator:
RegularizedOptimization.jl/src/R2NModel.jl, lines 17 to 25 at 9bb4a9a:

```julia
mutable struct R2NModel{T <: Real, V <: AbstractVector{T}, G <: AbstractLinearOperator{T}} <: AbstractNLPModel{T, V}
  B::G
  ∇f::V
  v::V
  σ::T
  meta::NLPModelMeta{T, V}
  counters::Counters
end
```
The other one would be called `R2NModel_full` and would store the Hessian in, say, `SparseMatrixCOO` format. The type of `R2NModel` used by `R2N` would be selected through a keyword argument of the `R2NSolver` constructor (defaulting to the `LinearOperator` version). For this to work, we'd also need to modify this line:

RegularizedOptimization.jl/src/R2N.jl, line 420 at 9bb4a9a:

```julia
solver.subpb.model.B = hess_op(nlp, xk)
```
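For concreteness, here is a minimal, hypothetical sketch of how the two variants could coexist. The names, the constructor keyword, and the model evaluation are illustrative rather than the actual RegularizedOptimization.jl API, and the `meta`/`counters` fields of the real `R2NModel` are omitted to keep the example self-contained:

```julia
using SparseArrays, LinearAlgebra

# Operator-backed variant: B is anything supporting `*`, standing in for the
# current AbstractLinearOperator-based R2NModel.
mutable struct R2NModelOp{T <: Real, V <: AbstractVector{T}, G}
    B::G
    ∇f::V
    σ::T
end

# Proposed full-Hessian variant: B stored as an explicit sparse matrix
# (SparseMatrixCSC here; the proposal mentions SparseMatrixCOO).
mutable struct R2NModelFull{T <: Real, V <: AbstractVector{T}}
    B::SparseMatrixCSC{T, Int}
    ∇f::V
    σ::T
end

# Both variants evaluate the same regularized quadratic model
#   m(s) = ∇fᵀs + ½ sᵀBs + ½ σ‖s‖²,
# so downstream code is agnostic to how B is stored.
model_value(m, s) = dot(m.∇f, s) + 0.5 * dot(s, m.B * s) + 0.5 * m.σ * dot(s, s)

# The solver constructor would pick the variant via a keyword argument,
# defaulting to the operator version as suggested above.
function make_model(∇f, σ; hessian = :op, B = I)
    hessian === :full ? R2NModelFull(sparse(B), copy(∇f), σ) :
                        R2NModelOp(B, copy(∇f), σ)
end
```

Since both variants expose the same model evaluation, the rest of the solver would not need to know how `B` is stored.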
- In a constrained optimization context, I'd need a different update for the quasi-Newton approximation, using the difference of gradients of the Lagrangian instead of the objective. I think the following lines should be moved into a function called `update_qn!` that the user can override:
RegularizedOptimization.jl/src/R2N.jl, lines 415 to 419 at 9bb4a9a:

```julia
if quasiNewtTest
  @. ∇fk⁻ = ∇fk - ∇fk⁻
  push!(nlp, s, ∇fk⁻)
end
```
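As a rough sketch of the hook: the name `update_qn!` is the one proposed above, but the signature and the stand-in `qn_pairs` container are hypothetical, since the real code updates a quasi-Newton operator through `push!(nlp, s, y)`:

```julia
# Default update, mirroring the current R2N lines: y is the gradient of the
# objective at the new iterate minus the gradient at the previous one, with
# ∇fk⁻ reused in place as the difference buffer.
function update_qn!(qn_pairs::Vector, s::AbstractVector,
                    ∇fk::AbstractVector, ∇fk⁻::AbstractVector)
    @. ∇fk⁻ = ∇fk - ∇fk⁻
    push!(qn_pairs, (copy(s), copy(∇fk⁻)))
    return qn_pairs
end

# Hypothetical user override for the constrained case: use the difference of
# Lagrangian gradients instead, assuming L(x, λ) = f(x) + λᵀc(x) so that
# ∇ₓL = ∇f + Jᵀλ (the sign convention is illustrative).
function update_qn_lagrangian!(qn_pairs::Vector, s, ∇fk, ∇fk⁻, Jk, Jk⁻, λ)
    y = (∇fk + Jk' * λ) - (∇fk⁻ + Jk⁻' * λ)
    push!(qn_pairs, (copy(s), y))
    return qn_pairs
end
```

With such a hook, `R2N` would call `update_qn!` where the `quasiNewtTest` block currently sits, and a constrained-optimization wrapper could swap in the Lagrangian-based version without touching the solver loop.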
Please let me know what you think @dpo @MohamedLaghdafHABIBOULLAH !