
Conversation


@KeshavVenkatesh KeshavVenkatesh commented Oct 5, 2025

Checklist

  • Added the algorithm for the RKV76IIA method
  • Added a convergence test for the RKV76IIA method
  • Creating this pull request to get feedback on the implementation

Additional Notes

  • This particular convergence test fails during the order of convergence check.
  • Info on failed tests:
  1. Order between dt=1//8 and dt=1//16: -0.7854954880938649
    RKV76IIa Convergence Tests: Test Failed
    Expression: ≈(order, 7, atol = testTol)
    Evaluated: -0.7854954880938649 ≈ 7 (atol=0.3)

  2. Order between dt=1//16 and dt=1//32: 1.5849625007211563
    RKV76IIa Convergence Tests: Test Failed at C:\Users\srira\Downloads\Ordinary Differential Equations\OrdinaryDiffEq.jl\lib\OrdinaryDiffEqVerner\test\rkv76iia_tests.jl:73
    Expression: ≈(order, 7, atol = testTol)
    Evaluated: 1.5849625007211563 ≈ 7 (atol=0.3)

  3. Order between dt=1//32 and dt=1//64: 0.967333811079678
    RKV76IIa Convergence Tests: Test Failed at C:\Users\srira\Downloads\Ordinary Differential Equations\OrdinaryDiffEq.jl\lib\OrdinaryDiffEqVerner\test\rkv76iia_tests.jl:73
    Expression: ≈(order, 7, atol = testTol)
    Evaluated: 0.967333811079678 ≈ 7 (atol=0.3)
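For reference, the observed orders reported above come from the standard ratio of consecutive errors under dt halving. A minimal sketch of that computation (the helper name is illustrative, not from the test file):

```julia
# Observed order between two runs with a halved step size:
# order ≈ log2(err(dt) / err(dt/2)).
observed_order(err_coarse, err_fine) = log2(err_coarse / err_fine)

# Illustrative values: a healthy 7th-order pair vs a roundoff-dominated pair.
observed_order(1.0e-8, 1.0e-8 / 2^7)   # ≈ 7.0
observed_order(3.8e-14, 6.5e-14)       # negative: the error grew, so dt refinement no longer helps
```

A negative or near-zero "order", as in the failures above, means the measured error has stopped shrinking with dt.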

@ChrisRackauckas
Member

Show plots of the convergence tests; just plot(sim) using the plot recipe.
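For reference, a minimal sketch of generating and plotting such a convergence test, assuming DiffEqDevTools' test_convergence and a Plots backend (the linear test problem here is illustrative, not the one in the PR's test file):

```julia
using OrdinaryDiffEq, DiffEqDevTools, Plots

# Simple linear test problem with a known analytic solution, so
# test_convergence can measure the true error at each dt.
f = ODEFunction((u, p, t) -> -u; analytic = (u0, p, t) -> u0 * exp(-t))
prob = ODEProblem(f, 1.0, (0.0, 1.0))

dts = 1 .// 2 .^ (2:6)                  # dt = 1//4 down to 1//64
sim = test_convergence(dts, prob, RKV76IIa())
plot(sim)  # log-log plot of error vs dt via the convergence plot recipe
```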

@KeshavVenkatesh
Author


> Show plots of the convergence tests; just plot(sim) using the plot recipe.

[plot: convergence_rkv76iia]

Could you please let me know if this is the plot you are looking for?

@KeshavVenkatesh
Author

I will also take a look at the RKV76IIA algorithm's implementation to find out why the order of convergence is failing at smaller values of dt. If you have any suggestions as to what I should look for specifically, please let me know.

@oscardssmith
Member

Your plot is using too small a dt for Float64 precision. You can see that it levels off at y ≈ 1e-14, which is roughly floating point error. You should probably switch to using BigFloat for the test.

@ChrisRackauckas
Member

Yes, it's just saturating because that is as accurate as Float64 can get. Doing this test in BigFloat would let it keep going.
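The floor in the plot is consistent with Float64 roundoff: eps(Float64) ≈ 2.2e-16, so errors around 1e-14 are only a few dozen ulps away from the best Float64 can do. A quick illustrative check:

```julia
# The spacing of Float64 values near 1.0; no dt refinement can push the
# global error meaningfully below a small multiple of this scale.
floor_scale = eps(Float64)             # 2.220446049250313e-16
err_at_dt64 = 1.1170170151155611e-14   # error at dt = 1//64 from the posted log
err_at_dt64 / floor_scale              # only ~50 ulps of 1.0: pure roundoff territory
```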

@KeshavVenkatesh
Author

I tried to convert the errors obtained from the convergence test to BigFloat, but that did not help with the convergence. This is the output for the code:

Out-of-place solution at t=1: 0.36787944117147253
Expected value: 0.36787944117144233

Testing order 7:
dt = 1//4, error = 4.087773236764922e-12
dt = 1//8, error = 3.798205619585314e-14
dt = 1//16, error = 6.546007605532576e-14
dt = 1//32, error = 2.1828311187557115e-14
dt = 1//64, error = 1.1170170151155611e-14
Order between dt=1//4 and dt=1//8: 6.749853347562831925005443312238533940673969705963336027706295162463012652846084
Order between dt=1//8 and dt=1//16: -0.7852972692785772597867943127951613256220467034287479973075014653126841550382554
Order between dt=1//16 and dt=1//32: 1.584414762352023612766097130426965010072873882633238027110177664600350876210189
Order between dt=1//32 and dt=1//64: 0.9665493541991219980382499685074483282026106254799138626449239186346385050363408
Test Summary:              | Total   Time
RKV76IIa Convergence Tests |     0   2.8s

I also looked at some tests defined for OrdinaryDiffEqLowStorageRK and tried to write similar tests for the RKV76IIA method, but the code raises this MethodError:

RKV76IIa: Error During Test at OrdinaryDiffEq.jl/lib/OrdinaryDiffEqVerner/test/convergence_tests.jl:64
Got exception outside of a @test
MethodError: no method matching OrdinaryDiffEqVerner.RKV76IIaTableau(::Type{BigFloat}, ::Type{BigFloat})

As far as I can see, it is not easy to convert the RKV76IIA algorithm to support BigFloat without major refactoring.

Could you please provide specific guidance on how I can resolve the BigFloat issue and get the tests working? I am also attaching the code I have written as reference (convergence_test.txt).

Comment on lines +3978 to +3985
function RKV76IIaTableau(T::Type{<:CompiledFloats}, T2::Type{<:CompiledFloats})
    # Nodes
    c1 = convert(T2, 0)
    c2 = convert(T2, 0.069)
    c3 = convert(T2, 0.118)
    c4 = convert(T2, 0.177)
    c5 = convert(T2, 0.501)
    c6 = convert(T2, 0.7737799115305331003715765296862487670813)
Member

This dispatch will only get the numbers in Float64 precision, which is why it cuts off. For a BigFloat dispatch, write it like this:

https://github.com/SciML/OrdinaryDiffEq.jl/blob/master/lib/OrdinaryDiffEqVerner/src/verner_tableaus.jl#L892-L902

https://github.com/SciML/OrdinaryDiffEq.jl/blob/master/lib/OrdinaryDiffEqVerner/src/verner_tableaus.jl#L458-L465

The reason is that machine floats are only 64-bit, so 0.01710144927536231884057971014492753623188 will automatically truncate to the Float64 value. You need to input it as a string and tell it to parse the string into a BigInt/BigFloat in order to force it to keep the full precision. That requires a separate dispatch because it's slower, so this one is hit for CompiledFloats (i.e., Float32 and Float64) while the other covers arbitrary other number types, like BigFloat.
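The dispatch pattern being described can be sketched as two methods: a fast path for machine floats using plain literals, and a generic fallback that parses string literals at full precision. This is a simplified illustration with a hypothetical name (example_tableau); the real pattern is in verner_tableaus.jl at the links above:

```julia
# Fast path for machine floats: plain literals are already stored at
# (at most) Float64 precision, so convert is cheap and lossless here.
function example_tableau(::Type{T2}) where {T2 <: Union{Float32, Float64}}
    c6 = convert(T2, 0.7737799115305331003715765296862487670813)
    return c6
end

# Generic fallback for BigFloat and other number types: parse the
# full-precision string directly into T2, so the coefficient never
# passes through a truncating Float64 literal.
function example_tableau(::Type{T2}) where {T2}
    c6 = parse(T2, "0.7737799115305331003715765296862487670813")
    return c6
end

example_tableau(Float64)   # hits the fast path (Union constraint is more specific)
example_tableau(BigFloat)  # hits the fallback and keeps all the digits
```

Because the Union-constrained method is more specific, Float32/Float64 select the fast path automatically while every other type falls through to the string-parsing method.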

