MPI-based program runs slower and slower as execution proceeds #4737
Unanswered
spring-png asked this question in Q&A
Replies: 1 comment 2 replies
Maybe there is just not enough work. The high-water mark of memory used by Fab is only 22 MB.
I ran an AMReX program on a supercomputing cluster and found that it gets slower and slower as the run proceeds. The total number of grids doesn't vary much, but the time needed to advance the same number of steps keeps growing. Below is part of the profiler and MemProfile output.

[profiler screenshots not included]

I think the reason may be that most of the time is spent on communication.
But when I ran the program on an older supercomputing cluster, this issue did not occur. On the old system, MPI initialized with thread support level 1 (perhaps because the MPI version is older), while on the current system it initializes with thread support level 0. So I wonder: does the MPI thread support level matter? And how can I enable thread support level 1 on the current system? The MPI on the current system is oneAPI/2022.1. Is there any other suggestion for dealing with this issue?
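For reference, the thread support level is normally chosen at `MPI_Init_thread` time: level 0 corresponds to `MPI_THREAD_SINGLE` and level 1 to `MPI_THREAD_FUNNELED`, and the library reports back what it can actually provide. A minimal standalone sketch (not from this thread; it assumes a working MPI compiler wrapper such as `mpicc`) that requests level 1 and prints what was granted:

```c
/* Sketch: request MPI_THREAD_FUNNELED (level 1) and report what the
 * library actually provides.  In an AMReX application the equivalent
 * request is typically made inside amrex::Initialize(), so this is
 * only a way to probe the MPI installation itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Use MPI_Init_thread instead of MPI_Init to ask for a thread level. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("requested level 1 (MPI_THREAD_FUNNELED), provided = %d\n",
               provided);

    MPI_Finalize();
    return 0;
}
```

If `provided` comes back as 0 (`MPI_THREAD_SINGLE`) even when level 1 is requested, the limitation is in the MPI build or environment rather than in the application code.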