
Move cleanup of previous jobs into execute function #732


Merged
jan-janssen merged 17 commits into main from move_cleanup on Jul 16, 2025

Conversation

jan-janssen
Member

@jan-janssen jan-janssen commented Jul 14, 2025

Summary by CodeRabbit

  • New Features

    • Improved handling and synchronization of job data files, ensuring files are always present and up-to-date before job submission.
  • Bug Fixes

    • Enhanced file existence checks to prevent errors when accessing missing files.
  • Tests

    • Updated tests to reflect new file handling requirements and improved import structure for better reliability.

jan-janssen and others added 4 commits July 13, 2025 12:04
When `shutdown(wait=True)` is called - the default - the executor waits until all future objects have completed. In contrast, when `shutdown(wait=False)` is called, the future objects are cancelled on the queuing system.
@jan-janssen jan-janssen marked this pull request as draft July 14, 2025 21:12
Contributor

coderabbitai bot commented Jul 14, 2025

Walkthrough

This update refactors several task execution functions to require explicit file_name and data_dict parameters, standardizing how HDF5 files are created, updated, and synchronized across subprocess and queue-based execution. Related test cases and import statements are updated to match the new interfaces and data handling logic.

Changes

File(s) Change Summary
executorlib/task_scheduler/file/hdf.py Added file existence check to get_queue_id before attempting to open with h5py.
executorlib/task_scheduler/file/queue_spawner.py Refactored execute_with_pysqa to require file_name and data_dict, improved HDF5 file lifecycle and queue ID synchronization.
executorlib/task_scheduler/file/shared.py Removed explicit HDF5 file dumping; now passes data_dict directly to execution function.
executorlib/task_scheduler/file/subprocess_spawner.py Refactored execute_in_subprocess to require file_name and data_dict, updated file handling, parameter order, and docstring.
tests/test_cache_fileexecutor_mpi.py Moved execute_in_subprocess import inside try block, removed unused imports.
tests/test_cache_fileexecutor_serial.py Updated imports, refactored test to use new execute_in_subprocess signature, and explicit file/data handling in tests.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant SubprocessSpawner
    participant HDF5File

    Caller->>SubprocessSpawner: execute_in_subprocess(command, file_name, data_dict, ...)
    SubprocessSpawner->>HDF5File: Remove existing file (if any)
    SubprocessSpawner->>HDF5File: Dump data_dict to file_name
    SubprocessSpawner->>SubprocessSpawner: Spawn subprocess with command
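The subprocess flow in the diagram above can be sketched as a minimal stand-in. This is not the executorlib implementation: `dump` writes with pickle instead of HDF5 so the sketch runs without h5py, and the signature is simplified.

```python
import os
import pickle
import subprocess


def dump(file_name: str, data_dict: dict) -> None:
    # Stand-in for executorlib's HDF5 dump helper; pickle is used
    # here only so the sketch runs without h5py.
    with open(file_name, "wb") as f:
        pickle.dump(data_dict, f)


def execute_in_subprocess(command, file_name, data_dict,
                          cache_directory=None, resource_dict=None):
    # Cleanup of the previous job now happens inside the execute
    # function: remove any stale file, then dump the fresh data_dict,
    # then spawn the subprocess.
    if os.path.exists(file_name):
        os.remove(file_name)
    dump(file_name=file_name, data_dict=data_dict)
    cwd = (resource_dict or {}).get("cwd") or cache_directory
    return subprocess.Popen(command, cwd=cwd)
```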
sequenceDiagram
    participant Caller
    participant QueueSpawner
    participant HDF5File

    Caller->>QueueSpawner: execute_with_pysqa(command, file_name, data_dict, ...)
    QueueSpawner->>HDF5File: Get queue_id from file (if exists)
    alt queue_id missing or status unknown
        QueueSpawner->>HDF5File: Remove and re-dump file with data_dict
    else file does not exist
        QueueSpawner->>HDF5File: Dump data_dict to file_name
    end
    QueueSpawner->>QueueSpawner: Submit job to queue
    QueueSpawner->>HDF5File: Dump queue_id to file
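The queue-side file lifecycle in the diagram reduces to a small decision function. The helpers are injected as parameters here so the sketch is self-contained; in executorlib they correspond to `get_queue_id`, the queuing-system status lookup, and `dump`. This is a sketch of the branching logic, not the real `execute_with_pysqa`.

```python
import os


def sync_job_file(file_name, data_dict, get_queue_id, get_status, dump):
    # Return the queue_id of a job still known to the queuing system,
    # or None after the file has been removed and re-dumped with
    # fresh data (in which case the caller submits a new job and
    # dumps its queue_id back into the file).
    queue_id = get_queue_id(file_name) if os.path.exists(file_name) else None
    if queue_id is not None and get_status(queue_id) is not None:
        return queue_id  # job is known, reuse the existing file
    if os.path.exists(file_name):
        os.remove(file_name)  # stale file from a previous job
    dump(file_name, data_dict)
    return None
```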

Suggested reviewers

  • liamhuber

Poem

A hop, a skip, a file to write,
Now data and names are held just right.
With subprocesses spawned and queues in sync,
HDF5s dance in a tidy blink.
Tests leap along, no error in sight—
This bunny’s code is running light!
🐇✨

@jan-janssen
Member Author

@liamhuber This stacked pull request should address the challenges in #721 . I am going to open a separate pull request for the deleting of jobs with wrong resource specifications.


codecov bot commented Jul 14, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.92%. Comparing base (ba2c702) to head (150f8f0).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #732      +/-   ##
==========================================
+ Coverage   96.90%   96.92%   +0.01%     
==========================================
  Files          30       30              
  Lines        1357     1364       +7     
==========================================
+ Hits         1315     1322       +7     
  Misses         42       42              


@liamhuber
Member

@liamhuber This stacked pull request should address the challenges in #721 . I am going to open a separate pull request for the deleting of jobs with wrong resource specifications.

Yep, all working!

@jan-janssen jan-janssen marked this pull request as ready for review July 16, 2025 14:09
Base automatically changed from shutdown_case to main July 16, 2025 14:09
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ba2c702 and 150f8f0.

📒 Files selected for processing (6)
  • executorlib/task_scheduler/file/hdf.py (1 hunks)
  • executorlib/task_scheduler/file/queue_spawner.py (3 hunks)
  • executorlib/task_scheduler/file/shared.py (2 hunks)
  • executorlib/task_scheduler/file/subprocess_spawner.py (1 hunks)
  • tests/test_cache_fileexecutor_mpi.py (1 hunks)
  • tests/test_cache_fileexecutor_serial.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (5)
tests/test_cache_fileexecutor_mpi.py (1)
executorlib/task_scheduler/file/subprocess_spawner.py (1)
  • execute_in_subprocess (10-59)
tests/test_cache_fileexecutor_serial.py (1)
executorlib/task_scheduler/file/subprocess_spawner.py (2)
  • execute_in_subprocess (10-59)
  • terminate_subprocess (62-71)
executorlib/task_scheduler/file/shared.py (1)
executorlib/task_scheduler/file/hdf.py (1)
  • get_output (58-74)
executorlib/task_scheduler/file/queue_spawner.py (2)
executorlib/task_scheduler/file/hdf.py (2)
  • get_queue_id (94-108)
  • dump (11-28)
executorlib/standalone/inputcheck.py (1)
  • check_file_exists (196-203)
executorlib/task_scheduler/file/subprocess_spawner.py (2)
executorlib/standalone/inputcheck.py (1)
  • check_file_exists (196-203)
executorlib/task_scheduler/file/hdf.py (1)
  • dump (11-28)
🔇 Additional comments (13)
tests/test_cache_fileexecutor_mpi.py (1)

8-8: LGTM! Import consolidation improves error handling.

Moving the execute_in_subprocess import into the try block ensures it's only imported when the package is available, consistent with the conditional import pattern used throughout the test file.

executorlib/task_scheduler/file/hdf.py (1)

104-104: LGTM! Defensive programming improvement.

Adding the file existence check prevents h5py from attempting to open non-existent files, making the function more robust and aligning with the improved file lifecycle management across the codebase.
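The pattern is the standard existence guard before opening a file. A runnable sketch of that guard (a plain text file stands in for the HDF5 file here so the sketch runs without h5py; the real function opens the file with `h5py.File`):

```python
import os


def get_queue_id(file_name):
    # Return None instead of raising when the file is missing -- the
    # defensive check described above. Reading an integer from a text
    # file is a stand-in for reading the queue_id from HDF5.
    if file_name is None or not os.path.isfile(file_name):
        return None
    with open(file_name) as f:
        content = f.read().strip()
    return int(content) if content else None
```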

executorlib/task_scheduler/file/shared.py (2)

12-12: LGTM! Refactoring moves file handling responsibility.

Removing the dump import is consistent with the refactoring that moves file handling (cleanup of previous jobs) into the individual execute functions, as stated in the PR objectives.


159-159: LGTM! Direct data_dict passing aligns with refactoring.

Passing data_dict directly to the execute function is part of the refactoring to move cleanup responsibilities into the execute function, improving the separation of concerns.

tests/test_cache_fileexecutor_serial.py (3)

9-12: LGTM! Import consolidation improves error handling.

Moving the subprocess spawner imports into the try block ensures they're only imported when the package is available, consistent with the conditional import pattern.


203-206: LGTM! Test setup aligns with new function signature.

Creating the test file and directory structure properly sets up the test environment for the updated execute_in_subprocess function that now requires explicit file_name and data_dict parameters.


208-220: LGTM! Function calls updated for new signature.

The updated function calls correctly pass the required file_name and data_dict parameters, ensuring the tests work with the refactored execute_in_subprocess function interface.

executorlib/task_scheduler/file/queue_spawner.py (2)

13-14: LGTM! Parameter changes align with refactoring objectives.

Making file_name and data_dict mandatory parameters enables the function to handle cleanup of previous jobs internally, which aligns with the PR objectives.


47-55: LGTM! Improved file lifecycle management.

The new file handling logic effectively moves cleanup of previous jobs into the execute function. The logic correctly:

  1. Retrieves the queue_id from existing files
  2. Conditionally removes and re-dumps files when the queue_id is missing or job status is unknown
  3. Ensures the file is always present and synchronized before job submission

This addresses the PR objective of moving cleanup responsibilities into the execute function.

executorlib/task_scheduler/file/subprocess_spawner.py (4)

7-7: LGTM: Import addition is appropriate.

The import of dump function is correctly added and necessary for the new file handling logic.


26-28: LGTM: Docstring properly updated.

The docstring has been correctly updated to reflect the new parameters and their descriptions.


42-44: Excellent: Cleanup logic successfully moved into execute function.

The new file handling logic properly implements the PR objective of moving cleanup into the execute function:

  1. Removes existing file to ensure clean state
  2. Dumps new data using the standardized dump function
  3. Maintains consistency with the existing check_file_exists validation

This approach ensures data consistency and prevents issues with stale or corrupted files from previous executions.


58-58: LGTM: Improved fallback directory handling.

The cwd assignment now properly uses cache_directory as a fallback when resource_dict.get("cwd") returns None, which is more robust than the previous implementation.
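The fallback described above is the usual `or` idiom; a hypothetical helper illustrating it:

```python
def resolve_cwd(resource_dict, cache_directory):
    # resource_dict.get("cwd") returns None both when the key is
    # absent and when it is explicitly set to None; `or` falls back
    # to cache_directory in either case.
    return resource_dict.get("cwd") or cache_directory
```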

Comment on lines +12 to +14
file_name: str,
data_dict: dict,
cache_directory: Optional[str] = None,
Contributor


💡 Verification agent

🧩 Analysis chain

Verify breaking change impact across the codebase.

The function signature has been significantly changed:

  • file_name is now required (was optional)
  • data_dict is a new required parameter
  • cache_directory parameter position has changed

This is a breaking change that will require updates to all callers of this function.


🏁 Script executed:

#!/bin/bash
# Description: Find all calls to execute_in_subprocess to verify they match the new signature
# Expected: All calls should pass file_name and data_dict as required parameters

rg -A 5 "execute_in_subprocess\(" --type py

Length of output: 1393


Update execute_in_subprocess callers to match the new signature

The recent signature change in executorlib/task_scheduler/file/subprocess_spawner.py breaks existing calls. Please update all callers—particularly in your tests—to use the new parameters:

  • In tests/test_cache_fileexecutor_serial.py:
    • Rename config_directory= to cache_directory=
    • Remove or refactor the unsupported backend= argument
    • Ensure you’re passing the three required parameters:
      execute_in_subprocess(
          command=[…],
          file_name=file_name,
          data_dict={…},
          cache_directory="test",       # renamed from config_directory
          task_dependent_lst=None       # if needed
      )
  • Search for any other calls to execute_in_subprocess( and apply the same updates.
🤖 Prompt for AI Agents
In executorlib/task_scheduler/file/subprocess_spawner.py around lines 12 to 14,
the function execute_in_subprocess has updated parameters, renaming
config_directory to cache_directory and removing the backend argument. Update
all calls to execute_in_subprocess, especially in
tests/test_cache_fileexecutor_serial.py, by renaming config_directory= to
cache_directory=, removing the backend= argument, and ensuring the three
required parameters (command, file_name, data_dict) plus optional
cache_directory and task_dependent_lst are passed correctly. Also, search the
codebase for other calls to execute_in_subprocess and apply these same changes.

@jan-janssen jan-janssen merged commit 09ae798 into main Jul 16, 2025
31 checks passed
@jan-janssen jan-janssen deleted the move_cleanup branch July 16, 2025 14:28