
Conversation

fclairamb (Owner)

…e data transfers

Before this change, the server would immediately close control connections when IdleTimeout was reached, regardless of whether a data transfer was active. This caused issues with large or slow file transfers that could take longer than the configured idle timeout.

Now the server checks if there is an active data transfer (isTransferOpen) before closing the connection. If a transfer is active, it extends the deadline instead of closing the connection, allowing the transfer to complete.
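
Roughly, the new check looks like this (an illustrative sketch, not the exact diff; the mutex name and surrounding structure are assumptions):

// In the idle-timeout handling path of the control connection:
c.transferMtx.Lock() // hypothetical mutex guarding isTransferOpen
hasActiveTransfer := c.isTransferOpen
c.transferMtx.Unlock()

if hasActiveTransfer {
    // A transfer is running: push the control-connection deadline forward instead of closing it.
    extended := time.Now().Add(time.Duration(c.server.settings.IdleTimeout) * time.Second)
    if errSet := c.conn.SetDeadline(extended); errSet != nil {
        c.logger.Error("Could not extend deadline during transfer", "err", errSet)
    }
} else {
    // No active transfer: keep the original idle-timeout behaviour and close the connection.
}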

Fixes #430

🤖 Generated with Claude Code

claude bot and others added 2 commits September 1, 2025 22:51

codecov bot commented Sep 14, 2025

Codecov Report

❌ Patch coverage is 33.33333% with 10 lines in your changes missing coverage. Please review.
✅ Project coverage is 86.40%. Comparing base (8ca225b) to head (25b233a).

Files with missing lines   Patch %   Lines
client_handler.go          33.33%    9 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #577      +/-   ##
==========================================
- Coverage   86.74%   86.40%   -0.35%     
==========================================
  Files          12       12              
  Lines        2369     2383      +14     
==========================================
+ Hits         2055     2059       +4     
- Misses        238      247       +9     
- Partials       76       77       +1     

☔ View full report in Codecov by Sentry.

claude bot (Contributor) commented Sep 14, 2025

Code Review for PR #577

Summary

This PR addresses issue #430 by preventing the control connection from being closed due to idle timeout when there's an active data transfer in progress. The solution is straightforward and correctly identifies active transfers before deciding whether to close the connection.

✅ Strengths

  • Correct fix: The solution properly addresses the core issue where large/slow transfers could be interrupted by idle timeout
  • Thread-safe: Uses proper mutex locking when checking isTransferOpen state
  • Minimal change: Focused fix that doesn't introduce unnecessary complexity
  • Follows existing patterns: Uses the same deadline extension pattern already present in the codebase

🔍 Issues and Recommendations

1. Potential Infinite Loop Risk 🚨

The current implementation extends the deadline by IdleTimeout seconds each time. If a transfer stalls (e.g., network issues, client hanging), this could keep the connection alive indefinitely.

Recommendation: Consider adding a maximum transfer timeout or counter to prevent infinite extensions:

// Add a field (e.g. transferTimeoutExtensions) to track consecutive extensions
if hasActiveTransfer {
    if c.transferTimeoutExtensions >= maxTransferExtensions {
        // Too many extensions: log and fall through to the normal timeout handling
        c.logger.Warn("Transfer exceeded the maximum number of idle-timeout extensions")
    } else {
        c.transferTimeoutExtensions++
        // Extend the deadline by one more IdleTimeout period
        extendedDeadline := time.Now().Add(time.Duration(c.server.settings.IdleTimeout) * time.Second)
        if errSet := c.conn.SetDeadline(extendedDeadline); errSet != nil {
            c.logger.Error("Could not extend deadline during transfer", "err", errSet)
        }
        return
    }
}

2. Missing Test Coverage ⚠️

There are no tests for this new behavior. The existing TestIdleTimeout doesn't cover the scenario where a transfer is active.

Recommendation: Add a test case that:

  • Initiates a slow transfer (using throttled connection or large file)
  • Verifies the control connection remains open beyond IdleTimeout
  • Confirms connection closes after transfer completes

3. Deadline Calculation Complexity 🔧

The deadline calculation is unnecessarily complex:

time.Duration(time.Second.Nanoseconds() * int64(c.server.settings.IdleTimeout))

Simplification:

time.Duration(c.server.settings.IdleTimeout) * time.Second

4. Reset Transfer Extension Counter 📝

When a transfer completes successfully, any transfer extension counter should be reset for the next transfer (in the closeTransfer method at client_handler.go:339).
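
For illustration, assuming the hypothetical transferTimeoutExtensions counter from the snippet above:

// In closeTransfer (client_handler.go), once the transfer is finished:
c.transferTimeoutExtensions = 0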

🔒 Security Considerations

  • The fix doesn't introduce new attack vectors
  • However, consider adding rate limiting or maximum transfer duration to prevent resource exhaustion attacks where clients intentionally keep slow transfers running

📊 Performance Impact

  • Minimal overhead: One mutex lock/unlock per timeout event during active transfers
  • No impact on normal operations without active transfers

📚 Documentation

Consider updating documentation to clarify:

  • How IdleTimeout behaves during active transfers
  • Any maximum transfer duration limits (if implemented)

✅ Overall Assessment

The PR correctly fixes the reported issue with a clean, focused change. However, it needs:

  1. Test coverage for the new behavior
  2. Protection against infinite deadline extensions
  3. Minor code simplification for readability

Once these items are addressed, this will be a solid fix for the idle timeout issue during active transfers.

claude bot (Contributor) commented Sep 14, 2025

Code Review for PR #577: Fix IdleTimeout during active transfers

Thank you for this fix! I've reviewed the changes and have the following feedback:

Strengths

  • Correctly addresses the issue: The fix properly checks for active data transfers before closing the control connection on idle timeout
  • Thread-safe implementation: Proper use of mutex lock/unlock to access isTransferOpen
  • Maintains existing behavior: When no transfer is active, the original timeout handling is preserved
  • Good defensive coding: Includes error handling for SetDeadline failures

🔍 Areas for Improvement

1. Deadline Extension Logic

The current implementation has a potential issue with the deadline extension calculation:

extendedDeadline := time.Now().Add(time.Duration(time.Second.Nanoseconds() * int64(c.server.settings.IdleTimeout)))

Issues:

  • The conversion is unnecessarily complex. Since IdleTimeout is already in seconds, you can simplify this to:
    extendedDeadline := time.Now().Add(time.Duration(c.server.settings.IdleTimeout) * time.Second)
  • More importantly, this creates a recursive timeout loop. If the transfer takes longer than another idle timeout period, the same timeout will trigger again, extending indefinitely. Consider using a longer extension period or a maximum number of extensions.

2. Missing Test Coverage

The PR doesn't include tests for this new behavior. Consider adding a test case that:

  • Sets up a server with a short idle timeout
  • Initiates a large file transfer that takes longer than the idle timeout
  • Verifies the connection stays open during the transfer
  • Confirms the connection closes after the transfer completes and idle timeout expires

Example test structure:

func TestIdleTimeoutWithActiveTransfer(t *testing.T) {
    // Create server with 2-second idle timeout
    // Start a file transfer
    // Sleep for 3 seconds (> idle timeout)
    // Verify connection is still alive
    // Complete transfer
    // Wait for idle timeout
    // Verify connection closes
}

3. Potential Race Condition

While the mutex protects isTransferOpen, there's still a window between checking the flag and extending the deadline where the transfer could complete. This is likely acceptable given the use case, but worth noting.
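
One way to narrow that window is to hold the lock across both the check and the deadline extension (a sketch; the mutex field name is an assumption, and it only helps if the transfer-close path takes the same lock):

c.transferMtx.Lock() // hypothetical mutex guarding isTransferOpen
if c.isTransferOpen {
    extended := time.Now().Add(time.Duration(c.server.settings.IdleTimeout) * time.Second)
    if errSet := c.conn.SetDeadline(extended); errSet != nil {
        c.logger.Error("Could not extend deadline during transfer", "err", errSet)
    }
    c.transferMtx.Unlock()
    return
}
c.transferMtx.Unlock()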

4. Consider Alternative Approaches

Instead of repeatedly extending on timeout, consider:

  • Disabling idle timeout entirely while a transfer is active (set to 0 or a very large value when the transfer starts, restore it when the transfer ends); see the sketch after this list
  • Using a separate goroutine to monitor transfer status and manage deadlines
  • Implementing a transfer-aware timeout that resets on data activity rather than control channel activity
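
A rough sketch of the first option (the hook locations are assumptions; SetDeadline with a zero time.Time disables the deadline on a net.Conn):

// When a transfer starts: drop the control-connection deadline entirely.
if errSet := c.conn.SetDeadline(time.Time{}); errSet != nil {
    c.logger.Error("Could not clear deadline for transfer", "err", errSet)
}

// When the transfer ends (e.g. in closeTransfer): restore the idle timeout.
if c.server.settings.IdleTimeout > 0 {
    deadline := time.Now().Add(time.Duration(c.server.settings.IdleTimeout) * time.Second)
    if errSet := c.conn.SetDeadline(deadline); errSet != nil {
        c.logger.Error("Could not restore idle deadline", "err", errSet)
    }
}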

📝 Suggested Implementation

Here's a more robust approach:

if hasActiveTransfer {
    // Extend by a reasonable period (e.g., 5 minutes or 10x idle timeout)
    extensionPeriod := time.Duration(c.server.settings.IdleTimeout) * time.Second * 10
    if extensionPeriod < 5*time.Minute {
        extensionPeriod = 5 * time.Minute
    }
    
    if errSet := c.conn.SetDeadline(time.Now().Add(extensionPeriod)); errSet != nil {
        c.logger.Error("Could not extend deadline during transfer", "err", errSet)
        // Consider whether to continue or close the connection here
    }
    
    if c.debug {
        c.logger.Debug("Extended deadline due to active transfer", 
            "extension", extensionPeriod)
    }
    return
}

🚀 Overall Assessment

This is a good fix for an important issue. With the suggested improvements to the deadline calculation and added test coverage, this will be a solid addition to the codebase. The fix aligns well with the library's architecture and maintains backward compatibility.
