
Conversation

@sfc-gh-joshi (Contributor) commented Sep 25, 2025

What do these changes do?

After this PR, AutoSwitchBackend has two separate behaviors for functions with multiple query compiler arguments:

  1. If the method called is a registered pre-operation switch point, ALL active backends are considered as valid candidates for switching.
  2. If the method is NOT a pre-operation switch point, then arguments may only be moved to backends found among the original query compilers.

For example, when calling pd.concat([A1, A2]), we previously would only consider switching to the backends of the query compilers of arguments A1 and A2. Now, after calling register_function_for_pre_op_switch(class_name=None, backend="Backend_A", method="concat"), Modin may move the arguments to some third backend Backend_B.
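For illustration, a minimal sketch of the registration described above (the import path, backend names, and DataFrames are assumptions for this example, not part of the PR):

import modin.pandas as pd
from modin.core.storage_formats.pandas.query_compiler_caster import (
    register_function_for_pre_op_switch,
)

# Register concat as a pre-operation switch point when called from Backend_A.
# With this registration in place, pd.concat may move its arguments to ANY
# active backend (e.g. Backend_B), not just the backends of A1 and A2.
register_function_for_pre_op_switch(
    class_name=None, backend="Backend_A", method="concat"
)

A1 = pd.DataFrame({"x": [1, 2]})  # hypothetical frame living on Backend_A
A2 = pd.DataFrame({"x": [3, 4]})  # hypothetical frame living on Backend_A
result = pd.concat([A1, A2])      # may now land on a third backend such as Backend_B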

  • first commit message and PR title follow format outlined here

    NOTE: If you edit the PR title to match this format, you need to add another commit (even if it's empty) or amend your last commit for the CI job that checks the PR title to pick up the new PR title.

  • passes flake8 modin/ asv_bench/benchmarks scripts/doc_checker.py
  • passes black --check modin/ asv_bench/benchmarks scripts/doc_checker.py
  • signed commit with git commit -s
  • Resolves FEAT: Consider automatic switching for backends other than those of the provided arguments #7675
  • tests added and passing
  • module layout described at docs/development/architecture.rst is up-to-date

…n provided arguments

Signed-off-by: Jonathan Shi <jonathan.shi@snowflake.com>
@sfc-gh-mvashishtha (Contributor) left a comment

I have some minor comments. Also, I have some questions:

  1. Have you run any benchmarks with this change? Does it solve the pathological merge case that motivated it?
  2. Is it possible or likely that we switch to an unexpected and/or suboptimal backend during multi-dataset operations? E.g., say we switch to Ray for a Snowflake-pandas merge? Is there a good way to test whether this happens in practice?

@sfc-gh-joshi (Contributor Author)

Have you run any benchmarks with this change? Does it solve the pathological merge case that motivated it?

I ran the pathological merge as a sanity check and it went from 140s -> 6s (it takes around 2s with hybrid disabled, but there's some thrashing because an unnecessary switch occurs after read_snowflake). I'm working on doing some more testing.

Is it possible or likely that we switch to an unexpected and/or suboptimal backend during multi-dataset operations? E.g., say we switch to Ray for a Snowflake-pandas merge? Is there a good way to test whether this happens in practice?

Right now we don't allow automatic switching to Ray, as it's omitted from all_switchable_backends. It's possible this may happen in the future, but I think these cases are pretty well-simulated by test_compiler_caster.py. I just haven't gotten around to updating those tests yet since I was waiting for #7676 first.

@sfc-gh-joshi marked this pull request as ready for review September 26, 2025 19:06
@disable_logging
def max_cost(self) -> int:
@classmethod
def max_cost(cls) -> int:
Contributor

So max_cost may be implemented using some innate knowledge of the object, so technically it cannot be a class method. In a practical sense I don't think we ever set this to anything other than COST_IMPOSSIBLE, so I'm wondering if the function should be removed in favor of just using COST_IMPOSSIBLE as the static return value.

  • max_cost is the maximum cost allowed by this query compiler across all data movements. This method
    sets a normalized upper bound for situations where multiple data frames from different engines all
    need to move to the same engine. The value returned by this method can exceed
    QCCoercionCost.COST_IMPOSSIBLE

Contributor Author

We set it to COST_IMPOSSIBLE * 1e10 in the Snowflake QC. I turned it into a class method so that backends without arguments present in the operation (for example, when calculating the cost for the Cloud backend in pd.concat([df_pico1, df_pico2])) can report a max cost.

Should I leave the original max_cost method as-is, and introduce an alternate static/classmethod max_cost for these cases?
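
As a rough, self-contained sketch of the classmethod shape being discussed (the COST_IMPOSSIBLE value and the class names here are placeholders for illustration, not Modin's actual definitions):

class QCCoercionCost:
    # Placeholder constant standing in for Modin's real QCCoercionCost.COST_IMPOSSIBLE.
    COST_IMPOSSIBLE = 1000


class BaseQC:
    @classmethod
    def max_cost(cls) -> int:
        # Default cap on total data-movement cost; callable without an instance,
        # so backends with no arguments in the operation can still report it.
        return QCCoercionCost.COST_IMPOSSIBLE


class SnowflakeQC(BaseQC):
    @classmethod
    def max_cost(cls) -> int:
        # The thread above mentions Snowflake overriding this to COST_IMPOSSIBLE * 1e10.
        return int(QCCoercionCost.COST_IMPOSSIBLE * 1e10)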

from modin.logging.metrics import emit_metric


def all_switchable_backends() -> list[str]:
Contributor

I don't understand why this cannot be part of envvars

Contributor Author

We could make this configurable, but I just refactored this out from a function in the QC caster:

for backend in Backend.get_active_backends():
    if backend in ("Ray", "Unidist", "Dask"):
        # Disable automatically switching to these engines for now, because
        # 1) _get_prepared_factory_for_backend() currently calls
        #    _initialize_engine(), which starts up the ray/dask/unidist
        #    processes
        # 2) we can't decide to switch to unidist in the middle of execution.
        continue
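
For context, a rough sketch of how that loop might read once extracted into all_switchable_backends (Backend.get_active_backends() comes from the snippet above; the import path and the list-comprehension form are assumptions, not the actual implementation):

from modin.config import Backend

def all_switchable_backends() -> list[str]:
    # Backends Modin may switch to automatically. Ray, Unidist, and Dask are
    # excluded for now because preparing their factories would start the engine
    # processes, and we can't switch to unidist in the middle of execution.
    return [
        backend
        for backend in Backend.get_active_backends()
        if backend not in ("Ray", "Unidist", "Dask")
    ]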

"""
Calculate which query compiler we should cast to.
Switching calculation is performed as follows:
Contributor

+1 to the documentation here.

).io_cls.query_compiler_cls,
)
if preop_switch:
# Initialize backend data for any backends not found among query compiler arguments.
Contributor

Can we create a new environment variable for this behavior, maybe defaulting to on, so we can perf test with it on and off?

Contributor Author

preop_switch is set for individual functions registered by register_function_for_pre_op_switch when BackendCalculator is initialized. What would you want the environment variable to do?

@sfc-gh-joshi (Contributor Author) commented Sep 26, 2025

@sfc-gh-mvashishtha After doing some more testing I realized it made more sense to only switch to other backends if we explicitly registered a function as a switch point, as is the case for 0/1-argument functions. I've updated the code to reflect this.
