Conversation

tschneidereit
Member

I noticed that Wasmtime uses almost no cache for its GitHub Actions workflows. Let's see how well adding a cache for target plus various cargo dirs works.
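For reference, a minimal sketch of the kind of cache step this experiments with, assuming `actions/cache@v4` with a key derived from the runner OS and `Cargo.lock`; the exact paths and key in the PR may differ:

```yaml
# Illustrative only; the paths and key in the actual workflow may differ.
- name: Cache cargo dirs and target
  uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/registry/index
      ~/.cargo/registry/cache
      ~/.cargo/git/db
      target
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      ${{ runner.os }}-cargo-
```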

tschneidereit requested a review from a team as a code owner September 17, 2025 18:46
tschneidereit requested review from cfallin and removed request for a team September 17, 2025 18:46
@alexcrichton
Member

Personally I don't think we're a good fit for caching here, so I'm not sure about this. Some concerns I would have are:

  • Right now this is keyed on os + lock file, but what exactly is built/cached in a target dir depends on the build itself. That means that, as-is, there may not be much sharing between builders and whichever one wins the race to populate the cache (especially with features in play changing deep in deps); see the sketch after this list. If we were to fully shard the cache based on build, then I'd fear we would blow the limits quickly: we have >100 CI entries, and with a 10G limit for the whole repo that gives ~100M per cache entry, while a build of Wasmtime is much larger than that.
  • We don't really need to cache Cargo registry lookups any more AFAIK as it's such a small portion of the build itself.
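To make the keying trade-off concrete, here is a hedged sketch of the two options; both steps are illustrative and not taken from this PR:

```yaml
# (a) One shared entry per OS/lockfile: whichever job populates it first "wins",
#     and other jobs may find little in it that matches their feature sets.
- uses: actions/cache@v4
  with:
    path: target
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

# (b) Sharded per job: better hit quality, but with >100 CI entries and a 10G
#     repo-wide limit that leaves roughly 100M per entry.
- uses: actions/cache@v4
  with:
    path: target
    key: ${{ runner.os }}-${{ github.job }}-cargo-${{ hashFiles('**/Cargo.lock') }}
```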

The most plausible route I know of for caching would be something like sccache-level granularity rather than target-dir granularity, but I also haven't tested whether that would help much. Our slowest builds are mostly emulation of s390x/riscv64 and Windows. Emulation makes sense, and Windows is unfortunately just really slow.

@tschneidereit
Member Author

Yeah, it's possible that this will end up not being worth it: my first attempt shaved about 45 seconds off the build, and that might vary by which job wins the race to create the cache entry.

I just force-pushed a new version using https://github.com/Swatinem/rust-cache. We'll see if that does any better at all. If not, the only other thing I can think of is to specifically enable caching for the longest-running jobs and nothing else. Or we'll just close this at that point.
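A minimal sketch of what using that action can look like; the `shared-key` value here is a hypothetical example, not necessarily what the PR uses:

```yaml
# Minimal rust-cache usage; the shared-key value is a hypothetical example.
- uses: Swatinem/rust-cache@v2
  with:
    # Jobs that use the same shared-key prefix can restore each other's caches.
    shared-key: "wasmtime-ci"
```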

@tschneidereit
Member Author

I re-triggered the build with the cache seeded, but I'm already pretty certain that this won't help as-is: the job-name part of the cache keys for the `test-*` jobs is abbreviated in a way that makes exactly the longest-running jobs race to create the cache entry :/

The test jobs are the long pole, and the cache key needs to be derived from the test matrix to work properly.
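One way to do that, sketched under the assumption that the test jobs come from a matrix with a `name` field (the field name is hypothetical), is to fold the matrix entry into the cache key so each configuration gets its own entry:

```yaml
# Hypothetical: assumes the test matrix exposes a `name` field per entry.
- uses: Swatinem/rust-cache@v2
  with:
    # rust-cache adds this alongside its automatic job-based key.
    key: ${{ matrix.name }}
```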
…e on `main`

This brings disk usage for the cache down to about 340MB per platform, which should mean that we're not risking eviction of other, longer-term stable caches.
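If the `main`-only behavior in the commit title refers to when the cache is saved, a sketch of that with rust-cache's `save-if` input could look like the following; this is an assumption about the mechanism, not a quote of the PR:

```yaml
# Sketch: only write cache entries from pushes to main; PR builds just restore.
- uses: Swatinem/rust-cache@v2
  with:
    save-if: ${{ github.ref == 'refs/heads/main' }}
```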
@tschneidereit
Member Author

tschneidereit commented Sep 17, 2025

With a switch to only using the cache for the longest-running test job, I think this might work? It seems to reduce CI runtime by about 60-80 seconds, or ~10% or so, which doesn't seem too bad.

The last iteration also only caches dependencies. With that, a cache entry for Linux is about 400MB, which should hopefully mean that even a full test run stays well under 10GB, and hence shouldn't risk evicting the preexisting, much more stable long-term caches.

[Edit: 400MB, not 340MB. I think that doesn't change the calculus, though.]
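A sketch of limiting the cache step to a single matrix entry; the `matrix.name` field and the job name shown are hypothetical:

```yaml
# Hypothetical: only enable caching for one matrix entry; the name is illustrative.
- uses: Swatinem/rust-cache@v2
  if: matrix.name == 'test-linux-x86_64'
```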

@alexcrichton
Member

Could you run `prtest:full` for this too? I'm not actually sure how many `wasmtime-cli` builds we do, but it would be good to confirm that the total size stays well under 10G. Only caching `wasmtime-cli` seems reasonable, since that's mostly our slowest test run, with the one outlier being the C API tests on Windows.

Another possible alternative, though, is to configure sccache for the `wasmtime-cli` test job too. I think that would yield effectively the same speedups with better cache eviction behavior, because the cache entries are much more fine-grained.
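For comparison, a rough sketch of what per-job sccache could look like using `mozilla-actions/sccache-action` with the GitHub Actions cache backend; this is an assumption about the setup, not something from this PR:

```yaml
# Sketch of sccache backed by the GitHub Actions cache; not this PR's config.
- name: Set up sccache
  uses: mozilla-actions/sccache-action@v0.0.4
- name: Run wasmtime-cli tests
  run: cargo test
  env:
    RUSTC_WRAPPER: sccache
    SCCACHE_GHA_ENABLED: "true"
```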
