Support Actions concurrency syntax #32751
Conversation
Move the main logic of `generateTaskContext` and `findTaskNeeds` to the `services` layer. This is a part of #32751, since we need the git context and `needs` to parse the concurrency expressions.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Good catch. There was a bug when checking workflow-level concurrency groups. Now it has been fixed.
I just tried this, with workflow

```yaml
on: push
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        no:
          - 1
          - 2
          - 3
          - 4
          - 5
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Run tests
        run: |
          echo ${{ github.run_id }} > "/tmp/test.txt"
      - uses: christopherhx/gitea-upload-artifact@v4
        with:
          path: test.txt
          name: test
```

I expected the same result as on the initial push, but got the same order with the opposite behavior. Maybe you could provide me with some insights, or I am going to debug this myself and share my findings. What works well so far
services/actions/task.go (Outdated)

```
@@ -53,6 +53,10 @@ func PickTask(ctx context.Context, runner *actions_model.ActionRunner) (*runnerv
	return nil
}

if err := CancelJobsByJobConcurrency(ctx, t.Job); err != nil {
```
This code implicitly cancels a newer (re)run while a task is being picked by an older (re)run, and seems to be the cause of the behavior I described just before.
I would expect that only scheduling a job updates concurrency, since I am still using GitHub Actions side by side.
"Rerun all jobs" didn't cancel the older run, which is already an error state.
> "Rerun all jobs" didn't cancel the older run, which is already an error state.

Fixed, by respecting workflow concurrency in "Rerun all jobs".
```go
// cancel previous jobs in the same concurrency group
previousJobs, err := db.Find[ActionRunJob](ctx, &FindRunJobOptions{
	RepoID:           job.RepoID,
	ConcurrencyGroup: job.ConcurrencyGroup,
	Statuses:         []Status{StatusRunning, StatusWaiting, StatusBlocked},
})
if err != nil {
	return cancelledJobs, fmt.Errorf("find previous jobs: %w", err)
}
previousJobs = slices.DeleteFunc(previousJobs, func(j *ActionRunJob) bool { return j.ID == job.ID })
cjs, err := CancelJobs(ctx, previousJobs)
if err != nil {
	return cancelledJobs, fmt.Errorf("cancel previous jobs: %w", err)
}
cancelledJobs = append(cancelledJobs, cjs...)
```
I would expect that a job-level concurrency key also cancels an equal workflow-level concurrency key. Based on my experience, workflow and job concurrency use the same pool of concurrency keys (deadlock handling required), and I would expect `CancelPreviousJobsByRunConcurrency` not to be called here (EDIT 2: I don't know how this PR currently works; it might break everything if not replaced).

E.g.

```yaml
jobs:
  _:
    concurrency: test
```

cancels a workflow with

```yaml
concurrency: test
```

(see the sketch after this comment). This is in response to the unexpected behavior I have seen.

EDIT: If we add an optional job / string parameter to `CancelPreviousJobsByRunConcurrency`, so that the concurrency group of the job can be provided to replace the run concurrency key, that would fulfill my expectation, I guess.
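To make that scenario concrete, here is a minimal sketch of the two workflows involved, assuming GitHub Actions semantics where job-level and workflow-level groups share one pool; the file names and steps are illustrative:

```yaml
# .gitea/workflows/a.yml — workflow-level group "test"
on: push
concurrency: test
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "workflow-level group"
---
# .gitea/workflows/b.yml — job-level group "test"
on: push
jobs:
  _:
    concurrency: test
    runs-on: ubuntu-latest
    steps:
      - run: echo "job-level group"
```

Under that expectation, queuing the job in `b.yml` would cancel a waiting or running instance of `a.yml`, because both resolve to the same group key `test`.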
Any news on this? Is this still planned for 1.25.0?
@Zettat123 I would try to fix my findings here and resolve the conflicts, but I have no access to this pull request (maintainer with read-only access). Allowing workflows to queue up again is very nice, since I am not in favor of automatic concurrency of length 1. For my own CI feedback I need to create a new (draft) PR including all commits from here. Then there are multiple options.
Thank you for all your work on this so far :).
If you'd like to work with others' PRs often, maybe you can ask @go-gitea/technical-oversight-committee or some core members like @lunny to grant you merger access; then you can edit PRs directly.
I would expect this to happen fewer than 5 times per year. I actually only applied as maintainer in order to work with others' PRs. I will ask formally on Discord.
New defect found, needs fixing in gitea/act. Workflow:

```yaml
on: push
jobs:
  test:
    concurrency: test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
```

The short syntax is not parseable. Do we have this bug in act_runner as well? Do we have this bug in Gitea already?
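As a hedged illustration (not verified against gitea/act or act_runner), the expanded map form of the same job-level group, which GitHub Actions also accepts, might sidestep the short-syntax parsing issue:

```yaml
on: push
jobs:
  test:
    concurrency:
      group: test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
```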
One thing that I've noticed while running it is that manually cancelling a job that is blocking another job will not start the blocked job. You have to manually cancel & re-run the blocked job for it to be picked up. So, if e.g. you have a job with

```yaml
concurrency:
  group: main
  cancel-in-progress: false
```

and you push twice to …
Thanks for pointing this out. I'll convert this to a draft until I am certain it works correctly. I will add this scenario to the test cases and try to fix it.
Sounds good. Feel free to ping / DM me on Discord if you want me to test something, or need something else. I've been running this for a while on my instance already.
@ChristopherHX Sorry for the late reply, as I was busy with other work last month. Thank you for the improvements and bug fixes to this PR. Please feel free to update the code. I will also check if there are still issues with this PR.
Hi, no problem. Once we both agree this is finished and can be merged, I'll approve. Good that you already looked at the cancellation problem. I am still thinking about whether my code removal is correct, in the sense of not only letting the existing tests pass. I had another behavior difference.

Would we want to add a custom boolean to implement both behaviors?

```yaml
concurrency:
  cancel-pending: true # default true for GitHub Actions compat, false for current implementation
```

Or would we want to keep the behavior difference and document it?
```go
if err != nil {
	ctx.ServerError("GetRunByIndex", err)
}
if err := actions_service.EmitJobsIfReady(run.ID); err != nil {
```
Queue `EmitJobsIfReady` from the Actions notifier's "job completed" and "workflow completed" events? (The event source used by the workflow_run and workflow_job webhooks / action trigger.)
I mean, what if the job is cancelled by being abandoned or for other reasons? That would be less explicit code that has the same weakness towards all ways of cancellation.
Fix #24769
Fix #32662
Fix #33260

Depends on https://gitea.com/gitea/act/pulls/124

This PR removes the auto-cancellation feature added by #25716. Users need to manually add `concurrency` to workflows to control concurrent workflows or jobs (see the example below).
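For reference, a minimal sketch of the `concurrency` syntax in question, following the GitHub Actions documentation; the job name, group names, and steps are illustrative:

```yaml
on: push

# Workflow-level concurrency: at most one run per workflow and branch;
# a newer push cancels the run that is still in progress.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  deploy:
    # Job-level concurrency: queue runs instead of cancelling the one in progress.
    concurrency:
      group: deploy-${{ github.ref }}
      cancel-in-progress: false
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy"
```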