From 88c4f035eabd0e9e449b8ed86a8c090d5e4918b6 Mon Sep 17 00:00:00 2001 From: Renovate Bot Date: Thu, 26 Mar 2026 14:52:07 +0100 Subject: [PATCH 01/82] Update module golang.org/x/image to v0.38.0 [SECURITY] (v15.0/forgejo) (#11825) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) | |---|---|---|---| | [golang.org/x/image](https://pkg.go.dev/golang.org/x/image) | [`v0.37.0` → `v0.38.0`](https://cs.opensource.google/go/x/image/+/refs/tags/v0.37.0...refs/tags/v0.38.0) | ![age](https://developer.mend.io/api/mc/badges/age/go/golang.org%2fx%2fimage/v0.38.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/go/golang.org%2fx%2fimage/v0.37.0/v0.38.0?slim=true) | --- ### OOM from malicious IFD offset in golang.org/x/image/tiff [CVE-2026-33809](https://nvd.nist.gov/vuln/detail/CVE-2026-33809) / [GO-2026-4815](https://pkg.go.dev/vuln/GO-2026-4815)
More information #### Details A maliciously crafted TIFF file can cause image decoding to attempt to allocate up to 4 GiB of memory, causing either excessive resource consumption or an out-of-memory error. #### Severity Unknown #### References - [https://go.dev/cl/757660](https://go.dev/cl/757660) - [https://go.dev/issue/78267](https://go.dev/issue/78267) This data is provided by [OSV](https://osv.dev/vulnerability/GO-2026-4815) and the [Go Vulnerability Database](https://github.com/golang/vulndb) ([CC-BY 4.0](https://github.com/golang/vulndb#license)).
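As an aside, callers can also defend against this class of bug from their side. The following is a minimal, illustrative sketch (not part of this update, and not how the upstream fix works — the fix bounds allocation inside the decoder itself): read only the image header first and refuse to run the full decoder when the declared dimensions exceed a pixel budget. It assumes the relevant format package (e.g. `golang.org/x/image/tiff`) has been imported for its registration side effect; `maxPixels` is an arbitrary example budget.

```go
package main

import (
	"errors"
	"fmt"
	"image"
	"io"
)

// maxPixels caps the total pixel count we are willing to decode; the value is
// an illustrative budget, not something the advisory or the fix prescribes.
const maxPixels = 64 << 20

// withinPixelBudget reports whether a width/height pair fits the budget,
// using division instead of multiplication to avoid integer overflow from
// hostile header values.
func withinPixelBudget(width, height, budget int) bool {
	if width <= 0 || height <= 0 {
		return false
	}
	return width <= budget/height
}

// decodeBounded reads only the image header via DecodeConfig, rejects images
// whose declared dimensions exceed the budget, and only then runs the full
// decoder. Requires a Seeker so the stream can be rewound after the header
// probe.
func decodeBounded(r io.ReadSeeker) (image.Image, error) {
	cfg, _, err := image.DecodeConfig(r)
	if err != nil {
		return nil, err
	}
	if !withinPixelBudget(cfg.Width, cfg.Height, maxPixels) {
		return nil, errors.New("image dimensions exceed decode budget")
	}
	if _, err := r.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}
	img, _, err := image.Decode(r)
	return img, err
}

func main() {
	fmt.Println(withinPixelBudget(4096, 4096, maxPixels)) // true
	fmt.Println(withinPixelBudget(1<<31-1, 2, maxPixels)) // false
}
```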
--- ### Configuration 📅 **Schedule**: Branch creation - "" (UTC), Automerge - Between 12:00 AM and 03:59 AM ( * 0-3 * * * ) (UTC). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate). Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11825 Reviewed-by: Michael Kriese Co-authored-by: Renovate Bot Co-committed-by: Renovate Bot --- go.mod | 2 +- go.sum | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod index b7c217def8..68381a87a7 100644 --- a/go.mod +++ b/go.mod @@ -103,7 +103,7 @@ require ( go.uber.org/mock v0.6.0 go.yaml.in/yaml/v3 v3.0.4 golang.org/x/crypto v0.49.0 - golang.org/x/image v0.37.0 + golang.org/x/image v0.38.0 golang.org/x/net v0.52.0 golang.org/x/oauth2 v0.36.0 golang.org/x/sync v0.20.0 diff --git a/go.sum b/go.sum index 33570a84d1..410025ae97 100644 --- a/go.sum +++ b/go.sum @@ -738,8 +738,8 @@ golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2 golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= -golang.org/x/image v0.37.0 h1:ZiRjArKI8GwxZOoEtUfhrBtaCN+4b/7709dlT6SSnQA= -golang.org/x/image v0.37.0/go.mod h1:/3f6vaXC+6CEanU4KJxbcUZyEePbyKbaLoDOe4ehFYY= +golang.org/x/image v0.38.0 h1:5l+q+Y9JDC7mBOMjo4/aPhMDcxEptsX+Tt3GgRQRPuE= +golang.org/x/image v0.38.0/go.mod h1:/3f6vaXC+6CEanU4KJxbcUZyEePbyKbaLoDOe4ehFYY= golang.org/x/lint 
v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= From 0245410cdcdefd94dc0030a827f0e5e6a6e3bd16 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Thu, 26 Mar 2026 19:19:09 +0100 Subject: [PATCH 02/82] [v15.0/forgejo] fix(api): package name in route not properly unescaped (#11829) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11822 This pull request fixes the issue described in https://codeberg.org/forgejo/forgejo/issues/11427. The API handlers for linking and unlinking packages used the still-escaped path parameters to look up packages, which caused errors for npm packages, whose names contain characters like `@` and `/`. ## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes (can be removed for JavaScript changes) - I added test coverage for Go changes... - [ ] in their respective `*_test.go` for unit tests. - [x] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [x] `make pr-go` before pushing ### Tests for JavaScript changes (can be removed for Go changes) - I added test coverage for JavaScript changes...
- [ ] in `tests/e2e/*.test.e2e.js` if it requires interactions with a live Forgejo server (see also the [developer guide for JavaScript testing](https://codeberg.org/forgejo/forgejo/src/branch/forgejo/tests/e2e/README.md#end-to-end-tests)). ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. *The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. 
Co-authored-by: Guangxiong Lin Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11829 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- routers/api/v1/packages/package.go | 6 +++--- tests/integration/api_packages_npm_test.go | 19 +++++++++++++++++++ 2 files changed, 22 insertions(+), 3 deletions(-) diff --git a/routers/api/v1/packages/package.go b/routers/api/v1/packages/package.go index 03057c4feb..1f7bf89027 100644 --- a/routers/api/v1/packages/package.go +++ b/routers/api/v1/packages/package.go @@ -249,7 +249,7 @@ func LinkPackage(ctx *context.APIContext) { // "404": // "$ref": "#/responses/notFound" - pkg, err := packages.GetPackageByName(ctx, ctx.ContextUser.ID, packages.Type(ctx.PathParamRaw("type")), ctx.PathParamRaw("name")) + pkg, err := packages.GetPackageByName(ctx, ctx.ContextUser.ID, packages.Type(ctx.Params("type")), ctx.Params("name")) if err != nil { if errors.Is(err, util.ErrNotExist) { ctx.Error(http.StatusNotFound, "GetPackageByName", err) @@ -259,7 +259,7 @@ func LinkPackage(ctx *context.APIContext) { return } - repo, err := repo_model.GetRepositoryByName(ctx, ctx.ContextUser.ID, ctx.PathParamRaw("repo_name")) + repo, err := repo_model.GetRepositoryByName(ctx, ctx.ContextUser.ID, ctx.Params("repo_name")) if err != nil { if errors.Is(err, util.ErrNotExist) { ctx.Error(http.StatusNotFound, "GetRepositoryByName", err) @@ -311,7 +311,7 @@ func UnlinkPackage(ctx *context.APIContext) { // "404": // "$ref": "#/responses/notFound" - pkg, err := packages.GetPackageByName(ctx, ctx.ContextUser.ID, packages.Type(ctx.PathParamRaw("type")), ctx.PathParamRaw("name")) + pkg, err := packages.GetPackageByName(ctx, ctx.ContextUser.ID, packages.Type(ctx.Params("type")), ctx.Params("name")) if err != nil { if errors.Is(err, util.ErrNotExist) { ctx.Error(http.StatusNotFound, "GetPackageByName", err) diff --git a/tests/integration/api_packages_npm_test.go 
b/tests/integration/api_packages_npm_test.go index 38c7ee54c0..78c683c4e4 100644 --- a/tests/integration/api_packages_npm_test.go +++ b/tests/integration/api_packages_npm_test.go @@ -14,6 +14,7 @@ import ( auth_model "forgejo.org/models/auth" "forgejo.org/models/db" "forgejo.org/models/packages" + unit_model "forgejo.org/models/unit" "forgejo.org/models/unittest" user_model "forgejo.org/models/user" "forgejo.org/modules/packages/npm" @@ -28,6 +29,8 @@ func TestPackageNpm(t *testing.T) { defer tests.PrepareTestEnv(t)() user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) + session := loginUser(t, user.Name) + tokenWritePackage := getTokenForLoggedInUser(t, session, auth_model.AccessTokenScopeWritePackage) token := fmt.Sprintf("Bearer %s", getTokenForLoggedInUser(t, loginUser(t, user.Name), auth_model.AccessTokenScopeWritePackage)) @@ -117,6 +120,22 @@ func TestPackageNpm(t *testing.T) { assert.Equal(t, int64(192), pb.Size) }) + t.Run("RepositoryLink", func(t *testing.T) { + defer tests.PrintCurrentTest(t)() + + // create a repository + repo, _, f := tests.CreateDeclarativeRepo(t, user, "", []unit_model.Type{unit_model.TypeCode}, nil, nil) + defer f() + + // link to public repository + req := NewRequest(t, "POST", fmt.Sprintf("/api/v1/packages/%s/npm/%s/-/link/%s", user.Name, url.QueryEscape(packageName), repo.Name)).AddTokenAuth(tokenWritePackage) + MakeRequest(t, req, http.StatusCreated) + + // remove link + req = NewRequest(t, "POST", fmt.Sprintf("/api/v1/packages/%s/npm/%s/-/unlink", user.Name, url.QueryEscape(packageName))).AddTokenAuth(tokenWritePackage) + MakeRequest(t, req, http.StatusNoContent) + }) + t.Run("UploadExists", func(t *testing.T) { defer tests.PrintCurrentTest(t)() From 4230ba6ed0fa701b45630e4b6d62373558aa8795 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Thu, 26 Mar 2026 19:20:19 +0100 Subject: [PATCH 03/82] [v15.0/forgejo] fix: out of synchronization error after interrupting a PR merge by user-agent disconnect 
(#11830) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11821 If the HTTP request to `/user/repo/pulls/N/merge` is cancelled by the user agent, don't stop work once we've passed validation and started to merge the PR. Go will automatically cancel the context if the user-agent disconnects, but that can leave Forgejo in an inconsistent state -- the `git` command can be cancelled at an arbitrary location, the `branch` database table update may not be completed, timers may not be stopped, cross-references may not be populated, etc. The added test `TestMergeHTTPRequestCancellation` stress-tests the fix by cancelling merge requests and then verifying that the repository state recorded in the database is consistent with the actual state of the git repository. I've verified that this test fails if the fix is removed -- the in-database commit IDs and commit messages don't match the repository for all PRs. This is a problem that likely affects other Forgejo endpoints. For example, even the PR merge API would be impacted. But this will be one of the most common real-world places for it to occur, so my thought is we'll see how well this fix works and what (if any) side-effects it has. We can apply a similar pattern in other areas if they are identified as problems. ## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes - I added test coverage for Go changes... - [ ] in their respective `*_test.go` for unit tests.
- [x] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [ ] `make pr-go` before pushing ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11830 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- routers/web/repo/pull.go | 20 ++- tests/integration/pull_merge_test.go | 228 ++++++++++++++++++++------- 2 files changed, 182 insertions(+), 66 deletions(-) diff --git a/routers/web/repo/pull.go b/routers/web/repo/pull.go index 83e3a3cef3..c4b8583e2d 100644 --- a/routers/web/repo/pull.go +++ b/routers/web/repo/pull.go @@ -7,6 +7,7 @@ package repo import ( + stdCtx "context" "errors" "fmt" "html" @@ -16,6 +17,7 @@ import ( "path" "strconv" "strings" + "time" "forgejo.org/models" actions_model "forgejo.org/models/actions" @@ -1420,7 +1422,15 @@ func MergePullRequest(ctx *context.Context) { } } - if err := pull_service.Merge(ctx, pr, ctx.Doer, ctx.Repo.GitRepo, repo_model.MergeStyle(form.Do), form.HeadCommitID, message, false); err != nil { + // If the HTTP request is cancelled by the user agent, don't stop work. We've started a merge and need to finish all + // the related work. All usage of `ctx` throughout the rest of this function should be only for error handling or UI + // interactions, and all effective work should use `workCtx` instead. 
+ workCtx, cancelWorkCtx := stdCtx.WithTimeout( + stdCtx.WithoutCancel(ctx), + time.Duration(setting.Git.Timeout.Default)*time.Second) + defer cancelWorkCtx() + + if err := pull_service.Merge(workCtx, pr, ctx.Doer, ctx.Repo.GitRepo, repo_model.MergeStyle(form.Do), form.HeadCommitID, message, false); err != nil { if models.IsErrInvalidMergeStyle(err) { ctx.JSONError(ctx.Tr("repo.pulls.invalid_merge_option")) } else if models.IsErrMergeConflicts(err) { @@ -1491,7 +1501,7 @@ func MergePullRequest(ctx *context.Context) { } log.Trace("Pull request merged: %d", pr.ID) - if err := stopTimerIfAvailable(ctx, ctx.Doer, issue); err != nil { + if err := stopTimerIfAvailable(workCtx, ctx.Doer, issue); err != nil { ctx.ServerError("stopTimerIfAvailable", err) return } @@ -1504,7 +1514,7 @@ func MergePullRequest(ctx *context.Context) { headRepo = ctx.Repo.GitRepo } else { var err error - headRepo, err = gitrepo.OpenRepository(ctx, pr.HeadRepo) + headRepo, err = gitrepo.OpenRepository(workCtx, pr.HeadRepo) if err != nil { ctx.ServerError(fmt.Sprintf("OpenRepository[%s]", pr.HeadRepo.FullName()), err) return @@ -1512,7 +1522,7 @@ func MergePullRequest(ctx *context.Context) { defer headRepo.Close() } - if err := repo_service.DeleteBranchAfterMerge(ctx, ctx.Doer, pr, headRepo); err != nil { + if err := repo_service.DeleteBranchAfterMerge(workCtx, ctx.Doer, pr, headRepo); err != nil { switch { case errors.Is(err, repo_service.ErrBranchIsDefault): ctx.Flash.Error(ctx.Tr("repo.pulls.delete_after_merge.head_branch.is_default")) @@ -1557,7 +1567,7 @@ func CancelAutoMergePullRequest(ctx *context.Context) { ctx.Redirect(issue.HTMLURL()) } -func stopTimerIfAvailable(ctx *context.Context, user *user_model.User, issue *issues_model.Issue) error { +func stopTimerIfAvailable(ctx stdCtx.Context, user *user_model.User, issue *issues_model.Issue) error { if issues_model.StopwatchExists(ctx, user.ID, issue.ID) { if err := issues_model.CreateOrStopIssueStopwatch(ctx, user, issue); err != nil { return 
err diff --git a/tests/integration/pull_merge_test.go b/tests/integration/pull_merge_test.go index b12ced9073..a987603ce7 100644 --- a/tests/integration/pull_merge_test.go +++ b/tests/integration/pull_merge_test.go @@ -5,6 +5,7 @@ package integration import ( "bytes" + "context" "encoding/base64" "fmt" "math/rand/v2" @@ -1197,6 +1198,68 @@ func shuffleSlice(slice []int64) { }) } +func bulkCreatePRs(t *testing.T, prCount int, repo *repo_model.Repository, token string, labelIDs []int64, milestoneID int64) { + var createAllPRs sync.WaitGroup + var errorListMutex sync.Mutex + var errorList []any + for i := range prCount { + createAllPRs.Add(1) + go func(i int) { + defer createAllPRs.Done() + defer func() { + if r := recover(); r != nil { + errorListMutex.Lock() + defer errorListMutex.Unlock() + errorList = append(errorList, r) + } + }() + + // We're going to create two branches; a new target branch where the PR will merge *into*, and a new + // head branch where the PR will merge *from*. This test is about finding internal concurrency + // conflicts within Forgejo that prevent merges, and, merging simultaneously into the *same branch* + // would have natural conflicts that aren't what we're attempting to test. 
+ targetBranchName := fmt.Sprintf("target-branch-%d", i) + req := NewRequestWithJSON(t, + "POST", + fmt.Sprintf("/api/v1/repos/%s/%s/branches", repo.OwnerName, repo.Name), + &api.CreateBranchRepoOption{ + OldRefName: "main", + BranchName: targetBranchName, + }).AddTokenAuth(token) + MakeRequest(t, req, http.StatusCreated) + + // Create the head branch that we'll be trying to merge from, with a file change: + headBranchName := fmt.Sprintf("update-%d", i) + req = NewRequestWithJSON(t, + "POST", + fmt.Sprintf("/api/v1/repos/%s/%s/contents/README-%d.md", repo.OwnerName, repo.Name, i), + &api.CreateFileOptions{ + FileOptions: api.FileOptions{ + NewBranchName: headBranchName, + }, + ContentBase64: base64.StdEncoding.EncodeToString(fmt.Appendf(nil, "Hello, world %d!\n", i)), + }).AddTokenAuth(token) + MakeRequest(t, req, http.StatusCreated) + + // Create a PR for the branch + myLabelIDs := slices.Clone(labelIDs) + shuffleSlice(myLabelIDs) // use a random ordering for labels as it may cause deadlocks when their count of assigned issues is updated + req = NewRequestWithJSON(t, http.MethodPost, + fmt.Sprintf("/api/v1/repos/%s/%s/pulls", repo.OwnerName, repo.Name), + &api.CreatePullRequestOption{ + Head: headBranchName, + Base: targetBranchName, + Title: fmt.Sprintf("create PR from branch %s", headBranchName), + Labels: myLabelIDs, + Milestone: milestoneID, + }).AddTokenAuth(token) + MakeRequest(t, req, http.StatusCreated) + }(i) + } + createAllPRs.Wait() + assert.Empty(t, errorList) +} + func TestMergeConcurrency(t *testing.T) { onApplicationRun(t, func(t *testing.T, giteaURL *url.URL) { user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) @@ -1241,67 +1304,7 @@ func TestMergeConcurrency(t *testing.T) { var apiMilestone api.Milestone DecodeJSON(t, resp, &apiMilestone) - { - var createAllPRs sync.WaitGroup - var errorListMutex sync.Mutex - var errorList []any - for i := range concurrentCount { - createAllPRs.Add(1) - go func(i int) { - defer 
createAllPRs.Done() - defer func() { - if r := recover(); r != nil { - errorListMutex.Lock() - defer errorListMutex.Unlock() - errorList = append(errorList, r) - } - }() - - // We're going to create two branches; a new target branch where the PR will merge *into*, and a new - // head branch where the PR will merge *from*. This test is about finding internal concurrency - // conflicts within Forgejo that prevent merges, and, merging simultaneously into the *same branch* - // would have natural conflicts that aren't what we're attempting to test. - targetBranchName := fmt.Sprintf("target-branch-%d", i) - req := NewRequestWithJSON(t, - "POST", - fmt.Sprintf("/api/v1/repos/%s/%s/branches", repo.OwnerName, repo.Name), - &api.CreateBranchRepoOption{ - OldRefName: "main", - BranchName: targetBranchName, - }).AddTokenAuth(token) - MakeRequest(t, req, http.StatusCreated) - - // Create the head branch that we'll be trying to merge from, with a file change: - headBranchName := fmt.Sprintf("update-%d", i) - req = NewRequestWithJSON(t, - "POST", - fmt.Sprintf("/api/v1/repos/%s/%s/contents/README-%d.md", repo.OwnerName, repo.Name, i), - &api.CreateFileOptions{ - FileOptions: api.FileOptions{ - NewBranchName: headBranchName, - }, - ContentBase64: base64.StdEncoding.EncodeToString(fmt.Appendf(nil, "Hello, world %d!\n", i)), - }).AddTokenAuth(token) - MakeRequest(t, req, http.StatusCreated) - - // Create a PR for the branch - myLabelIDs := slices.Clone(labelIDs) - shuffleSlice(myLabelIDs) // use a random ordering for labels as it may cause deadlocks when their count of assigned issues is updated - req = NewRequestWithJSON(t, http.MethodPost, - fmt.Sprintf("/api/v1/repos/%s/%s/pulls", repo.OwnerName, repo.Name), - &api.CreatePullRequestOption{ - Head: headBranchName, - Base: targetBranchName, - Title: fmt.Sprintf("create PR from branch %s", headBranchName), - Labels: myLabelIDs, - Milestone: apiMilestone.ID, - }).AddTokenAuth(token) - MakeRequest(t, req, http.StatusCreated) - }(i) - 
} - createAllPRs.Wait() - assert.Empty(t, errorList) - } + bulkCreatePRs(t, concurrentCount, repo, token, labelIDs, apiMilestone.ID) // All our PRs are created; now let's try to merge them concurrently. @@ -1377,3 +1380,106 @@ func TestMergeConcurrency(t *testing.T) { } }) } + +func TestMergeHTTPRequestCancellation(t *testing.T) { + onApplicationRun(t, func(t *testing.T, giteaURL *url.URL) { + user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) + user2Session := loginUser(t, "user2") + token := getUserToken(t, "user2", auth_model.AccessTokenScopeWriteRepository, auth_model.AccessTokenScopeWriteIssue) + + // The purpose of this test is to interrupt the HTTP request to "/%s/%s/pulls/%d/merge" by cancelling the + // context at various times during the request, and ensuring that we don't get into any states where the request + // has partially succeeded but then been cancelled -- for example, wrote the merge to the repo, but didn't + // update Forgejo's database. To do this we're going to create a bunch of PRs, merge them, and cancel request + // during merge -- evenly distributing the cancellation times like this: + cancellationChecks := 5 // number of pull requests to create and attempt to merge + measuredMergeTime := 283 * time.Millisecond // time measured on a test system for one POST /%s/%s/pulls/%d/merge + cancellationDuration := measuredMergeTime / time.Duration(cancellationChecks) // cancel after (i+1) * cancellationDuration for each PR + + repo, _, deferrer := tests.CreateDeclarativeRepo(t, user2, "concurrency-test", nil, nil, nil) + defer deferrer() + + bulkCreatePRs(t, cancellationChecks, repo, token, nil, 0) + + // All our PRs are created; now let's try to merge them concurrently. 
This technically doesn't have to be + // concurrent, but `TestMergeConcurrency` already had all this logic for this test to copy, and it reduces the + // test runtime: + { + var mergeAllPRs sync.WaitGroup + var errorListMutex sync.Mutex + var errorList []any + for i := range cancellationChecks { + mergeAllPRs.Add(1) + go func(i int) { + defer mergeAllPRs.Done() + defer func() { + if r := recover(); r != nil { + errorListMutex.Lock() + defer errorListMutex.Unlock() + errorList = append(errorList, r) + } + }() + + targetBranchName := fmt.Sprintf("target-branch-%d", i) + headBranchName := fmt.Sprintf("update-%d", i) + pr := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ + HeadRepoID: repo.ID, + BaseRepoID: repo.ID, + HeadBranch: headBranchName, + BaseBranch: targetBranchName, + }) + + // Here's the major subject of this test: every merge request is fired with a different context + // timeout, causing the HTTP request to be interrupted in different places throughout the request. 
+ reqCtx, cancel := context.WithTimeout(t.Context(), time.Duration(i+1)*cancellationDuration) + defer cancel() + + req := NewRequestWithValues(t, "POST", + fmt.Sprintf("/%s/%s/pulls/%d/merge", repo.OwnerName, repo.Name, pr.Index), map[string]string{ + "do": "merge", + "delete_branch_after_merge": "on", + }) + req.Request = req.WithContext(reqCtx) + user2Session.MakeRequest(t, req, NoExpectedStatus) + }(i) + } + mergeAllPRs.Wait() + assert.Empty(t, errorList) + } + + // Verify that all PRs are in a consistent state of merged or not (not a corrupt state): + gitRepo, err := gitrepo.OpenRepository(t.Context(), repo) + require.NoError(t, err) + + for i := range cancellationChecks { + targetBranchName := fmt.Sprintf("target-branch-%d", i) + headBranchName := fmt.Sprintf("update-%d", i) + pr := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ + HeadRepoID: repo.ID, + BaseRepoID: repo.ID, + HeadBranch: headBranchName, + BaseBranch: targetBranchName, + }) + targetBranchInDB := unittest.AssertExistsAndLoadBean(t, &git_model.Branch{ + RepoID: repo.ID, + Name: targetBranchName, + }) + + targetBranchCommitIDInRepo, err := gitRepo.GetBranchCommitID(targetBranchName) + require.NoError(t, err) + assert.Equal(t, targetBranchCommitIDInRepo, targetBranchInDB.CommitID, "real commit ID match for %s", targetBranchName) + + targetBranchCommitInRepo, err := gitRepo.GetCommit(targetBranchCommitIDInRepo) + require.NoError(t, err) + assert.Equal(t, strings.TrimSpace(targetBranchCommitInRepo.CommitMessage), strings.TrimSpace(targetBranchInDB.CommitMessage)) + + if pr.HasMerged { + assert.Equal(t, + fmt.Sprintf("Merge pull request 'create PR from branch %[1]s' (#%[2]d) from %[1]s into %[3]s", headBranchName, pr.Index, targetBranchName), + targetBranchInDB.CommitMessage) + } else { + assert.Equal(t, "Initial commit", targetBranchInDB.CommitMessage) + } + } + }) +} From ebac8b38cb965727a7ca696f4338b4c7222d6837 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Fri, 27 Mar 
2026 01:36:18 +0100 Subject: [PATCH 04/82] [v15.0/forgejo] fix: duplicate key violates unique constraint in concurrent debian package uploads (#11833) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11776 Fixes #11438. Whenever a package mutation fails, detect whether the error is a `xorm.ErrUniqueConstraintViolation` (a "unique constraint violation" from the database). If it is, retry the entire transaction. ## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes (can be removed for JavaScript changes) - I added test coverage for Go changes... - [ ] in their respective `*_test.go` for unit tests. - [ ] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [ ] `make pr-go` before pushing ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change.
Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11833 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- .deadcode-out | 1 - go.mod | 2 +- go.sum | 4 +- models/db/context.go | 41 ++++++ models/db/context_test.go | 56 ++++++++ models/packages/package_version.go | 21 +++ services/packages/packages.go | 129 ++++++++++++------ tests/integration/api_packages_debian_test.go | 111 ++++++++++----- 8 files changed, 286 insertions(+), 79 deletions(-) diff --git a/.deadcode-out b/.deadcode-out index 6d2c35e374..45ad117ccd 100644 --- a/.deadcode-out +++ b/.deadcode-out @@ -19,7 +19,6 @@ forgejo.org/models/auth forgejo.org/models/db TruncateBeans TruncateBeansCascade - InTransaction DumpTables GetTableNames extendBeansForCascade diff --git a/go.mod b/go.mod index 68381a87a7..c7a29c014c 100644 --- a/go.mod +++ b/go.mod @@ -274,4 +274,4 @@ replace github.com/gliderlabs/ssh => code.forgejo.org/forgejo/ssh v0.0.0-2024121 replace git.sr.ht/~mariusor/go-xsd-duration => code.forgejo.org/forgejo/go-xsd-duration v0.0.0-20220703122237-02e73435a078 -replace xorm.io/xorm v1.3.9 => code.forgejo.org/xorm/xorm v1.3.9-forgejo.8 +replace xorm.io/xorm v1.3.9 => code.forgejo.org/xorm/xorm v1.3.9-forgejo.9 diff --git a/go.sum b/go.sum index 410025ae97..594dbb4315 100644 --- a/go.sum +++ b/go.sum @@ -42,8 +42,8 @@ code.forgejo.org/go-chi/captcha v1.0.2 h1:vyHDPXkpjDv8bLO9NqtWzZayzstD/WpJ5xwEkA code.forgejo.org/go-chi/captcha v1.0.2/go.mod h1:lxiPLcJ76UCZHoH31/Wbum4GUi2NgjfFZLrJkKv1lLE= code.forgejo.org/go-chi/session v1.0.3 h1:ByJ9c/UC0AU57hxiGl53TXh+NdBOBwK/bhZ9jyadEwE= code.forgejo.org/go-chi/session v1.0.3/go.mod h1:xzGtFrV/agCJoZCUhFDlqAr1he6BrAdqlaprKOB1W90= -code.forgejo.org/xorm/xorm v1.3.9-forgejo.8 h1:dsSKm2nus0NhHsqYxeuB3Gldk6TtlusD1CBGV6V1SS0= -code.forgejo.org/xorm/xorm v1.3.9-forgejo.8/go.mod h1:A7sFd3BFmRp20h6drSsCXgQRQdF8Vz8HuCSrzFS3m90= +code.forgejo.org/xorm/xorm v1.3.9-forgejo.9 
h1:hzEXDa53opdp5nrGG4F6y8HzFzrGXd5GIvFyUHcvGmI= +code.forgejo.org/xorm/xorm v1.3.9-forgejo.9/go.mod h1:A7sFd3BFmRp20h6drSsCXgQRQdF8Vz8HuCSrzFS3m90= code.gitea.io/sdk/gitea v0.21.0 h1:69n6oz6kEVHRo1+APQQyizkhrZrLsTLXey9142pfkD4= code.gitea.io/sdk/gitea v0.21.0/go.mod h1:tnBjVhuKJCn8ibdyyhvUyxrR1Ca2KHEoTWoukNhXQPA= code.superseriousbusiness.org/exif-terminator v0.11.1 h1:qnujLH4/Yk/CFtFMmtjozbdV6Ry5G3Q/E/mLlWm/gQI= diff --git a/models/db/context.go b/models/db/context.go index 9be158ccca..f098b40a32 100644 --- a/models/db/context.go +++ b/models/db/context.go @@ -6,6 +6,8 @@ package db import ( "context" "database/sql" + "errors" + "fmt" "xorm.io/builder" "xorm.io/xorm" @@ -416,3 +418,42 @@ func inTransaction(ctx context.Context) (*xorm.Session, bool) { return nil, false } } + +type RetryConfig struct { + ErrorIs []error + AttemptCount int +} + +// Execute the given function in a transaction. RetryConfig will retry the function on an error, if it matches the +// ErrorIs parameter, up to the total of AttemptCount number of tries. RetryTx cannot be invoked when already within a +// transaction and will return an error immediately. 
+func RetryTx(ctx context.Context, config RetryConfig, f func(ctx context.Context) error) error { + if InTransaction(ctx) { + return errors.New("unsupported operation: attempted to use RetryTx while already within a transaction") + } else if config.AttemptCount == 0 { + return errors.New("unsupported operation: attempted to use RetryTx with 0 attempts") + } + + var lastError error + for range config.AttemptCount { + err := WithTx(ctx, f) + if err == nil { + return nil + } + + foundMatch := false + for _, possibleError := range config.ErrorIs { + if errors.Is(err, possibleError) { + foundMatch = true + break + } + } + if !foundMatch { + return err + } + + lastError = err + } + + return fmt.Errorf("retry tx failed after %d attempts; last error: %w", config.AttemptCount, lastError) +} diff --git a/models/db/context_test.go b/models/db/context_test.go index 525ab54d99..60ef8462cc 100644 --- a/models/db/context_test.go +++ b/models/db/context_test.go @@ -220,3 +220,59 @@ func TestAfterTx(t *testing.T) { }) } } + +func TestRetryTx(t *testing.T) { + t.Run("success", func(t *testing.T) { + err := db.RetryTx(t.Context(), db.RetryConfig{AttemptCount: 1}, func(ctx context.Context) error { + assert.True(t, db.InTransaction(ctx)) + return nil + }) + require.NoError(t, err) + }) + + t.Run("fail constantly", func(t *testing.T) { + attemptCount := 0 + testError := errors.New("hello") + err := db.RetryTx(t.Context(), db.RetryConfig{ + AttemptCount: 2, + ErrorIs: []error{testError}, + }, func(ctx context.Context) error { + attemptCount++ + return testError + }) + require.ErrorIs(t, err, testError) + require.ErrorContains(t, err, "2 attempts") + assert.Equal(t, 2, attemptCount) + }) + + t.Run("fail w/ non retriable error", func(t *testing.T) { + attemptCount := 0 + testError := errors.New("hello") + err := db.RetryTx(t.Context(), db.RetryConfig{ + AttemptCount: 2, + ErrorIs: []error{}, + }, func(ctx context.Context) error { + attemptCount++ + return testError + }) + 
require.ErrorIs(t, err, testError) + assert.Equal(t, 1, attemptCount) + }) + + t.Run("succeed on retry", func(t *testing.T) { + attemptCount := 0 + testError := errors.New("hello") + err := db.RetryTx(t.Context(), db.RetryConfig{ + AttemptCount: 2, + ErrorIs: []error{testError}, + }, func(ctx context.Context) error { + attemptCount++ + if attemptCount == 1 { + return testError + } + return nil + }) + require.NoError(t, err) + assert.Equal(t, 2, attemptCount) + }) +} diff --git a/models/packages/package_version.go b/models/packages/package_version.go index 873f7bf9b6..545ad63eb4 100644 --- a/models/packages/package_version.go +++ b/models/packages/package_version.go @@ -5,11 +5,13 @@ package packages import ( "context" + "errors" "strconv" "strings" "forgejo.org/models/db" "forgejo.org/modules/optional" + "forgejo.org/modules/setting" "forgejo.org/modules/timeutil" "forgejo.org/modules/util" @@ -155,6 +157,25 @@ func HasVersionFileReferences(ctx context.Context, versionID int64) (bool, error }) } +func (pv *PackageVersion) LockForUpdate(ctx context.Context) error { + if !db.InTransaction(ctx) { + return errors.New("invalid state for PackageVersion.LockForUpdate: database is not in a transaction") + } else if setting.Database.Type.IsSQLite3() { + // SQLite both doesn't support "SELECT ... FOR UPDATE", and it's irrelevant for SQLite as the entire database is + // locked for write when a write transaction is open. + return nil + } + + pvfu := PackageVersion{} + has, err := db.GetEngine(ctx).ID(pv.ID).ForUpdate().Get(&pvfu) + if err != nil { + return err + } else if !has { + return ErrPackageNotExist + } + return nil +} + // SearchValue describes a value to search // If ExactMatch is true, the field must match the value otherwise a LIKE search is performed. 
type SearchValue struct { diff --git a/services/packages/packages.go b/services/packages/packages.go index 418ceab798..a1772cc1d9 100644 --- a/services/packages/packages.go +++ b/services/packages/packages.go @@ -23,6 +23,8 @@ import ( "forgejo.org/modules/setting" "forgejo.org/modules/storage" notify_service "forgejo.org/services/notify" + + "xorm.io/xorm" ) var ( @@ -76,38 +78,54 @@ func CreatePackageOrAddFileToExisting(ctx context.Context, pvci *PackageCreation } func createPackageAndAddFile(ctx context.Context, pvci *PackageCreationInfo, pfci *PackageFileCreationInfo, allowDuplicate bool) (*packages_model.PackageVersion, *packages_model.PackageFile, error) { - dbCtx, committer, err := db.TxContext(ctx) - if err != nil { - return nil, nil, err - } - defer committer.Close() + var pv *packages_model.PackageVersion + var pf *packages_model.PackageFile + var blobHash256Created optional.Option[string] + var createdPackage bool - pv, created, err := createPackageAndVersion(dbCtx, pvci, allowDuplicate) - if err != nil { - return nil, nil, err - } + // ErrUniqueConstraintViolation can occur when two concurrent updates occur to a package registry. Typically this + // occurs when a registry with an index of organization-level packages is modified (Debian, Alpine, Alt, Arch, RPM) + // and that index needs to be rebuilt -- even if two different packages are being updated, they can write the + // registry concurrently and that can cause ErrUniqueConstraintViolation errors from the database operations that + // "check if record exists, if not, create it". + // + // The simple approach of detecting the ErrUniqueConstraintViolation error inside the transaction and picking up the + // other write isn't possible for two reasons: (a) PostgreSQL can't continue a transaction with an error in it, a + // SAVEPOINT and ROLLBACK TO SAVEPOINT are required, and (b) xorm keeps internal state during a transaction that + // causes such a recovery from error to panic. 
So, we retry the entire modification transaction if + // ErrUniqueConstraintViolation is encountered. + err := db.RetryTx(ctx, db.RetryConfig{ + // A single retry is sufficient as any package index that was concurrently modified should now be present: + AttemptCount: 2, + ErrorIs: []error{xorm.ErrUniqueConstraintViolation}, + }, func(ctx context.Context) error { + var err error + var pb *packages_model.PackageBlob + var blobCreated bool - pf, pb, blobCreated, err := addFileToPackageVersion(dbCtx, pv, &pvci.PackageInfo, pfci) - removeBlob := false - defer func() { - if blobCreated && removeBlob { - contentStore := packages_module.NewContentStore() - if err := contentStore.Delete(packages_module.BlobHash256Key(pb.HashSHA256)); err != nil { + pv, createdPackage, err = createPackageAndVersion(ctx, pvci, allowDuplicate) + if err != nil { + return err + } + + pf, pb, blobCreated, err = addFileToPackageVersion(ctx, pv, &pvci.PackageInfo, pfci) + if blobCreated { + blobHash256Created = optional.Some(pb.HashSHA256) + } + return err + }) + if err != nil { + // If we have an error later in the process after writing a blob to the content store, make our best effort to + // remove the content -- it won't be referenced in the DB because the transaction would be rolled back. 
+ if has, hash := blobHash256Created.Get(); has { + if err := packages_module.NewContentStore().Delete(packages_module.BlobHash256Key(hash)); err != nil { log.Error("Error deleting package blob from content store: %v", err) } } - }() - if err != nil { - removeBlob = true return nil, nil, err } - if err := committer.Commit(); err != nil { - removeBlob = true - return nil, nil, err - } - - if created { + if createdPackage { pd, err := packages_model.GetPackageDescriptor(ctx, pv) if err != nil { return nil, nil, err @@ -213,29 +231,33 @@ func AddFileToPackageVersionInternal(ctx context.Context, pv *packages_model.Pac } func addFileToPackageWrapper(ctx context.Context, fn func(ctx context.Context) (*packages_model.PackageFile, *packages_model.PackageBlob, bool, error)) (*packages_model.PackageFile, error) { - ctx, committer, err := db.TxContext(ctx) - if err != nil { - return nil, err - } - defer committer.Close() + var pf *packages_model.PackageFile + var pb *packages_model.PackageBlob + var blobHash256Created optional.Option[string] - pf, pb, blobCreated, err := fn(ctx) - removeBlob := false - defer func() { - if removeBlob { - contentStore := packages_module.NewContentStore() - if err := contentStore.Delete(packages_module.BlobHash256Key(pb.HashSHA256)); err != nil { + // See comment in createPackageAndAddFile which explains why RetryTx is used with ErrUniqueConstraintViolation. 
+ err := db.RetryTx(ctx, db.RetryConfig{ + // A single retry is sufficient as any package index that was concurrently modified should now be present: + AttemptCount: 2, + ErrorIs: []error{xorm.ErrUniqueConstraintViolation}, + }, func(ctx context.Context) error { + var err error + var blobCreated bool + + pf, pb, blobCreated, err = fn(ctx) + if blobCreated { + blobHash256Created = optional.Some(pb.HashSHA256) + } + return err + }) + if err != nil { + // If we have an error later in the process after writing a blob to the content store, make our best effort to + // remove the content -- it won't be referenced in the DB because the transaction would be rolled back. + if has, hash := blobHash256Created.Get(); has { + if err := packages_module.NewContentStore().Delete(packages_module.BlobHash256Key(hash)); err != nil { log.Error("Error deleting package blob from content store: %v", err) } } - }() - if err != nil { - removeBlob = blobCreated - return nil, err - } - - if err := committer.Commit(); err != nil { - removeBlob = blobCreated return nil, err } @@ -267,6 +289,20 @@ func addFileToPackageVersion(ctx context.Context, pv *packages_model.PackageVers func addFileToPackageVersionUnchecked(ctx context.Context, pv *packages_model.PackageVersion, pfci *PackageFileCreationInfo, packageType packages_model.Type) (*packages_model.PackageFile, *packages_model.PackageBlob, bool, error) { log.Trace("Adding package file: %v, %s", pv.ID, pfci.Filename) + // The `OverwriteExisting` capability in this method has a race condition in it -- it will check if the file already + // exists in the package, and delete the file's properties and the file, and then it will attempt to insert the new + // file. 
This can cause the `ErrDuplicatePackageFile` error to be returned even when `OverwriteExisting` is set during + concurrent modifications, as both modifications will attempt to delete the existing file, one will succeed, one + will delete zero records and think it succeeded, and then both will attempt to add the file and one will hit + `ErrDuplicatePackageFile`. + // + // To address this, lock the package version being modified by performing a `SELECT ... FOR UPDATE` on it, + // guaranteeing only one `addFileToPackageVersionUnchecked` is running on a specific package version. + err := pv.LockForUpdate(ctx) + if err != nil { + return nil, nil, false, err + } + + pb, exists, err := packages_model.GetOrInsertBlob(ctx, NewPackageBlob(pfci.Data)) if err != nil { log.Error("Error inserting package blob: %v", err) @@ -430,7 +466,12 @@ func CheckSizeQuotaExceeded(ctx context.Context, doer, owner *user_model.User, p func GetOrCreateInternalPackageVersion(ctx context.Context, ownerID int64, packageType packages_model.Type, name, version string) (*packages_model.PackageVersion, error) { var pv *packages_model.PackageVersion - return pv, db.WithTx(ctx, func(ctx context.Context) error { + // See comment in createPackageAndAddFile which explains why RetryTx is used with ErrUniqueConstraintViolation. 
+ return pv, db.RetryTx(ctx, db.RetryConfig{ + // A single retry is sufficient as any package index that was concurrently modified should now be present: + AttemptCount: 2, + ErrorIs: []error{xorm.ErrUniqueConstraintViolation}, + }, func(ctx context.Context) error { p := &packages_model.Package{ OwnerID: ownerID, Type: packageType, diff --git a/tests/integration/api_packages_debian_test.go b/tests/integration/api_packages_debian_test.go index e22165c44b..4dc5288bc5 100644 --- a/tests/integration/api_packages_debian_test.go +++ b/tests/integration/api_packages_debian_test.go @@ -11,6 +11,7 @@ import ( "io" "net/http" "strings" + "sync" "testing" "forgejo.org/models/db" @@ -19,6 +20,7 @@ import ( user_model "forgejo.org/models/user" "forgejo.org/modules/base" debian_module "forgejo.org/modules/packages/debian" + "forgejo.org/modules/setting" "forgejo.org/tests" "github.com/blakesmith/ar" @@ -26,6 +28,32 @@ import ( "github.com/stretchr/testify/require" ) +func createDebianArchive(name, version, architecture, packageDescription string) io.Reader { + var cbuf bytes.Buffer + zw := gzip.NewWriter(&cbuf) + tw := tar.NewWriter(zw) + tw.WriteHeader(&tar.Header{ + Name: "control", + Mode: 0o600, + Size: 50, + }) + fmt.Fprintf(tw, "Package: %s\nVersion: %s\nArchitecture: %s\nDescription: %s\n", name, version, architecture, packageDescription) + tw.Close() + zw.Close() + + var buf bytes.Buffer + aw := ar.NewWriter(&buf) + aw.WriteGlobalHeader() + hdr := &ar.Header{ + Name: "control.tar.gz", + Mode: 0o600, + Size: int64(cbuf.Len()), + } + aw.WriteHeader(hdr) + aw.Write(cbuf.Bytes()) + return &buf +} + func TestPackageDebian(t *testing.T) { defer tests.PrepareTestEnv(t)() user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) @@ -35,32 +63,6 @@ func TestPackageDebian(t *testing.T) { packageVersion2 := "1.0.4" packageDescription := "Package Description" - createArchive := func(name, version, architecture string) io.Reader { - var cbuf bytes.Buffer - zw := 
gzip.NewWriter(&cbuf) - tw := tar.NewWriter(zw) - tw.WriteHeader(&tar.Header{ - Name: "control", - Mode: 0o600, - Size: 50, - }) - fmt.Fprintf(tw, "Package: %s\nVersion: %s\nArchitecture: %s\nDescription: %s\n", name, version, architecture, packageDescription) - tw.Close() - zw.Close() - - var buf bytes.Buffer - aw := ar.NewWriter(&buf) - aw.WriteGlobalHeader() - hdr := &ar.Header{ - Name: "control.tar.gz", - Mode: 0o600, - Size: int64(cbuf.Len()), - } - aw.WriteHeader(hdr) - aw.Write(cbuf.Bytes()) - return &buf - } - distributions := []string{"test", "gitea"} components := []string{"main", "stable"} architectures := []string{"all", "amd64"} @@ -97,16 +99,16 @@ func TestPackageDebian(t *testing.T) { AddBasicAuth(user.Name) MakeRequest(t, req, http.StatusBadRequest) - req = NewRequestWithBody(t, "PUT", uploadURL, createArchive("", "", "")). + req = NewRequestWithBody(t, "PUT", uploadURL, createDebianArchive("", "", "", packageDescription)). AddBasicAuth(user.Name) MakeRequest(t, req, http.StatusBadRequest) - req = NewRequestWithBody(t, "PUT", uploadURL, createArchive(packageName, packageVersion, architecture)). + req = NewRequestWithBody(t, "PUT", uploadURL, createDebianArchive(packageName, packageVersion, architecture, packageDescription)). AddBasicAuth(user.Name). SetHeader("content-type", "multipart/form-data") MakeRequest(t, req, http.StatusBadRequest) - req = NewRequestWithBody(t, "PUT", uploadURL, createArchive(packageName, packageVersion, architecture)). + req = NewRequestWithBody(t, "PUT", uploadURL, createDebianArchive(packageName, packageVersion, architecture, packageDescription)). AddBasicAuth(user.Name) MakeRequest(t, req, http.StatusCreated) @@ -154,7 +156,7 @@ func TestPackageDebian(t *testing.T) { return seen }) - req = NewRequestWithBody(t, "PUT", uploadURL, createArchive(packageName, packageVersion, architecture)). + req = NewRequestWithBody(t, "PUT", uploadURL, createDebianArchive(packageName, packageVersion, architecture, packageDescription)). 
AddBasicAuth(user.Name) MakeRequest(t, req, http.StatusConflict) }) @@ -171,7 +173,7 @@ func TestPackageDebian(t *testing.T) { t.Run("Packages", func(t *testing.T) { defer tests.PrintCurrentTest(t)() - req := NewRequestWithBody(t, "PUT", uploadURL, createArchive(packageName, packageVersion2, architecture)). + req := NewRequestWithBody(t, "PUT", uploadURL, createDebianArchive(packageName, packageVersion2, architecture, packageDescription)). AddBasicAuth(user.Name) MakeRequest(t, req, http.StatusCreated) @@ -308,3 +310,50 @@ func TestPackageDebian(t *testing.T) { require.Contains(t, body, fmt.Sprintf("Version: %s", packageVersion2)) }) } + +func TestPackageDebianConcurrent(t *testing.T) { + if setting.Database.Type.IsSQLite3() { + // Concurrency test fails on SQLite w/ "database is locked" + t.Skip() + } + + defer tests.PrepareTestEnv(t)() + + user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) + + distribution := "test" + component := "main" + architecture := "amd64" + packageName := "gitea" + packageDescription := "Package Description" + + rootURL := fmt.Sprintf("/api/packages/%s/debian", user.Name) + uploadURL := fmt.Sprintf("%s/pool/%s/%s/upload", rootURL, distribution, component) + + t.Run("Concurrent Upload", func(t *testing.T) { + defer tests.PrintCurrentTest(t)() + + var wg sync.WaitGroup + packageCount := 10 + for i := range packageCount { + wg.Go(func() { + req := NewRequestWithBody(t, "PUT", uploadURL, + createDebianArchive(packageName, fmt.Sprintf("1.0.%d", i), architecture, packageDescription)). 
+ AddBasicAuth(user.Name) + MakeRequest(t, req, http.StatusCreated) + }) + } + wg.Wait() + + url := fmt.Sprintf("%s/dists/%s/%s/binary-%s/Packages", rootURL, distribution, component, architecture) + + req := NewRequest(t, "GET", url) + resp := MakeRequest(t, req, http.StatusOK) + body := resp.Body.String() + + assert.Contains(t, body, fmt.Sprintf("Package: %s\n", packageName)) + for i := range packageCount { + assert.Contains(t, body, fmt.Sprintf("Version: 1.0.%d\n", i)) + } + }) +} From 7a2bd542bd818814b07dc5a73c11319d7a45a101 Mon Sep 17 00:00:00 2001 From: Renovate Bot Date: Fri, 27 Mar 2026 06:49:10 +0100 Subject: [PATCH 05/82] Update dependency happy-dom to v20.8.8 [SECURITY] (v15.0/forgejo) (#11839) Co-authored-by: Renovate Bot Co-committed-by: Renovate Bot --- package-lock.json | 60 ++++++++++++++++++++++++++++++++++++++++++----- package.json | 2 +- 2 files changed, 55 insertions(+), 7 deletions(-) diff --git a/package-lock.json b/package-lock.json index b6a45fb066..45e38ee021 100644 --- a/package-lock.json +++ b/package-lock.json @@ -108,7 +108,7 @@ "eslint-plugin-vue-scoped-css": "2.12.0", "eslint-plugin-wc": "3.1.0", "globals": "17.4.0", - "happy-dom": "20.0.11", + "happy-dom": "20.8.8", "license-checker-rseidelsohn": "4.4.2", "markdownlint-cli": "0.47.0", "postcss-html": "1.8.1", @@ -4284,6 +4284,16 @@ "dev": true, "license": "MIT" }, + "node_modules/@types/ws": { + "version": "8.18.1", + "resolved": "https://registry.npmjs.org/@types/ws/-/ws-8.18.1.tgz", + "integrity": "sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@types/node": "*" + } + }, "node_modules/@typescript-eslint/eslint-plugin": { "version": "8.56.1", "resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.56.1.tgz", @@ -9150,20 +9160,36 @@ } }, "node_modules/happy-dom": { - "version": "20.0.11", - "resolved": 
"https://registry.npmjs.org/happy-dom/-/happy-dom-20.0.11.tgz", - "integrity": "sha512-QsCdAUHAmiDeKeaNojb1OHOPF7NjcWPBR7obdu3NwH2a/oyQaLg5d0aaCy/9My6CdPChYF07dvz5chaXBGaD4g==", + "version": "20.8.8", + "resolved": "https://registry.npmjs.org/happy-dom/-/happy-dom-20.8.8.tgz", + "integrity": "sha512-5/F8wxkNxYtsN0bXfMwIyNLZ9WYsoOYPbmoluqVJqv8KBUbcyKZawJ7uYK4WTX8IHBLYv+VXIwfeNDPy1oKMwQ==", "dev": true, "license": "MIT", "dependencies": { - "@types/node": "^20.0.0", + "@types/node": ">=20.0.0", "@types/whatwg-mimetype": "^3.0.2", - "whatwg-mimetype": "^3.0.0" + "@types/ws": "^8.18.1", + "entities": "^7.0.1", + "whatwg-mimetype": "^3.0.0", + "ws": "^8.18.3" }, "engines": { "node": ">=20.0.0" } }, + "node_modules/happy-dom/node_modules/entities": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/entities/-/entities-7.0.1.tgz", + "integrity": "sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4UeOPA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, "node_modules/has-bigints": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/has-bigints/-/has-bigints-1.1.0.tgz", @@ -16235,6 +16261,28 @@ "node": "^14.17.0 || ^16.13.0 || >=18.0.0" } }, + "node_modules/ws": { + "version": "8.20.0", + "resolved": "https://registry.npmjs.org/ws/-/ws-8.20.0.tgz", + "integrity": "sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10.0.0" + }, + "peerDependencies": { + "bufferutil": "^4.0.1", + "utf-8-validate": ">=5.0.2" + }, + "peerDependenciesMeta": { + "bufferutil": { + "optional": true + }, + "utf-8-validate": { + "optional": true + } + } + }, "node_modules/xml-name-validator": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/xml-name-validator/-/xml-name-validator-4.0.0.tgz", diff --git 
a/package.json b/package.json index 309a16ec1a..4e8988bd79 100644 --- a/package.json +++ b/package.json @@ -107,7 +107,7 @@ "eslint-plugin-vue-scoped-css": "2.12.0", "eslint-plugin-wc": "3.1.0", "globals": "17.4.0", - "happy-dom": "20.0.11", + "happy-dom": "20.8.8", "license-checker-rseidelsohn": "4.4.2", "markdownlint-cli": "0.47.0", "postcss-html": "1.8.1", From a90e9b827cdd85d4d525ea82f515cd45f91245ed Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Sun, 29 Mar 2026 19:28:34 +0200 Subject: [PATCH 06/82] [v15.0/forgejo] feat: use `--token-url` in runner setup instructions (#11877) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11874 Use `--token-url` instead of `--token` in the runner setup instructions. `--token-url` is more secure. It was also decided [not to implement `--token`](https://code.forgejo.org/forgejo/runner/pulls/1457). The new instructions look as follows: ``` $ echo -n "a3bac733-079f-4917-ae9f-4acb99f1827b" > /path/to/runner-token $ forgejo-runner daemon \ --url http://192.168.178.62:3000/ \ --uuid 5982831f-8ee7-42c7-abcc-49c7d6dba586 \ --token-url file:///path/to/runner-token \ --label docker:docker://node:lts ``` `--label` is also new because Forgejo Runner is inoperable when neither a runner configuration nor `--label` are present. ## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes (can be removed for JavaScript changes) - I added test coverage for Go changes... 
- [ ] in their respective `*_test.go` for unit tests. - [ ] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [x] `make pr-go` before pushing ### Tests for JavaScript changes (can be removed for Go changes) - I added test coverage for JavaScript changes... - [ ] in `web_src/js/*.test.js` if it can be unit tested. - [x] in `tests/e2e/*.test.e2e.js` if it requires interactions with a live Forgejo server (see also the [developer guide for JavaScript testing](https://codeberg.org/forgejo/forgejo/src/branch/forgejo/tests/e2e/README.md#end-to-end-tests)). ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [ ] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. *The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. 
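One practical detail for admins following the new setup instructions above: since the token now lives in a file referenced by `--token-url`, that file should be created with restrictive permissions so the secret is not readable by other local users. A minimal sketch (the token value and path are placeholders, not values from this PR):

```shell
# Placeholder token and path; use the values shown on the runner setup page.
TOKEN="a3bac733-079f-4917-ae9f-4acb99f1827b"
TOKEN_FILE="$(mktemp -d)/runner-token"

# umask 077 makes the redirection create the file with mode 600, readable
# only by this user; combined with --token-url, the secret never appears
# in `ps` output or shell history the way a --token argument would.
(umask 077 && printf '%s' "$TOKEN" > "$TOKEN_FILE")

stat -c '%a' "$TOKEN_FILE"
```

The runner is then started with `--token-url "file://$TOKEN_FILE"` as in the instructions above.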
Co-authored-by: Andreas Ahlenstorf Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11877 Reviewed-by: Andreas Ahlenstorf Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- templates/shared/actions/runner_setup.tmpl | 6 ++++-- tests/e2e/runner-management.test.e2e.ts | 6 +++--- 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/templates/shared/actions/runner_setup.tmpl b/templates/shared/actions/runner_setup.tmpl index b18c995b3b..a24f63d536 100644 --- a/templates/shared/actions/runner_setup.tmpl +++ b/templates/shared/actions/runner_setup.tmpl @@ -43,10 +43,12 @@

{{ctx.Locale.Tr "actions.runners.runner_setup.instruction_replace_connection_name"}}

{{ctx.Locale.Tr "actions.runners.runner_setup.heading_using_options"}}
-
forgejo-runner daemon \
+		
$ echo -n "{{.Runner.Token}}" > /path/to/runner-token
+$ forgejo-runner daemon \
 	--url {{.AppURL}} \
 	--uuid {{.Runner.UUID}} \
-	--token {{.Runner.Token}}
+	--token-url file:///path/to/runner-token \
+	--label docker:docker://node:lts
 

{{ctx.Locale.Tr "actions.runners.runner_setup.instruction_advanced_configurations"}}

diff --git a/tests/e2e/runner-management.test.e2e.ts b/tests/e2e/runner-management.test.e2e.ts index 8f4d5584be..cecde7a850 100644 --- a/tests/e2e/runner-management.test.e2e.ts +++ b/tests/e2e/runner-management.test.e2e.ts @@ -134,7 +134,7 @@ test.describe('Runners of user2', () => { await expect(page.getByRole('heading', {name: 'Using program options'})).toBeVisible(); await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`--uuid ${runnerUUID}`); - await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`--token ${runnerToken}`); + await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`echo -n "${runnerToken}"`); // Go back to list of runners. await page.getByRole('link', {name: 'List of runners', exact: true}).click(); @@ -238,7 +238,7 @@ test.describe('Runners of user2', () => { await expect(page.getByRole('heading', {name: 'Using program options'})).toBeVisible(); await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`--uuid ${runnerUUID}`); - await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`--token ${runnerToken}`); + await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`echo -n "${runnerToken}"`); }); test('delete runner', async ({page}) => { @@ -425,7 +425,7 @@ test.describe('Global runners', () => { await expect(page.getByRole('heading', {name: 'Using program options'})).toBeVisible(); await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`--uuid ${runnerUUID}`); - await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`--token ${runnerToken}`); + await expect(page.getByLabel('How to invoke forgejo-runner')).toContainText(`echo -n "${runnerToken}"`); // Go back to list of runners. 
await page.getByRole('link', {name: 'List of runners', exact: true}).click(); From e045fb9b77822aee6f2e18a5bd128fb9004ea858 Mon Sep 17 00:00:00 2001 From: Renovate Bot Date: Mon, 30 Mar 2026 01:00:08 +0200 Subject: [PATCH 07/82] Update dependency happy-dom to v20.8.9 [SECURITY] (v15.0/forgejo) (#11886) Co-authored-by: Renovate Bot Co-committed-by: Renovate Bot --- package-lock.json | 8 ++++---- package.json | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/package-lock.json b/package-lock.json index 45e38ee021..a5756c475f 100644 --- a/package-lock.json +++ b/package-lock.json @@ -108,7 +108,7 @@ "eslint-plugin-vue-scoped-css": "2.12.0", "eslint-plugin-wc": "3.1.0", "globals": "17.4.0", - "happy-dom": "20.8.8", + "happy-dom": "20.8.9", "license-checker-rseidelsohn": "4.4.2", "markdownlint-cli": "0.47.0", "postcss-html": "1.8.1", @@ -9160,9 +9160,9 @@ } }, "node_modules/happy-dom": { - "version": "20.8.8", - "resolved": "https://registry.npmjs.org/happy-dom/-/happy-dom-20.8.8.tgz", - "integrity": "sha512-5/F8wxkNxYtsN0bXfMwIyNLZ9WYsoOYPbmoluqVJqv8KBUbcyKZawJ7uYK4WTX8IHBLYv+VXIwfeNDPy1oKMwQ==", + "version": "20.8.9", + "resolved": "https://registry.npmjs.org/happy-dom/-/happy-dom-20.8.9.tgz", + "integrity": "sha512-Tz23LR9T9jOGVZm2x1EPdXqwA37G/owYMxRwU0E4miurAtFsPMQ1d2Jc2okUaSjZqAFz2oEn3FLXC5a0a+siyA==", "dev": true, "license": "MIT", "dependencies": { diff --git a/package.json b/package.json index 4e8988bd79..6aa43fa143 100644 --- a/package.json +++ b/package.json @@ -107,7 +107,7 @@ "eslint-plugin-vue-scoped-css": "2.12.0", "eslint-plugin-wc": "3.1.0", "globals": "17.4.0", - "happy-dom": "20.8.8", + "happy-dom": "20.8.9", "license-checker-rseidelsohn": "4.4.2", "markdownlint-cli": "0.47.0", "postcss-html": "1.8.1", From adebf2adac08aa92566b0d99d2737d11d2f4caa2 Mon Sep 17 00:00:00 2001 From: Renovate Bot Date: Tue, 31 Mar 2026 02:48:18 +0200 Subject: [PATCH 08/82] Update github.com/go-git/go-git/v5 (indirect) to v5.17.1 [SECURITY] (v15.0/forgejo) 
(#11900) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) | |---|---|---|---| | [github.com/go-git/go-git/v5](https://github.com/go-git/go-git) | `v5.17.0` → `v5.17.1` | ![age](https://developer.mend.io/api/mc/badges/age/go/github.com%2fgo-git%2fgo-git%2fv5/v5.17.1?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/go/github.com%2fgo-git%2fgo-git%2fv5/v5.17.0/v5.17.1?slim=true) | --- ### go-git missing validation decoding Index v4 files leads to panic [CVE-2026-33762](https://nvd.nist.gov/vuln/detail/CVE-2026-33762) / [GHSA-gm2x-2g9h-ccm8](https://github.com/advisories/GHSA-gm2x-2g9h-ccm8)
More information #### Details ##### Impact `go-git`’s index decoder for format version 4 fails to validate the path name prefix length before applying it to the previously decoded path name. A maliciously crafted index file can trigger an out-of-bounds slice operation, resulting in a runtime panic during normal index parsing. This issue only affects Git index format version 4. Earlier formats (`go-git` supports only `v2` and `v3`) are not vulnerable to this issue. An attacker able to supply a crafted `.git/index` file can cause applications using go-git to panic while reading the index. If the application does not recover from panics, this results in process termination, leading to a denial-of-service (DoS) condition. Exploitation requires the ability to modify or inject a Git index file within the local repository on disk. This typically implies write access to the `.git` directory. ##### Patches Users should upgrade to `v5.17.1`, or the latest `v6` [pseudo-version](https://go.dev/ref/mod#pseudo-versions), in order to mitigate this vulnerability. ##### Credit The go-git maintainers thank @kq5y for finding and reporting this issue privately to the `go-git` project. #### Severity - CVSS Score: 2.8 / 10 (Low) - Vector String: `CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:U/C:N/I:N/A:L` #### References - [https://github.com/go-git/go-git/security/advisories/GHSA-gm2x-2g9h-ccm8](https://github.com/go-git/go-git/security/advisories/GHSA-gm2x-2g9h-ccm8) - [https://github.com/go-git/go-git](https://github.com/go-git/go-git) This data is provided by [OSV](https://osv.dev/vulnerability/GHSA-gm2x-2g9h-ccm8) and the [GitHub Advisory Database](https://github.com/github/advisory-database) ([CC-BY 4.0](https://github.com/github/advisory-database/blob/main/LICENSE.md)).
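Until the upgrade lands, applications that decode index data from untrusted repositories can contain this class of decoder panic by converting it into an ordinary error. This is a generic Go pattern rather than go-git API; the helper name below is ours, for illustration only:

```go
package main

import "fmt"

// recoverToError runs fn and converts any panic, such as the
// out-of-bounds slice operation described above, into a returned
// error so the whole process is not terminated.
func recoverToError(fn func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("decode failed: %v", r)
		}
	}()
	return fn()
}

func main() {
	err := recoverToError(func() error {
		var entry []byte
		_ = entry[8] // out-of-bounds access, standing in for a malformed v4 index entry
		return nil
	})
	fmt.Println(err != nil)
}
```

This only prevents crashes; it does not validate the index, so upgrading remains the actual fix.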
--- ### go-git: Maliciously crafted idx file can cause asymmetric memory consumption [CVE-2026-34165](https://nvd.nist.gov/vuln/detail/CVE-2026-34165) / [GHSA-jhf3-xxhw-2wpp](https://github.com/advisories/GHSA-jhf3-xxhw-2wpp)
More information #### Details ##### Impact A vulnerability has been identified in which a maliciously crafted `.idx` file can cause asymmetric memory consumption, potentially exhausting available memory and resulting in a Denial of Service (DoS) condition. Exploitation requires write access to the local repository's `.git` directory, in order to create or alter existing `.idx` files. ##### Patches Users should upgrade to `v5.17.1`, or the latest `v6` [pseudo-version](https://go.dev/ref/mod#pseudo-versions), in order to mitigate this vulnerability. ##### Credit The go-git maintainers thank @kq5y for finding and reporting this issue privately to the `go-git` project. #### Severity - CVSS Score: 5.0 / 10 (Medium) - Vector String: `CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:U/C:N/I:N/A:H` #### References - [https://github.com/go-git/go-git/security/advisories/GHSA-jhf3-xxhw-2wpp](https://github.com/go-git/go-git/security/advisories/GHSA-jhf3-xxhw-2wpp) - [https://github.com/go-git/go-git](https://github.com/go-git/go-git) - [https://github.com/go-git/go-git/releases/tag/v5.17.1](https://github.com/go-git/go-git/releases/tag/v5.17.1) This data is provided by [OSV](https://osv.dev/vulnerability/GHSA-jhf3-xxhw-2wpp) and the [GitHub Advisory Database](https://github.com/github/advisory-database) ([CC-BY 4.0](https://github.com/github/advisory-database/blob/main/LICENSE.md)).
--- ### Release Notes
go-git/go-git (github.com/go-git/go-git/v5) ### [`v5.17.1`](https://github.com/go-git/go-git/releases/tag/v5.17.1) [Compare Source](https://github.com/go-git/go-git/compare/v5.17.0...v5.17.1) #### What's Changed - build: Update module github.com/cloudflare/circl to v1.6.3 \[SECURITY] (releases/v5.x) by [@​go-git-renovate](https://github.com/go-git-renovate)\[bot] in [#​1930](https://github.com/go-git/go-git/pull/1930) - \[v5] plumbing: format/index, Improve v4 entry name validation by [@​pjbgf](https://github.com/pjbgf) in [#​1935](https://github.com/go-git/go-git/pull/1935) - \[v5] plumbing: format/idxfile, Fix version and fanout checks by [@​pjbgf](https://github.com/pjbgf) in [#​1937](https://github.com/go-git/go-git/pull/1937) **Full Changelog**:
--- ### Configuration 📅 **Schedule**: Branch creation - "" (UTC), Automerge - Between 12:00 AM and 03:59 AM ( * 0-3 * * * ) (UTC). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate). Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11900 Reviewed-by: Mathieu Fenniak Co-authored-by: Renovate Bot Co-committed-by: Renovate Bot --- go.mod | 2 +- go.sum | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod index c7a29c014c..9f751ccbb1 100644 --- a/go.mod +++ b/go.mod @@ -175,7 +175,7 @@ require ( github.com/go-fed/httpsig v1.1.0 // indirect github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect github.com/go-git/go-billy/v5 v5.8.0 // indirect - github.com/go-git/go-git/v5 v5.17.0 // indirect + github.com/go-git/go-git/v5 v5.17.1 // indirect github.com/go-ini/ini v1.67.0 // indirect github.com/go-openapi/jsonpointer v0.22.4 // indirect github.com/go-openapi/jsonreference v0.21.4 // indirect diff --git a/go.sum b/go.sum index 594dbb4315..3e4ee526b9 100644 --- a/go.sum +++ b/go.sum @@ -284,8 +284,8 @@ github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66D github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic= github.com/go-git/go-billy/v5 v5.8.0 h1:I8hjc3LbBlXTtVuFNJuwYuMiHvQJDq1AT6u4DwDzZG0= github.com/go-git/go-billy/v5 v5.8.0/go.mod h1:RpvI/rw4Vr5QA+Z60c6d6LXH0rYJo0uD5SqfmrrheCY= -github.com/go-git/go-git/v5 v5.17.0 h1:AbyI4xf+7DsjINHMu35quAh4wJygKBKBuXVjV/pxesM= -github.com/go-git/go-git/v5 v5.17.0/go.mod h1:f82C4YiLx+Lhi8eHxltLeGC5uBTXSFa6PC5WW9o4SjI= +github.com/go-git/go-git/v5 
v5.17.1 h1:WnljyxIzSj9BRRUlnmAU35ohDsjRK0EKmL0evDqi5Jk= +github.com/go-git/go-git/v5 v5.17.1/go.mod h1:pW/VmeqkanRFqR6AljLcs7EA7FbZaN5MQqO7oZADXpo= github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A= From d42c66471aa3bc2980d867f58b0cfa7980f8b8d4 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Tue, 31 Mar 2026 05:32:57 +0200 Subject: [PATCH 09/82] [v15.0/forgejo] fix: unique key violation in first-time concurrent debian package uploads to a user (#11906) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11881 Fixes an intermittent test failure in `TestPackageDebianConcurrent`, [example](https://codeberg.org/forgejo/forgejo/actions/runs/148747/jobs/9/attempt/1#jobstep-5-981), introduced by testing in #11776. This one is caused by duplicate writes to `user_setting` to store a GPG key (questionable place for that...). Confirmed reproduced in local testing and test now passes: ``` === TestPackageDebianConcurrent (tests/test_utils.go:344) === TestPackageDebianConcurrent/Concurrent_Upload (tests/integration/api_packages_debian_test.go:334) ... other duplicate key violations ... // TestPackageDebianConcurrent/Concurrent_Upload "2026/03/29 10:31:57 ...dels/user/setting.go:210:func1() [E] [Error SQL Query] INSERT INTO \"gtestschema\".\"user_setting\" (\"user_id\",\"setting_key\",\"setting_value\") VALUES ($1,$2,$3) RETURNING \"id\" [2 debian.key.private -----BEGIN PGP PRIVATE KEY BLOCK----- ...snip... -----END PGP PRIVATE KEY BLOCK-----] - ERROR: duplicate key value violates unique constraint \"UQE_user_setting_key_userid\" (SQLSTATE 23505)", PASS ``` No additional test required as it is already tripping a test failure. 
## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes - I added test coverage for Go changes... - [ ] in their respective `*_test.go` for unit tests. - [ ] in the `tests/integration` directory if it involves interactions with a live Forgejo server. (already present and failing) - I ran... - [x] `make pr-go` before pushing ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. 
Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11906 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- services/packages/debian/repository.go | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/services/packages/debian/repository.go b/services/packages/debian/repository.go index a8a401662e..ab8c4fdc45 100644 --- a/services/packages/debian/repository.go +++ b/services/packages/debian/repository.go @@ -14,6 +14,7 @@ import ( "strings" "time" + "forgejo.org/models/db" packages_model "forgejo.org/models/packages" debian_model "forgejo.org/models/packages/debian" user_model "forgejo.org/models/user" @@ -28,6 +29,7 @@ import ( "github.com/ProtonMail/go-crypto/openpgp/clearsign" "github.com/ProtonMail/go-crypto/openpgp/packet" "github.com/ulikunitz/xz" + "xorm.io/xorm" ) // GetOrCreateRepositoryVersion gets or creates the internal repository package @@ -308,7 +310,21 @@ func buildReleaseFiles(ctx context.Context, ownerID int64, repoVersion *packages sort.Strings(architectures) - priv, _, err := GetOrCreateKeyPair(ctx, ownerID) + // ErrUniqueConstraintViolation can occur rarely when two concurrent updates occur to the same organization and + // `GetOrCreateKeyPair` ends up being invoked simultaneously, which writes to `user_setting` to store a GPG key for + // the `Release.gpg` file. In that event, retry the rebuild. + // + // See comment in package services' createPackageAndAddFile for why we cannot recover from the error in any other + // way. + var priv string + err = db.RetryTx(ctx, db.RetryConfig{ + // A single retry is sufficient; the user/org's key pair would have been created by the first successful tx. 
+ AttemptCount: 2, + ErrorIs: []error{xorm.ErrUniqueConstraintViolation}, + }, func(ctx context.Context) error { + priv, _, err = GetOrCreateKeyPair(ctx, ownerID) + return err + }) if err != nil { return err } From 2c59849072ee1004ddca9ca004deaf8d09be930c Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Wed, 1 Apr 2026 02:17:05 +0200 Subject: [PATCH 10/82] [v15.0/forgejo] Fix @mention combobox semantics for screen reader accessibility (#11922) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11860 Fixes https://codeberg.org/forgejo/forgejo/issues/7668. This was simpler to fix than the theory I posted on https://codeberg.org/forgejo/forgejo/issues/7668 about needing to patch the upstream package. When testing in Firefox with the developer console open and warnings enabled, I noticed an `Empty string passed to getElementById()` warning coming from `@github/combobox-nav` while attempting to manage the `aria-activedescendant` attribute. Then I found this in the [README for that project](https://github.com/github/combobox-nav). > Markup requirements: > - Each option needs to have role="option" and a unique id This was easy to miss, as we're using `@github/text-expander-element` and the combobox-nav package is one of _its_ dependencies. Without a unique ID on each dropdown menu item, `@github/text-expander-element` is unable to set an appropriate `aria-activedescendant` attribute on the textarea. Once that's in place, the screen reader announcements come to life beautifully. While working on it I noticed the emoji picker combobox was affected by the same problem and patched that as well. 
Co-authored-by: Henry Catalini Smith Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11922 Reviewed-by: Otto Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- tests/e2e/issue-comment.test.e2e.ts | 2 +- web_src/js/features/comp/TextExpander.js | 3 +++ 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/tests/e2e/issue-comment.test.e2e.ts b/tests/e2e/issue-comment.test.e2e.ts index 94e4bb7243..ed1c79a585 100644 --- a/tests/e2e/issue-comment.test.e2e.ts +++ b/tests/e2e/issue-comment.test.e2e.ts @@ -331,7 +331,7 @@ test('Emoji suggestions', async ({page}) => { ]; for (const {emoji, name} of expectedSuggestions) { - const item = suggestionList.locator(`li:has-text("${name}")`); + const item = suggestionList.locator(`[id="combobox-emoji-${name}"]`); await expect(item).toContainText(`${emoji} ${name}`); } diff --git a/web_src/js/features/comp/TextExpander.js b/web_src/js/features/comp/TextExpander.js index 8777f3a334..ae652a013b 100644 --- a/web_src/js/features/comp/TextExpander.js +++ b/web_src/js/features/comp/TextExpander.js @@ -12,6 +12,7 @@ export function initTextExpander(expander) { ul.classList.add('suggestions'); for (const name of matches) { const li = document.createElement('li'); + li.setAttribute('id', `combobox-emoji-${name}`); li.setAttribute('role', 'option'); li.setAttribute('data-value', emojiString(name)); if (customEmojis.has(name)) { @@ -33,10 +34,12 @@ export function initTextExpander(expander) { ul.classList.add('suggestions'); for (const {value, name, fullname, avatar} of matches) { const li = document.createElement('li'); + li.setAttribute('id', `combobox-user-${name}`); li.setAttribute('role', 'option'); li.setAttribute('data-value', `${key}${value}`); const img = document.createElement('img'); + img.setAttribute('aria-hidden', 'true'); img.src = avatar; li.append(img); From 00f9d01593f7a17d9dd891c98301d3e2af484688 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Wed, 1 Apr 2026 
02:57:09 +0200 Subject: [PATCH 11/82] [v15.0/forgejo] ci: prevent usage of live application models & services in migrations (#11907) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11872 Prevent access to "current" application models and services from migrations via `golangci` config, e.g.: ``` models/forgejo_migrations/v14a_ap-change-fedi-handle-structure.go:18:2: import 'forgejo.org/models/user' is not allowed from list 'migration-isolation': Migrations must not import application models. Application models will be the most recent schema for Forgejo, while migrations will be operating against the database schema that existed when they were authored. (depguard) user_model "forgejo.org/models/user" ^ models/forgejo_migrations/v14a_ap-change-fedi-handle-structure.go:21:2: import 'forgejo.org/services/user' is not allowed from list 'migration-isolation': Migrations must not import application services. Application services will reference application models which will use the most recent schema for Forgejo, while migrations will be operating against the database schema that existed when they were authored. (depguard) user_service "forgejo.org/services/user" ``` Fixes an existing migration issue where it isn't possible to add a new column to the `User` table ([test errors that occur](https://codeberg.org/forgejo/forgejo/actions/runs/148633/jobs/10/attempt/1#jobstep-5-323)), but also guarantees that future migrations don't stumble into the same issue by inadvertently referencing live application code from historical migrations. Originally identified by @codecat, who proposed a draft fix in #11870. ## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). 
There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes - I added test coverage for Go changes... - [ ] in their respective `*_test.go` for unit tests. - [ ] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [x] `make pr-go` before pushing ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [ ] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [x] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. 
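The pattern the new depguard rule enforces throughout the diff below can be sketched as follows: rather than importing `forgejo.org/models/user`, a migration declares a minimal local copy of only the columns it touches, frozen at the schema that existed when the migration was authored. The struct and helper below are an illustrative subset under that assumption, not the real application model (the `xorm` struct tags are purely declarative and need no import):

```go
package main

import (
	"fmt"
	"strings"
)

// migrationUser is a local snapshot of the user table as it looked when
// this hypothetical migration was written. The live user_model.User may
// since have gained, renamed, or dropped columns — which is exactly why
// importing it from a migration is unsafe: the migration would then run
// against a schema it was never written for.
type migrationUser struct {
	ID        int64  `xorm:"pk autoincr"`
	LowerName string `xorm:"UNIQUE NOT NULL"`
	Name      string `xorm:"UNIQUE NOT NULL"`
}

// renameUser mimics a migration step: it updates only the frozen columns
// the migration knows about, keeping Name and LowerName consistent.
func renameUser(u *migrationUser, newName string) {
	u.Name = newName
	u.LowerName = strings.ToLower(newName)
}

func main() {
	u := &migrationUser{ID: 1, Name: "OldName", LowerName: "oldname"}
	renameUser(u, "NewName")
	fmt.Println(u.Name, u.LowerName) // prints: NewName newname
}
```

This mirrors what the backport does for `FederatedUser`, `Task`, `Webhook`, and friends: each migration carries its own schema-at-authoring-time struct instead of sharing the live model.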
Co-authored-by: Melissa Geels Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11907 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- .golangci.yml | 19 +++++ .../v14a_actions-approval-and-trust.go | 15 +++- .../v14a_actions-approval-and-trust_test.go | 27 +++---- .../v14a_ap-change-fedi-handle-structure.go | 80 ++++++++++++------- .../v14a_migrate_task_secrets.go | 21 ++++- .../v14a_migrate_webhook_authorization.go | 16 +++- ...v14a_migrate_webhook_authorization_test.go | 6 +- .../v14a_rework-notification.go | 6 +- .../v14a_set_remote_user_prohibit_login.go | 47 +++++++++-- 9 files changed, 172 insertions(+), 65 deletions(-) diff --git a/.golangci.yml b/.golangci.yml index 1d0dcb489f..de3f3a9255 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -45,6 +45,25 @@ linters: desc: use forgejo.org/modules/git instead, see https://codeberg.org/forgejo/forgejo/pulls/4941 - pkg: gopkg.in/yaml.v3 desc: use go.yaml.in/yaml instead, see https://codeberg.org/forgejo/forgejo/pulls/8956 + migration-isolation: + list-mode: lax + files: + - "**/models/forgejo_migrations/**" + deny: + - pkg: "forgejo.org/models" + desc: > + Migrations must not import application models. Application models will be the most recent schema for + Forgejo, while migrations will be operating against the database schema that existed when they were + authored. + - pkg: "forgejo.org/services" + desc: > + Migrations must not import application services. Application services will reference application + models which will use the most recent schema for Forgejo, while migrations will be operating against the + database schema that existed when they were authored. 
+ allow: + - "forgejo.org/models/db" + - "forgejo.org/models/gitea_migrations/base" + - "forgejo.org/models/gitea_migrations/test" gocritic: disabled-checks: - ifElseChain diff --git a/models/forgejo_migrations/v14a_actions-approval-and-trust.go b/models/forgejo_migrations/v14a_actions-approval-and-trust.go index 9f6691c4f0..f8be109dda 100644 --- a/models/forgejo_migrations/v14a_actions-approval-and-trust.go +++ b/models/forgejo_migrations/v14a_actions-approval-and-trust.go @@ -6,7 +6,6 @@ package forgejo_migrations import ( "context" - actions_model "forgejo.org/models/actions" "forgejo.org/models/db" "forgejo.org/modules/log" "forgejo.org/modules/timeutil" @@ -59,6 +58,18 @@ type v14ActionsApprovalAndTrustTrusted struct { } func v14ActionsApprovalAndTrustPopulateTableActionUser(x *xorm.Engine) error { + type ActionUser struct { + ID int64 `xorm:"pk autoincr"` + UserID int64 `xorm:"INDEX UNIQUE(action_user_index) REFERENCES(user, id)"` + RepoID int64 `xorm:"INDEX UNIQUE(action_user_index) REFERENCES(repository, id)"` + TrustedWithPullRequests bool + LastAccess timeutil.TimeStamp `xorm:"INDEX"` + } + insertActionUser := func(ctx context.Context, user *ActionUser) error { + user.LastAccess = timeutil.TimeStampNow() + return db.Insert(ctx, user) + } + // // Users approved once were trusted before and are trusted now. 
// @@ -87,7 +98,7 @@ func v14ActionsApprovalAndTrustPopulateTableActionUser(x *xorm.Engine) error { if err := db.WithTx(db.DefaultContext, func(ctx context.Context) error { for _, trusted := range trustedList { log.Debug("v14a_actions-approval-and-trust: repository %d trusts user %d", trusted.RepoID, trusted.UserID) - if err := actions_model.InsertActionUser(ctx, &actions_model.ActionUser{ + if err := insertActionUser(ctx, &ActionUser{ RepoID: trusted.RepoID, UserID: trusted.UserID, TrustedWithPullRequests: true, diff --git a/models/forgejo_migrations/v14a_actions-approval-and-trust_test.go b/models/forgejo_migrations/v14a_actions-approval-and-trust_test.go index c639a0d2e9..8ff1b1c066 100644 --- a/models/forgejo_migrations/v14a_actions-approval-and-trust_test.go +++ b/models/forgejo_migrations/v14a_actions-approval-and-trust_test.go @@ -7,11 +7,8 @@ import ( "testing" "time" - actions_model "forgejo.org/models/actions" "forgejo.org/models/db" migration_tests "forgejo.org/models/gitea_migrations/test" - repo_model "forgejo.org/models/repo" - user_model "forgejo.org/models/user" "forgejo.org/modules/timeutil" webhook_module "forgejo.org/modules/webhook" @@ -20,6 +17,9 @@ import ( ) func Test_v14ActionsApprovalAndTrustPopulateTableActionUser(t *testing.T) { + type ConcurrencyMode int + type Status int + type ActionUser struct { ID int64 `xorm:"pk autoincr"` UserID int64 `xorm:"INDEX UNIQUE(action_user_index) REFERENCES(user, id)"` @@ -32,21 +32,18 @@ func Test_v14ActionsApprovalAndTrustPopulateTableActionUser(t *testing.T) { type ActionRun struct { ID int64 Title string - RepoID int64 `xorm:"index unique(repo_index) index(concurrency)"` - Repo *repo_model.Repository `xorm:"-"` - OwnerID int64 `xorm:"index"` - WorkflowID string `xorm:"index"` // the name of workflow file - Index int64 `xorm:"index unique(repo_index)"` // a unique number for each run of a repository - TriggerUserID int64 `xorm:"index"` - TriggerUser *user_model.User `xorm:"-"` + RepoID int64 
`xorm:"index unique(repo_index) index(concurrency)"` + OwnerID int64 `xorm:"index"` + WorkflowID string `xorm:"index"` // the name of workflow file + Index int64 `xorm:"index unique(repo_index)"` // a unique number for each run of a repository + TriggerUserID int64 `xorm:"index"` ScheduleID int64 Ref string `xorm:"index"` // the commit/tag/… that caused the run - IsRefDeleted bool `xorm:"-"` CommitSHA string Event webhook_module.HookEventType // the webhook event that causes the workflow to run EventPayload string `xorm:"LONGTEXT"` TriggerEvent string // the trigger event defined in the `on` configuration of the triggered workflow - Status actions_model.Status `xorm:"index"` + Status Status `xorm:"index"` Version int `xorm:"version default 0"` // Status could be updated concomitantly, so an optimistic lock is needed // Started and Stopped is used for recording last run time, if rerun happened, they will be reset to 0 Started timeutil.TimeStamp @@ -65,7 +62,7 @@ func Test_v14ActionsApprovalAndTrustPopulateTableActionUser(t *testing.T) { ApprovedBy int64 `xorm:"index"` ConcurrencyGroup string `xorm:"'concurrency_group' index(concurrency)"` - ConcurrencyType actions_model.ConcurrencyMode + ConcurrencyType ConcurrencyMode PreExecutionError string `xorm:"LONGTEXT"` // used to report errors that blocked execution of a workflow } @@ -83,10 +80,10 @@ func Test_v14ActionsApprovalAndTrustPopulateTableActionUser(t *testing.T) { require.NoError(t, v14ActionsApprovalAndTrustPopulateTableActionUser(x)) - var users []*actions_model.ActionUser + var users []*ActionUser require.NoError(t, db.GetEngine(t.Context()).Select("`repo_id`, `user_id`").OrderBy("`id`").Find(&users)) // See models/gitea_migrations/fixtures/Test_v14ActionsApprovalAndTrustPopulateTableActionUser/action_run.yml - assert.Equal(t, []*actions_model.ActionUser{ + assert.Equal(t, []*ActionUser{ { UserID: 3, RepoID: 15, diff --git a/models/forgejo_migrations/v14a_ap-change-fedi-handle-structure.go 
b/models/forgejo_migrations/v14a_ap-change-fedi-handle-structure.go index fe0a68489a..a412ceb737 100644 --- a/models/forgejo_migrations/v14a_ap-change-fedi-handle-structure.go +++ b/models/forgejo_migrations/v14a_ap-change-fedi-handle-structure.go @@ -10,15 +10,14 @@ package forgejo_migrations import ( "context" + "database/sql" "fmt" "strings" "forgejo.org/models/db" - "forgejo.org/models/forgefed" - user_model "forgejo.org/models/user" "forgejo.org/modules/log" + "forgejo.org/modules/timeutil" "forgejo.org/modules/validation" - user_service "forgejo.org/services/user" "xorm.io/xorm" ) @@ -31,6 +30,42 @@ func init() { } func changeActivityPubUsernameFormat(x *xorm.Engine) error { + type FederationHost struct { + ID int64 `xorm:"pk autoincr"` + HostFqdn string `xorm:"host_fqdn UNIQUE(federation_host) INDEX VARCHAR(255) NOT NULL"` + HostPort uint16 `xorm:" UNIQUE(federation_host) INDEX NOT NULL DEFAULT 443"` + HostSchema string `xorm:"NOT NULL DEFAULT 'https'"` + Created timeutil.TimeStamp `xorm:"created"` + Updated timeutil.TimeStamp `xorm:"updated"` + } + type FederatedUser struct { + ID int64 `xorm:"pk autoincr"` + UserID int64 `xorm:"NOT NULL INDEX user_id"` + ExternalID string `xorm:"UNIQUE(federation_user_mapping) NOT NULL"` + FederationHostID int64 `xorm:"UNIQUE(federation_user_mapping) NOT NULL"` + KeyID sql.NullString `xorm:"key_id UNIQUE"` + PublicKey sql.Null[sql.RawBytes] `xorm:"BLOB"` + InboxPath string + NormalizedOriginalURL string // This field is just to keep original information. Pls. do not use for search or as ID! 
+ } + type User struct { + ID int64 `xorm:"pk autoincr"` + LowerName string `xorm:"UNIQUE NOT NULL"` + Name string `xorm:"UNIQUE NOT NULL"` + CreatedUnix timeutil.TimeStamp `xorm:"INDEX created"` + UpdatedUnix timeutil.TimeStamp `xorm:"INDEX updated"` + } + deleteFederatedUser := func(ctx context.Context, userID int64) error { + _, err := db.GetEngine(ctx).Delete(&FederatedUser{UserID: userID}) + return err + } + userLogString := func(u *User) string { + if u == nil { + return "" + } + return fmt.Sprintf("", u.ID, u.Name) + } + // Normally, the db.WithTx statement ensures that the database transaction (aka. all changes made // by this migration) will only be committed if the SQL operations inside of the iteration // (db.Iterate) don't return an error. @@ -45,9 +80,9 @@ func changeActivityPubUsernameFormat(x *xorm.Engine) error { // migrations at a later point and has been kept as-is. return db.WithTx(db.DefaultContext, func(ctx context.Context) error { // The transaction is committed only if modifying all federated users is possible. 
- return db.Iterate(ctx, nil, func(ctx context.Context, federatedUser *user_model.FederatedUser) error { + return db.Iterate(ctx, nil, func(ctx context.Context, federatedUser *FederatedUser) error { // localUser represents the "local" representation of an ActivityPub (federated) user - localUser := &user_model.User{} + localUser := &User{} has, err := db.GetEngine(ctx).ID(federatedUser.UserID).Get(localUser) if err != nil { log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred while getting local user (ID: %d), ignoring...: %e", federatedUser.UserID, err) @@ -56,7 +91,7 @@ func changeActivityPubUsernameFormat(x *xorm.Engine) error { if !has { log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: User missing for federated user: %v", federatedUser) - err := user_model.DeleteFederatedUser(ctx, federatedUser.UserID) + err := deleteFederatedUser(ctx, federatedUser.UserID) if err != nil { log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred while deleting federated user (%s), ignoring...: %e", federatedUser, err) return nil @@ -68,24 +103,13 @@ func changeActivityPubUsernameFormat(x *xorm.Engine) error { } else { // Copied from models/forgefed/federationhost_repository.go (forgefed.GetFederationHost), // minus some validation code for FederationHost which we do not otherwise manipulate here. 
- federationHost := new(forgefed.FederationHost) + federationHost := new(FederationHost) has, err := db.GetEngine(ctx).ID(federatedUser.FederationHostID).Get(federationHost) if err != nil { log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred while looking up federation host info (for %v), ignoring...: %e", federatedUser, err) return nil } else if !has { - log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Federation host for federated user missing, deleting: %v", federatedUser) - err := user_model.DeleteFederatedUser(ctx, federatedUser.UserID) - if err != nil { - log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred while deleting federated user (%v), ignoring...: %e", federatedUser, err) - return nil - } - - err = user_service.DeleteUser(ctx, localUser, true) - if err != nil { - log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred while deleting user (%s), ignoring...: %v", localUser.LogString(), err) - } - + log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Federation host for federated user %s is missing", federatedUser) return nil } @@ -117,10 +141,10 @@ func changeActivityPubUsernameFormat(x *xorm.Engine) error { // Implicitly assumes that there won't be a lower name unique constraint violation. // Potentially a bit paranoid, but why not? 
- userThatShouldntExist := &user_model.User{} + userThatShouldntExist := &User{} lowernameTaken, err := db.GetEngine(ctx).Where("lower_name = ?", strings.ToLower(newUsername)).Table("user").Get(userThatShouldntExist) if err != nil { - log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred, skipping migration of %s: %e", localUser.LogString(), err) + log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred, skipping migration of %s: %e", userLogString(localUser), err) return nil } @@ -128,23 +152,23 @@ func changeActivityPubUsernameFormat(x *xorm.Engine) error { log.Warn( "Migration[v14a_ap-change-fedi-handle-structure]: New username %s for %s already taken by %s, deleting the former...", newUsername, - localUser.LogString(), - userThatShouldntExist.LogString(), + userLogString(localUser), + userLogString(userThatShouldntExist), ) - err := user_model.DeleteFederatedUser(ctx, localUser.ID) + err := deleteFederatedUser(ctx, localUser.ID) if err != nil { - log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred while deleting federated user (%s), ignoring...: %e", localUser.LogString(), err) + log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred while deleting federated user (%s), ignoring...: %e", userLogString(localUser), err) } return nil } // Safe to assume that the following operations should just work now. 
- log.Info("Migration[v14a_ap-change-fedi-handle-structure]: Updating username of %s to %s", localUser.LogString(), newUsername) - if _, err := db.GetEngine(ctx).ID(localUser.ID).Cols("lower_name", "name").Update(&user_model.User{ + log.Info("Migration[v14a_ap-change-fedi-handle-structure]: Updating username of %s to %s", userLogString(localUser), newUsername) + if _, err := db.GetEngine(ctx).ID(localUser.ID).Cols("lower_name", "name").Update(&User{ LowerName: strings.ToLower(newUsername), Name: newUsername, }); err != nil { - log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred when updating federated user's username (%s), ignoring...: %e", localUser.LogString(), err) + log.Warn("Migration[v14a_ap-change-fedi-handle-structure]: Database error occurred when updating federated user's username (%s), ignoring...: %e", userLogString(localUser), err) return nil } } diff --git a/models/forgejo_migrations/v14a_migrate_task_secrets.go b/models/forgejo_migrations/v14a_migrate_task_secrets.go index a177dff92a..3484a024b2 100644 --- a/models/forgejo_migrations/v14a_migrate_task_secrets.go +++ b/models/forgejo_migrations/v14a_migrate_task_secrets.go @@ -8,7 +8,6 @@ import ( "encoding/base64" "fmt" - admin_model "forgejo.org/models/admin" "forgejo.org/models/db" "forgejo.org/modules/json" "forgejo.org/modules/keying" @@ -17,6 +16,7 @@ import ( "forgejo.org/modules/secret" "forgejo.org/modules/setting" "forgejo.org/modules/structs" + "forgejo.org/modules/timeutil" "xorm.io/builder" "xorm.io/xorm" @@ -30,6 +30,19 @@ func init() { } func migrateTaskSecrets(x *xorm.Engine) error { + type Task struct { + ID int64 + DoerID int64 `xorm:"index"` + OwnerID int64 `xorm:"index"` + RepoID int64 `xorm:"index"` + PayloadContent string `xorm:"TEXT"` + Created timeutil.TimeStamp `xorm:"created"` + } + taskUpdateCols := func(ctx context.Context, task *Task, cols ...string) error { + _, err := db.GetEngine(ctx).ID(task.ID).Cols(cols...).Update(task) + return err + } 
+ return db.WithTx(db.DefaultContext, func(ctx context.Context) error { sess := db.GetEngine(ctx) @@ -39,7 +52,7 @@ func migrateTaskSecrets(x *xorm.Engine) error { messages := make([]string, 0, 100) ids := make([]int64, 0, 100) - err := db.Iterate(ctx, builder.Eq{"type": structs.TaskTypeMigrateRepo}, func(ctx context.Context, bean *admin_model.Task) error { + err := db.Iterate(ctx, builder.Eq{"type": structs.TaskTypeMigrateRepo}, func(ctx context.Context, bean *Task) error { var opts migration.MigrateOptions err := json.Unmarshal([]byte(bean.PayloadContent), &opts) if err != nil { @@ -96,7 +109,7 @@ func migrateTaskSecrets(x *xorm.Engine) error { } bean.PayloadContent = string(bs) - return bean.UpdateCols(ctx, "payload_content") + return taskUpdateCols(ctx, bean, "payload_content") }) if err == nil { @@ -106,7 +119,7 @@ func migrateTaskSecrets(x *xorm.Engine) error { log.Error("v14a_migrate_task_secrets: %s", message) } - _, err = sess.In("id", ids).NoAutoCondition().NoAutoTime().Delete(&admin_model.Task{}) + _, err = sess.In("id", ids).NoAutoCondition().NoAutoTime().Delete(&Task{}) } } return err diff --git a/models/forgejo_migrations/v14a_migrate_webhook_authorization.go b/models/forgejo_migrations/v14a_migrate_webhook_authorization.go index 738841eb2b..5921329b3e 100644 --- a/models/forgejo_migrations/v14a_migrate_webhook_authorization.go +++ b/models/forgejo_migrations/v14a_migrate_webhook_authorization.go @@ -8,11 +8,11 @@ import ( "fmt" "forgejo.org/models/db" - webhook_model "forgejo.org/models/webhook" "forgejo.org/modules/keying" "forgejo.org/modules/log" "forgejo.org/modules/secret" "forgejo.org/modules/setting" + "forgejo.org/modules/timeutil" "xorm.io/xorm" "xorm.io/xorm/schemas" @@ -26,6 +26,16 @@ func init() { } func migrateWebhookSecrets(x *xorm.Engine) error { + type Webhook struct { + ID int64 `xorm:"pk autoincr"` + RepoID int64 `xorm:"INDEX"` // An ID of 0 indicates either a default or system webhook + OwnerID int64 `xorm:"INDEX"` + 
HeaderAuthorizationEncrypted []byte `xorm:"BLOB"` + + CreatedUnix timeutil.TimeStamp `xorm:"INDEX created"` + UpdatedUnix timeutil.TimeStamp `xorm:"INDEX updated"` + } + return db.WithTx(db.DefaultContext, func(ctx context.Context) error { sess := db.GetEngine(ctx) @@ -59,7 +69,7 @@ func migrateWebhookSecrets(x *xorm.Engine) error { messages := make([]string, 0, 100) ids := make([]int64, 0, 100) - err := db.Iterate(ctx, nil, func(ctx context.Context, bean *webhook_model.Webhook) error { + err := db.Iterate(ctx, nil, func(ctx context.Context, bean *Webhook) error { if len(bean.HeaderAuthorizationEncrypted) == 0 { return nil } @@ -83,7 +93,7 @@ func migrateWebhookSecrets(x *xorm.Engine) error { log.Error("migration[v14a_migrate_webhook_authorization]: %s", message) } - _, err = sess.In("id", ids).NoAutoCondition().NoAutoTime().Delete(&webhook_model.Webhook{}) + _, err = sess.In("id", ids).NoAutoCondition().NoAutoTime().Delete(&Webhook{}) } } return err diff --git a/models/forgejo_migrations/v14a_migrate_webhook_authorization_test.go b/models/forgejo_migrations/v14a_migrate_webhook_authorization_test.go index 0b5701c88c..9da06f4baf 100644 --- a/models/forgejo_migrations/v14a_migrate_webhook_authorization_test.go +++ b/models/forgejo_migrations/v14a_migrate_webhook_authorization_test.go @@ -7,7 +7,6 @@ import ( "testing" migration_tests "forgejo.org/models/gitea_migrations/test" - webhook_model "forgejo.org/models/webhook" "forgejo.org/modules/keying" "forgejo.org/modules/timeutil" webhook_module "forgejo.org/modules/webhook" @@ -17,6 +16,7 @@ import ( ) func Test_MigrateWebhookSecrets(t *testing.T) { + type HookContentType int type Webhook struct { ID int64 `xorm:"pk autoincr"` RepoID int64 `xorm:"INDEX"` @@ -24,7 +24,7 @@ func Test_MigrateWebhookSecrets(t *testing.T) { IsSystemWebhook bool URL string `xorm:"url TEXT"` HTTPMethod string `xorm:"http_method"` - ContentType webhook_model.HookContentType + ContentType HookContentType Secret string `xorm:"TEXT"` Events 
string `xorm:"TEXT"` IsActive bool `xorm:"INDEX"` @@ -45,7 +45,7 @@ func Test_MigrateWebhookSecrets(t *testing.T) { IsSystemWebhook bool URL string `xorm:"url TEXT"` HTTPMethod string `xorm:"http_method"` - ContentType webhook_model.HookContentType + ContentType HookContentType Secret string `xorm:"TEXT"` Events string `xorm:"TEXT"` IsActive bool `xorm:"INDEX"` diff --git a/models/forgejo_migrations/v14a_rework-notification.go b/models/forgejo_migrations/v14a_rework-notification.go index 77ae79d86f..04303559e8 100644 --- a/models/forgejo_migrations/v14a_rework-notification.go +++ b/models/forgejo_migrations/v14a_rework-notification.go @@ -4,7 +4,6 @@ package forgejo_migrations import ( - activities_model "forgejo.org/models/activities" "forgejo.org/modules/setting" "xorm.io/xorm" @@ -18,9 +17,10 @@ func init() { } func reworkNotification(x *xorm.Engine) error { + type NotificationStatus uint8 type Notification struct { - UserID int64 `xorm:"NOT NULL INDEX(s)"` - Status activities_model.NotificationStatus `xorm:"SMALLINT NOT NULL INDEX(s)"` + UserID int64 `xorm:"NOT NULL INDEX(s)"` + Status NotificationStatus `xorm:"SMALLINT NOT NULL INDEX(s)"` } if err := dropIndexIfExists(x, "notification", "IDX_notification_user_id"); err != nil { diff --git a/models/forgejo_migrations/v14a_set_remote_user_prohibit_login.go b/models/forgejo_migrations/v14a_set_remote_user_prohibit_login.go index 3575dad832..9f453e05f3 100644 --- a/models/forgejo_migrations/v14a_set_remote_user_prohibit_login.go +++ b/models/forgejo_migrations/v14a_set_remote_user_prohibit_login.go @@ -5,10 +5,11 @@ package forgejo_migrations import ( "context" + "fmt" "forgejo.org/models/db" - user_model "forgejo.org/models/user" "forgejo.org/modules/log" + "forgejo.org/modules/timeutil" "xorm.io/builder" "xorm.io/xorm" @@ -22,13 +23,45 @@ func init() { } func setProhibitLoginActivityPubUser(x *xorm.Engine) error { + type UserType int + const ( + UserTypeIndividual UserType = iota // Historic reason to make it 
starts at 0. + UserTypeOrganization // 1 + UserTypeUserReserved // 2 + UserTypeOrganizationReserved // 3 + UserTypeBot // 4 + UserTypeRemoteUser // 5 + UserTypeActivityPubUser // 6 + ) + type User struct { + ID int64 `xorm:"pk autoincr"` + Name string `xorm:"UNIQUE NOT NULL"` + Passwd string `xorm:"NOT NULL"` + PasswdHashAlgo string `xorm:"NOT NULL DEFAULT 'argon2'"` + Type UserType + Salt string `xorm:"VARCHAR(32)"` + CreatedUnix timeutil.TimeStamp `xorm:"INDEX created"` + UpdatedUnix timeutil.TimeStamp `xorm:"INDEX updated"` + ProhibitLogin bool `xorm:"NOT NULL DEFAULT false"` + } + type FederatedUser struct { + UserID int64 `xorm:"NOT NULL INDEX user_id"` + } + + userLogString := func(u *User) string { + if u == nil { + return "<nil>" + } + return fmt.Sprintf("<User %d:%s>", u.ID, u.Name) + } + return db.WithTx(db.DefaultContext, func(ctx context.Context) error { - return db.Iterate(ctx, builder.Eq{"type": 5}, func(ctx context.Context, user *user_model.User) error { - log.Info("Checking if user %s is created from ActivityPub", user.LogString()) + return db.Iterate(ctx, builder.Eq{"type": 5}, func(ctx context.Context, user *User) error { + log.Info("Checking if user %s is created from ActivityPub", userLogString(user)) // Users created from f3 also have the RemoteUser user type. All // FederatedUser should reference exactly one User.
- has, err := db.GetEngine(ctx).Table("federated_user").Get(&user_model.FederatedUser{UserID: user.ID}) + has, err := db.GetEngine(ctx).Table("federated_user").Get(&FederatedUser{UserID: user.ID}) if err != nil { return err } @@ -37,9 +70,9 @@ func setProhibitLoginActivityPubUser(x *xorm.Engine) error { return nil } - log.Info("Updating user %s", user.LogString()) - _, err = db.GetEngine(ctx).Table("user").ID(user.ID).Cols("type", "prohibit_login", "passwd", "salt", "passwd_hash_algo").Update(&user_model.User{ - Type: user_model.UserTypeActivityPubUser, + log.Info("Updating user %s", userLogString(user)) + _, err = db.GetEngine(ctx).Table("user").ID(user.ID).Cols("type", "prohibit_login", "passwd", "salt", "passwd_hash_algo").Update(&User{ + Type: UserTypeActivityPubUser, ProhibitLogin: true, Passwd: "", Salt: "", From e919aedcec0b4b2e230ec688a2a77ce4093bda50 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Wed, 1 Apr 2026 08:13:12 +0200 Subject: [PATCH 12/82] [v15.0/forgejo] fix: allow modals to be submitted multiple times (#11931) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11843 Fixes #11842. The `once: true` was likely added to prevent multiple concurrent submissions of the same form. This could still be worth preventing, but I suspect it would require wrapping the supplied `onApprove` callback with the corresponding logic, implemented manually, as I am not aware of any native API to prevent concurrent executions of callbacks. ## Checklist ### Tests for JavaScript changes - I added test coverage for JavaScript changes... - [ ] in `web_src/js/*.test.js` if it can be unit tested. - [x] in `tests/e2e/*.test.e2e.js` if it requires interactions with a live Forgejo server (see also the [developer guide for JavaScript testing](https://codeberg.org/forgejo/forgejo/src/branch/forgejo/tests/e2e/README.md#end-to-end-tests)). 
### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. Co-authored-by: Antonin Delpeuch Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11931 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- tests/e2e/repo-labels.test.e2e.ts | 20 ++++++++++++++++++++ web_src/js/modules/modal.ts | 2 +- 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/tests/e2e/repo-labels.test.e2e.ts b/tests/e2e/repo-labels.test.e2e.ts index ef33e91dda..956c4c19d5 100644 --- a/tests/e2e/repo-labels.test.e2e.ts +++ b/tests/e2e/repo-labels.test.e2e.ts @@ -41,3 +41,23 @@ test('Edit label', async ({page}) => { await expect(page.locator('.label-title').filter({hasText: labelName})).toBeVisible(); }); + +test('New label after a failed validation', async ({page}) => { + // for issue https://codeberg.org/forgejo/forgejo/issues/11842 + const response = await page.goto('/user2/repo1/labels'); + expect(response?.status()).toBe(200); + + await page.getByRole('button', {name: 'New label'}).click(); + await expect(page.locator('#new-label-modal')).toBeVisible(); + + // attempt to submit the form without having filled it first + await page.getByRole('button', {name: 'Create label'}).click(); + await screenshot(page, page.locator('#new-label-modal')); + + // then fill the form and submit it again + const labelName = dynamic_id(); + await page.getByRole('textbox', {name: 'Label name'}).fill(labelName); + await 
page.getByRole('button', {name: 'Create label'}).click(); + + await expect(page.locator('.label-title').filter({hasText: labelName})).toBeVisible(); +}); diff --git a/web_src/js/modules/modal.ts b/web_src/js/modules/modal.ts index 290dccee70..b6fef12b2b 100644 --- a/web_src/js/modules/modal.ts +++ b/web_src/js/modules/modal.ts @@ -13,7 +13,7 @@ export function showModal(modalID: string, onApprove: () => void) { modal.querySelector('.cancel')?.addEventListener('click', () => { modal.close(); }, {once: true, passive: true}); - modal.querySelector('.ok')?.addEventListener('click', onApprove, {once: true, passive: true}); + modal.querySelector('.ok')?.addEventListener('click', onApprove, {passive: true}); // The modal is ready to be shown. modal.showModal(); From d60af095dd7ed9b1f0ec2724dfb2300700c9f756 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Wed, 1 Apr 2026 17:35:43 +0200 Subject: [PATCH 13/82] [v15.0/forgejo] fix: allow repository deletion when referenced by a repo-specific access token (#11933) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11927 Fixes #11919. 
Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11933 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- services/repository/delete.go | 2 ++ services/repository/repository_test.go | 17 +++++++++++++++++ 2 files changed, 19 insertions(+) diff --git a/services/repository/delete.go b/services/repository/delete.go index 3ea7e51a9b..8766204f34 100644 --- a/services/repository/delete.go +++ b/services/repository/delete.go @@ -13,6 +13,7 @@ import ( activities_model "forgejo.org/models/activities" admin_model "forgejo.org/models/admin" asymkey_model "forgejo.org/models/asymkey" + auth_model "forgejo.org/models/auth" "forgejo.org/models/db" git_model "forgejo.org/models/git" issues_model "forgejo.org/models/issues" @@ -189,6 +190,7 @@ func DeleteRepositoryDirectly(ctx context.Context, doer *user_model.User, repoID &actions_model.ActionUser{RepoID: repoID}, &repo_model.RepoArchiveDownloadCount{RepoID: repoID}, &actions_model.ActionRunnerToken{RepoID: optional.Some(repoID)}, + &auth_model.AccessTokenResourceRepo{RepoID: repoID}, ); err != nil { return fmt.Errorf("deleteBeans: %w", err) } diff --git a/services/repository/repository_test.go b/services/repository/repository_test.go index 5f63e4d9cb..e5ae3ecb32 100644 --- a/services/repository/repository_test.go +++ b/services/repository/repository_test.go @@ -6,6 +6,7 @@ package repository import ( "testing" + auth_model "forgejo.org/models/auth" "forgejo.org/models/db" repo_model "forgejo.org/models/repo" "forgejo.org/models/unit" @@ -62,3 +63,19 @@ func TestDeleteRepository(t *testing.T) { doer := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1}) require.NoError(t, DeleteRepository(t.Context(), doer, repo, false)) } + +func TestDeleteRepositoryWithReferences(t *testing.T) { + require.NoError(t, unittest.PrepareTestDatabase()) + + repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1}) + + 
token1 := unittest.AssertExistsAndLoadBean(t, &auth_model.AccessToken{ID: 1}) + err := db.Insert(t.Context(), &auth_model.AccessTokenResourceRepo{ + TokenID: token1.ID, + RepoID: repo.ID, + }) + require.NoError(t, err) + + doer := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1}) + require.NoError(t, DeleteRepository(t.Context(), doer, repo, false)) +} From a32804bebe85d46878396f2653917cc195cc37f7 Mon Sep 17 00:00:00 2001 From: Renovate Bot Date: Thu, 2 Apr 2026 03:29:58 +0200 Subject: [PATCH 14/82] Update module github.com/golangci/golangci-lint/v2/cmd/golangci-lint to v2.11.4 (v15.0/forgejo) (#11948) Co-authored-by: Renovate Bot Co-committed-by: Renovate Bot --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 61d5b22162..d1a1e90dec 100644 --- a/Makefile +++ b/Makefile @@ -39,7 +39,7 @@ XGO_VERSION := go-1.21.x AIR_PACKAGE ?= github.com/air-verse/air@v1 # renovate: datasource=go EDITORCONFIG_CHECKER_PACKAGE ?= github.com/editorconfig-checker/editorconfig-checker/v3/cmd/editorconfig-checker@v3.6.1 # renovate: datasource=go GOFUMPT_PACKAGE ?= mvdan.cc/gofumpt@v0.9.2 # renovate: datasource=go -GOLANGCI_LINT_PACKAGE ?= github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.10.1 # renovate: datasource=go +GOLANGCI_LINT_PACKAGE ?= github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.11.4 # renovate: datasource=go GXZ_PACKAGE ?= github.com/ulikunitz/xz/cmd/gxz@v0.5.15 # renovate: datasource=go SWAGGER_PACKAGE ?= github.com/go-swagger/go-swagger/cmd/swagger@v0.33.2 # renovate: datasource=go XGO_PACKAGE ?= src.techknowlogick.com/xgo@latest From 607d0310694976fd01994ade592f7cfe60a39e11 Mon Sep 17 00:00:00 2001 From: Gusted Date: Thu, 2 Apr 2026 16:54:46 +0200 Subject: [PATCH 15/82] [v15.0/forgejo]: chore: add modernizer linter (#11949) **Backport: !11936** - Go has a suite of small linters that helps with modernizing Go code by using newer functions and catching small mistakes, 
https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/modernize. - Enable this linter in golangci-lint. - There's also [`go fix`](https://go.dev/blog/gofix), which is not yet released as a linter in golangci-lint: https://github.com/golangci/golangci-lint/pull/6385 Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11949 Reviewed-by: Mathieu Fenniak Co-authored-by: Gusted Co-committed-by: Gusted --- .golangci.yml | 1 + cmd/cert.go | 4 +- cmd/dump.go | 37 ++++++-------- cmd/dump_repo.go | 4 +- models/actions/status.go | 9 ++-- models/activities/action.go | 7 +-- models/activities/notification_list.go | 20 ++------ models/activities/repo_activity.go | 5 +- models/auth/access_token_scope.go | 8 +-- models/auth/oauth2.go | 2 +- models/db/iterate.go | 2 +- models/db/name.go | 7 ++- models/dbfs/dbfile.go | 20 ++------ models/git/protected_branch.go | 2 +- models/gitea_migrations/test/tests.go | 2 +- models/gitea_migrations/v1_11/v111.go | 7 ++- models/gitea_migrations/v1_11/v115.go | 2 +- models/gitea_migrations/v1_20/v259.go | 2 +- models/issues/comment.go | 10 ++-- models/issues/comment_list.go | 35 +++---------- models/issues/issue_list.go | 50 ++++--------------- models/issues/issue_stats.go | 5 +- models/issues/issue_test.go | 7 +-- models/issues/issue_update.go | 2 +- models/issues/review_list.go | 2 +- models/issues/tracked_time.go | 5 +- models/perm/access/repo_permission.go | 12 +++-- models/project/column_test.go | 2 +- models/pull/review_state.go | 5 +- models/repo/repo.go | 13 +++-- models/repo/repo_list.go | 4 +- models/repo/repo_unit.go | 6 +-- models/repo/upload.go | 2 +- models/unit/unit.go | 14 +----- models/unittest/fixture_loader.go | 4 +- models/unittest/mock_http.go | 8 +-- models/unittest/reflection.go | 2 +- models/user/avatar.go | 2 +- models/user/email_address_test.go | 8 +-- models/user/moderation.go | 2 +- models/user/user.go | 4 +- models/user/user_test.go | 4 +- models/webhook/webhook.go | 2 +- modules/actions/workflows.go | 15 ++---- 
modules/auth/password/password.go | 2 +- modules/auth/password/password_test.go | 2 +- modules/auth/password/pwn/pwn.go | 2 +- modules/avatar/identicon/block.go | 4 +- modules/avatar/identicon/identicon.go | 2 +- modules/charset/charset.go | 2 +- modules/charset/charset_test.go | 2 +- modules/forgefed/actor.go | 8 +-- modules/forgefed/repository.go | 2 +- modules/git/commit.go | 4 +- modules/git/commit_info.go | 5 +- modules/git/foreachref/format.go | 8 +-- modules/git/hook.go | 8 +-- modules/git/last_commit_cache.go | 2 +- modules/git/log_name_status.go | 5 +- modules/git/notes.go | 11 ++-- modules/git/parse.go | 8 +-- modules/git/pushoptions/pushoptions.go | 2 +- modules/git/ref.go | 4 +- modules/git/repo.go | 4 +- modules/git/repo_attribute.go | 4 +- modules/git/repo_index.go | 2 +- modules/git/repo_tag.go | 6 +-- modules/git/tree.go | 2 +- modules/git/tree_entry.go | 2 +- modules/git/tree_test.go | 2 +- modules/hostmatcher/hostmatcher.go | 11 ++-- modules/httpcache/httpcache.go | 2 +- modules/httplib/serve.go | 5 +- modules/indexer/code/git.go | 4 +- modules/issue/template/template.go | 8 +-- modules/label/parser.go | 4 +- modules/log/event_format.go | 6 +-- modules/log/event_writer_conn_test.go | 6 +-- modules/log/flags.go | 2 +- modules/log/level_test.go | 6 +-- modules/markup/file_preview.go | 4 +- modules/markup/html.go | 17 +++---- modules/markup/markdown/markdown.go | 5 +- modules/markup/markdown/markdown_test.go | 4 +- .../markup/markdown/math/block_renderer.go | 2 +- modules/markup/markdown/meta_test.go | 8 +-- modules/markup/markdown/toc.go | 2 +- modules/markup/markdown/transform_heading.go | 2 +- modules/markup/renderer.go | 12 ++--- modules/packages/npm/creator.go | 4 +- modules/packages/npm/metadata.go | 2 +- modules/packages/nuget/symbol_extractor.go | 4 +- modules/packages/rubygems/marshal.go | 4 +- modules/packages/swift/metadata.go | 2 +- modules/private/serv.go | 10 ++-- modules/public/public.go | 2 +- modules/queue/base_levelqueue_common.go | 
2 +- modules/queue/base_redis.go | 2 +- modules/queue/base_test.go | 2 +- modules/queue/manager.go | 5 +- modules/queue/workergroup.go | 8 +-- modules/queue/workerqueue_test.go | 22 ++++---- modules/repository/init.go | 2 +- modules/setting/config.go | 7 +-- modules/setting/config_env.go | 25 +++++----- modules/setting/indexer.go | 2 +- modules/setting/log.go | 4 +- modules/setting/markup.go | 4 +- modules/setting/mirror.go | 6 +-- modules/setting/storage.go | 8 +-- modules/structs/action.go | 8 +-- modules/structs/issue.go | 2 +- modules/structs/repo.go | 2 +- modules/structs/user.go | 4 +- modules/structs/user_gpgkey.go | 4 +- modules/structs/user_key.go | 2 +- modules/templates/eval/eval_test.go | 2 +- modules/templates/htmlrenderer.go | 2 +- modules/templates/scopedtmpl/scopedtmpl.go | 13 ++--- modules/templates/util_render.go | 9 ++-- modules/test/logchecker.go | 4 +- modules/testlogger/testlogger.go | 2 +- modules/updatechecker/update_checker.go | 4 +- modules/util/remove.go | 6 +-- .../util/rotatingfilewriter/writer_test.go | 2 +- modules/util/timer_test.go | 24 ++++----- modules/util/truncate.go | 2 +- modules/util/util_test.go | 2 +- modules/validation/binding.go | 10 ++-- modules/validation/helpers.go | 8 +-- modules/validation/validatable.go | 7 ++- modules/web/handler.go | 8 +-- modules/web/middleware/binding.go | 6 +-- modules/web/middleware/data.go | 5 +- modules/web/route.go | 4 +- routers/api/actions/oidc.go | 3 +- routers/api/packages/cargo/cargo.go | 5 +- routers/api/packages/composer/composer.go | 5 +- routers/api/v1/repo/issue_dependency.go | 10 +--- routers/api/v1/repo/wiki.go | 10 +--- routers/install/install.go | 8 ++- routers/private/serv.go | 17 +++---- routers/web/admin/auths.go | 2 +- routers/web/admin/config.go | 2 +- routers/web/admin/notice.go | 5 +- routers/web/admin/packages.go | 5 +- routers/web/auth/oauth.go | 10 ++-- routers/web/org/members.go | 5 +- routers/web/org/projects.go | 5 +- routers/web/repo/branch.go | 5 +- 
routers/web/repo/commit.go | 10 +--- routers/web/repo/editor.go | 2 +- routers/web/repo/githttp.go | 8 +-- routers/web/repo/issue.go | 12 ++--- routers/web/repo/issue_label_test.go | 9 ++-- routers/web/repo/milestone.go | 5 +- routers/web/repo/packages.go | 5 +- routers/web/repo/projects.go | 5 +- routers/web/repo/repo.go | 2 +- routers/web/repo/setting/lfs.go | 10 +--- routers/web/repo/wiki.go | 5 +- routers/web/shared/actions/runners.go | 10 +--- routers/web/user/home.go | 7 +-- routers/web/user/notification.go | 10 +--- routers/web/user/package.go | 10 +--- services/actions/rerun.go | 13 +++-- services/auth/oauth2.go | 2 +- services/auth/source/oauth2/urlmapping.go | 10 ++-- .../auth/source/pam/source_authenticate.go | 6 +-- .../auth/source/smtp/source_authenticate.go | 12 ++--- services/context/api.go | 9 +--- services/context/context_model.go | 10 ++-- services/context/permission.go | 13 ++--- services/context/repo.go | 8 +-- services/context/upload/upload.go | 2 +- services/convert/activitypub_user_action.go | 7 +-- services/cron/tasks.go | 2 +- services/doctor/push_mirror_consistency.go | 2 +- services/forms/repo_form.go | 9 +--- services/gitdiff/csv.go | 10 ++-- services/gitdiff/gitdiff.go | 5 +- services/gitdiff/gitdiff_test.go | 6 +-- services/gitdiff/highlightdiff_test.go | 2 +- services/issue/issue.go | 8 ++- services/issue/milestone.go | 5 +- services/lfs/locks.go | 10 +--- services/lfs/server.go | 5 +- services/mailer/mailer_test.go | 4 +- services/migrations/gitea_uploader_test.go | 4 +- services/migrations/github.go | 4 +- services/packages/alt/repository.go | 15 +++--- services/packages/arch/repository.go | 4 +- services/pull/merge.go | 5 +- services/repository/adopt_test.go | 2 +- .../repository/commitstatus/commitstatus.go | 2 +- services/repository/create.go | 4 +- services/repository/create_test.go | 2 +- services/repository/files/file.go | 2 +- services/repository/files/temp_repo.go | 2 +- services/repository/files/tree.go | 6 +-- 
services/repository/files/update.go | 17 ++----- services/repository/gitgraph/graph_models.go | 4 +- services/repository/gitgraph/graph_test.go | 29 +++-------- services/repository/gitgraph/parser.go | 10 ++-- services/webhook/deliver_test.go | 2 - services/webhook/dingtalk.go | 10 ++-- services/webhook/discord.go | 8 +-- services/webhook/feishu.go | 11 ++-- services/webhook/matrix.go | 9 ++-- services/webhook/msteams.go | 8 +-- services/webhook/slack.go | 8 +-- services/webhook/telegram.go | 10 ++-- services/webhook/wechatwork.go | 8 +-- services/wiki/wiki_path.go | 4 +- services/wiki/wiki_test.go | 4 +- tests/integration/actions_trigger_test.go | 2 +- ...pi_activitypub_person_inbox_follow_test.go | 8 +-- ...ivitypub_person_inbox_useractivity_test.go | 4 +- .../api_activitypub_repository_test.go | 12 ++--- .../api_helper_for_declarative_test.go | 2 +- tests/integration/api_issue_test.go | 16 +++--- tests/integration/api_packages_alt_test.go | 38 +++++++------- tests/integration/api_packages_chef_test.go | 8 +-- .../api_packages_container_test.go | 2 +- tests/integration/api_packages_maven_test.go | 2 +- tests/integration/api_repo_topic_test.go | 2 +- tests/integration/cmd_forgejo_actions_test.go | 2 +- .../git_helper_for_declarative_test.go | 4 +- tests/integration/git_push_test.go | 12 ++--- tests/integration/git_test.go | 13 +++-- tests/integration/integration_test.go | 4 +- tests/integration/issue_comment_test.go | 9 +--- tests/integration/issue_test.go | 21 +++----- tests/integration/org_test.go | 2 +- tests/integration/project_test.go | 2 +- tests/integration/pull_merge_test.go | 5 +- tests/integration/quota_use_test.go | 5 +- tests/integration/release_test.go | 2 +- tests/integration/repo_commits_test.go | 2 +- tests/integration/repo_flags_test.go | 6 +-- tests/integration/repo_topic_test.go | 2 +- tests/integration/repo_webhook_test.go | 9 ++-- tests/integration/signing_git_test.go | 2 +- tests/integration/signup_test.go | 6 +-- 
tests/integration/ssh_key_test.go | 2 +- tests/integration/user_test.go | 2 +- tests/test_utils.go | 32 ++++++------ 247 files changed, 650 insertions(+), 1001 deletions(-) diff --git a/.golangci.yml b/.golangci.yml index de3f3a9255..fea9195be2 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -15,6 +15,7 @@ linters: - govet - importas - ineffassign + - modernize - nakedret - nolintlint - revive diff --git a/cmd/cert.go b/cmd/cert.go index baadcbda85..516ac4ce84 100644 --- a/cmd/cert.go +++ b/cmd/cert.go @@ -150,8 +150,8 @@ func runCert(ctx context.Context, c *cli.Command) error { BasicConstraintsValid: true, } - hosts := strings.Split(c.String("host"), ",") - for _, h := range hosts { + hosts := strings.SplitSeq(c.String("host"), ",") + for h := range hosts { if ip := net.ParseIP(h); ip != nil { template.IPAddresses = append(template.IPAddresses, ip) } else { diff --git a/cmd/dump.go b/cmd/dump.go index b94277e529..25459c7731 100644 --- a/cmd/dump.go +++ b/cmd/dump.go @@ -12,6 +12,7 @@ import ( "os" "path" "path/filepath" + "slices" "strings" "sync" "time" @@ -83,11 +84,9 @@ func (o outputType) Join() string { } func (o *outputType) Set(value string) error { - for _, enum := range o.Enum { - if enum == value { - o.selected = value - return nil - } + if slices.Contains(o.Enum, value) { + o.selected = value + return nil } return fmt.Errorf("allowed values are %s", o.Join()) @@ -250,8 +249,8 @@ func runDump(stdCtx context.Context, ctx *cli.Command) error { setupConsoleLogger(log.FATAL, log.CanColorStderr, os.Stderr) } else { for _, suffix := range outputTypeEnum.Enum { - if strings.HasSuffix(fileName, "."+suffix) { - fileName = strings.TrimSuffix(fileName, "."+suffix) + if before, ok := strings.CutSuffix(fileName, "."+suffix); ok { + fileName = before break } } @@ -330,14 +329,12 @@ func runDump(stdCtx context.Context, ctx *cli.Command) error { go dumpDatabase(ctx, archiveJobs, &wg, verbose) if len(setting.CustomConf) > 0 { - wg.Add(1) - go func() { - defer wg.Done() 
+ wg.Go(func() { log.Info("Adding custom configuration file from %s", setting.CustomConf) if err := addFile(archiveJobs, "app.ini", setting.CustomConf, verbose); err != nil { fatal("Failed to include specified app.ini: %v", err) } - }() + }) } if ctx.IsSet("skip-custom-dir") && ctx.Bool("skip-custom-dir") { @@ -361,15 +358,13 @@ func runDump(stdCtx context.Context, ctx *cli.Command) error { if ctx.IsSet("skip-attachment-data") && ctx.Bool("skip-attachment-data") { log.Info("Skipping attachment data") } else { - wg.Add(1) - go func() { - defer wg.Done() + wg.Go(func() { if err := storage.Attachments.IterateObjects("", func(objPath string, object storage.Object) error { return addObject(archiveJobs, object, path.Join("data", "attachments", objPath), verbose) }); err != nil { fatal("Failed to dump attachments: %v", err) } - }() + }) } if ctx.IsSet("skip-package-data") && ctx.Bool("skip-package-data") { @@ -377,15 +372,13 @@ func runDump(stdCtx context.Context, ctx *cli.Command) error { } else if !setting.Packages.Enabled { log.Info("Package registry not enabled - skipping") } else { - wg.Add(1) - go func() { - defer wg.Done() + wg.Go(func() { if err := storage.Packages.IterateObjects("", func(objPath string, object storage.Object) error { return addObject(archiveJobs, object, path.Join("data", "packages", objPath), verbose) }); err != nil { fatal("Failed to dump packages: %v", err) } - }() + }) } // Doesn't check if LogRootPath exists before processing --skip-log intentionally, @@ -399,13 +392,11 @@ func runDump(stdCtx context.Context, ctx *cli.Command) error { log.Error("Failed to check if %s exists: %v", setting.Log.RootPath, err) } if isExist { - wg.Add(1) - go func() { - defer wg.Done() + wg.Go(func() { if err := addRecursiveExclude(archiveJobs, "log", setting.Log.RootPath, []string{absFileName}, verbose); err != nil { fatal("Failed to include log: %v", err) } - }() + }) } } diff --git a/cmd/dump_repo.go b/cmd/dump_repo.go index 8e0ef0311f..60fffe1226 100644 --- 
a/cmd/dump_repo.go +++ b/cmd/dump_repo.go @@ -143,8 +143,8 @@ func runDumpRepository(stdCtx context.Context, ctx *cli.Command) error { opts.PullRequests = true opts.ReleaseAssets = true } else { - units := strings.Split(ctx.String("units"), ",") - for _, unit := range units { + units := strings.SplitSeq(ctx.String("units"), ",") + for unit := range units { switch strings.ToLower(strings.TrimSpace(unit)) { case "": continue diff --git a/models/actions/status.go b/models/actions/status.go index 1f6aa5e890..7f06b6231b 100644 --- a/models/actions/status.go +++ b/models/actions/status.go @@ -4,6 +4,8 @@ package actions import ( + "slices" + "forgejo.org/modules/translation" runnerv1 "code.forgejo.org/forgejo/actions-proto/runner/v1" @@ -107,12 +109,7 @@ func (s Status) IsBlocked() bool { // In returns whether s is one of the given statuses func (s Status) In(statuses ...Status) bool { - for _, v := range statuses { - if s == v { - return true - } - } - return false + return slices.Contains(statuses, s) } func (s Status) AsResult() runnerv1.Result { diff --git a/models/activities/action.go b/models/activities/action.go index 8fd7709e81..6b3fabfae0 100644 --- a/models/activities/action.go +++ b/models/activities/action.go @@ -132,12 +132,7 @@ func (at ActionType) String() string { } func (at ActionType) InActions(actions ...string) bool { - for _, action := range actions { - if action == at.String() { - return true - } - } - return false + return slices.Contains(actions, at.String()) } // Action represents user operation type and other information to diff --git a/models/activities/notification_list.go b/models/activities/notification_list.go index 02206b1d6b..bf6356021e 100644 --- a/models/activities/notification_list.go +++ b/models/activities/notification_list.go @@ -213,10 +213,7 @@ func (nl NotificationList) LoadRepos(ctx context.Context) (repo_model.Repository repos := make(map[int64]*repo_model.Repository, len(repoIDs)) left := len(repoIDs) for left > 0 { - limit := 
db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("id", repoIDs[:limit]). Rows(new(repo_model.Repository)) @@ -287,10 +284,7 @@ func (nl NotificationList) LoadIssues(ctx context.Context) ([]int, error) { issues := make(map[int64]*issues_model.Issue, len(issueIDs)) left := len(issueIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("id", issueIDs[:limit]). Rows(new(issues_model.Issue)) @@ -382,10 +376,7 @@ func (nl NotificationList) LoadUsers(ctx context.Context) ([]int, error) { users := make(map[int64]*user_model.User, len(userIDs)) left := len(userIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("id", userIDs[:limit]). Rows(new(user_model.User)) @@ -433,10 +424,7 @@ func (nl NotificationList) LoadComments(ctx context.Context) ([]int, error) { comments := make(map[int64]*issues_model.Comment, len(commentIDs)) left := len(commentIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("id", commentIDs[:limit]). 
Rows(new(issues_model.Comment)) diff --git a/models/activities/repo_activity.go b/models/activities/repo_activity.go index 3d15c22e19..0b9522be1b 100644 --- a/models/activities/repo_activity.go +++ b/models/activities/repo_activity.go @@ -138,10 +138,7 @@ func GetActivityStatsTopAuthors(ctx context.Context, repo *repo_model.Repository return v[i].Commits > v[j].Commits }) - cnt := count - if cnt > len(v) { - cnt = len(v) - } + cnt := min(count, len(v)) return v[:cnt], nil } diff --git a/models/auth/access_token_scope.go b/models/auth/access_token_scope.go index d14838cf02..6dc143b122 100644 --- a/models/auth/access_token_scope.go +++ b/models/auth/access_token_scope.go @@ -5,6 +5,7 @@ package auth import ( "fmt" + "slices" "strings" "forgejo.org/models/perm" @@ -204,12 +205,7 @@ func GetRequiredScopes(level AccessTokenScopeLevel, scopeCategories ...AccessTok // ContainsCategory checks if a list of categories contains a specific category func ContainsCategory(categories []AccessTokenScopeCategory, category AccessTokenScopeCategory) bool { - for _, c := range categories { - if c == category { - return true - } - } - return false + return slices.Contains(categories, category) } // GetScopeLevelFromAccessMode converts permission access mode to scope level diff --git a/models/auth/oauth2.go b/models/auth/oauth2.go index fa68197cf0..9acd9a46b0 100644 --- a/models/auth/oauth2.go +++ b/models/auth/oauth2.go @@ -505,7 +505,7 @@ func (grant *OAuth2Grant) IncreaseCounter(ctx context.Context) error { // ScopeContains returns true if the grant scope contains the specified scope func (grant *OAuth2Grant) ScopeContains(scope string) bool { - for _, currentScope := range strings.Split(grant.Scope, " ") { + for currentScope := range strings.SplitSeq(grant.Scope, " ") { if scope == currentScope { return true } diff --git a/models/db/iterate.go b/models/db/iterate.go index d2315cb12c..c56871d503 100644 --- a/models/db/iterate.go +++ b/models/db/iterate.go @@ -80,7 +80,7 @@ func 
Iterate[Bean any](ctx context.Context, cond builder.Cond, f func(ctx contex func extractFieldValue(bean any, fieldName string) any { v := reflect.ValueOf(bean) - if v.Kind() == reflect.Ptr { + if v.Kind() == reflect.Pointer { v = v.Elem() } field := v.FieldByName(fieldName) diff --git a/models/db/name.go b/models/db/name.go index d456f49d9c..efd1c2b5f3 100644 --- a/models/db/name.go +++ b/models/db/name.go @@ -6,6 +6,7 @@ package db import ( "fmt" "regexp" + "slices" "strings" "unicode/utf8" @@ -114,10 +115,8 @@ func IsUsableName(names, patterns []string, name string) error { return ErrNameEmpty } - for i := range names { - if name == names[i] { - return ErrNameReserved{name} - } + if slices.Contains(names, name) { + return ErrNameReserved{name} } for _, pat := range patterns { diff --git a/models/dbfs/dbfile.go b/models/dbfs/dbfile.go index 8cd64177dd..7e7c58cc6c 100644 --- a/models/dbfs/dbfile.go +++ b/models/dbfs/dbfile.go @@ -46,10 +46,7 @@ func (f *file) readAt(fileMeta *DbfsMeta, offset int64, p []byte) (n int, err er blobPos := int(offset % f.blockSize) blobOffset := offset - int64(blobPos) blobRemaining := int(f.blockSize) - blobPos - needRead := len(p) - if needRead > blobRemaining { - needRead = blobRemaining - } + needRead := min(len(p), blobRemaining) if blobOffset+int64(blobPos)+int64(needRead) > fileMeta.FileSize { needRead = int(fileMeta.FileSize - blobOffset - int64(blobPos)) } @@ -66,14 +63,8 @@ func (f *file) readAt(fileMeta *DbfsMeta, offset int64, p []byte) (n int, err er blobData = nil } - canCopy := len(blobData) - blobPos - if canCopy <= 0 { - canCopy = 0 - } - realRead := needRead - if realRead > canCopy { - realRead = canCopy - } + canCopy := max(len(blobData)-blobPos, 0) + realRead := min(needRead, canCopy) if realRead > 0 { copy(p[:realRead], fileData.BlobData[blobPos:blobPos+realRead]) } @@ -113,10 +104,7 @@ func (f *file) Write(p []byte) (n int, err error) { blobPos := int(f.offset % f.blockSize) blobOffset := f.offset - int64(blobPos) 
blobRemaining := int(f.blockSize) - blobPos - needWrite := len(p) - if needWrite > blobRemaining { - needWrite = blobRemaining - } + needWrite := min(len(p), blobRemaining) buf := make([]byte, f.blockSize) readBytes, err := f.readAt(fileMeta, blobOffset, buf) if err != nil && !errors.Is(err, io.EOF) { diff --git a/models/git/protected_branch.go b/models/git/protected_branch.go index c1eb750230..3eaada2fdd 100644 --- a/models/git/protected_branch.go +++ b/models/git/protected_branch.go @@ -213,7 +213,7 @@ func (protectBranch *ProtectedBranch) GetUnprotectedFilePatterns() []glob.Glob { func getFilePatterns(filePatterns string) []glob.Glob { extarr := make([]glob.Glob, 0, 10) - for _, expr := range strings.Split(strings.ToLower(filePatterns), ";") { + for expr := range strings.SplitSeq(strings.ToLower(filePatterns), ";") { expr = strings.TrimSpace(expr) if expr != "" { if g, err := glob.Compile(expr, '.', '/'); err != nil { diff --git a/models/gitea_migrations/test/tests.go b/models/gitea_migrations/test/tests.go index fc54b65626..086127a2e8 100644 --- a/models/gitea_migrations/test/tests.go +++ b/models/gitea_migrations/test/tests.go @@ -265,7 +265,7 @@ func deleteDB() error { func removeAllWithRetry(dir string) error { var err error - for i := 0; i < 20; i++ { + for range 20 { err = os.RemoveAll(dir) if err == nil { break diff --git a/models/gitea_migrations/v1_11/v111.go b/models/gitea_migrations/v1_11/v111.go index fcd2ee7be3..59ca416af0 100644 --- a/models/gitea_migrations/v1_11/v111.go +++ b/models/gitea_migrations/v1_11/v111.go @@ -5,6 +5,7 @@ package v1_11 import ( "fmt" + "slices" "xorm.io/xorm" ) @@ -345,10 +346,8 @@ func AddBranchProtectionCanPushAndEnableWhitelist(x *xorm.Engine) error { } return AccessModeWrite <= perm.UnitsMode[UnitTypeCode], nil } - for _, id := range protectedBranch.ApprovalsWhitelistUserIDs { - if id == reviewer.ID { - return true, nil - } + if slices.Contains(protectedBranch.ApprovalsWhitelistUserIDs, reviewer.ID) { + return true, 
nil } // isUserInTeams diff --git a/models/gitea_migrations/v1_11/v115.go b/models/gitea_migrations/v1_11/v115.go index 65094df93d..84364e310b 100644 --- a/models/gitea_migrations/v1_11/v115.go +++ b/models/gitea_migrations/v1_11/v115.go @@ -146,7 +146,7 @@ func copyOldAvatarToNewLocation(userID int64, oldAvatar string) (string, error) return "", fmt.Errorf("io.ReadAll: %w", err) } - newAvatar := fmt.Sprintf("%x", md5.Sum([]byte(fmt.Sprintf("%d-%x", userID, md5.Sum(data))))) + newAvatar := fmt.Sprintf("%x", md5.Sum(fmt.Appendf(nil, "%d-%x", userID, md5.Sum(data)))) if newAvatar == oldAvatar { return newAvatar, nil } diff --git a/models/gitea_migrations/v1_20/v259.go b/models/gitea_migrations/v1_20/v259.go index 9b2b68263e..1ae8b2e30f 100644 --- a/models/gitea_migrations/v1_20/v259.go +++ b/models/gitea_migrations/v1_20/v259.go @@ -329,7 +329,7 @@ func ConvertScopedAccessTokens(x *xorm.Engine) error { for _, token := range tokens { var scopes []string allNewScopesMap := make(map[AccessTokenScope]bool) - for _, oldScope := range strings.Split(token.Scope, ",") { + for oldScope := range strings.SplitSeq(token.Scope, ",") { if newScopes, exists := accessTokenScopeMap[OldAccessTokenScope(oldScope)]; exists { for _, newScope := range newScopes { allNewScopesMap[newScope] = true diff --git a/models/issues/comment.go b/models/issues/comment.go index 325fcbe30b..fd0f595945 100644 --- a/models/issues/comment.go +++ b/models/issues/comment.go @@ -9,6 +9,7 @@ import ( "context" "fmt" "html/template" + "slices" "strconv" "unicode/utf8" @@ -198,12 +199,7 @@ func (t CommentType) HasMailReplySupport() bool { } func (t CommentType) CountedAsConversation() bool { - for _, ct := range ConversationCountedCommentType() { - if t == ct { - return true - } - } - return false + return slices.Contains(ConversationCountedCommentType(), t) } // ConversationCountedCommentType returns the comment types that are counted as a conversation @@ -619,7 +615,7 @@ func (c *Comment) 
UpdateAttachments(ctx context.Context, uuids []string) error { if err != nil { return fmt.Errorf("FindRepoAttachmentsByUUID[uuids=%q,repoID=%d]: %w", uuids, c.Issue.RepoID, err) } - for i := 0; i < len(attachments); i++ { + for i := range attachments { attachments[i].IssueID = c.IssueID attachments[i].CommentID = c.ID if err := repo_model.UpdateAttachment(ctx, attachments[i]); err != nil { diff --git a/models/issues/comment_list.go b/models/issues/comment_list.go index 3996dcb29a..53903bea31 100644 --- a/models/issues/comment_list.go +++ b/models/issues/comment_list.go @@ -54,10 +54,7 @@ func (comments CommentList) loadLabels(ctx context.Context) error { commentLabels := make(map[int64]*Label, len(labelIDs)) left := len(labelIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("id", labelIDs[:limit]). Rows(new(Label)) @@ -104,10 +101,7 @@ func (comments CommentList) loadMilestones(ctx context.Context) error { milestones := make(map[int64]*Milestone, len(milestoneIDs)) left := len(milestoneIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) err := db.GetEngine(ctx). In("id", milestoneIDs[:limit]). Find(&milestones) @@ -143,10 +137,7 @@ func (comments CommentList) loadOldMilestones(ctx context.Context) error { milestones := make(map[int64]*Milestone, len(milestoneIDs)) left := len(milestoneIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) err := db.GetEngine(ctx). In("id", milestoneIDs[:limit]). 
Find(&milestones) @@ -178,10 +169,7 @@ func (comments CommentList) loadAssignees(ctx context.Context) error { assignees := make(map[int64]*user_model.User, len(assigneeIDs)) left := len(assigneeIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("id", assigneeIDs[:limit]). Rows(new(user_model.User)) @@ -246,10 +234,7 @@ func (comments CommentList) LoadIssues(ctx context.Context) error { issues := make(map[int64]*Issue, len(issueIDs)) left := len(issueIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("id", issueIDs[:limit]). Rows(new(Issue)) @@ -300,10 +285,7 @@ func (comments CommentList) loadDependentIssues(ctx context.Context) error { issues := make(map[int64]*Issue, len(issueIDs)) left := len(issueIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := e. In("id", issueIDs[:limit]). Rows(new(Issue)) @@ -379,10 +361,7 @@ func (comments CommentList) LoadAttachments(ctx context.Context) (err error) { commentsIDs := comments.getAttachmentCommentIDs() left := len(commentsIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("comment_id", commentsIDs[:limit]). 
Rows(new(repo_model.Attachment)) diff --git a/models/issues/issue_list.go b/models/issues/issue_list.go index 5a02baa428..34cfe35475 100644 --- a/models/issues/issue_list.go +++ b/models/issues/issue_list.go @@ -43,10 +43,7 @@ func (issues IssueList) LoadRepositories(ctx context.Context) (repo_model.Reposi repoMaps := make(map[int64]*repo_model.Repository, len(repoIDs)) left := len(repoIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) err := db.GetEngine(ctx). In("id", repoIDs[:limit]). Find(&repoMaps) @@ -99,10 +96,7 @@ func getPostersByIDs(ctx context.Context, posterIDs []int64) (map[int64]*user_mo posterMaps := make(map[int64]*user_model.User, len(posterIDs)) left := len(posterIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) err := db.GetEngine(ctx). In("id", posterIDs[:limit]). Find(&posterMaps) @@ -137,10 +131,7 @@ func (issues IssueList) LoadLabels(ctx context.Context) error { issueIDs := issues.getIssueIDs() left := len(issueIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx).Table("label"). Join("LEFT", "issue_label", "issue_label.label_id = label.id"). In("issue_label.issue_id", issueIDs[:limit]). @@ -192,10 +183,7 @@ func (issues IssueList) LoadMilestones(ctx context.Context) error { milestoneMaps := make(map[int64]*Milestone, len(milestoneIDs)) left := len(milestoneIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) err := db.GetEngine(ctx). In("id", milestoneIDs[:limit]). 
Find(&milestoneMaps) @@ -224,10 +212,7 @@ func (issues IssueList) LoadProjects(ctx context.Context) error { } for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) projects := make([]*projectWithIssueID, 0, limit) err := db.GetEngine(ctx). @@ -266,10 +251,7 @@ func (issues IssueList) LoadAssignees(ctx context.Context) error { issueIDs := issues.getIssueIDs() left := len(issueIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx).Table("issue_assignees"). Join("INNER", "`user`", "`user`.id = `issue_assignees`.assignee_id"). In("`issue_assignees`.issue_id", issueIDs[:limit]).OrderBy(user_model.GetOrderByName()). @@ -327,10 +309,7 @@ func (issues IssueList) LoadPullRequests(ctx context.Context) error { pullRequestMaps := make(map[int64]*PullRequest, len(issuesIDs)) left := len(issuesIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("issue_id", issuesIDs[:limit]). Rows(new(PullRequest)) @@ -375,10 +354,7 @@ func (issues IssueList) LoadAttachments(ctx context.Context) (err error) { issuesIDs := issues.getIssueIDs() left := len(issuesIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx). In("issue_id", issuesIDs[:limit]). Rows(new(repo_model.Attachment)) @@ -420,10 +396,7 @@ func (issues IssueList) loadComments(ctx context.Context, cond builder.Cond) (er issuesIDs := issues.getIssueIDs() left := len(issuesIDs) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) rows, err := db.GetEngine(ctx).Table("comment"). Join("INNER", "issue", "issue.id = comment.issue_id"). In("issue.id", issuesIDs[:limit]). 
@@ -486,10 +459,7 @@ func (issues IssueList) loadTotalTrackedTimes(ctx context.Context) (err error) { left := len(ids) for left > 0 { - limit := db.DefaultMaxInSize - if left < limit { - limit = left - } + limit := min(left, db.DefaultMaxInSize) // select issue_id, sum(time) from tracked_time where issue_id in () group by issue_id rows, err := db.GetEngine(ctx).Table("tracked_time"). diff --git a/models/issues/issue_stats.go b/models/issues/issue_stats.go index 2fd2641d92..03660803a4 100644 --- a/models/issues/issue_stats.go +++ b/models/issues/issue_stats.go @@ -94,10 +94,7 @@ func GetIssueStats(ctx context.Context, opts *IssuesOptions) (*IssueStats, error // ids in a temporary table and join from them. accum := &IssueStats{} for i := 0; i < len(opts.IssueIDs); { - chunk := i + MaxQueryParameters - if chunk > len(opts.IssueIDs) { - chunk = len(opts.IssueIDs) - } + chunk := min(i+MaxQueryParameters, len(opts.IssueIDs)) stats, err := getIssueStatsChunk(ctx, opts, opts.IssueIDs[i:chunk]) if err != nil { return nil, err diff --git a/models/issues/issue_test.go b/models/issues/issue_test.go index e9617548e9..0c5da6a2aa 100644 --- a/models/issues/issue_test.go +++ b/models/issues/issue_test.go @@ -5,6 +5,7 @@ package issues_test import ( "fmt" + "slices" "sort" "sync" "testing" @@ -311,7 +312,7 @@ func TestIssue_ResolveMentions(t *testing.T) { for i, user := range resolved { ids[i] = user.ID } - sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] }) + slices.Sort(ids) assert.Equal(t, expected, ids) } @@ -338,7 +339,7 @@ func TestResourceIndex(t *testing.T) { require.NoError(t, err) var wg sync.WaitGroup - for i := 0; i < 100; i++ { + for i := range 100 { wg.Add(1) t.Run(fmt.Sprintf("issue %d", i+1), func(t *testing.T) { t.Parallel() @@ -369,7 +370,7 @@ func TestCorrectIssueStats(t *testing.T) { issueAmount := issues_model.MaxQueryParameters + 10 var wg sync.WaitGroup - for i := 0; i < issueAmount; i++ { + for i := range issueAmount { wg.Add(1) go func(i int) { 
testInsertIssue(t, fmt.Sprintf("Issue %d", i+1), "Bugs are nasty", 0) diff --git a/models/issues/issue_update.go b/models/issues/issue_update.go index 22e6fcb8d4..35f69e3a0b 100644 --- a/models/issues/issue_update.go +++ b/models/issues/issue_update.go @@ -244,7 +244,7 @@ func UpdateIssueAttachments(ctx context.Context, issue *Issue, uuids []string) ( if err != nil { return fmt.Errorf("FindRepoAttachmentsByUUID[uuids=%q,repoID=%d]: %w", uuids, issue.RepoID, err) } - for i := 0; i < len(attachments); i++ { + for i := range attachments { attachments[i].IssueID = issue.ID if err := repo_model.UpdateAttachment(ctx, attachments[i]); err != nil { return fmt.Errorf("update attachment [id: %d]: %w", attachments[i].ID, err) diff --git a/models/issues/review_list.go b/models/issues/review_list.go index 04c08bc5c4..878ceac9ce 100644 --- a/models/issues/review_list.go +++ b/models/issues/review_list.go @@ -20,7 +20,7 @@ type ReviewList []*Review // LoadReviewers loads reviewers func (reviews ReviewList) LoadReviewers(ctx context.Context) error { reviewerIDs := make([]int64, len(reviews)) - for i := 0; i < len(reviews); i++ { + for i := range reviews { reviewerIDs[i] = reviews[i].ReviewerID } reviewers, err := user_model.GetPossibleUserByIDs(ctx, reviewerIDs) diff --git a/models/issues/tracked_time.go b/models/issues/tracked_time.go index 2f050759d2..54173681bd 100644 --- a/models/issues/tracked_time.go +++ b/models/issues/tracked_time.go @@ -350,10 +350,7 @@ func GetIssueTotalTrackedTime(ctx context.Context, opts *IssuesOptions, isClosed // we get the statistics in smaller chunks and get accumulates var accum int64 for i := 0; i < len(opts.IssueIDs); { - chunk := i + MaxQueryParameters - if chunk > len(opts.IssueIDs) { - chunk = len(opts.IssueIDs) - } + chunk := min(i+MaxQueryParameters, len(opts.IssueIDs)) time, err := getIssueTotalTrackedTimeChunk(ctx, opts, isClosed, opts.IssueIDs[i:chunk]) if err != nil { return 0, err diff --git a/models/perm/access/repo_permission.go 
b/models/perm/access/repo_permission.go index 22639d1e42..fd1b93c867 100644 --- a/models/perm/access/repo_permission.go +++ b/models/perm/access/repo_permission.go @@ -6,6 +6,7 @@ package access import ( "context" "fmt" + "strings" actions_model "forgejo.org/models/actions" "forgejo.org/models/db" @@ -115,7 +116,8 @@ func (p *Permission) CanWriteIssuesOrPulls(isPull bool) bool { } func (p *Permission) LogString() string { - format := "") + return fmt.Sprintf(format.String(), args...) } func GetActionRepoPermission(ctx context.Context, repo *repo_model.Repository, task *actions_model.ActionTask) (Permission, error) { diff --git a/models/project/column_test.go b/models/project/column_test.go index aef7a6f9d4..2f4cc79367 100644 --- a/models/project/column_test.go +++ b/models/project/column_test.go @@ -164,7 +164,7 @@ func Test_NewColumn(t *testing.T) { require.NoError(t, err) assert.Len(t, columns, 3) - for i := 0; i < maxProjectColumns-3; i++ { + for i := range maxProjectColumns - 3 { err := NewColumn(db.DefaultContext, &Column{ Title: fmt.Sprintf("column-%d", i+4), ProjectID: project1.ID, diff --git a/models/pull/review_state.go b/models/pull/review_state.go index 2702d5d5a1..3fc3ab65c2 100644 --- a/models/pull/review_state.go +++ b/models/pull/review_state.go @@ -6,6 +6,7 @@ package pull import ( "context" "fmt" + "maps" "forgejo.org/models/db" "forgejo.org/modules/log" @@ -100,9 +101,7 @@ func mergeFiles(oldFiles, newFiles map[string]ViewedState) map[string]ViewedStat return oldFiles } - for file, viewed := range newFiles { - oldFiles[file] = viewed - } + maps.Copy(oldFiles, newFiles) return oldFiles } diff --git a/models/repo/repo.go b/models/repo/repo.go index cdb30aa1a9..0c45f483e4 100644 --- a/models/repo/repo.go +++ b/models/repo/repo.go @@ -9,6 +9,7 @@ import ( "errors" "fmt" "html/template" + "maps" "net" "net/url" "path/filepath" @@ -543,9 +544,7 @@ func (repo *Repository) ComposeMetas(ctx context.Context) map[string]string { func (repo *Repository) 
ComposeDocumentMetas(ctx context.Context) map[string]string { if len(repo.DocumentRenderingMetas) == 0 { metas := map[string]string{} - for k, v := range repo.ComposeMetas(ctx) { - metas[k] = v - } + maps.Copy(metas, repo.ComposeMetas(ctx)) metas["mode"] = "document" repo.DocumentRenderingMetas = metas } @@ -786,8 +785,8 @@ func GetRepositoryByName(ctx context.Context, ownerID int64, name string) (*Repo // getRepositoryURLPathSegments returns segments (owner, reponame) extracted from a url func getRepositoryURLPathSegments(repoURL string) []string { - if strings.HasPrefix(repoURL, setting.AppURL) { - return strings.Split(strings.TrimPrefix(repoURL, setting.AppURL), "/") + if after, ok := strings.CutPrefix(repoURL, setting.AppURL); ok { + return strings.Split(after, "/") } sshURLVariants := [4]string{ @@ -798,8 +797,8 @@ func getRepositoryURLPathSegments(repoURL string) []string { } for _, sshURL := range sshURLVariants { - if strings.HasPrefix(repoURL, sshURL) { - return strings.Split(strings.TrimPrefix(repoURL, sshURL), "/") + if after, ok := strings.CutPrefix(repoURL, sshURL); ok { + return strings.Split(after, "/") } } diff --git a/models/repo/repo_list.go b/models/repo/repo_list.go index 732d7b627c..daa60c81e9 100644 --- a/models/repo/repo_list.go +++ b/models/repo/repo_list.go @@ -401,7 +401,7 @@ func SearchRepositoryCondition(opts *SearchRepoOptions) builder.Cond { if opts.Keyword != "" { // separate keyword subQueryCond := builder.NewCond() - for _, v := range strings.Split(opts.Keyword, ",") { + for v := range strings.SplitSeq(opts.Keyword, ",") { if opts.TopicOnly { subQueryCond = subQueryCond.Or(builder.Eq{"topic.name": strings.ToLower(v)}) } else { @@ -416,7 +416,7 @@ func SearchRepositoryCondition(opts *SearchRepoOptions) builder.Cond { keywordCond := builder.In("id", subQuery) if !opts.TopicOnly { likes := builder.NewCond() - for _, v := range strings.Split(opts.Keyword, ",") { + for v := range strings.SplitSeq(opts.Keyword, ",") { likes = 
likes.Or(builder.Like{"lower_name", strings.ToLower(v)}) // If the string looks like "org/repo", match against that pattern too diff --git a/models/repo/repo_unit.go b/models/repo/repo_unit.go index aa6f2fa0ae..3db6dc95e8 100644 --- a/models/repo/repo_unit.go +++ b/models/repo/repo_unit.go @@ -237,10 +237,8 @@ func (cfg *ActionsConfig) IsWorkflowDisabled(file string) bool { } func (cfg *ActionsConfig) DisableWorkflow(file string) { - for _, workflow := range cfg.DisabledWorkflows { - if file == workflow { - return - } + if slices.Contains(cfg.DisabledWorkflows, file) { + return } cfg.DisabledWorkflows = append(cfg.DisabledWorkflows, file) diff --git a/models/repo/upload.go b/models/repo/upload.go index a213cb1986..67b5409650 100644 --- a/models/repo/upload.go +++ b/models/repo/upload.go @@ -117,7 +117,7 @@ func DeleteUploads(ctx context.Context, uploads ...*Upload) (err error) { defer committer.Close() ids := make([]int64, len(uploads)) - for i := 0; i < len(uploads); i++ { + for i := range uploads { ids[i] = uploads[i].ID } if err = db.DeleteByIDs[Upload](ctx, ids...); err != nil { diff --git a/models/unit/unit.go b/models/unit/unit.go index 434e5f0acc..2a31c804aa 100644 --- a/models/unit/unit.go +++ b/models/unit/unit.go @@ -248,22 +248,12 @@ func LoadUnitConfig() error { // UnitGlobalDisabled checks if unit type is global disabled func (u Type) UnitGlobalDisabled() bool { - for _, ud := range DisabledRepoUnitsGet() { - if u == ud { - return true - } - } - return false + return slices.Contains(DisabledRepoUnitsGet(), u) } // CanBeDefault checks if the unit type can be a default repo unit func (u *Type) CanBeDefault() bool { - for _, nadU := range NotAllowedDefaultRepoUnits { - if *u == nadU { - return false - } - } - return true + return !slices.Contains(NotAllowedDefaultRepoUnits, *u) } // Unit is a section of one repository diff --git a/models/unittest/fixture_loader.go b/models/unittest/fixture_loader.go index 5aea06550c..3cf2efdced 100644 --- 
a/models/unittest/fixture_loader.go +++ b/models/unittest/fixture_loader.go @@ -151,8 +151,8 @@ func (l *loader) buildFixtureFile(fixturePath string) (*fixtureFile, error) { switch v := value.(type) { case string: // Try to decode hex. - if strings.HasPrefix(v, "0x") { - value, err = hex.DecodeString(strings.TrimPrefix(v, "0x")) + if after, ok := strings.CutPrefix(v, "0x"); ok { + value, err = hex.DecodeString(after) if err != nil { return nil, err } diff --git a/models/unittest/mock_http.go b/models/unittest/mock_http.go index b8413104b3..5e420533d8 100644 --- a/models/unittest/mock_http.go +++ b/models/unittest/mock_http.go @@ -102,13 +102,13 @@ func NewMockWebServer(t *testing.T, liveServerBaseURL, testDataDir string, liveM // parse back the fixture file into a series of HTTP headers followed by response body lines := strings.Split(stringFixture, "\n") for idx, line := range lines { - colonIndex := strings.Index(line, ": ") - if colonIndex != -1 { + before, after, ok := strings.Cut(line, ": ") + if ok { // Because we modified the body with ReplaceAll() above, we need to // remove Content-Length. w.Write() should add it back. 
- header := line[0:colonIndex] + header := before if !strings.EqualFold(header, "Content-Length") { - w.Header().Set(line[0:colonIndex], line[colonIndex+2:]) + w.Header().Set(before, after) } } else { // we reached the end of the headers (empty line), so what follows is the body diff --git a/models/unittest/reflection.go b/models/unittest/reflection.go index 141fc66b99..939891283d 100644 --- a/models/unittest/reflection.go +++ b/models/unittest/reflection.go @@ -9,7 +9,7 @@ import ( ) func fieldByName(v reflect.Value, field string) reflect.Value { - if v.Kind() == reflect.Ptr { + if v.Kind() == reflect.Pointer { v = v.Elem() } f := v.FieldByName(field) diff --git a/models/user/avatar.go b/models/user/avatar.go index cc1b1b7b9d..726d67f5e0 100644 --- a/models/user/avatar.go +++ b/models/user/avatar.go @@ -108,7 +108,7 @@ func (u *User) IsUploadAvatarChanged(data []byte) bool { if !u.UseCustomAvatar || len(u.Avatar) == 0 { return true } - avatarID := fmt.Sprintf("%x", md5.Sum([]byte(fmt.Sprintf("%d-%x", u.ID, md5.Sum(data))))) + avatarID := fmt.Sprintf("%x", md5.Sum(fmt.Appendf(nil, "%d-%x", u.ID, md5.Sum(data)))) return u.Avatar != avatarID } diff --git a/models/user/email_address_test.go b/models/user/email_address_test.go index 85f5b16c65..35b33933c2 100644 --- a/models/user/email_address_test.go +++ b/models/user/email_address_test.go @@ -5,6 +5,7 @@ package user_test import ( "fmt" + "slices" "testing" "forgejo.org/models/db" @@ -77,12 +78,7 @@ func TestListEmails(t *testing.T) { assert.Greater(t, count, int64(5)) contains := func(match func(s *user_model.SearchEmailResult) bool) bool { - for _, v := range emails { - if match(v) { - return true - } - } - return false + return slices.ContainsFunc(emails, match) } assert.True(t, contains(func(s *user_model.SearchEmailResult) bool { return s.UID == 18 })) diff --git a/models/user/moderation.go b/models/user/moderation.go index 7bc857489a..765414acc0 100644 --- a/models/user/moderation.go +++ 
b/models/user/moderation.go @@ -87,7 +87,7 @@ func newUserData(user *User) UserData { // (e.g. FieldName -> field_name) corresponding to UserData struct fields. var userDataColumnNames = sync.OnceValue(func() []string { mapper := new(names.GonicMapper) - udType := reflect.TypeOf(UserData{}) + udType := reflect.TypeFor[UserData]() columnNames := make([]string, 0, udType.NumField()) for i := 0; i < udType.NumField(); i++ { columnNames = append(columnNames, mapper.Obj2Table(udType.Field(i).Name)) diff --git a/models/user/user.go b/models/user/user.go index 7e2101d7cc..2c20cd977d 100644 --- a/models/user/user.go +++ b/models/user/user.go @@ -1243,8 +1243,8 @@ func GetUserByEmail(ctx context.Context, email string) (*User, error) { } // Finally, if email address is the protected email address: - if strings.HasSuffix(email, fmt.Sprintf("@%s", setting.Service.NoReplyAddress)) { - username := strings.TrimSuffix(email, fmt.Sprintf("@%s", setting.Service.NoReplyAddress)) + if before, ok := strings.CutSuffix(email, fmt.Sprintf("@%s", setting.Service.NoReplyAddress)); ok { + username := before user := &User{} has, err := db.GetEngine(ctx).Where("lower_name=?", username).Get(user) if err != nil { diff --git a/models/user/user_test.go b/models/user/user_test.go index d1af3a750f..6da645d672 100644 --- a/models/user/user_test.go +++ b/models/user/user_test.go @@ -273,9 +273,9 @@ func TestHashPasswordDeterministic(t *testing.T) { b := make([]byte, 16) u := &user_model.User{} algos := hash.RecommendedHashAlgorithms - for j := 0; j < len(algos); j++ { + for j := range algos { u.PasswdHashAlgo = algos[j] - for i := 0; i < 50; i++ { + for range 50 { // generate a random password rand.Read(b) pass := string(b) diff --git a/models/webhook/webhook.go b/models/webhook/webhook.go index b23f3fd348..196a5313bc 100644 --- a/models/webhook/webhook.go +++ b/models/webhook/webhook.go @@ -429,7 +429,7 @@ func CreateWebhooks(ctx context.Context, ws []*Webhook) error { if len(ws) == 0 { return nil } 
- for i := 0; i < len(ws); i++ { + for i := range ws { ws[i].Type = strings.TrimSpace(ws[i].Type) } return db.Insert(ctx, ws) diff --git a/modules/actions/workflows.go b/modules/actions/workflows.go index 99c9446805..ed54ccc98b 100644 --- a/modules/actions/workflows.go +++ b/modules/actions/workflows.go @@ -7,6 +7,7 @@ import ( "bytes" "fmt" "io" + "slices" "strings" actions_model "forgejo.org/models/actions" @@ -609,11 +610,8 @@ func matchPullRequestReviewEvent(prPayload *api.PullRequestPayload, evt *jobpars matched := false for _, val := range vals { - for _, action := range actions { - if glob.MustCompile(val, '/').Match(action) { - matched = true - break - } + if slices.ContainsFunc(actions, glob.MustCompile(val, '/').Match) { + matched = true } if matched { break @@ -658,11 +656,8 @@ func matchPullRequestReviewCommentEvent(prPayload *api.PullRequestPayload, evt * matched := false for _, val := range vals { - for _, action := range actions { - if glob.MustCompile(val, '/').Match(action) { - matched = true - break - } + if slices.ContainsFunc(actions, glob.MustCompile(val, '/').Match) { + matched = true } if matched { break diff --git a/modules/auth/password/password.go b/modules/auth/password/password.go index fdbc4ff291..744a431ea8 100644 --- a/modules/auth/password/password.go +++ b/modules/auth/password/password.go @@ -101,7 +101,7 @@ func Generate(n int) (string, error) { buffer := make([]byte, n) max := big.NewInt(int64(len(validChars))) for { - for j := 0; j < n; j++ { + for j := range n { rnd, err := rand.Int(rand.Reader, max) if err != nil { return "", err diff --git a/modules/auth/password/password_test.go b/modules/auth/password/password_test.go index 1fe3fb5ce1..8f5d64514c 100644 --- a/modules/auth/password/password_test.go +++ b/modules/auth/password/password_test.go @@ -51,7 +51,7 @@ func TestComplexity_Generate(t *testing.T) { test := func(t *testing.T, modes []string) { testComplextity(modes) - for i := 0; i < maxCount; i++ { + for range maxCount 
{ pwd, err := Generate(pwdLen) require.NoError(t, err) assert.Len(t, pwd, pwdLen) diff --git a/modules/auth/password/pwn/pwn.go b/modules/auth/password/pwn/pwn.go index 10693ec663..f3277ff616 100644 --- a/modules/auth/password/pwn/pwn.go +++ b/modules/auth/password/pwn/pwn.go @@ -101,7 +101,7 @@ func (c *Client) CheckPassword(pw string, padding bool) (int, error) { } defer resp.Body.Close() - for _, pair := range strings.Split(string(body), "\n") { + for pair := range strings.SplitSeq(string(body), "\n") { parts := strings.Split(pair, ":") if len(parts) != 2 { continue diff --git a/modules/avatar/identicon/block.go b/modules/avatar/identicon/block.go index cb1803a231..fc8ce90212 100644 --- a/modules/avatar/identicon/block.go +++ b/modules/avatar/identicon/block.go @@ -24,8 +24,8 @@ func drawBlock(img *image.Paletted, x, y, size, angle int, points []int) { rotate(points, m, m, angle) } - for i := 0; i < size; i++ { - for j := 0; j < size; j++ { + for i := range size { + for j := range size { if pointInPolygon(i, j, points) { img.SetColorIndex(x+i, y+j, 1) } diff --git a/modules/avatar/identicon/identicon.go b/modules/avatar/identicon/identicon.go index 13e8ec88e6..19f87da85a 100644 --- a/modules/avatar/identicon/identicon.go +++ b/modules/avatar/identicon/identicon.go @@ -134,7 +134,7 @@ func drawBlocks(p *image.Paletted, size int, c, b1, b2 blockFunc, b1Angle, b2Ang // then we make it left-right mirror, so we didn't draw 3/6/9 before for x := 0; x < size/2; x++ { - for y := 0; y < size; y++ { + for y := range size { p.SetColorIndex(size-x, y, p.ColorIndexAt(x, y)) } } diff --git a/modules/charset/charset.go b/modules/charset/charset.go index cb03deb966..d4121fb27f 100644 --- a/modules/charset/charset.go +++ b/modules/charset/charset.go @@ -164,7 +164,7 @@ func DetectEncoding(content []byte) (string, error) { } times := 1024 / len(content) detectContent = make([]byte, 0, times*len(content)) - for i := 0; i < times; i++ { + for range times { detectContent = 
append(detectContent, content...) } } else { diff --git a/modules/charset/charset_test.go b/modules/charset/charset_test.go index 358220494b..c29987beb6 100644 --- a/modules/charset/charset_test.go +++ b/modules/charset/charset_test.go @@ -243,7 +243,7 @@ func stringMustEndWith(t *testing.T, expected, value string) { func TestToUTF8WithFallbackReader(t *testing.T) { resetDefaultCharsetsOrder() - for testLen := 0; testLen < 2048; testLen++ { + for testLen := range 2048 { pattern := " test { () }\n" input := "" for len(input) < testLen { diff --git a/modules/forgefed/actor.go b/modules/forgefed/actor.go index 5383d5adaf..1f6e1f1fdf 100644 --- a/modules/forgefed/actor.go +++ b/modules/forgefed/actor.go @@ -6,6 +6,7 @@ package forgefed import ( "fmt" "net/url" + "slices" "strconv" "strings" @@ -107,12 +108,7 @@ func newActorID(uri string) (ActorID, error) { } func containsEmptyString(ar []string) bool { - for _, elem := range ar { - if elem == "" { - return true - } - } - return false + return slices.Contains(ar, "") } func removeEmptyStrings(ls []string) []string { diff --git a/modules/forgefed/repository.go b/modules/forgefed/repository.go index 63680ccd35..1e85d1e64c 100644 --- a/modules/forgefed/repository.go +++ b/modules/forgefed/repository.go @@ -88,7 +88,7 @@ func ToRepository(it ap.Item) (*Repository, error) { return (*Repository)(unsafe.Pointer(&i)), nil default: // NOTE(marius): this is an ugly way of dealing with the interface conversion error: types from different scopes - typ := reflect.TypeOf(new(Repository)) + typ := reflect.TypeFor[*Repository]() if i, ok := reflect.ValueOf(it).Convert(typ).Interface().(*Repository); ok { return i, nil } diff --git a/modules/git/commit.go b/modules/git/commit.go index 4fb13ecd4f..36ba8ef8ca 100644 --- a/modules/git/commit.go +++ b/modules/git/commit.go @@ -269,8 +269,8 @@ func NewSearchCommitsOptions(searchString string, forAllRefs bool) SearchCommits var keywords, authors, committers []string var after, before string 
- fields := strings.Fields(searchString) - for _, k := range fields { + fields := strings.FieldsSeq(searchString) + for k := range fields { switch { case strings.HasPrefix(k, "author:"): authors = append(authors, strings.TrimPrefix(k, "author:")) diff --git a/modules/git/commit_info.go b/modules/git/commit_info.go index 6511a1689a..62f58f8767 100644 --- a/modules/git/commit_info.go +++ b/modules/git/commit_info.go @@ -7,6 +7,7 @@ import ( "context" "fmt" "io" + "maps" "path" "sort" @@ -45,9 +46,7 @@ func (tes Entries) GetCommitsInfo(ctx context.Context, commit *Commit, treePath return nil, nil, err } - for pth, found := range commits { - revs[pth] = found - } + maps.Copy(revs, commits) } } else { sort.Strings(entryPaths) diff --git a/modules/git/foreachref/format.go b/modules/git/foreachref/format.go index 2f5ec08991..87c1c9a4ff 100644 --- a/modules/git/foreachref/format.go +++ b/modules/git/foreachref/format.go @@ -75,9 +75,9 @@ func (f Format) Parser(r io.Reader) *Parser { // hexEscaped produces hex-escaped characters from a string. For example, "\n\0" // would turn into "%0a%00". func (f Format) hexEscaped(delim []byte) string { - escaped := "" - for i := 0; i < len(delim); i++ { - escaped += "%" + hex.EncodeToString([]byte{delim[i]}) + var escaped strings.Builder + for i := range delim { + escaped.WriteString("%" + hex.EncodeToString([]byte{delim[i]})) } - return escaped + return escaped.String() } diff --git a/modules/git/hook.go b/modules/git/hook.go index bef4d024c8..3b650fe9db 100644 --- a/modules/git/hook.go +++ b/modules/git/hook.go @@ -9,6 +9,7 @@ import ( "os" "path" "path/filepath" + "slices" "strings" "forgejo.org/modules/log" @@ -27,12 +28,7 @@ var ErrNotValidHook = errors.New("not a valid Git hook") // IsValidHookName returns true if given name is a valid Git hook. 
func IsValidHookName(name string) bool { - for _, hn := range hookNames { - if hn == name { - return true - } - } - return false + return slices.Contains(hookNames, name) } // Hook represents a Git hook. diff --git a/modules/git/last_commit_cache.go b/modules/git/last_commit_cache.go index 1d7e74a0d7..9b49a18aaa 100644 --- a/modules/git/last_commit_cache.go +++ b/modules/git/last_commit_cache.go @@ -21,7 +21,7 @@ type Cache interface { } func getCacheKey(repoPath, commitID, entryPath string) string { - hashBytes := sha256.Sum256([]byte(fmt.Sprintf("%s:%s:%s", repoPath, commitID, entryPath))) + hashBytes := sha256.Sum256(fmt.Appendf(nil, "%s:%s:%s", repoPath, commitID, entryPath)) return fmt.Sprintf("last_commit:%x", hashBytes) } diff --git a/modules/git/log_name_status.go b/modules/git/log_name_status.go index 50786e7a42..800e83c4a4 100644 --- a/modules/git/log_name_status.go +++ b/modules/git/log_name_status.go @@ -346,10 +346,7 @@ func WalkGitLog(ctx context.Context, repo *Repository, head *Commit, treepath st results := make([]string, len(paths)) remaining := len(paths) - nextRestart := (len(paths) * 3) / 4 - if nextRestart > 70 { - nextRestart = 70 - } + nextRestart := min((len(paths)*3)/4, 70) lastEmptyParent := head.ID.String() commitSinceLastEmptyParent := uint64(0) commitSinceNextRestart := uint64(0) diff --git a/modules/git/notes.go b/modules/git/notes.go index a52314bdd7..1bc68b6366 100644 --- a/modules/git/notes.go +++ b/modules/git/notes.go @@ -8,6 +8,7 @@ import ( "context" "io" "os" + "strings" "forgejo.org/modules/log" ) @@ -33,7 +34,7 @@ func GetNote(ctx context.Context, repo *Repository, commitID string) (*Note, err return nil, err } - path := "" + var path strings.Builder tree := ¬es.Tree log.Trace("Found tree with ID %q while searching for git note corresponding to the commit %q", tree.ID, commitID) @@ -43,12 +44,12 @@ func GetNote(ctx context.Context, repo *Repository, commitID string) (*Note, err for len(commitID) > 2 { entry, err = 
tree.GetTreeEntryByPath(commitID) if err == nil { - path += commitID + path.WriteString(commitID) break } if IsErrNotExist(err) { tree, err = tree.SubTree(commitID[0:2]) - path += commitID[0:2] + "/" + path.WriteString(commitID[0:2] + "/") commitID = commitID[2:] } if err != nil { @@ -80,9 +81,9 @@ func GetNote(ctx context.Context, repo *Repository, commitID string) (*Note, err _ = dataRc.Close() closed = true - lastCommit, err := repo.getCommitByPathWithID(notes.ID, path) + lastCommit, err := repo.getCommitByPathWithID(notes.ID, path.String()) if err != nil { - log.Error("Unable to get the commit for the path %q. Error: %v", path, err) + log.Error("Unable to get the commit for the path %q. Error: %v", path.String(), err) return nil, err } diff --git a/modules/git/parse.go b/modules/git/parse.go index c7b84d7198..d2d70d4cfa 100644 --- a/modules/git/parse.go +++ b/modules/git/parse.go @@ -33,16 +33,16 @@ func parseTreeEntries(data []byte, ptree *Tree) ([]*TreeEntry, error) { posEnd += pos } line := data[pos:posEnd] - posTab := bytes.IndexByte(line, '\t') - if posTab == -1 { + before, after, ok := bytes.Cut(line, []byte{'\t'}) + if !ok { return nil, fmt.Errorf("invalid ls-tree output (no tab): %q", line) } entry := new(TreeEntry) entry.ptree = ptree - entryAttrs := line[:posTab] - entryName := line[posTab+1:] + entryAttrs := before + entryName := after entryMode, entryAttrs, _ := bytes.Cut(entryAttrs, sepSpace) _ /* entryType */, entryAttrs, _ = bytes.Cut(entryAttrs, sepSpace) // the type is not used, the mode is enough to determine the type diff --git a/modules/git/pushoptions/pushoptions.go b/modules/git/pushoptions/pushoptions.go index 3fa2e01c44..14e2c5d283 100644 --- a/modules/git/pushoptions/pushoptions.go +++ b/modules/git/pushoptions/pushoptions.go @@ -52,7 +52,7 @@ func NewFromMap(o *map[string]string) Interface { func (o *gitPushOptions) ReadEnv() Interface { if pushCount, err := strconv.Atoi(os.Getenv(EnvCount)); err == nil { - for idx := 0; idx < 
pushCount; idx++ { + for idx := range pushCount { _ = o.Parse(os.Getenv(fmt.Sprintf(EnvFormat, idx))) } } diff --git a/modules/git/ref.go b/modules/git/ref.go index 1475d4dc5a..fdccd2b2e2 100644 --- a/modules/git/ref.go +++ b/modules/git/ref.go @@ -105,8 +105,8 @@ func (ref RefName) IsFor() bool { } func (ref RefName) nameWithoutPrefix(prefix string) string { - if strings.HasPrefix(string(ref), prefix) { - return strings.TrimPrefix(string(ref), prefix) + if after, ok := strings.CutPrefix(string(ref), prefix); ok { + return after } return "" } diff --git a/modules/git/repo.go b/modules/git/repo.go index 21845d9b55..6bd03f8e3c 100644 --- a/modules/git/repo.go +++ b/modules/git/repo.go @@ -46,9 +46,9 @@ func (repo *Repository) parsePrettyFormatLogToList(logs []byte) ([]*Commit, erro return commits, nil } - parts := bytes.Split(logs, []byte{'\n'}) + parts := bytes.SplitSeq(logs, []byte{'\n'}) - for _, commitID := range parts { + for commitID := range parts { commit, err := repo.GetCommit(string(commitID)) if err != nil { return nil, err diff --git a/modules/git/repo_attribute.go b/modules/git/repo_attribute.go index 2b07513162..56a86bde14 100644 --- a/modules/git/repo_attribute.go +++ b/modules/git/repo_attribute.go @@ -96,8 +96,8 @@ func (ca GitAttribute) String() string { // sometimes used within gitlab-language: https://docs.gitlab.com/ee/user/project/highlighting.html#override-syntax-highlighting-for-a-file-type func (ca GitAttribute) Prefix() string { s := ca.String() - if i := strings.IndexByte(s, '?'); i >= 0 { - return s[:i] + if before, _, ok := strings.Cut(s, "?"); ok { + return before } return s } diff --git a/modules/git/repo_index.go b/modules/git/repo_index.go index f58757a9a2..7fc5c573dd 100644 --- a/modules/git/repo_index.go +++ b/modules/git/repo_index.go @@ -95,7 +95,7 @@ func (repo *Repository) LsFiles(filenames ...string) ([]string, error) { return nil, err } filelist := make([]string, 0, len(filenames)) - for _, line := range bytes.Split(res, 
[]byte{'\000'}) { + for line := range bytes.SplitSeq(res, []byte{'\000'}) { filelist = append(filelist, string(line)) } diff --git a/modules/git/repo_tag.go b/modules/git/repo_tag.go index f7f04e1f10..bd851a3be3 100644 --- a/modules/git/repo_tag.go +++ b/modules/git/repo_tag.go @@ -42,8 +42,8 @@ func (repo *Repository) GetTagNameBySHA(sha string) (string, error) { return "", err } - tagRefs := strings.Split(stdout, "\n") - for _, tagRef := range tagRefs { + tagRefs := strings.SplitSeq(stdout, "\n") + for tagRef := range tagRefs { if len(strings.TrimSpace(tagRef)) > 0 { fields := strings.Fields(tagRef) if strings.HasPrefix(fields[0], sha) && strings.HasPrefix(fields[1], TagPrefix) { @@ -65,7 +65,7 @@ func (repo *Repository) GetTagID(name string) (string, error) { return "", err } // Make sure exact match is used: "v1" != "release/v1" - for _, line := range strings.Split(stdout, "\n") { + for line := range strings.SplitSeq(stdout, "\n") { fields := strings.Fields(line) if len(fields) == 2 && fields[1] == "refs/tags/"+name { return fields[0], nil diff --git a/modules/git/tree.go b/modules/git/tree.go index f6201f6cc9..9a91787c9e 100644 --- a/modules/git/tree.go +++ b/modules/git/tree.go @@ -170,7 +170,7 @@ func (repo *Repository) LsTree(ref string, filenames ...string) ([]string, error return nil, err } filelist := make([]string, 0, len(filenames)) - for _, line := range bytes.Split(res, []byte{'\000'}) { + for line := range bytes.SplitSeq(res, []byte{'\000'}) { filelist = append(filelist, string(line)) } diff --git a/modules/git/tree_entry.go b/modules/git/tree_entry.go index 8b6c4c467c..5e3bb8ac21 100644 --- a/modules/git/tree_entry.go +++ b/modules/git/tree_entry.go @@ -171,7 +171,7 @@ func (te *TreeEntry) FollowLinks() (*TreeEntry, string, error) { } entry := te entryLink := "" - for i := 0; i < 999; i++ { + for range 999 { if entry.IsLink() { next, link, err := entry.FollowLink() entryLink = link diff --git a/modules/git/tree_test.go b/modules/git/tree_test.go 
index aa092cc56b..6277154acd 100644 --- a/modules/git/tree_test.go +++ b/modules/git/tree_test.go @@ -20,7 +20,7 @@ func TestSubTree_Issue29101(t *testing.T) { require.NoError(t, err) // old code could produce a different error if called multiple times - for i := 0; i < 10; i++ { + for range 10 { _, err = commit.SubTree("file1.txt") require.Error(t, err) assert.True(t, IsErrNotExist(err)) diff --git a/modules/hostmatcher/hostmatcher.go b/modules/hostmatcher/hostmatcher.go index 1069310316..15c6371422 100644 --- a/modules/hostmatcher/hostmatcher.go +++ b/modules/hostmatcher/hostmatcher.go @@ -6,6 +6,7 @@ package hostmatcher import ( "net" "path/filepath" + "slices" "strings" ) @@ -38,7 +39,7 @@ func isBuiltin(s string) bool { // ParseHostMatchList parses the host list HostMatchList func ParseHostMatchList(settingKeyHint, hostList string) *HostMatchList { hl := &HostMatchList{SettingKeyHint: settingKeyHint, SettingValue: hostList} - for _, s := range strings.Split(hostList, ",") { + for s := range strings.SplitSeq(hostList, ",") { s = strings.ToLower(strings.TrimSpace(s)) if s == "" { continue @@ -61,7 +62,7 @@ func ParseSimpleMatchList(settingKeyHint, matchList string) *HostMatchList { SettingKeyHint: settingKeyHint, SettingValue: matchList, } - for _, s := range strings.Split(matchList, ",") { + for s := range strings.SplitSeq(matchList, ",") { s = strings.ToLower(strings.TrimSpace(s)) if s == "" { continue @@ -98,10 +99,8 @@ func (hl *HostMatchList) checkPattern(host string) bool { } func (hl *HostMatchList) checkIP(ip net.IP) bool { - for _, pattern := range hl.patterns { - if pattern == "*" { - return true - } + if slices.Contains(hl.patterns, "*") { + return true } for _, builtin := range hl.builtins { switch builtin { diff --git a/modules/httpcache/httpcache.go b/modules/httpcache/httpcache.go index 7978fc38a1..311f7215b2 100644 --- a/modules/httpcache/httpcache.go +++ b/modules/httpcache/httpcache.go @@ -59,7 +59,7 @@ func HandleGenericETagCache(req 
*http.Request, w http.ResponseWriter, etag strin func checkIfNoneMatchIsValid(req *http.Request, etag string) bool { ifNoneMatch := req.Header.Get("If-None-Match") if len(ifNoneMatch) > 0 { - for _, item := range strings.Split(ifNoneMatch, ",") { + for item := range strings.SplitSeq(ifNoneMatch, ",") { item = strings.TrimPrefix(strings.TrimSpace(item), "W/") // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag#directives if item == etag { return true diff --git a/modules/httplib/serve.go b/modules/httplib/serve.go index d385ac21c9..4c71437fc5 100644 --- a/modules/httplib/serve.go +++ b/modules/httplib/serve.go @@ -8,6 +8,7 @@ import ( "errors" "fmt" "io" + "maps" "net/http" "net/url" "path" @@ -86,9 +87,7 @@ func ServeSetHeaders(w http.ResponseWriter, opts *ServeHeaderOptions) { } if opts.AdditionalHeaders != nil { - for k, v := range opts.AdditionalHeaders { - header[k] = v - } + maps.Copy(header, opts.AdditionalHeaders) } } diff --git a/modules/indexer/code/git.go b/modules/indexer/code/git.go index 14a43cf3be..8ec3c1181f 100644 --- a/modules/indexer/code/git.go +++ b/modules/indexer/code/git.go @@ -129,8 +129,8 @@ func nonGenesisChanges(ctx context.Context, repo *repo_model.Repository, revisio changes.Updates = append(changes.Updates, updates...) 
return nil } - lines := strings.Split(stdout, "\n") - for _, line := range lines { + lines := strings.SplitSeq(stdout, "\n") + for line := range lines { line = strings.TrimSpace(line) if len(line) == 0 { continue diff --git a/modules/issue/template/template.go b/modules/issue/template/template.go index 08c1b21c26..09dfe10e08 100644 --- a/modules/issue/template/template.go +++ b/modules/issue/template/template.go @@ -8,6 +8,7 @@ import ( "fmt" "net/url" "regexp" + "slices" "strconv" "strings" @@ -447,12 +448,7 @@ func (o *valuedOption) IsChecked() bool { case api.IssueFormFieldTypeDropdown: checks := strings.Split(o.field.Get(fmt.Sprintf("form-field-%s", o.field.ID)), ",") idx := strconv.Itoa(o.index) - for _, v := range checks { - if v == idx { - return true - } - } - return false + return slices.Contains(checks, idx) case api.IssueFormFieldTypeCheckboxes: return o.field.Get(fmt.Sprintf("form-field-%s-%d", o.field.ID, o.index)) == "on" } diff --git a/modules/label/parser.go b/modules/label/parser.go index 12fc176967..b27b2c9ee6 100644 --- a/modules/label/parser.go +++ b/modules/label/parser.go @@ -72,7 +72,7 @@ func parseYamlFormat(fileName string, data []byte) ([]*Label, error) { func parseLegacyFormat(fileName string, data []byte) ([]*Label, error) { lines := strings.Split(string(data), "\n") list := make([]*Label, 0, len(lines)) - for i := 0; i < len(lines); i++ { + for i := range lines { line := strings.TrimSpace(lines[i]) if len(line) == 0 { continue @@ -108,7 +108,7 @@ func LoadTemplateDescription(fileName string) (string, error) { return "", err } - for i := 0; i < len(list); i++ { + for i := range list { if i > 0 { buf.WriteString(", ") } diff --git a/modules/log/event_format.go b/modules/log/event_format.go index 6835a4ca5b..70df2cbce2 100644 --- a/modules/log/event_format.go +++ b/modules/log/event_format.go @@ -208,7 +208,7 @@ func EventFormatTextMessage(mode *WriterMode, event *Event, msgFormat string, ms } } if hasColorValue { - msg = 
[]byte(fmt.Sprintf(msgFormat, msgArgs...)) + msg = fmt.Appendf(nil, msgFormat, msgArgs...) } } // try to reuse the pre-formatted simple text message @@ -227,8 +227,8 @@ func EventFormatTextMessage(mode *WriterMode, event *Event, msgFormat string, ms buf = append(buf, msg...) if event.Stacktrace != "" && mode.StacktraceLevel <= event.Level { - lines := bytes.Split([]byte(event.Stacktrace), []byte("\n")) - for _, line := range lines { + lines := bytes.SplitSeq([]byte(event.Stacktrace), []byte("\n")) + for line := range lines { buf = append(buf, "\n\t"...) buf = append(buf, line...) } diff --git a/modules/log/event_writer_conn_test.go b/modules/log/event_writer_conn_test.go index 0cf447149a..6d528a68d1 100644 --- a/modules/log/event_writer_conn_test.go +++ b/modules/log/event_writer_conn_test.go @@ -63,11 +63,9 @@ func TestConnLogger(t *testing.T) { } expected := fmt.Sprintf("%s%s %s:%d:%s [%c] %s\n", prefix, dateString, event.Filename, event.Line, event.Caller, strings.ToUpper(event.Level.String())[0], event.MsgSimpleText) var wg sync.WaitGroup - wg.Add(1) - go func() { - defer wg.Done() + wg.Go(func() { listenReadAndClose(t, l, expected) - }() + }) logger.SendLogEvent(&event) wg.Wait() diff --git a/modules/log/flags.go b/modules/log/flags.go index 1e4fe830c1..c428d58a1d 100644 --- a/modules/log/flags.go +++ b/modules/log/flags.go @@ -124,7 +124,7 @@ func FlagsFromString(from string, def ...uint32) Flags { return Flags{defined: true, flags: def[0]} } flags := uint32(0) - for _, flag := range strings.Split(strings.ToLower(from), ",") { + for flag := range strings.SplitSeq(strings.ToLower(from), ",") { flags |= flagFromString[strings.TrimSpace(flag)] } return Flags{defined: true, flags: flags} diff --git a/modules/log/level_test.go b/modules/log/level_test.go index e6cacc723b..73e2355960 100644 --- a/modules/log/level_test.go +++ b/modules/log/level_test.go @@ -33,11 +33,11 @@ func TestLevelMarshalUnmarshalJSON(t *testing.T) { require.NoError(t, err) assert.Equal(t, 
INFO, testLevel.Level) - err = json.Unmarshal([]byte(fmt.Sprintf(`{"level":%d}`, 2)), &testLevel) + err = json.Unmarshal(fmt.Appendf(nil, `{"level":%d}`, 2), &testLevel) require.NoError(t, err) assert.Equal(t, INFO, testLevel.Level) - err = json.Unmarshal([]byte(fmt.Sprintf(`{"level":%d}`, 10012)), &testLevel) + err = json.Unmarshal(fmt.Appendf(nil, `{"level":%d}`, 10012), &testLevel) require.NoError(t, err) assert.Equal(t, INFO, testLevel.Level) @@ -52,5 +52,5 @@ func TestLevelMarshalUnmarshalJSON(t *testing.T) { } func makeTestLevelBytes(level string) []byte { - return []byte(fmt.Sprintf(`{"level":"%s"}`, level)) + return fmt.Appendf(nil, `{"level":"%s"}`, level) } diff --git a/modules/markup/file_preview.go b/modules/markup/file_preview.go index dab6057cf4..22dcf93d75 100644 --- a/modules/markup/file_preview.go +++ b/modules/markup/file_preview.go @@ -80,8 +80,8 @@ func newFilePreview(ctx *RenderContext, node *html.Node, locale translation.Loca filePath := node.Data[m[6]:m[7]] hash := node.Data[m[8]:m[9]] urlFullSource := urlFull - if strings.HasSuffix(filePath, "?display=source") { - filePath = strings.TrimSuffix(filePath, "?display=source") + if before, ok := strings.CutSuffix(filePath, "?display=source"); ok { + filePath = before } else if Type(filePath) != "" { urlFullSource = node.Data[m[0]:m[6]] + filePath + "?display=source#" + hash } diff --git a/modules/markup/html.go b/modules/markup/html.go index d60021bfbb..77b5dc8029 100644 --- a/modules/markup/html.go +++ b/modules/markup/html.go @@ -11,6 +11,7 @@ import ( "path" "path/filepath" "regexp" + "slices" "strings" "sync" @@ -124,13 +125,7 @@ func CustomLinkURLSchemes(schemes []string) { if !validScheme.MatchString(s) { continue } - without := false - for _, sna := range xurls.SchemesNoAuthority { - if s == sna { - without = true - break - } - } + without := slices.Contains(xurls.SchemesNoAuthority, s) if without { s += ":" } else { @@ -675,9 +670,9 @@ func shortLinkProcessor(ctx *RenderContext, node 
*html.Node) { // It makes page handling terrible, but we prefer GitHub syntax // And fall back to MediaWiki only when it is obvious from the look // Of text and link contents - sl := strings.Split(content, "|") - for _, v := range sl { - if equalPos := strings.IndexByte(v, '='); equalPos == -1 { + sl := strings.SplitSeq(content, "|") + for v := range sl { + if found := strings.Contains(v, "="); !found { // There is no equal in this argument; this is a mandatory arg if props["name"] == "" { if IsLinkStr(v) { @@ -1148,7 +1143,7 @@ func comparePatternProcessor(ctx *RenderContext, node *html.Node) { } // Ensure that every group (m[0]...m[9]) has a match - for i := 0; i < 10; i++ { + for i := range 10 { if m[i] == -1 { return } diff --git a/modules/markup/markdown/markdown.go b/modules/markup/markdown/markdown.go index 2b19e0f1c9..9a112109dd 100644 --- a/modules/markup/markdown/markdown.go +++ b/modules/markup/markdown/markdown.go @@ -182,10 +182,7 @@ func actualRender(ctx *markup.RenderContext, input io.Reader, output io.Writer) } buf, _ = ExtractMetadataBytes(buf, rc) - metaLength := bufWithMetadataLength - len(buf) - if metaLength < 0 { - metaLength = 0 - } + metaLength := max(bufWithMetadataLength-len(buf), 0) rc.metaLength = metaLength pc.Set(markdownutil.RenderConfigKey, rc) diff --git a/modules/markup/markdown/markdown_test.go b/modules/markup/markdown/markdown_test.go index 82c2c7fe8c..61ded3cedc 100644 --- a/modules/markup/markdown/markdown_test.go +++ b/modules/markup/markdown/markdown_test.go @@ -319,7 +319,7 @@ func TestTotal_RenderWiki(t *testing.T) { answers := testAnswers(util.URLJoin(FullURL, "wiki"), util.URLJoin(FullURL, "wiki", "raw")) - for i := 0; i < len(sameCases); i++ { + for i := range sameCases { line, err := markdown.RenderString(&markup.RenderContext{ Ctx: git.DefaultContext, Links: markup.Links{ @@ -363,7 +363,7 @@ func TestTotal_RenderString(t *testing.T) { answers := testAnswers(util.URLJoin(FullURL, "src", "master"), util.URLJoin(FullURL, 
"media", "master")) - for i := 0; i < len(sameCases); i++ { + for i := range sameCases { line, err := markdown.RenderString(&markup.RenderContext{ Ctx: git.DefaultContext, Links: markup.Links{ diff --git a/modules/markup/markdown/math/block_renderer.go b/modules/markup/markdown/math/block_renderer.go index 84817ef1e4..d27318c623 100644 --- a/modules/markup/markdown/math/block_renderer.go +++ b/modules/markup/markdown/math/block_renderer.go @@ -24,7 +24,7 @@ func (r *BlockRenderer) RegisterFuncs(reg renderer.NodeRendererFuncRegisterer) { func (r *BlockRenderer) writeLines(w util.BufWriter, source []byte, n gast.Node) { l := n.Lines().Len() - for i := 0; i < l; i++ { + for i := range l { line := n.Lines().At(i) _, _ = w.Write(util.EscapeHTML(line.Value(source))) } diff --git a/modules/markup/markdown/meta_test.go b/modules/markup/markdown/meta_test.go index aaf116ff20..9345dd528a 100644 --- a/modules/markup/markdown/meta_test.go +++ b/modules/markup/markdown/meta_test.go @@ -63,7 +63,7 @@ func TestExtractMetadata(t *testing.T) { func TestExtractMetadataBytes(t *testing.T) { t.Run("ValidFrontAndBody", func(t *testing.T) { var meta IssueTemplate - body, err := ExtractMetadataBytes([]byte(fmt.Sprintf("%s\n%s\n%s\n%s", sepTest, frontTest, sepTest, bodyTest)), &meta) + body, err := ExtractMetadataBytes(fmt.Appendf(nil, "%s\n%s\n%s\n%s", sepTest, frontTest, sepTest, bodyTest), &meta) require.NoError(t, err) assert.Equal(t, bodyTest, string(body)) assert.Equal(t, metaTest, meta) @@ -72,19 +72,19 @@ func TestExtractMetadataBytes(t *testing.T) { t.Run("NoFirstSeparator", func(t *testing.T) { var meta IssueTemplate - _, err := ExtractMetadataBytes([]byte(fmt.Sprintf("%s\n%s\n%s", frontTest, sepTest, bodyTest)), &meta) + _, err := ExtractMetadataBytes(fmt.Appendf(nil, "%s\n%s\n%s", frontTest, sepTest, bodyTest), &meta) require.Error(t, err) }) t.Run("NoLastSeparator", func(t *testing.T) { var meta IssueTemplate - _, err := ExtractMetadataBytes([]byte(fmt.Sprintf("%s\n%s\n%s", 
sepTest, frontTest, bodyTest)), &meta) + _, err := ExtractMetadataBytes(fmt.Appendf(nil, "%s\n%s\n%s", sepTest, frontTest, bodyTest), &meta) require.Error(t, err) }) t.Run("NoBody", func(t *testing.T) { var meta IssueTemplate - body, err := ExtractMetadataBytes([]byte(fmt.Sprintf("%s\n%s\n%s", sepTest, frontTest, sepTest)), &meta) + body, err := ExtractMetadataBytes(fmt.Appendf(nil, "%s\n%s\n%s", sepTest, frontTest, sepTest), &meta) require.NoError(t, err) assert.Empty(t, string(body)) assert.Equal(t, metaTest, meta) diff --git a/modules/markup/markdown/toc.go b/modules/markup/markdown/toc.go index dbfab3e9dc..53add219f5 100644 --- a/modules/markup/markdown/toc.go +++ b/modules/markup/markdown/toc.go @@ -44,7 +44,7 @@ func createTOCNode(toc []markup.Header, lang string, detailsAttrs map[string]str } li := ast.NewListItem(currentLevel * 2) a := ast.NewLink() - a.Destination = []byte(fmt.Sprintf("#%s", url.QueryEscape(header.ID))) + a.Destination = fmt.Appendf(nil, "#%s", url.QueryEscape(header.ID)) a.AppendChild(a, ast.NewString([]byte(header.Text))) li.AppendChild(li, a) ul.AppendChild(ul, li) diff --git a/modules/markup/markdown/transform_heading.go b/modules/markup/markdown/transform_heading.go index eedaf58556..16779d5099 100644 --- a/modules/markup/markdown/transform_heading.go +++ b/modules/markup/markdown/transform_heading.go @@ -17,7 +17,7 @@ import ( func (g *ASTTransformer) transformHeading(_ *markup.RenderContext, v *ast.Heading, reader text.Reader, tocList *[]markup.Header) { for _, attr := range v.Attributes() { if _, ok := attr.Value.([]byte); !ok { - v.SetAttribute(attr.Name, []byte(fmt.Sprintf("%v", attr.Value))) + v.SetAttribute(attr.Name, fmt.Appendf(nil, "%v", attr.Value)) } } txt := mdutil.Text(v, reader.Source()) diff --git a/modules/markup/renderer.go b/modules/markup/renderer.go index b1c3d35e73..0a66caf1d5 100644 --- a/modules/markup/renderer.go +++ b/modules/markup/renderer.go @@ -319,23 +319,19 @@ func render(ctx *RenderContext, renderer 
Renderer, input io.Reader, output io.Wr _ = pw2.Close() }() - wg.Add(1) - go func() { + wg.Go(func() { err = donotpanic.SafeFuncWithError(func() error { return SanitizeReader(pr2, renderer.Name(), output) }) _ = pr2.Close() - wg.Done() - }() + }) } else { pw2 = nopCloser{output} } - wg.Add(1) - go func() { + wg.Go(func() { err = donotpanic.SafeFuncWithError(func() error { return postProcessOrCopy(ctx, renderer, pr, pw2) }) _ = pr.Close() _ = pw2.Close() - wg.Done() - }() + }) if err1 := renderer.Render(ctx, input, pw); err1 != nil { return err1 diff --git a/modules/packages/npm/creator.go b/modules/packages/npm/creator.go index ed163d30ac..2f83d2ee7b 100644 --- a/modules/packages/npm/creator.go +++ b/modules/packages/npm/creator.go @@ -58,7 +58,7 @@ type PackageMetadata struct { Time map[string]time.Time `json:"time,omitempty"` Homepage string `json:"homepage,omitempty"` Keywords []string `json:"keywords,omitempty"` - Repository Repository `json:"repository,omitempty"` + Repository Repository `json:"repository"` Author User `json:"author"` ReadmeFilename string `json:"readmeFilename,omitempty"` Users map[string]bool `json:"users,omitempty"` @@ -75,7 +75,7 @@ type PackageMetadataVersion struct { Author User `json:"author"` Homepage string `json:"homepage,omitempty"` License string `json:"license,omitempty"` - Repository Repository `json:"repository,omitempty"` + Repository Repository `json:"repository"` Keywords []string `json:"keywords,omitempty"` Dependencies map[string]string `json:"dependencies,omitempty"` BundleDependencies []string `json:"bundleDependencies,omitempty"` diff --git a/modules/packages/npm/metadata.go b/modules/packages/npm/metadata.go index 6bb77f302b..0e5bf19ce7 100644 --- a/modules/packages/npm/metadata.go +++ b/modules/packages/npm/metadata.go @@ -22,5 +22,5 @@ type Metadata struct { OptionalDependencies map[string]string `json:"optional_dependencies,omitempty"` Bin map[string]string `json:"bin,omitempty"` Readme string 
`json:"readme,omitempty"` - Repository Repository `json:"repository,omitempty"` + Repository Repository `json:"repository"` } diff --git a/modules/packages/nuget/symbol_extractor.go b/modules/packages/nuget/symbol_extractor.go index 992ade7e8f..dd9fac96c6 100644 --- a/modules/packages/nuget/symbol_extractor.go +++ b/modules/packages/nuget/symbol_extractor.go @@ -142,8 +142,8 @@ func ParseDebugHeaderID(r io.ReadSeeker) (string, error) { if _, err := r.Read(b); err != nil { return "", err } - if i := bytes.IndexByte(b, 0); i != -1 { - buf.Write(b[:i]) + if before, _, ok := bytes.Cut(b, []byte{0}); ok { + buf.Write(before) return buf.String(), nil } buf.Write(b) diff --git a/modules/packages/rubygems/marshal.go b/modules/packages/rubygems/marshal.go index 191efc7c0e..7d498c66b8 100644 --- a/modules/packages/rubygems/marshal.go +++ b/modules/packages/rubygems/marshal.go @@ -91,7 +91,7 @@ func (e *MarshalEncoder) marshal(v any) error { val := reflect.ValueOf(v) typ := reflect.TypeOf(v) - if typ.Kind() == reflect.Ptr { + if typ.Kind() == reflect.Pointer { val = val.Elem() typ = typ.Elem() } @@ -250,7 +250,7 @@ func (e *MarshalEncoder) marshalArray(arr reflect.Value) error { return err } - for i := 0; i < length; i++ { + for i := range length { if err := e.marshal(arr.Index(i).Interface()); err != nil { return err } diff --git a/modules/packages/swift/metadata.go b/modules/packages/swift/metadata.go index 34fc4f1784..094fa0c7a4 100644 --- a/modules/packages/swift/metadata.go +++ b/modules/packages/swift/metadata.go @@ -47,7 +47,7 @@ type Metadata struct { Keywords []string `json:"keywords,omitempty"` RepositoryURL string `json:"repository_url,omitempty"` License string `json:"license,omitempty"` - Author Person `json:"author,omitempty"` + Author Person `json:"author"` Manifests map[string]*Manifest `json:"manifests,omitempty"` } diff --git a/modules/private/serv.go b/modules/private/serv.go index fb8496930e..ac5be3b767 100644 --- a/modules/private/serv.go +++ 
b/modules/private/serv.go @@ -7,6 +7,7 @@ import ( "context" "fmt" "net/url" + "strings" asymkey_model "forgejo.org/models/asymkey" "forgejo.org/models/perm" @@ -47,17 +48,18 @@ type ServCommandResults struct { // ServCommand preps for a serv call func ServCommand(ctx context.Context, keyID int64, ownerName, repoName string, mode perm.AccessMode, verbs ...string) (*ServCommandResults, ResponseExtra) { - reqURL := setting.LocalURL + fmt.Sprintf("api/internal/serv/command/%d/%s/%s?mode=%d", + var reqURL strings.Builder + reqURL.WriteString(setting.LocalURL + fmt.Sprintf("api/internal/serv/command/%d/%s/%s?mode=%d", keyID, url.PathEscape(ownerName), url.PathEscape(repoName), mode, - ) + )) for _, verb := range verbs { if verb != "" { - reqURL += fmt.Sprintf("&verb=%s", url.QueryEscape(verb)) + fmt.Fprintf(&reqURL, "&verb=%s", url.QueryEscape(verb)) } } - req := newInternalRequest(ctx, reqURL, "GET") + req := newInternalRequest(ctx, reqURL.String(), "GET") return requestJSONResp(req, &ServCommandResults{}) } diff --git a/modules/public/public.go b/modules/public/public.go index a7db5b62e9..52cb8757a0 100644 --- a/modules/public/public.go +++ b/modules/public/public.go @@ -45,7 +45,7 @@ func FileHandlerFunc() http.HandlerFunc { func parseAcceptEncoding(val string) container.Set[string] { parts := strings.Split(val, ";") types := make(container.Set[string]) - for _, v := range strings.Split(parts[0], ",") { + for v := range strings.SplitSeq(parts[0], ",") { types.Add(strings.TrimSpace(v)) } return types diff --git a/modules/queue/base_levelqueue_common.go b/modules/queue/base_levelqueue_common.go index 8b4f35c47d..c57bf8597b 100644 --- a/modules/queue/base_levelqueue_common.go +++ b/modules/queue/base_levelqueue_common.go @@ -83,7 +83,7 @@ func prepareLevelDB(cfg *BaseConfig) (conn string, db *leveldb.DB, err error) { } conn = cfg.ConnStr } - for i := 0; i < 10; i++ { + for range 10 { if db, err = nosql.GetManager().GetLevelDB(conn); err == nil { break } diff --git 
a/modules/queue/base_redis.go b/modules/queue/base_redis.go index ec3c6dc16d..8b20e0b443 100644 --- a/modules/queue/base_redis.go +++ b/modules/queue/base_redis.go @@ -49,7 +49,7 @@ func newBaseRedisGeneric(cfg *BaseConfig, unique bool, client nosql.RedisClient) } var err error - for i := 0; i < 10; i++ { + for range 10 { err = client.Ping(graceful.GetManager().ShutdownContext()).Err() if err == nil { break diff --git a/modules/queue/base_test.go b/modules/queue/base_test.go index caa930158c..758faf1459 100644 --- a/modules/queue/base_test.go +++ b/modules/queue/base_test.go @@ -88,7 +88,7 @@ func testQueueBasic(t *testing.T, newFn func(cfg *BaseConfig) (baseQueue, error) // test blocking push if queue is full for i := 0; i < cfg.Length; i++ { - err = q.PushItem(ctx, []byte(fmt.Sprintf("item-%d", i))) + err = q.PushItem(ctx, fmt.Appendf(nil, "item-%d", i)) require.NoError(t, err) } ctxTimed, cancel = context.WithTimeout(ctx, 10*time.Millisecond) diff --git a/modules/queue/manager.go b/modules/queue/manager.go index 8f1a93f273..9c655b7fdc 100644 --- a/modules/queue/manager.go +++ b/modules/queue/manager.go @@ -5,6 +5,7 @@ package queue import ( "context" + "maps" "sync" "time" @@ -68,9 +69,7 @@ func (m *Manager) ManagedQueues() map[int64]ManagedWorkerPoolQueue { defer m.mu.Unlock() queues := make(map[int64]ManagedWorkerPoolQueue, len(m.Queues)) - for k, v := range m.Queues { - queues[k] = v - } + maps.Copy(queues, m.Queues) return queues } diff --git a/modules/queue/workergroup.go b/modules/queue/workergroup.go index 2d1228db2c..87f01755aa 100644 --- a/modules/queue/workergroup.go +++ b/modules/queue/workergroup.go @@ -142,11 +142,7 @@ func (q *WorkerPoolQueue[T]) basePushForShutdown(items ...T) bool { // doStartNewWorker starts a new worker for the queue, the worker reads from worker's channel and handles the items. 
func (q *WorkerPoolQueue[T]) doStartNewWorker(wp *workerGroup[T]) { - wp.wg.Add(1) - - go func() { - defer wp.wg.Done() - + wp.wg.Go(func() { log.Debug("Queue %q starts new worker", q.GetName()) defer log.Debug("Queue %q stops idle worker", q.GetName()) @@ -187,7 +183,7 @@ func (q *WorkerPoolQueue[T]) doStartNewWorker(wp *workerGroup[T]) { q.workerNumMu.Unlock() } } - }() + }) } // doFlush flushes the queue: it tries to read all items from the queue and handles them. diff --git a/modules/queue/workerqueue_test.go b/modules/queue/workerqueue_test.go index 8d907ed8cd..da857b9405 100644 --- a/modules/queue/workerqueue_test.go +++ b/modules/queue/workerqueue_test.go @@ -78,17 +78,17 @@ func TestWorkerPoolQueueUnhandled(t *testing.T) { runCount := 2 // we can run these tests even hundreds times to see its stability t.Run("1/1", func(t *testing.T) { - for i := 0; i < runCount; i++ { + for range runCount { test(t, setting.QueueSettings{BatchLength: 1, MaxWorkers: 1}) } }) t.Run("3/1", func(t *testing.T) { - for i := 0; i < runCount; i++ { + for range runCount { test(t, setting.QueueSettings{BatchLength: 3, MaxWorkers: 1}) } }) t.Run("4/5", func(t *testing.T) { - for i := 0; i < runCount; i++ { + for range runCount { test(t, setting.QueueSettings{BatchLength: 4, MaxWorkers: 5}) } }) @@ -97,17 +97,17 @@ func TestWorkerPoolQueueUnhandled(t *testing.T) { func TestWorkerPoolQueuePersistence(t *testing.T) { runCount := 2 // we can run these tests even hundreds times to see its stability t.Run("1/1", func(t *testing.T) { - for i := 0; i < runCount; i++ { + for range runCount { testWorkerPoolQueuePersistence(t, setting.QueueSettings{BatchLength: 1, MaxWorkers: 1, Length: 100}) } }) t.Run("3/1", func(t *testing.T) { - for i := 0; i < runCount; i++ { + for range runCount { testWorkerPoolQueuePersistence(t, setting.QueueSettings{BatchLength: 3, MaxWorkers: 1, Length: 100}) } }) t.Run("4/5", func(t *testing.T) { - for i := 0; i < runCount; i++ { + for range runCount { 
testWorkerPoolQueuePersistence(t, setting.QueueSettings{BatchLength: 4, MaxWorkers: 5, Length: 100}) } }) @@ -142,7 +142,7 @@ func testWorkerPoolQueuePersistence(t *testing.T, queueSetting setting.QueueSett q, _ := newWorkerPoolQueueForTest("pr_patch_checker_test", queueSetting, testHandler, true) stop := runWorkerPoolQueue(q) - for i := 0; i < testCount; i++ { + for i := range testCount { _ = q.Push("task-" + strconv.Itoa(i)) } close(startWhenAllReady) @@ -187,7 +187,7 @@ func TestWorkerPoolQueueActiveWorkers(t *testing.T) { q, _ := newWorkerPoolQueueForTest("test-workpoolqueue", setting.QueueSettings{Type: "channel", BatchLength: 1, MaxWorkers: 1, Length: 100}, handler, false) stop := runWorkerPoolQueue(q) - for i := 0; i < 5; i++ { + for i := range 5 { require.NoError(t, q.Push(i)) } @@ -203,7 +203,7 @@ func TestWorkerPoolQueueActiveWorkers(t *testing.T) { q, _ = newWorkerPoolQueueForTest("test-workpoolqueue", setting.QueueSettings{Type: "channel", BatchLength: 1, MaxWorkers: 3, Length: 100}, handler, false) stop = runWorkerPoolQueue(q) - for i := 0; i < 15; i++ { + for i := range 15 { require.NoError(t, q.Push(i)) } @@ -264,12 +264,12 @@ func TestWorkerPoolQueueWorkerIdleReset(t *testing.T) { stop := runWorkerPoolQueue(q) const workloadSize = 12 - for i := 0; i < workloadSize; i++ { + for i := range workloadSize { require.NoError(t, q.Push(i)) } workerIDs := make(map[string]struct{}) - for i := 0; i < workloadSize; i++ { + for i := range workloadSize { c := <-chGoroutineIDs workerIDs[c] = struct{}{} t.Logf("%d workers: overall=%d current=%d", i, len(workerIDs), q.GetWorkerNumber()) diff --git a/modules/repository/init.go b/modules/repository/init.go index 7b1442be93..66a65599a8 100644 --- a/modules/repository/init.go +++ b/modules/repository/init.go @@ -152,7 +152,7 @@ func InitializeLabels(ctx context.Context, id int64, labelTemplate string, isOrg } labels := make([]*issues_model.Label, len(list)) - for i := 0; i < len(list); i++ { + for i := range list { 
labels[i] = &issues_model.Label{ Name: list[i].Name, Exclusive: list[i].Exclusive, diff --git a/modules/setting/config.go b/modules/setting/config.go index 6299640e61..90f3d12d11 100644 --- a/modules/setting/config.go +++ b/modules/setting/config.go @@ -4,6 +4,7 @@ package setting import ( + "strings" "sync" "forgejo.org/modules/log" @@ -23,11 +24,11 @@ type OpenWithEditorApp struct { type OpenWithEditorAppsType []OpenWithEditorApp func (t OpenWithEditorAppsType) ToTextareaString() string { - ret := "" + var ret strings.Builder for _, app := range t { - ret += app.DisplayName + " = " + app.OpenURL + "\n" + ret.WriteString(app.DisplayName + " = " + app.OpenURL + "\n") } - return ret + return ret.String() } func DefaultOpenWithEditorApps() OpenWithEditorAppsType { diff --git a/modules/setting/config_env.go b/modules/setting/config_env.go index 458dbb51bb..68a7c94db2 100644 --- a/modules/setting/config_env.go +++ b/modules/setting/config_env.go @@ -51,10 +51,10 @@ func decodeEnvSectionKey(encoded string) (ok bool, section, key string) { for _, unescapeIdx := range escapeStringIndices { preceding := encoded[last:unescapeIdx[0]] if !inKey { - if splitter := strings.Index(preceding, "__"); splitter > -1 { - section += preceding[:splitter] + if before, after, ok := strings.Cut(preceding, "__"); ok { + section += before inKey = true - key += preceding[splitter+2:] + key += after } else { section += preceding } @@ -77,9 +77,9 @@ func decodeEnvSectionKey(encoded string) (ok bool, section, key string) { } remaining := encoded[last:] if !inKey { - if splitter := strings.Index(remaining, "__"); splitter > -1 { - section += remaining[:splitter] - key += remaining[splitter+2:] + if before, after, ok := strings.Cut(remaining, "__"); ok { + section += before + key += after } else { section += remaining } @@ -113,25 +113,24 @@ func decodeEnvironmentKey(prefixRegexp *regexp.Regexp, suffixFile, envKey string func EnvironmentToConfig(cfg ConfigProvider, envs []string) (changed bool) { 
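The ToTextareaString hunk above accumulates into a `strings.Builder` instead of `+=` string concatenation, which copies the whole string on every append. A sketch of the same shape; the app type here is illustrative, not the actual setting struct:

```go
package main

import (
	"fmt"
	"strings"
)

type editorApp struct {
	DisplayName, OpenURL string
}

// toTextareaString writes one line per app into a strings.Builder; the
// Builder grows its buffer amortized rather than reallocating per iteration.
func toTextareaString(apps []editorApp) string {
	var ret strings.Builder
	for _, app := range apps {
		ret.WriteString(app.DisplayName + " = " + app.OpenURL + "\n")
	}
	return ret.String()
}

func main() {
	fmt.Print(toTextareaString([]editorApp{{"VS Code", "vscode://file{PATH}"}}))
}
```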
prefixRegexp := regexp.MustCompile(EnvConfigKeyPrefixGitea) for _, kv := range envs { - idx := strings.IndexByte(kv, '=') - if idx < 0 { + before, after, ok0 := strings.Cut(kv, "=") + if !ok0 { continue } // parse the environment variable to config section name and key name - envKey := kv[:idx] - envValue := kv[idx+1:] + envKey := before + keyValue := after ok, sectionName, keyName, useFileValue := decodeEnvironmentKey(prefixRegexp, EnvConfigKeySuffixFile, envKey) if !ok { continue } // use environment value as config value, or read the file content as value if the key indicates a file - keyValue := envValue if useFileValue { - fileContent, err := os.ReadFile(envValue) + fileContent, err := os.ReadFile(keyValue) if err != nil { - log.Error("Error reading file for %s : %v", envKey, envValue, err) + log.Error("Error reading file for %s : %v", envKey, keyValue, err) continue } if bytes.HasSuffix(fileContent, []byte("\r\n")) { diff --git a/modules/setting/indexer.go b/modules/setting/indexer.go index b112a50cfa..948dae0bea 100644 --- a/modules/setting/indexer.go +++ b/modules/setting/indexer.go @@ -108,7 +108,7 @@ func loadIndexerFrom(rootCfg ConfigProvider) { // IndexerGlobFromString parses a comma separated list of patterns and returns a glob.Glob slice suited for repo indexing func IndexerGlobFromString(globstr string) []Glob { extarr := make([]Glob, 0, 10) - for _, expr := range strings.Split(strings.ToLower(globstr), ",") { + for expr := range strings.SplitSeq(strings.ToLower(globstr), ",") { expr = strings.TrimSpace(expr) if expr != "" { if g, err := glob.Compile(expr, '.', '/'); err != nil { diff --git a/modules/setting/log.go b/modules/setting/log.go index ecc591fd35..7799f8187b 100644 --- a/modules/setting/log.go +++ b/modules/setting/log.go @@ -269,8 +269,8 @@ func initLoggerByName(manager *log.LoggerManager, rootCfg ConfigProvider, logger } var eventWriters []log.EventWriter - modes := strings.Split(modeVal, ",") - for _, modeName := range modes { + modes := 
strings.SplitSeq(modeVal, ",") + for modeName := range modes { modeName = strings.TrimSpace(modeName) if modeName == "" { continue diff --git a/modules/setting/markup.go b/modules/setting/markup.go index 4ab9e7b2d1..0ece86dfd1 100644 --- a/modules/setting/markup.go +++ b/modules/setting/markup.go @@ -85,8 +85,8 @@ func loadMarkupFrom(rootCfg ConfigProvider) { func newMarkupSanitizer(name string, sec ConfigSection) { rule, ok := createMarkupSanitizerRule(name, sec) if ok { - if strings.HasPrefix(name, "sanitizer.") { - names := strings.SplitN(strings.TrimPrefix(name, "sanitizer."), ".", 2) + if after, ok0 := strings.CutPrefix(name, "sanitizer."); ok0 { + names := strings.SplitN(after, ".", 2) name = names[0] } for _, renderer := range ExternalMarkupRenderers { diff --git a/modules/setting/mirror.go b/modules/setting/mirror.go index 58c57c5c95..083c67db45 100644 --- a/modules/setting/mirror.go +++ b/modules/setting/mirror.go @@ -48,11 +48,7 @@ func loadMirrorFrom(rootCfg ConfigProvider) { Mirror.MinInterval = 1 * time.Minute } if Mirror.DefaultInterval < Mirror.MinInterval { - if time.Hour*8 < Mirror.MinInterval { - Mirror.DefaultInterval = Mirror.MinInterval - } else { - Mirror.DefaultInterval = time.Hour * 8 - } + Mirror.DefaultInterval = max(time.Hour*8, Mirror.MinInterval) log.Warn("Mirror.DefaultInterval is less than Mirror.MinInterval, set to %s", Mirror.DefaultInterval.String()) } } diff --git a/modules/setting/storage.go b/modules/setting/storage.go index e458300727..93958219a8 100644 --- a/modules/setting/storage.go +++ b/modules/setting/storage.go @@ -7,6 +7,7 @@ import ( "errors" "fmt" "path/filepath" + "slices" "strings" ) @@ -27,12 +28,7 @@ var storageTypes = []StorageType{ // IsValidStorageType returns true if the given storage type is valid func IsValidStorageType(storageType StorageType) bool { - for _, t := range storageTypes { - if t == storageType { - return true - } - } - return false + return slices.Contains(storageTypes, storageType) } // 
MinioStorageConfig represents the configuration for a minio storage diff --git a/modules/structs/action.go b/modules/structs/action.go index a39ae11d65..cb6d76f3e3 100644 --- a/modules/structs/action.go +++ b/modules/structs/action.go @@ -70,13 +70,13 @@ type ActionRun struct { // the current status of this run Status string `json:"status"` // when the action run was started - Started time.Time `json:"started,omitempty"` + Started time.Time `json:"started"` // when the action run was stopped - Stopped time.Time `json:"stopped,omitempty"` + Stopped time.Time `json:"stopped"` // when the action run was created - Created time.Time `json:"created,omitempty"` + Created time.Time `json:"created"` // when the action run was last updated - Updated time.Time `json:"updated,omitempty"` + Updated time.Time `json:"updated"` // how long the action run ran for Duration time.Duration `json:"duration,omitempty"` // the url of this action run diff --git a/modules/structs/issue.go b/modules/structs/issue.go index 6208c28be1..37c71f5736 100644 --- a/modules/structs/issue.go +++ b/modules/structs/issue.go @@ -204,7 +204,7 @@ func (l *IssueTemplateLabels) UnmarshalYAML(value *yaml.Node) error { if err != nil { return err } - for _, v := range strings.Split(str, ",") { + for v := range strings.SplitSeq(str, ",") { if v = strings.TrimSpace(v); v == "" { continue } diff --git a/modules/structs/repo.go b/modules/structs/repo.go index 059f19c2bb..3fa43ce0cb 100644 --- a/modules/structs/repo.go +++ b/modules/structs/repo.go @@ -118,7 +118,7 @@ type Repository struct { // enum: ["sha1", "sha256"] ObjectFormatName string `json:"object_format_name"` // swagger:strfmt date-time - MirrorUpdated time.Time `json:"mirror_updated,omitempty"` + MirrorUpdated time.Time `json:"mirror_updated"` RepoTransfer *RepoTransfer `json:"repo_transfer"` Topics []string `json:"topics"` } diff --git a/modules/structs/user.go b/modules/structs/user.go index 49e4c495cf..e0767071d0 100644 --- a/modules/structs/user.go 
+++ b/modules/structs/user.go @@ -34,9 +34,9 @@ type User struct { // Is the user an administrator IsAdmin bool `json:"is_admin"` // swagger:strfmt date-time - LastLogin time.Time `json:"last_login,omitempty"` + LastLogin time.Time `json:"last_login"` // swagger:strfmt date-time - Created time.Time `json:"created,omitempty"` + Created time.Time `json:"created"` // Is user restricted Restricted bool `json:"restricted"` // Is user active diff --git a/modules/structs/user_gpgkey.go b/modules/structs/user_gpgkey.go index ff9b0aea1d..deae70de33 100644 --- a/modules/structs/user_gpgkey.go +++ b/modules/structs/user_gpgkey.go @@ -21,9 +21,9 @@ type GPGKey struct { CanCertify bool `json:"can_certify"` Verified bool `json:"verified"` // swagger:strfmt date-time - Created time.Time `json:"created_at,omitempty"` + Created time.Time `json:"created_at"` // swagger:strfmt date-time - Expires time.Time `json:"expires_at,omitempty"` + Expires time.Time `json:"expires_at"` } // GPGKeyEmail an email attached to a GPGKey diff --git a/modules/structs/user_key.go b/modules/structs/user_key.go index 08eed59a89..b92552b200 100644 --- a/modules/structs/user_key.go +++ b/modules/structs/user_key.go @@ -15,7 +15,7 @@ type PublicKey struct { Title string `json:"title,omitempty"` Fingerprint string `json:"fingerprint,omitempty"` // swagger:strfmt date-time - Created time.Time `json:"created_at,omitempty"` + Created time.Time `json:"created_at"` Owner *User `json:"user,omitempty"` ReadOnly bool `json:"read_only,omitempty"` KeyType string `json:"key_type,omitempty"` diff --git a/modules/templates/eval/eval_test.go b/modules/templates/eval/eval_test.go index 3e68203638..6b13d14007 100644 --- a/modules/templates/eval/eval_test.go +++ b/modules/templates/eval/eval_test.go @@ -13,7 +13,7 @@ import ( ) func tokens(s string) (a []any) { - for _, v := range strings.Fields(s) { + for v := range strings.FieldsSeq(s) { a = append(a, v) } return a diff --git a/modules/templates/htmlrenderer.go 
b/modules/templates/htmlrenderer.go index d60397df08..4290e1c29f 100644 --- a/modules/templates/htmlrenderer.go +++ b/modules/templates/htmlrenderer.go @@ -248,7 +248,7 @@ func extractErrorLine(code []byte, lineNum, posNum int, target string) string { b := bufio.NewReader(bytes.NewReader(code)) var line []byte var err error - for i := 0; i < lineNum; i++ { + for i := range lineNum { if line, err = b.ReadBytes('\n'); err != nil { if i == lineNum-1 && errors.Is(err, io.EOF) { err = nil diff --git a/modules/templates/scopedtmpl/scopedtmpl.go b/modules/templates/scopedtmpl/scopedtmpl.go index 41a8ca86e9..d9866b3513 100644 --- a/modules/templates/scopedtmpl/scopedtmpl.go +++ b/modules/templates/scopedtmpl/scopedtmpl.go @@ -7,6 +7,7 @@ import ( "fmt" "html/template" "io" + "maps" "reflect" "sync" texttemplate "text/template" @@ -40,9 +41,7 @@ func (t *ScopedTemplate) Funcs(funcMap template.FuncMap) { panic("cannot add new functions to frozen template set") } t.all.Funcs(funcMap) - for k, v := range funcMap { - t.parseFuncs[k] = v - } + maps.Copy(t.parseFuncs, funcMap) } func (t *ScopedTemplate) New(name string) *template.Template { @@ -159,9 +158,7 @@ func newScopedTemplateSet(all *template.Template, name string) (*scopedTemplateS textTmplPtr.muFuncs.Lock() ts.execFuncs = map[string]reflect.Value{} - for k, v := range textTmplPtr.execFuncs { - ts.execFuncs[k] = v - } + maps.Copy(ts.execFuncs, textTmplPtr.execFuncs) textTmplPtr.muFuncs.Unlock() var collectTemplates func(nodes []parse.Node) @@ -220,9 +217,7 @@ func (ts *scopedTemplateSet) newExecutor(funcMap map[string]any) TemplateExecuto tmpl := texttemplate.New("") tmplPtr := ptr[textTemplate](tmpl) tmplPtr.execFuncs = map[string]reflect.Value{} - for k, v := range ts.execFuncs { - tmplPtr.execFuncs[k] = v - } + maps.Copy(tmplPtr.execFuncs, ts.execFuncs) if funcMap != nil { tmpl.Funcs(funcMap) } diff --git a/modules/templates/util_render.go b/modules/templates/util_render.go index 02851ed75d..e1ad83b88d 100644 --- 
a/modules/templates/util_render.go +++ b/modules/templates/util_render.go @@ -246,7 +246,8 @@ func RenderMarkdownToHtml(ctx context.Context, input string) template.HTML { //n } func RenderLabels(ctx *Context, labels []*issues_model.Label, repoLink string, isPull bool) template.HTML { - htmlCode := `` + var htmlCode strings.Builder + htmlCode.WriteString(``) for _, label := range labels { // Protect against nil value in labels - shouldn't happen but would cause a panic if so if label == nil { @@ -257,11 +258,11 @@ func RenderLabels(ctx *Context, labels []*issues_model.Label, repoLink string, i if isPull { issuesOrPull = "pulls" } - htmlCode += fmt.Sprintf("%s ", + fmt.Fprintf(&htmlCode, "%s ", repoLink, issuesOrPull, label.ID, RenderLabel(ctx, label)) } - htmlCode += "" - return template.HTML(htmlCode) + htmlCode.WriteString("") + return template.HTML(htmlCode.String()) } func RenderUser(ctx context.Context, user user_model.User) template.HTML { diff --git a/modules/test/logchecker.go b/modules/test/logchecker.go index 8e8fc32216..af82ff0461 100644 --- a/modules/test/logchecker.go +++ b/modules/test/logchecker.go @@ -53,11 +53,11 @@ func (lc *LogChecker) checkLogEvent(event *log.EventFormatted) { } } -var checkerIndex int64 +var checkerIndex atomic.Int64 func NewLogChecker(namePrefix string, level log.Level) (logChecker *LogChecker, cancel func()) { logger := log.GetManager().GetLogger(namePrefix) - newCheckerIndex := atomic.AddInt64(&checkerIndex, 1) + newCheckerIndex := checkerIndex.Add(1) writerName := namePrefix + "-" + fmt.Sprint(newCheckerIndex) lc := &LogChecker{} diff --git a/modules/testlogger/testlogger.go b/modules/testlogger/testlogger.go index 6ced5f6780..54f0462703 100644 --- a/modules/testlogger/testlogger.go +++ b/modules/testlogger/testlogger.go @@ -501,7 +501,7 @@ func PrintCurrentTest(t testing.TB, skip ...int) func() { // Printf takes a format and args and prints the string to os.Stdout func Printf(format string, args ...any) { if 
log.CanColorStdout { - for i := 0; i < len(args); i++ { + for i := range args { args[i] = log.NewColoredValue(args[i]) } } diff --git a/modules/updatechecker/update_checker.go b/modules/updatechecker/update_checker.go index b0932ba663..8b524b6519 100644 --- a/modules/updatechecker/update_checker.go +++ b/modules/updatechecker/update_checker.go @@ -60,9 +60,9 @@ func getVersionDNS(domainEndpoint string) (version string, err error) { } for _, record := range records { - if strings.HasPrefix(record, "forgejo_versions=") { + if after, ok := strings.CutPrefix(record, "forgejo_versions="); ok { // Get all supported versions, separated by a comma. - supportedVersions := strings.Split(strings.TrimPrefix(record, "forgejo_versions="), ",") + supportedVersions := strings.Split(after, ",") // For now always return the latest supported version. return supportedVersions[len(supportedVersions)-1], nil } diff --git a/modules/util/remove.go b/modules/util/remove.go index 2a65a6b0aa..e2cffc92c9 100644 --- a/modules/util/remove.go +++ b/modules/util/remove.go @@ -12,7 +12,7 @@ import ( // Remove removes the named file or (empty) directory with at most 5 attempts. func Remove(name string) error { var err error - for i := 0; i < 5; i++ { + for range 5 { err = os.Remove(name) if err == nil { break @@ -35,7 +35,7 @@ func Remove(name string) error { // RemoveAll removes the named file or (empty) directory with at most 5 attempts. func RemoveAll(name string) error { var err error - for i := 0; i < 5; i++ { + for range 5 { err = os.RemoveAll(name) if err == nil { break @@ -58,7 +58,7 @@ func RemoveAll(name string) error { // Rename renames (moves) oldpath to newpath with at most 5 attempts. 
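The update_checker hunk above replaces a `HasPrefix` check followed by `TrimPrefix` with `strings.CutPrefix` (Go 1.20), which tests for and strips the prefix in one call. A minimal sketch of the record scan, using a made-up record list:

```go
package main

import (
	"fmt"
	"strings"
)

// latestVersion scans TXT-style records for a "forgejo_versions=" prefix and
// returns the last comma-separated entry, mirroring the getVersionDNS shape.
func latestVersion(records []string) (string, bool) {
	for _, record := range records {
		if after, ok := strings.CutPrefix(record, "forgejo_versions="); ok {
			versions := strings.Split(after, ",")
			return versions[len(versions)-1], true
		}
	}
	return "", false
}

func main() {
	v, ok := latestVersion([]string{"other=1", "forgejo_versions=11.0,12.0,13.0"})
	fmt.Println(v, ok)
}
```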
func Rename(oldpath, newpath string) error { var err error - for i := 0; i < 5; i++ { + for i := range 5 { err = os.Rename(oldpath, newpath) if err == nil { break diff --git a/modules/util/rotatingfilewriter/writer_test.go b/modules/util/rotatingfilewriter/writer_test.go index 5b3b351667..c3664d8c4f 100644 --- a/modules/util/rotatingfilewriter/writer_test.go +++ b/modules/util/rotatingfilewriter/writer_test.go @@ -24,7 +24,7 @@ func TestCompressOldFile(t *testing.T) { ng, err := os.OpenFile(nonGzip, os.O_CREATE|os.O_WRONLY, 0o660) require.NoError(t, err) - for i := 0; i < 999; i++ { + for range 999 { f.WriteString("This is a test file\n") ng.WriteString("This is a test file\n") } diff --git a/modules/util/timer_test.go b/modules/util/timer_test.go index 602800c248..1f9a4ac586 100644 --- a/modules/util/timer_test.go +++ b/modules/util/timer_test.go @@ -12,19 +12,19 @@ import ( ) func TestDebounce(t *testing.T) { - var c int64 + var c atomic.Int64 d := Debounce(50 * time.Millisecond) - d(func() { atomic.AddInt64(&c, 1) }) - assert.EqualValues(t, 0, atomic.LoadInt64(&c)) - d(func() { atomic.AddInt64(&c, 1) }) - d(func() { atomic.AddInt64(&c, 1) }) + d(func() { c.Add(1) }) + assert.EqualValues(t, 0, c.Load()) + d(func() { c.Add(1) }) + d(func() { c.Add(1) }) time.Sleep(100 * time.Millisecond) - assert.EqualValues(t, 1, atomic.LoadInt64(&c)) - d(func() { atomic.AddInt64(&c, 1) }) - assert.EqualValues(t, 1, atomic.LoadInt64(&c)) - d(func() { atomic.AddInt64(&c, 1) }) - d(func() { atomic.AddInt64(&c, 1) }) - d(func() { atomic.AddInt64(&c, 1) }) + assert.EqualValues(t, 1, c.Load()) + d(func() { c.Add(1) }) + assert.EqualValues(t, 1, c.Load()) + d(func() { c.Add(1) }) + d(func() { c.Add(1) }) + d(func() { c.Add(1) }) time.Sleep(100 * time.Millisecond) - assert.EqualValues(t, 2, atomic.LoadInt64(&c)) + assert.EqualValues(t, 2, c.Load()) } diff --git a/modules/util/truncate.go b/modules/util/truncate.go index 7207a89177..35836f745c 100644 --- a/modules/util/truncate.go +++ 
b/modules/util/truncate.go @@ -47,7 +47,7 @@ func SplitTrimSpace(input, sep string) []string { input = strings.ReplaceAll(input, "\r\n", "\n") var stringList []string - for _, s := range strings.Split(input, sep) { + for s := range strings.SplitSeq(input, sep) { // trim leading and trailing space stringList = append(stringList, strings.TrimSpace(s)) } diff --git a/modules/util/util_test.go b/modules/util/util_test.go index a85113b2f4..24fca75e7b 100644 --- a/modules/util/util_test.go +++ b/modules/util/util_test.go @@ -243,7 +243,7 @@ func TestGeneratingEd25519Keypair(t *testing.T) { // And another 32 bytes are required, which is included as random value // in the OpenSSH format. b := make([]byte, 64) - for i := 0; i < 64; i++ { + for i := range 64 { b[i] = byte(i) } rand.Reader = bytes.NewReader(b) diff --git a/modules/validation/binding.go b/modules/validation/binding.go index 463e7e8f7a..23d0622de4 100644 --- a/modules/validation/binding.go +++ b/modules/validation/binding.go @@ -266,17 +266,17 @@ func addEmailBindingRules() { } func portOnly(hostport string) string { - colon := strings.IndexByte(hostport, ':') - if colon == -1 { + _, after, ok := strings.Cut(hostport, ":") + if !ok { return "" } - if i := strings.Index(hostport, "]:"); i != -1 { - return hostport[i+len("]:"):] + if _, after, ok := strings.Cut(hostport, "]:"); ok { + return after } if strings.Contains(hostport, "]") { return "" } - return hostport[colon+len(":"):] + return after } func validPort(p string) bool { diff --git a/modules/validation/helpers.go b/modules/validation/helpers.go index 848fb70af5..ce451b8ff4 100644 --- a/modules/validation/helpers.go +++ b/modules/validation/helpers.go @@ -7,6 +7,7 @@ import ( "net" "net/url" "regexp" + "slices" "strings" "forgejo.org/modules/setting" @@ -40,12 +41,7 @@ func IsValidSiteURL(uri string) bool { return false } - for _, scheme := range setting.Service.ValidSiteURLSchemes { - if scheme == u.Scheme { - return true - } - } - return false + return 
slices.Contains(setting.Service.ValidSiteURLSchemes, u.Scheme) } // IsAPIURL checks if URL is current Gitea instance API URL diff --git a/modules/validation/validatable.go b/modules/validation/validatable.go index 1b0d4aa382..1751e727f3 100644 --- a/modules/validation/validatable.go +++ b/modules/validation/validatable.go @@ -6,6 +6,7 @@ package validation import ( "fmt" "reflect" + "slices" "strings" "unicode/utf8" @@ -87,10 +88,8 @@ func ValidateMaxLen(value string, maxLen int, name string) []string { } func ValidateOneOf(value any, allowed []any, name string) []string { - for _, allowedElem := range allowed { - if value == allowedElem { - return []string{} - } + if slices.Contains(allowed, value) { + return []string{} } return []string{fmt.Sprintf("Field %s contains the value %v, which is not in allowed subset %v", name, value, allowed)} } diff --git a/modules/web/handler.go b/modules/web/handler.go index 4a7f28b1fa..e3f0b029fd 100644 --- a/modules/web/handler.go +++ b/modules/web/handler.go @@ -17,7 +17,7 @@ import ( var responseStatusProviders = map[reflect.Type]func(req *http.Request) types.ResponseStatusProvider{} func RegisterResponseStatusProvider[T any](fn func(req *http.Request) types.ResponseStatusProvider) { - responseStatusProviders[reflect.TypeOf((*T)(nil)).Elem()] = fn + responseStatusProviders[reflect.TypeFor[T]()] = fn } // responseWriter is a wrapper of http.ResponseWriter, to check whether the response has been written @@ -49,9 +49,9 @@ func (r *responseWriter) WriteHeader(statusCode int) { } var ( - httpReqType = reflect.TypeOf((*http.Request)(nil)) - respWriterType = reflect.TypeOf((*http.ResponseWriter)(nil)).Elem() - cancelFuncType = reflect.TypeOf((*goctx.CancelFunc)(nil)).Elem() + httpReqType = reflect.TypeFor[*http.Request]() + respWriterType = reflect.TypeFor[http.ResponseWriter]() + cancelFuncType = reflect.TypeFor[goctx.CancelFunc]() ) // preCheckHandler checks whether the handler is valid, developers could get first-time feedback, all 
mistakes could be found at startup diff --git a/modules/web/middleware/binding.go b/modules/web/middleware/binding.go index 123eb29015..06bf55b571 100644 --- a/modules/web/middleware/binding.go +++ b/modules/web/middleware/binding.go @@ -30,7 +30,7 @@ func AssignForm(form any, data map[string]any) { typ := reflect.TypeOf(form) val := reflect.ValueOf(form) - for typ.Kind() == reflect.Ptr { + for typ.Kind() == reflect.Pointer { typ = typ.Elem() val = val.Elem() } @@ -51,7 +51,7 @@ func AssignForm(form any, data map[string]any) { } func getRuleBody(field reflect.StructField, prefix string) string { - for _, rule := range strings.Split(field.Tag.Get("binding"), ";") { + for rule := range strings.SplitSeq(field.Tag.Get("binding"), ";") { if strings.HasPrefix(rule, prefix) { return rule[len(prefix) : len(rule)-1] } @@ -99,7 +99,7 @@ func Validate(errs binding.Errors, data map[string]any, f any, l translation.Loc typ := reflect.TypeOf(f) - if typ.Kind() == reflect.Ptr { + if typ.Kind() == reflect.Pointer { typ = typ.Elem() } diff --git a/modules/web/middleware/data.go b/modules/web/middleware/data.go index 4603e64052..c8bb8276c5 100644 --- a/modules/web/middleware/data.go +++ b/modules/web/middleware/data.go @@ -5,6 +5,7 @@ package middleware import ( "context" + "maps" "time" "forgejo.org/modules/setting" @@ -22,9 +23,7 @@ func (ds ContextData) GetData() ContextData { } func (ds ContextData) MergeFrom(other ContextData) ContextData { - for k, v := range other { - ds[k] = v - } + maps.Copy(ds, other) return ds } diff --git a/modules/web/route.go b/modules/web/route.go index ceb97ba333..dc83178f74 100644 --- a/modules/web/route.go +++ b/modules/web/route.go @@ -107,8 +107,8 @@ func (r *Route) Methods(methods, pattern string, h ...any) { middlewares, handlerFunc := r.wrapMiddlewareAndHandler(h) fullPattern := r.getPattern(pattern) if strings.Contains(methods, ",") { - methods := strings.Split(methods, ",") - for _, method := range methods { + methods := 
strings.SplitSeq(methods, ",") + for method := range methods { r.R.With(middlewares...).Method(strings.TrimSpace(method), fullPattern, handlerFunc) } } else { diff --git a/routers/api/actions/oidc.go b/routers/api/actions/oidc.go index 92341e4f66..d824030ca7 100644 --- a/routers/api/actions/oidc.go +++ b/routers/api/actions/oidc.go @@ -99,8 +99,7 @@ func OIDCRoutes(prefix string) *web.Route { // Add custom claims by iterating over [actions_service.IDTokenCustomClaims] // and inspecting the names of the json struct tags - customClaims := actions_service.IDTokenCustomClaims{} - rt := reflect.TypeOf(customClaims) + rt := reflect.TypeFor[actions_service.IDTokenCustomClaims]() for i := 0; i < rt.NumField(); i++ { f := rt.Field(i) diff --git a/routers/api/packages/cargo/cargo.go b/routers/api/packages/cargo/cargo.go index 50dc8d1c3d..9d4539a732 100644 --- a/routers/api/packages/cargo/cargo.go +++ b/routers/api/packages/cargo/cargo.go @@ -95,10 +95,7 @@ type SearchResultMeta struct { // https://doc.rust-lang.org/cargo/reference/registries.html#search func SearchPackages(ctx *context.Context) { - page := ctx.FormInt("page") - if page < 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) perPage := ctx.FormInt("per_page") paginator := db.ListOptions{ Page: page, diff --git a/routers/api/packages/composer/composer.go b/routers/api/packages/composer/composer.go index 9e67d419ec..8f87d27f3f 100644 --- a/routers/api/packages/composer/composer.go +++ b/routers/api/packages/composer/composer.go @@ -53,10 +53,7 @@ func ServiceIndex(ctx *context.Context) { // SearchPackages searches packages, only "q" is supported // https://packagist.org/apidoc#search-packages func SearchPackages(ctx *context.Context) { - page := ctx.FormInt("page") - if page < 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) perPage := ctx.FormInt("per_page") paginator := db.ListOptions{ Page: page, diff --git a/routers/api/v1/repo/issue_dependency.go b/routers/api/v1/repo/issue_dependency.go index 
7c087e28b9..3d3ca58bd4 100644 --- a/routers/api/v1/repo/issue_dependency.go +++ b/routers/api/v1/repo/issue_dependency.go @@ -78,10 +78,7 @@ func GetIssueDependencies(ctx *context.APIContext) { return } - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) limit := ctx.FormInt("limit") if limit == 0 { limit = setting.API.DefaultPagingNum @@ -332,10 +329,7 @@ func GetIssueBlocks(ctx *context.APIContext) { return } - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) limit := ctx.FormInt("limit") if limit <= 1 { limit = setting.API.DefaultPagingNum diff --git a/routers/api/v1/repo/wiki.go b/routers/api/v1/repo/wiki.go index 71d29d026b..2b14bacf89 100644 --- a/routers/api/v1/repo/wiki.go +++ b/routers/api/v1/repo/wiki.go @@ -303,10 +303,7 @@ func ListWikiPages(ctx *context.APIContext) { return } - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) limit := ctx.FormInt("limit") if limit <= 1 { limit = setting.API.DefaultPagingNum @@ -439,10 +436,7 @@ func ListPageRevisions(ctx *context.APIContext) { // get commit count - wiki revisions commitsCount, _ := wikiRepo.FileCommitsCount(ctx.Repo.Repository.GetWikiBranchName(), pageFilename) - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) // get Commit Count commitsHistory, err := wikiRepo.CommitsByFileAndRange( diff --git a/routers/install/install.go b/routers/install/install.go index 243f4a8f19..95452e64dd 100644 --- a/routers/install/install.go +++ b/routers/install/install.go @@ -11,6 +11,7 @@ import ( "os" "os/exec" "path/filepath" + "slices" "strconv" "strings" "time" @@ -103,11 +104,8 @@ func Install(ctx *context.Context) { curDBType := setting.Database.Type.String() var isCurDBTypeSupported bool - for _, dbType := range setting.SupportedDatabaseTypes { - if dbType == curDBType { - isCurDBTypeSupported = true - break - } + if 
slices.Contains(setting.SupportedDatabaseTypes, curDBType) { + isCurDBTypeSupported = true } if !isCurDBTypeSupported { curDBType = "mysql" diff --git a/routers/private/serv.go b/routers/private/serv.go index 7c4a5b8bb7..26f7e288cd 100644 --- a/routers/private/serv.go +++ b/routers/private/serv.go @@ -6,6 +6,7 @@ package private import ( "fmt" "net/http" + "slices" "strings" asymkey_model "forgejo.org/models/asymkey" @@ -165,15 +166,13 @@ func ServCommand(ctx *context.PrivateContext) { if err != nil { if repo_model.IsErrRepoNotExist(err) { repoExist = false - for _, verb := range ctx.FormStrings("verb") { - if verb == "git-upload-pack" { - // User is fetching/cloning a non-existent repository - sshLogger.Warn("Failed authentication attempt (cannot find repository: %s/%s) from %s", results.OwnerName, results.RepoName, ctx.RemoteAddr()) - ctx.JSON(http.StatusNotFound, private.Response{ - UserMsg: fmt.Sprintf("Cannot find repository: %s/%s", results.OwnerName, results.RepoName), - }) - return - } + if slices.Contains(ctx.FormStrings("verb"), "git-upload-pack") { + // User is fetching/cloning a non-existent repository + sshLogger.Warn("Failed authentication attempt (cannot find repository: %s/%s) from %s", results.OwnerName, results.RepoName, ctx.RemoteAddr()) + ctx.JSON(http.StatusNotFound, private.Response{ + UserMsg: fmt.Sprintf("Cannot find repository: %s/%s", results.OwnerName, results.RepoName), + }) + return } } else { sshLogger.Error("Unable to get repository: %s/%s Error: %v", results.OwnerName, results.RepoName, err) diff --git a/routers/web/admin/auths.go b/routers/web/admin/auths.go index 27a241f508..e03fea9da2 100644 --- a/routers/web/admin/auths.go +++ b/routers/web/admin/auths.go @@ -166,7 +166,7 @@ func parseOAuth2Config(form forms.AuthenticationForm) *oauth2.Source { customURLMapping = nil } var scopes []string - for _, s := range strings.Split(form.Oauth2Scopes, ",") { + for s := range strings.SplitSeq(form.Oauth2Scopes, ",") { s = 
strings.TrimSpace(s) if s != "" { scopes = append(scopes, s) diff --git a/routers/web/admin/config.go b/routers/web/admin/config.go index e1c3a5f9ee..2d3ea78052 100644 --- a/routers/web/admin/config.go +++ b/routers/web/admin/config.go @@ -62,7 +62,7 @@ func TestCache(ctx *context.Context) { func shadowPasswordKV(cfgItem, splitter string) string { fields := strings.Split(cfgItem, splitter) - for i := 0; i < len(fields); i++ { + for i := range fields { if strings.HasPrefix(fields[i], "password=") { fields[i] = "password=******" break diff --git a/routers/web/admin/notice.go b/routers/web/admin/notice.go index 8bcaadf915..f67430f386 100644 --- a/routers/web/admin/notice.go +++ b/routers/web/admin/notice.go @@ -26,10 +26,7 @@ func Notices(ctx *context.Context) { ctx.Data["PageIsAdminNotices"] = true total := system_model.CountNotices(ctx) - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) notices, err := system_model.Notices(ctx, page, setting.UI.Admin.NoticePagingNum) if err != nil { diff --git a/routers/web/admin/packages.go b/routers/web/admin/packages.go index 5c80a1eada..032d10fc41 100644 --- a/routers/web/admin/packages.go +++ b/routers/web/admin/packages.go @@ -24,10 +24,7 @@ const ( // Packages shows all packages func Packages(ctx *context.Context) { - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) query := ctx.FormTrim("q") packageType := ctx.FormTrim("type") sort := ctx.FormTrim("sort") diff --git a/routers/web/auth/oauth.go b/routers/web/auth/oauth.go index 89d2fd37ce..eba2b441c4 100644 --- a/routers/web/auth/oauth.go +++ b/routers/web/auth/oauth.go @@ -283,8 +283,8 @@ type userInfoResponse struct { func ifOnlyPublicGroups(scopes string) bool { scopes = strings.ReplaceAll(scopes, ",", " ") - scopesList := strings.Fields(scopes) - for _, scope := range scopesList { + scopesList := strings.FieldsSeq(scopes) + for scope := range scopesList { if scope == "all" || 
scope == "read:organization" || scope == "read:admin" { return false } @@ -424,11 +424,11 @@ func AuthorizeOAuth(ctx *context.Context) { errs := binding.Errors{} errs = form.Validate(ctx.Req, errs) if len(errs) > 0 { - errstring := "" + var errstring strings.Builder for _, e := range errs { - errstring += e.Error() + "\n" + errstring.WriteString(e.Error() + "\n") } - ctx.ServerError("AuthorizeOAuth: Validate: ", fmt.Errorf("errors occurred during validation: %s", errstring)) + ctx.ServerError("AuthorizeOAuth: Validate: ", fmt.Errorf("errors occurred during validation: %s", errstring.String())) return } diff --git a/routers/web/org/members.go b/routers/web/org/members.go index 65e2b032e8..3237e23ab9 100644 --- a/routers/web/org/members.go +++ b/routers/web/org/members.go @@ -27,10 +27,7 @@ func Members(ctx *context.Context) { ctx.Data["Title"] = org.FullName ctx.Data["PageIsOrgMembers"] = true - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) opts := &organization.FindOrgMembersOpts{ Doer: ctx.Doer, diff --git a/routers/web/org/projects.go b/routers/web/org/projects.go index a492d85d84..6e5f4079ac 100644 --- a/routers/web/org/projects.go +++ b/routers/web/org/projects.go @@ -48,10 +48,7 @@ func Projects(ctx *context.Context) { isShowClosed := strings.ToLower(ctx.FormTrim("state")) == "closed" keyword := ctx.FormTrim("q") - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) var projectType project_model.Type if ctx.ContextUser.IsOrganization() { diff --git a/routers/web/repo/branch.go b/routers/web/repo/branch.go index 0fe52bfb48..3c32a3b9e5 100644 --- a/routers/web/repo/branch.go +++ b/routers/web/repo/branch.go @@ -46,10 +46,7 @@ func Branches(ctx *context.Context) { ctx.Data["PageIsViewCode"] = true ctx.Data["PageIsBranches"] = true - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) pageSize := 
setting.Git.BranchesRangeSize kw := ctx.FormString("q") diff --git a/routers/web/repo/commit.go b/routers/web/repo/commit.go index 3db8a091b0..5b8f35f5b1 100644 --- a/routers/web/repo/commit.go +++ b/routers/web/repo/commit.go @@ -68,10 +68,7 @@ func Commits(ctx *context.Context) { return } - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) pageSize := ctx.FormInt("limit") if pageSize <= 0 { @@ -241,10 +238,7 @@ func FileHistory(ctx *context.Context) { return } - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) commits, err := ctx.Repo.GitRepo.CommitsByFileAndRange( git.CommitsByFileAndRangeOptions{ diff --git a/routers/web/repo/editor.go b/routers/web/repo/editor.go index 2948c1db3c..fb1aaf042a 100644 --- a/routers/web/repo/editor.go +++ b/routers/web/repo/editor.go @@ -832,7 +832,7 @@ func cleanUploadFileName(name string) string { // Rebase the filename name = util.PathJoinRel(name) // Git disallows any filenames to have a .git directory in them. - for _, part := range strings.Split(name, "/") { + for part := range strings.SplitSeq(name, "/") { if strings.ToLower(part) == ".git" { return "" } diff --git a/routers/web/repo/githttp.go b/routers/web/repo/githttp.go index 403245596d..de05d99a3e 100644 --- a/routers/web/repo/githttp.go +++ b/routers/web/repo/githttp.go @@ -13,6 +13,7 @@ import ( "os" "path/filepath" "regexp" + "slices" "strconv" "strings" "sync" @@ -363,12 +364,7 @@ func containsParentDirectorySeparator(v string) bool { if !strings.Contains(v, "..") { return false } - for _, ent := range strings.FieldsFunc(v, isSlashRune) { - if ent == ".." 
{ - return true - } - } - return false + return slices.Contains(strings.FieldsFunc(v, isSlashRune), "..") } func isSlashRune(r rune) bool { return r == '/' || r == '\\' } diff --git a/routers/web/repo/issue.go b/routers/web/repo/issue.go index cc09651e97..3852c4c18f 100644 --- a/routers/web/repo/issue.go +++ b/routers/web/repo/issue.go @@ -11,6 +11,7 @@ import ( "errors" "fmt" "html/template" + "maps" "math/big" "net/http" "net/url" @@ -267,10 +268,7 @@ func issues(ctx *context.Context, milestoneID, projectID int64, isPullOption opt archived := ctx.FormBool("archived") - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) var total int switch { @@ -1014,9 +1012,7 @@ func NewIssue(ctx *context.Context) { _, templateErrs := issue_service.GetTemplatesFromDefaultBranch(ctx.Repo.Repository, ctx.Repo.GitRepo) templateLoaded, errs := setTemplateIfExists(ctx, issueTemplateKey, issueTemplateCandidates) - for k, v := range errs { - templateErrs[k] = v - } + maps.Copy(templateErrs, errs) if ctx.Written() { return } @@ -2190,7 +2186,7 @@ func getActionIssues(ctx *context.Context) issues_model.IssueList { return nil } issueIDs := make([]int64, 0, 10) - for _, stringIssueID := range strings.Split(commaSeparatedIssueIDs, ",") { + for stringIssueID := range strings.SplitSeq(commaSeparatedIssueIDs, ",") { issueID, err := strconv.ParseInt(stringIssueID, 10, 64) if err != nil { ctx.ServerError("ParseInt", err) diff --git a/routers/web/repo/issue_label_test.go b/routers/web/repo/issue_label_test.go index 0adcc39499..b4d350b31d 100644 --- a/routers/web/repo/issue_label_test.go +++ b/routers/web/repo/issue_label_test.go @@ -6,6 +6,7 @@ package repo import ( "net/http" "strconv" + "strings" "testing" issues_model "forgejo.org/models/issues" @@ -21,14 +22,14 @@ import ( ) func int64SliceToCommaSeparated(a []int64) string { - s := "" + var s strings.Builder for i, n := range a { if i > 0 { - s += "," + s.WriteString(",") } - s += 
strconv.Itoa(int(n)) + s.WriteString(strconv.Itoa(int(n))) } - return s + return s.String() } func TestInitializeLabels(t *testing.T) { diff --git a/routers/web/repo/milestone.go b/routers/web/repo/milestone.go index 920a9ee12a..5ede62d992 100644 --- a/routers/web/repo/milestone.go +++ b/routers/web/repo/milestone.go @@ -40,10 +40,7 @@ func Milestones(ctx *context.Context) { isShowClosed := ctx.FormString("state") == "closed" sortType := ctx.FormString("sort") keyword := ctx.FormTrim("q") - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) miles, total, err := db.FindAndCount[issues_model.Milestone](ctx, issues_model.FindMilestoneOptions{ ListOptions: db.ListOptions{ diff --git a/routers/web/repo/packages.go b/routers/web/repo/packages.go index c947fb99bf..fd7e886557 100644 --- a/routers/web/repo/packages.go +++ b/routers/web/repo/packages.go @@ -21,10 +21,7 @@ const ( // Packages displays a list of all packages in the repository func Packages(ctx *context.Context) { - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) query := ctx.FormTrim("q") packageType := ctx.FormTrim("type") diff --git a/routers/web/repo/projects.go b/routers/web/repo/projects.go index e3e9ce0eb7..98e4c35fa2 100644 --- a/routers/web/repo/projects.go +++ b/routers/web/repo/projects.go @@ -57,10 +57,7 @@ func Projects(ctx *context.Context) { isShowClosed := strings.ToLower(ctx.FormTrim("state")) == "closed" keyword := ctx.FormTrim("q") repo := ctx.Repo.Repository - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) ctx.Data["OpenCount"] = repo.NumOpenProjects ctx.Data["ClosedCount"] = repo.NumClosedProjects diff --git a/routers/web/repo/repo.go b/routers/web/repo/repo.go index 6b6ec55720..e5ede13bf5 100644 --- a/routers/web/repo/repo.go +++ b/routers/web/repo/repo.go @@ -92,7 +92,7 @@ func checkContextUser(ctx *context.Context, uid int64) *user_model.User 
{ if !ctx.Doer.IsAdmin { orgsAvailable := []*organization.Organization{} - for i := 0; i < len(orgs); i++ { + for i := range orgs { if orgs[i].CanCreateRepo() { orgsAvailable = append(orgsAvailable, orgs[i]) } diff --git a/routers/web/repo/setting/lfs.go b/routers/web/repo/setting/lfs.go index 78184930d3..40181ebb52 100644 --- a/routers/web/repo/setting/lfs.go +++ b/routers/web/repo/setting/lfs.go @@ -44,10 +44,7 @@ func LFSFiles(ctx *context.Context) { ctx.NotFound("LFSFiles", nil) return } - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) total, err := git_model.CountLFSMetaObjects(ctx, ctx.Repo.Repository.ID) if err != nil { ctx.ServerError("LFSFiles", err) @@ -76,10 +73,7 @@ func LFSLocks(ctx *context.Context) { } ctx.Data["LFSFilesLink"] = ctx.Repo.RepoLink + "/settings/lfs" - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) total, err := git_model.CountLFSLockByRepoID(ctx, ctx.Repo.Repository.ID) if err != nil { ctx.ServerError("LFSLocks", err) diff --git a/routers/web/repo/wiki.go b/routers/web/repo/wiki.go index 1b5265978a..8153288312 100644 --- a/routers/web/repo/wiki.go +++ b/routers/web/repo/wiki.go @@ -374,10 +374,7 @@ func renderRevisionPage(ctx *context.Context) (*git.Repository, *git.TreeEntry) ctx.Data["CommitCount"] = commitsCount // get page - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) // get Commit Count commitsHistory, err := wikiRepo.CommitsByFileAndRange( diff --git a/routers/web/shared/actions/runners.go b/routers/web/shared/actions/runners.go index 012ec246fd..93543f92df 100644 --- a/routers/web/shared/actions/runners.go +++ b/routers/web/shared/actions/runners.go @@ -134,10 +134,7 @@ func RunnersList(ctx *context.Context) { return } - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) opts := actions_model.FindRunnerOptions{ ListOptions: 
db.ListOptions{ @@ -216,10 +213,7 @@ func RunnerDetails(ctx *context.Context) { } runnerID := ctx.ParamsInt64(":runnerid") - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) runner, err := actions_model.GetVisibleRunnerByID(ctx, runnerID, rCtx.OwnerID, rCtx.RepoID) if errors.Is(err, util.ErrNotExist) { diff --git a/routers/web/user/home.go b/routers/web/user/home.go index 9c40236475..60d6fc2bd0 100644 --- a/routers/web/user/home.go +++ b/routers/web/user/home.go @@ -191,7 +191,7 @@ func Milestones(ctx *context.Context) { reposQuery = reposQuery[1 : len(reposQuery)-1] // for each ID (delimiter ",") add to int to repoIDs - for _, rID := range strings.Split(reposQuery, ",") { + for rID := range strings.SplitSeq(reposQuery, ",") { // Ensure nonempty string entries if rID != "" && rID != "0" { rIDint64, err := strconv.ParseInt(rID, 10, 64) @@ -532,10 +532,7 @@ func buildIssueOverview(ctx *context.Context, unitType unit.Type) { opts.IsClosed = optional.Some(isShowClosed) // Make sure page number is at least 1. Will be posted to ctx.Data. 
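The recurring page-clamping hunks in this patch collapse the `if page <= 1 { page = 1 }` boilerplate into Go 1.21's built-in `max`. A minimal sketch of the equivalence (function names here are illustrative, not from the patch):

```go
package main

import "fmt"

// clampPageOld mirrors the boilerplate removed by the patch.
func clampPageOld(page int) int {
	if page <= 1 {
		page = 1
	}
	return page
}

// clampPageNew mirrors the replacement using the built-in max (Go 1.21+).
func clampPageNew(page int) int {
	return max(page, 1)
}

func main() {
	for _, p := range []int{-3, 0, 1, 7} {
		fmt.Println(clampPageOld(p), clampPageNew(p)) // both columns always agree
	}
}
```

Note that `max(page, 1)` also covers the variants that tested `page < 1` instead of `page <= 1`; both clamp to the same result.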
- page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) opts.Paginator = &db.ListOptions{ Page: page, PageSize: setting.UI.IssuePagingNum, diff --git a/routers/web/user/notification.go b/routers/web/user/notification.go index 3b69e5bddf..1445658c18 100644 --- a/routers/web/user/notification.go +++ b/routers/web/user/notification.go @@ -226,10 +226,7 @@ func NotificationPurgePost(ctx *context.Context) { // NotificationSubscriptions returns the list of subscribed issues func NotificationSubscriptions(ctx *context.Context) { - page := ctx.FormInt("page") - if page < 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) sortType := ctx.FormString("sort") ctx.Data["SortType"] = sortType @@ -358,10 +355,7 @@ func NotificationSubscriptions(ctx *context.Context) { // NotificationWatching returns the list of watching repos func NotificationWatching(ctx *context.Context) { - page := ctx.FormInt("page") - if page < 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) keyword := ctx.FormTrim("q") ctx.Data["Keyword"] = keyword diff --git a/routers/web/user/package.go b/routers/web/user/package.go index 9a77af0bb2..0c403a2613 100644 --- a/routers/web/user/package.go +++ b/routers/web/user/package.go @@ -42,10 +42,7 @@ const ( // ListPackages displays a list of all packages of the context user func ListPackages(ctx *context.Context) { shared_user.PrepareContextForProfileBigAvatar(ctx) - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) query := ctx.FormTrim("q") packageType := ctx.FormTrim("type") @@ -314,10 +311,7 @@ func ListPackageVersions(ctx *context.Context) { return } - page := ctx.FormInt("page") - if page <= 1 { - page = 1 - } + page := max(ctx.FormInt("page"), 1) pagination := &db.ListOptions{ PageSize: setting.UI.PackagesPagingNum, Page: page, diff --git a/services/actions/rerun.go b/services/actions/rerun.go index f6dd4af5c7..61aede5d7c 100644 --- 
a/services/actions/rerun.go +++ b/services/actions/rerun.go @@ -4,6 +4,8 @@ package actions import ( + "slices" + actions_model "forgejo.org/models/actions" "forgejo.org/modules/container" ) @@ -20,13 +22,10 @@ func GetAllRerunJobs(job *actions_model.ActionRunJob, allJobs []*actions_model.A if rerunJobsIDSet.Contains(j.JobID) { continue } - for _, need := range j.Needs { - if rerunJobsIDSet.Contains(need) { - found = true - rerunJobs = append(rerunJobs, j) - rerunJobsIDSet.Add(j.JobID) - break - } + if slices.ContainsFunc(j.Needs, rerunJobsIDSet.Contains) { + found = true + rerunJobs = append(rerunJobs, j) + rerunJobsIDSet.Add(j.JobID) } } if !found { diff --git a/services/auth/oauth2.go b/services/auth/oauth2.go index 1c1809b092..a23f586ffd 100644 --- a/services/auth/oauth2.go +++ b/services/auth/oauth2.go @@ -39,7 +39,7 @@ func grantAdditionalScopes(grantScopes string) string { } var apiTokenScopes []string - for _, apiTokenScope := range strings.Split(grantScopes, " ") { + for apiTokenScope := range strings.SplitSeq(grantScopes, " ") { if slices.Index(scopesSupported, apiTokenScope) == -1 { apiTokenScopes = append(apiTokenScopes, apiTokenScope) } diff --git a/services/auth/source/oauth2/urlmapping.go b/services/auth/source/oauth2/urlmapping.go index d0442d58a8..b9f445caa7 100644 --- a/services/auth/source/oauth2/urlmapping.go +++ b/services/auth/source/oauth2/urlmapping.go @@ -14,11 +14,11 @@ type CustomURLMapping struct { // CustomURLSettings describes the urls values and availability to use when customizing OAuth2 provider URLs type CustomURLSettings struct { - AuthURL Attribute `json:",omitempty"` - TokenURL Attribute `json:",omitempty"` - ProfileURL Attribute `json:",omitempty"` - EmailURL Attribute `json:",omitempty"` - Tenant Attribute `json:",omitempty"` + AuthURL Attribute + TokenURL Attribute + ProfileURL Attribute + EmailURL Attribute + Tenant Attribute } // Attribute describes the availability, and required status for a custom url configuration diff 
--git a/services/auth/source/pam/source_authenticate.go b/services/auth/source/pam/source_authenticate.go index 8a84683d29..f368d7e0b1 100644 --- a/services/auth/source/pam/source_authenticate.go +++ b/services/auth/source/pam/source_authenticate.go @@ -37,9 +37,9 @@ func (source *Source) Authenticate(ctx context.Context, user *user_model.User, u // Allow PAM sources with `@` in their name, like from Active Directory username := pamLogin email := pamLogin - idx := strings.Index(pamLogin, "@") - if idx > -1 { - username = pamLogin[:idx] + before, _, ok := strings.Cut(pamLogin, "@") + if ok { + username = before } if validation.ValidateEmail(email) != nil { if source.EmailDomain != "" { diff --git a/services/auth/source/smtp/source_authenticate.go b/services/auth/source/smtp/source_authenticate.go index 3d7ccd0669..919a7d0b5b 100644 --- a/services/auth/source/smtp/source_authenticate.go +++ b/services/auth/source/smtp/source_authenticate.go @@ -21,10 +21,10 @@ import ( func (source *Source) Authenticate(ctx context.Context, user *user_model.User, userName, password string) (*user_model.User, error) { // Verify allowed domains. 
if len(source.AllowedDomains) > 0 { - idx := strings.Index(userName, "@") - if idx == -1 { + _, after, ok := strings.Cut(userName, "@") + if !ok { return nil, user_model.ErrUserNotExist{Name: userName} - } else if !util.SliceContainsString(strings.Split(source.AllowedDomains, ","), userName[idx+1:], true) { + } else if !util.SliceContainsString(strings.Split(source.AllowedDomains, ","), after, true) { return nil, user_model.ErrUserNotExist{Name: userName} } } @@ -61,9 +61,9 @@ func (source *Source) Authenticate(ctx context.Context, user *user_model.User, u } username := userName - idx := strings.Index(userName, "@") - if idx > -1 { - username = userName[:idx] + before, _, ok := strings.Cut(userName, "@") + if ok { + username = before } user = &user_model.User{ diff --git a/services/context/api.go b/services/context/api.go index 434da29906..1064b1ab4a 100644 --- a/services/context/api.go +++ b/services/context/api.go @@ -10,6 +10,7 @@ import ( "fmt" "net/http" "net/url" + "slices" "strings" issues_model "forgejo.org/models/issues" @@ -466,13 +467,7 @@ func (ctx *APIContext) IsUserRepoAdmin() bool { // IsUserRepoWriter returns true if current user has write privilege in current repo func (ctx *APIContext) IsUserRepoWriter(unitTypes []unit.Type) bool { - for _, unitType := range unitTypes { - if ctx.Repo.CanWrite(unitType) { - return true - } - } - - return false + return slices.ContainsFunc(unitTypes, ctx.Repo.CanWrite) } // Returns true when the requests indicates that it accepts a Github response. 
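The PAM and SMTP authentication hunks above swap `strings.Index` plus manual slicing for `strings.Cut` (Go 1.18+), which returns both halves and an `ok` flag in one call. A minimal sketch of the pattern (the `splitLogin` helper is hypothetical, standing in for the username/domain handling in those hunks):

```go
package main

import (
	"fmt"
	"strings"
)

// splitLogin separates a login such as "alice@example.com" into username
// and domain; ok is false when the login contains no "@".
func splitLogin(login string) (string, string, bool) {
	before, after, ok := strings.Cut(login, "@")
	if !ok {
		return login, "", false
	}
	return before, after, true
}

func main() {
	u, d, ok := splitLogin("alice@example.com")
	fmt.Println(u, d, ok) // alice example.com true
	u, d, ok = splitLogin("bob")
	fmt.Println(u, d, ok)
}
```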
diff --git a/services/context/context_model.go b/services/context/context_model.go index 1a8751ee63..dae244f843 100644 --- a/services/context/context_model.go +++ b/services/context/context_model.go @@ -4,6 +4,8 @@ package context import ( + "slices" + "forgejo.org/models/unit" ) @@ -19,11 +21,5 @@ func (ctx *Context) IsUserRepoAdmin() bool { // IsUserRepoWriter returns true if current user has write privilege in current repo func (ctx *Context) IsUserRepoWriter(unitTypes []unit.Type) bool { - for _, unitType := range unitTypes { - if ctx.Repo.CanWrite(unitType) { - return true - } - } - - return false + return slices.ContainsFunc(unitTypes, ctx.Repo.CanWrite) } diff --git a/services/context/permission.go b/services/context/permission.go index 49504e5043..f898bd98ae 100644 --- a/services/context/permission.go +++ b/services/context/permission.go @@ -5,6 +5,7 @@ package context import ( "net/http" + "slices" auth_model "forgejo.org/models/auth" "forgejo.org/models/perm" @@ -47,10 +48,8 @@ func CanEnableEditor() func(ctx *Context) { // RequireRepoWriterOr returns a middleware for requiring repository write to one of the unit permission func RequireRepoWriterOr(unitTypes ...unit.Type) func(ctx *Context) { return func(ctx *Context) { - for _, unitType := range unitTypes { - if ctx.Repo.CanWrite(unitType) { - return - } + if slices.ContainsFunc(unitTypes, ctx.Repo.CanWrite) { + return } ctx.NotFound(ctx.Req.URL.RequestURI(), nil) } @@ -85,10 +84,8 @@ func RequireRepoReader(unitType unit.Type) func(ctx *Context) { // RequireRepoReaderOr returns a middleware for requiring repository write to one of the unit permission func RequireRepoReaderOr(unitTypes ...unit.Type) func(ctx *Context) { return func(ctx *Context) { - for _, unitType := range unitTypes { - if ctx.Repo.CanRead(unitType) { - return - } + if slices.ContainsFunc(unitTypes, ctx.Repo.CanRead) { + return } if log.IsTrace() { var format string diff --git a/services/context/repo.go b/services/context/repo.go index 
ebc9109c96..583eb35474 100644 --- a/services/context/repo.go +++ b/services/context/repo.go @@ -395,14 +395,14 @@ func repoAssignment(ctx *Context, repo *repo_model.Repository) { followingRepoList, err := repo_model.FindFollowingReposByRepoID(ctx, repo.ID) if err == nil { - followingRepoString := "" + var followingRepoString strings.Builder for idx, followingRepo := range followingRepoList { if idx > 0 { - followingRepoString += ";" + followingRepoString.WriteString(";") } - followingRepoString += followingRepo.URI + followingRepoString.WriteString(followingRepo.URI) } - ctx.Data["FollowingRepos"] = followingRepoString + ctx.Data["FollowingRepos"] = followingRepoString.String() } else if err != repo_model.ErrMirrorNotExist { ctx.ServerError("FindFollowingRepoByRepoID", err) return diff --git a/services/context/upload/upload.go b/services/context/upload/upload.go index e71fc50c1f..79f4d66f5f 100644 --- a/services/context/upload/upload.go +++ b/services/context/upload/upload.go @@ -38,7 +38,7 @@ func Verify(buf []byte, fileName, allowedTypesStr string) error { allowedTypesStr = strings.ReplaceAll(allowedTypesStr, "|", ",") // compat for old config format allowedTypes := []string{} - for _, entry := range strings.Split(allowedTypesStr, ",") { + for entry := range strings.SplitSeq(allowedTypesStr, ",") { entry = strings.ToLower(strings.TrimSpace(entry)) if entry != "" { allowedTypes = append(allowedTypes, entry) diff --git a/services/convert/activitypub_user_action.go b/services/convert/activitypub_user_action.go index b08eaa14c7..6db3834ef2 100644 --- a/services/convert/activitypub_user_action.go +++ b/services/convert/activitypub_user_action.go @@ -8,6 +8,7 @@ import ( "fmt" "html" "net/url" + "strings" "time" activities_model "forgejo.org/models/activities" @@ -73,7 +74,7 @@ func ActionToForgeUserActivity(ctx context.Context, action *activities_model.Act if err := json.Unmarshal([]byte(action.GetContent()), commits); err != nil { return fm.ForgeUserActivity{}, err } 
- commitsHTML := "" + var commitsHTML strings.Builder renderCommit := func(commit *PushCommit) string { return fmt.Sprintf(`
  • %s
    %s
  • `, fmt.Sprintf("%s/commit/%s", action.GetRepoAbsoluteLink(ctx), url.PathEscape(commit.Sha1)), @@ -82,9 +83,9 @@ func ActionToForgeUserActivity(ctx context.Context, action *activities_model.Act ) } for _, commit := range commits.Commits { - commitsHTML += renderCommit(commit) + commitsHTML.WriteString(renderCommit(commit)) } - return makeUserActivity("pushed to %s at %s:
      %s
    ", renderBranch(), renderRepo(), commitsHTML) + return makeUserActivity("pushed to %s at %s:
      %s
    ", renderBranch(), renderRepo(), commitsHTML.String()) case activities_model.ActionCreateIssue: if err := action.LoadIssue(ctx); err != nil { return fm.ForgeUserActivity{}, err diff --git a/services/cron/tasks.go b/services/cron/tasks.go index b547acdf05..bd64dd081d 100644 --- a/services/cron/tasks.go +++ b/services/cron/tasks.go @@ -54,7 +54,7 @@ func (t *Task) IsEnabled() bool { // GetConfig will return a copy of the task's config func (t *Task) GetConfig() Config { - if reflect.TypeOf(t.config).Kind() == reflect.Ptr { + if reflect.TypeOf(t.config).Kind() == reflect.Pointer { // Pointer: return reflect.New(reflect.ValueOf(t.config).Elem().Type()).Interface().(Config) } diff --git a/services/doctor/push_mirror_consistency.go b/services/doctor/push_mirror_consistency.go index 07986770b2..e85b1740a4 100644 --- a/services/doctor/push_mirror_consistency.go +++ b/services/doctor/push_mirror_consistency.go @@ -23,7 +23,7 @@ func FixPushMirrorsWithoutGitRemote(ctx context.Context, logger log.Logger, auto return err } - for i := 0; i < len(pushMirrors); i++ { + for i := range pushMirrors { _, err = repo_model.GetPushMirrorRemoteAddress(repo.OwnerName, repo.Name, pushMirrors[i].RemoteName) if err != nil { if strings.Contains(err.Error(), "No such remote") { diff --git a/services/forms/repo_form.go b/services/forms/repo_form.go index e894b79a14..f9443b8b6c 100644 --- a/services/forms/repo_form.go +++ b/services/forms/repo_form.go @@ -10,6 +10,7 @@ import ( "net/http" "net/url" "regexp" + "slices" "strings" "forgejo.org/models" @@ -383,13 +384,7 @@ func (i IssueLockForm) HasValidReason() bool { return true } - for _, v := range setting.Repository.Issue.LockReasons { - if v == i.Reason { - return true - } - } - - return false + return slices.Contains(setting.Repository.Issue.LockReasons, i.Reason) } // CreateProjectForm form for creating a project diff --git a/services/gitdiff/csv.go b/services/gitdiff/csv.go index 8db73c56a3..c10ee14490 100644 --- 
a/services/gitdiff/csv.go +++ b/services/gitdiff/csv.go @@ -134,7 +134,7 @@ func createCsvDiffSingle(reader *csv.Reader, celltype TableDiffCellType) ([]*Tab return nil, err } cells := make([]*TableDiffCell, len(row)) - for j := 0; j < len(row); j++ { + for j := range row { if celltype == TableDiffCellDel { cells[j] = &TableDiffCell{LeftCell: row[j], Type: celltype} } else { @@ -365,11 +365,11 @@ func getColumnMapping(baseCSVReader, headCSVReader *csvReader) ([]int, []int) { } // Loops through the baseRow and see if there is a match in the head row - for i := 0; i < len(baseRow); i++ { + for i := range baseRow { base2HeadColMap[i] = unmappedColumn baseCell, err := getCell(baseRow, i) if err == nil { - for j := 0; j < len(headRow); j++ { + for j := range headRow { if head2BaseColMap[j] == -1 { headCell, err := getCell(headRow, j) if err == nil && baseCell == headCell { @@ -390,7 +390,7 @@ func getColumnMapping(baseCSVReader, headCSVReader *csvReader) ([]int, []int) { // tryMapColumnsByContent tries to map missing columns by the content of the first lines. func tryMapColumnsByContent(baseCSVReader *csvReader, base2HeadColMap []int, headCSVReader *csvReader, head2BaseColMap []int) { - for i := 0; i < len(base2HeadColMap); i++ { + for i := range base2HeadColMap { headStart := 0 for base2HeadColMap[i] == unmappedColumn && headStart < len(head2BaseColMap) { if head2BaseColMap[headStart] == unmappedColumn { @@ -424,7 +424,7 @@ func getCell(row []string, column int) (string, error) { // countUnmappedColumns returns the count of unmapped columns. 
func countUnmappedColumns(mapping []int) int { count := 0 - for i := 0; i < len(mapping); i++ { + for i := range mapping { if mapping[i] == unmappedColumn { count++ } diff --git a/services/gitdiff/gitdiff.go b/services/gitdiff/gitdiff.go index 544c664ca2..c1d0bc0107 100644 --- a/services/gitdiff/gitdiff.go +++ b/services/gitdiff/gitdiff.go @@ -524,10 +524,7 @@ func ParsePatch(ctx context.Context, maxLines, maxLineCharacters, maxFiles int, // OK let's set a reasonable buffer size. // This should be at least the size of maxLineCharacters or 4096 whichever is larger. - readerSize := maxLineCharacters - if readerSize < 4096 { - readerSize = 4096 - } + readerSize := max(maxLineCharacters, 4096) input := bufio.NewReaderSize(reader, readerSize) line, err := input.ReadString('\n') diff --git a/services/gitdiff/gitdiff_test.go b/services/gitdiff/gitdiff_test.go index d4d1cd4460..7ba439be35 100644 --- a/services/gitdiff/gitdiff_test.go +++ b/services/gitdiff/gitdiff_test.go @@ -445,7 +445,7 @@ index 0000000..6bb8f39 ` diffBuilder.WriteString(diff) - for i := 0; i < 35; i++ { + for i := range 35 { diffBuilder.WriteString("+line" + strconv.Itoa(i) + "\n") } diff = diffBuilder.String() @@ -482,11 +482,11 @@ index 0000000..6bb8f39 diffBuilder.Reset() diffBuilder.WriteString(diff) - for i := 0; i < 33; i++ { + for i := range 33 { diffBuilder.WriteString("+line" + strconv.Itoa(i) + "\n") } diffBuilder.WriteString("+line33") - for i := 0; i < 512; i++ { + for range 512 { diffBuilder.WriteString("0123456789ABCDEF") } diffBuilder.WriteByte('\n') diff --git a/services/gitdiff/highlightdiff_test.go b/services/gitdiff/highlightdiff_test.go index 0070173b9f..f5486b3f34 100644 --- a/services/gitdiff/highlightdiff_test.go +++ b/services/gitdiff/highlightdiff_test.go @@ -101,7 +101,7 @@ func TestDiffWithHighlightPlaceholderExhausted(t *testing.T) { func TestDiffWithHighlightTagMatch(t *testing.T) { totalOverflow := 0 - for i := 0; i < 100; i++ { + for i := range 100 { hcd := 
NewHighlightCodeDiff() hcd.placeholderMaxCount = i diffs := hcd.diffWithHighlight( diff --git a/services/issue/issue.go b/services/issue/issue.go index ab42916017..41a6000d52 100644 --- a/services/issue/issue.go +++ b/services/issue/issue.go @@ -8,6 +8,7 @@ import ( "context" "errors" "fmt" + "slices" "time" activities_model "forgejo.org/models/activities" @@ -127,11 +128,8 @@ func UpdateAssignees(ctx context.Context, issue *issues_model.Issue, oneAssignee if oneAssignee != "" { // Prevent double adding assignees var isDouble bool - for _, assignee := range multipleAssignees { - if assignee == oneAssignee { - isDouble = true - break - } + if slices.Contains(multipleAssignees, oneAssignee) { + isDouble = true } if !isDouble { diff --git a/services/issue/milestone.go b/services/issue/milestone.go index 928979d74e..158abb0663 100644 --- a/services/issue/milestone.go +++ b/services/issue/milestone.go @@ -26,10 +26,7 @@ func updateMilestoneCounters(ctx context.Context, issue *issues_model.Issue, id if err != nil { return fmt.Errorf("GetMilestoneByRepoID: %w", err) } - updatedUnix := milestone.UpdatedUnix - if issue.UpdatedUnix > updatedUnix { - updatedUnix = issue.UpdatedUnix - } + updatedUnix := max(issue.UpdatedUnix, milestone.UpdatedUnix) stats.QueueRecalcMilestoneByIDWithDate(ctx, id, updatedUnix) } else { stats.QueueRecalcMilestoneByID(ctx, id) diff --git a/services/lfs/locks.go b/services/lfs/locks.go index a45b2cc93b..16f6dc1631 100644 --- a/services/lfs/locks.go +++ b/services/lfs/locks.go @@ -74,10 +74,7 @@ func GetListLockHandler(ctx *context.Context) { } ctx.Resp.Header().Set("Content-Type", lfs_module.MediaType) - cursor := ctx.FormInt("cursor") - if cursor < 0 { - cursor = 0 - } + cursor := max(ctx.FormInt("cursor"), 0) limit := ctx.FormInt("limit") if limit > setting.LFS.LocksPagingNum && setting.LFS.LocksPagingNum > 0 { limit = setting.LFS.LocksPagingNum @@ -239,10 +236,7 @@ func VerifyLockHandler(ctx *context.Context) { 
ctx.Resp.Header().Set("Content-Type", lfs_module.MediaType) - cursor := ctx.FormInt("cursor") - if cursor < 0 { - cursor = 0 - } + cursor := max(ctx.FormInt("cursor"), 0) limit := ctx.FormInt("limit") if limit > setting.LFS.LocksPagingNum && setting.LFS.LocksPagingNum > 0 { limit = setting.LFS.LocksPagingNum diff --git a/services/lfs/server.go b/services/lfs/server.go index 30878d8edd..cc8afc2aa8 100644 --- a/services/lfs/server.go +++ b/services/lfs/server.go @@ -11,6 +11,7 @@ import ( "errors" "fmt" "io" + "maps" "net/http" "net/url" "path" @@ -503,9 +504,7 @@ func buildObjectResponse(rc *requestContext, pointer lfs_module.Pointer, downloa rep.Actions["upload"] = &lfs_module.Link{Href: rc.UploadLink(pointer), Header: header} verifyHeader := make(map[string]string) - for key, value := range header { - verifyHeader[key] = value - } + maps.Copy(verifyHeader, header) // This is only needed to workaround https://github.com/git-lfs/git-lfs/issues/3662 verifyHeader["Accept"] = lfs_module.AcceptHeader diff --git a/services/mailer/mailer_test.go b/services/mailer/mailer_test.go index 34fd847c05..855e424ab2 100644 --- a/services/mailer/mailer_test.go +++ b/services/mailer/mailer_test.go @@ -114,9 +114,9 @@ func extractMailHeaderAndContent(t *testing.T, mail string) (map[string]string, } content := strings.TrimSpace("boundary=" + parts[1]) - hParts := strings.Split(parts[0], "\n") + hParts := strings.SplitSeq(parts[0], "\n") - for _, hPart := range hParts { + for hPart := range hParts { parts := strings.SplitN(hPart, ":", 2) hk := strings.TrimSpace(parts[0]) if hk != "" { diff --git a/services/migrations/gitea_uploader_test.go b/services/migrations/gitea_uploader_test.go index e33d597cdc..dc0210a8cc 100644 --- a/services/migrations/gitea_uploader_test.go +++ b/services/migrations/gitea_uploader_test.go @@ -361,7 +361,7 @@ func TestGiteaUploadUpdateGitForPullRequest(t *testing.T) { require.NoError(t, git.InitRepository(git.DefaultContext, fromRepo.RepoPath(), false, 
fromRepo.ObjectFormatName)) err := git.NewCommand(git.DefaultContext, "symbolic-ref").AddDynamicArguments("HEAD", git.BranchPrefix+baseRef).Run(&git.RunOpts{Dir: fromRepo.RepoPath()}) require.NoError(t, err) - require.NoError(t, os.WriteFile(filepath.Join(fromRepo.RepoPath(), "README.md"), []byte(fmt.Sprintf("# Testing Repository\n\nOriginally created in: %s", fromRepo.RepoPath())), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(fromRepo.RepoPath(), "README.md"), fmt.Appendf(nil, "# Testing Repository\n\nOriginally created in: %s", fromRepo.RepoPath()), 0o644)) require.NoError(t, git.AddChanges(fromRepo.RepoPath(), true)) signature := git.Signature{ Email: "test@example.com", @@ -409,7 +409,7 @@ func TestGiteaUploadUpdateGitForPullRequest(t *testing.T) { })) _, _, err = git.NewCommand(git.DefaultContext, "checkout", "-b").AddDynamicArguments(forkHeadRef).RunStdString(&git.RunOpts{Dir: forkRepo.RepoPath()}) require.NoError(t, err) - require.NoError(t, os.WriteFile(filepath.Join(forkRepo.RepoPath(), "README.md"), []byte(fmt.Sprintf("# branch2 %s", forkRepo.RepoPath())), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(forkRepo.RepoPath(), "README.md"), fmt.Appendf(nil, "# branch2 %s", forkRepo.RepoPath()), 0o644)) require.NoError(t, git.AddChanges(forkRepo.RepoPath(), true)) require.NoError(t, git.CommitChanges(forkRepo.RepoPath(), git.CommitChangesOptions{ Committer: &signature, diff --git a/services/migrations/github.go b/services/migrations/github.go index 7cfb626a15..540dfdd9d4 100644 --- a/services/migrations/github.go +++ b/services/migrations/github.go @@ -108,8 +108,8 @@ func NewGithubDownloaderV3(ctx context.Context, baseURL string, getPullRequests, downloader.SetContext(ctx) if token != "" { - tokens := strings.Split(token, ",") - for _, token := range tokens { + tokens := strings.SplitSeq(token, ",") + for token := range tokens { token = strings.TrimSpace(token) ts := oauth2.StaticTokenSource( &oauth2.Token{AccessToken: token}, diff --git 
a/services/packages/alt/repository.go b/services/packages/alt/repository.go index 9693f4322e..5eed86fb67 100644 --- a/services/packages/alt/repository.go +++ b/services/packages/alt/repository.go @@ -12,6 +12,7 @@ import ( "fmt" "io" "path" + "strings" "time" packages_model "forgejo.org/models/packages" @@ -734,15 +735,15 @@ NotAutomatic: false codename := time.Now().Unix() date := time.Now().UTC().Format(time.RFC1123) - var md5Sum string - var blake2b string + var md5Sum strings.Builder + var blake2b strings.Builder for _, pkglistByArch := range pkglist[architecture] { - md5Sum += fmt.Sprintf(" %s %d %s\n", pkglistByArch.MD5Checksum.Value, pkglistByArch.Size, "base/"+pkglistByArch.Type) - blake2b += fmt.Sprintf(" %s %d %s\n", pkglistByArch.Blake2bHash.Value, pkglistByArch.Size, "base/"+pkglistByArch.Type) + fmt.Fprintf(&md5Sum, " %s %d %s\n", pkglistByArch.MD5Checksum.Value, pkglistByArch.Size, "base/"+pkglistByArch.Type) + fmt.Fprintf(&blake2b, " %s %d %s\n", pkglistByArch.Blake2bHash.Value, pkglistByArch.Size, "base/"+pkglistByArch.Type) } - md5Sum += fmt.Sprintf(" %s %d %s\n", fileInfo.MD5Checksum.Value, fileInfo.Size, "base/"+fileInfo.Type) - blake2b += fmt.Sprintf(" %s %d %s\n", fileInfo.Blake2bHash.Value, fileInfo.Size, "base/"+fileInfo.Type) + fmt.Fprintf(&md5Sum, " %s %d %s\n", fileInfo.MD5Checksum.Value, fileInfo.Size, "base/"+fileInfo.Type) + fmt.Fprintf(&blake2b, " %s %d %s\n", fileInfo.Blake2bHash.Value, fileInfo.Size, "base/"+fileInfo.Type) data = fmt.Sprintf(`Origin: %s Label: %s @@ -755,7 +756,7 @@ MD5Sum: %s `, - origin, label, codename, date, architecture, md5Sum, blake2b) + origin, label, codename, date, architecture, md5Sum.String(), blake2b.String()) _, err = addReleaseAsFileToRepo(ctx, pv, "release", data, group, architecture) if err != nil { return err diff --git a/services/packages/arch/repository.go b/services/packages/arch/repository.go index 2a865e6dbd..384895fd65 100644 --- a/services/packages/arch/repository.go +++ 
b/services/packages/arch/repository.go @@ -47,8 +47,8 @@ func BuildAllRepositoryFiles(ctx context.Context, ownerID int64) error { return err } for _, pf := range pfs { - if strings.HasSuffix(pf.Name, ".db") { - arch := strings.TrimSuffix(pf.Name, ".db") + if before, ok := strings.CutSuffix(pf.Name, ".db"); ok { + arch := before if err := BuildPacmanDB(ctx, ownerID, pf.CompositeKey, arch); err != nil { return err } diff --git a/services/pull/merge.go b/services/pull/merge.go index 1d5e82e969..aa9b25d738 100644 --- a/services/pull/merge.go +++ b/services/pull/merge.go @@ -7,6 +7,7 @@ package pull import ( "context" "fmt" + "maps" "os" "path/filepath" "regexp" @@ -139,9 +140,7 @@ func getMergeMessage(ctx context.Context, baseGitRepo *git.Repository, pr *issue vars["HeadRepoOwnerName"] = pr.HeadRepo.OwnerName vars["HeadRepoName"] = pr.HeadRepo.Name } - for extraKey, extraValue := range extraVars { - vars[extraKey] = extraValue - } + maps.Copy(vars, extraVars) refs, err := pr.ResolveCrossReferences(ctx) if err == nil { closeIssueIndexes := make([]string, 0, len(refs)) diff --git a/services/repository/adopt_test.go b/services/repository/adopt_test.go index 79e4fc0023..b133deb7c1 100644 --- a/services/repository/adopt_test.go +++ b/services/repository/adopt_test.go @@ -28,7 +28,7 @@ func TestCheckUnadoptedRepositories_Add(t *testing.T) { } total := 30 - for i := 0; i < total; i++ { + for range total { unadopted.add("something") } diff --git a/services/repository/commitstatus/commitstatus.go b/services/repository/commitstatus/commitstatus.go index 23b4a6a132..5a48aa64b4 100644 --- a/services/repository/commitstatus/commitstatus.go +++ b/services/repository/commitstatus/commitstatus.go @@ -23,7 +23,7 @@ import ( ) func getCacheKey(repoID int64, branchName string) string { - hashBytes := sha256.Sum256([]byte(fmt.Sprintf("%d:%s", repoID, branchName))) + hashBytes := sha256.Sum256(fmt.Appendf(nil, "%d:%s", repoID, branchName)) return fmt.Sprintf("commit_status:%x", hashBytes) 
} diff --git a/services/repository/create.go b/services/repository/create.go index 4491b12497..b7232b27cc 100644 --- a/services/repository/create.go +++ b/services/repository/create.go @@ -97,8 +97,8 @@ func prepareRepoCommit(ctx context.Context, repo *repo_model.Repository, tmpDir, // .gitignore if len(opts.Gitignores) > 0 { var buf bytes.Buffer - names := strings.Split(opts.Gitignores, ",") - for _, name := range names { + names := strings.SplitSeq(opts.Gitignores, ",") + for name := range names { data, err = options.Gitignore(name) if err != nil { return fmt.Errorf("GetRepoInitFile[%s]: %w", name, err) diff --git a/services/repository/create_test.go b/services/repository/create_test.go index 0a6c34b6fe..bd14a5e520 100644 --- a/services/repository/create_test.go +++ b/services/repository/create_test.go @@ -54,7 +54,7 @@ func TestIncludesAllRepositoriesTeams(t *testing.T) { // Create repos. repoIDs := make([]int64, 0) - for i := 0; i < 3; i++ { + for i := range 3 { r, err := CreateRepositoryDirectly(db.DefaultContext, user, org.AsUser(), CreateRepoOptions{Name: fmt.Sprintf("repo-%d", i)}) require.NoError(t, err, "CreateRepository %d", i) if r != nil { diff --git a/services/repository/files/file.go b/services/repository/files/file.go index 5b93258840..2e9ba628af 100644 --- a/services/repository/files/file.go +++ b/services/repository/files/file.go @@ -151,7 +151,7 @@ func CleanUploadFileName(name string) string { // Rebase the filename name = util.PathJoinRel(name) // Git disallows any filenames to have a .git directory in them. 
- for _, part := range strings.Split(name, "/") { + for part := range strings.SplitSeq(name, "/") { if strings.ToLower(part) == ".git" { return "" } diff --git a/services/repository/files/temp_repo.go b/services/repository/files/temp_repo.go index 3ce6a3413c..17a8467e01 100644 --- a/services/repository/files/temp_repo.go +++ b/services/repository/files/temp_repo.go @@ -128,7 +128,7 @@ func (t *TemporaryUploadRepository) LsFiles(filenames ...string) ([]string, erro } fileList := make([]string, 0, len(filenames)) - for _, line := range bytes.Split(stdOut.Bytes(), []byte{'\000'}) { + for line := range bytes.SplitSeq(stdOut.Bytes(), []byte{'\000'}) { fileList = append(fileList, string(line)) } diff --git a/services/repository/files/tree.go b/services/repository/files/tree.go index 5a369b27a5..3e99655261 100644 --- a/services/repository/files/tree.go +++ b/services/repository/files/tree.go @@ -69,11 +69,7 @@ func GetTreeBySHA(ctx context.Context, repo *repo_model.Repository, gitRepo *git if len(entries) > perPage { tree.Truncated = true } - if rangeStart+perPage < len(entries) { - rangeEnd = rangeStart + perPage - } else { - rangeEnd = len(entries) - } + rangeEnd = min(rangeStart+perPage, len(entries)) tree.Entries = make([]api.GitEntry, rangeEnd-rangeStart) for e := rangeStart; e < rangeEnd; e++ { i := e - rangeStart diff --git a/services/repository/files/update.go b/services/repository/files/update.go index 9c2fde1c0e..e97c487846 100644 --- a/services/repository/files/update.go +++ b/services/repository/files/update.go @@ -8,6 +8,7 @@ import ( "fmt" "io" "path" + "slices" "strings" "time" @@ -187,13 +188,7 @@ func ChangeRepoFiles(ctx context.Context, repo *repo_model.Repository, doer *use } // Find the file we want to delete in the index - inFilelist := false - for _, indexFile := range filesInIndex { - if indexFile == file.TreePath { - inFilelist = true - break - } - } + inFilelist := slices.Contains(filesInIndex, file.TreePath) if !inFilelist { return nil, 
models.ErrRepoFileDoesNotExist{ Path: file.TreePath, @@ -390,11 +385,9 @@ func CreateOrUpdateFile(ctx context.Context, t *TemporaryUploadRepository, file } // If is a new file (not updating) then the given path shouldn't exist if file.Operation == "create" { - for _, indexFile := range filesInIndex { - if indexFile == file.TreePath { - return models.ErrRepoFileAlreadyExists{ - Path: file.TreePath, - } + if slices.Contains(filesInIndex, file.TreePath) { + return models.ErrRepoFileAlreadyExists{ + Path: file.TreePath, } } } diff --git a/services/repository/gitgraph/graph_models.go b/services/repository/gitgraph/graph_models.go index 2c4133e1f2..4b5f630679 100644 --- a/services/repository/gitgraph/graph_models.go +++ b/services/repository/gitgraph/graph_models.go @@ -304,8 +304,8 @@ func newRefsFromRefNames(refNames []byte) []git.Reference { continue } refName := string(refNameBytes) - if strings.HasPrefix(refName, "tag: ") { - refName = strings.TrimPrefix(refName, "tag: ") + if after, ok := strings.CutPrefix(refName, "tag: "); ok { + refName = after } else { refName = strings.TrimPrefix(refName, "HEAD -> ") } diff --git a/services/repository/gitgraph/graph_test.go b/services/repository/gitgraph/graph_test.go index 6dafaf03fd..776b8cacab 100644 --- a/services/repository/gitgraph/graph_test.go +++ b/services/repository/gitgraph/graph_test.go @@ -6,6 +6,7 @@ package gitgraph import ( "bytes" "fmt" + "slices" "strings" "testing" @@ -118,13 +119,7 @@ func TestReleaseUnusedColors(t *testing.T) { if parser.firstAvailable == -1 { // All in use for _, color := range parser.availableColors { - found := false - for _, oldColor := range parser.oldColors { - if oldColor == color { - found = true - break - } - } + found := slices.Contains(parser.oldColors, color) if !found { t.Errorf("In testcase:\n%d\t%d\t%d %d =>\n%d\t%d\t%d %d: %d should be available but is not", testcase.availableColors, @@ -142,13 +137,7 @@ func TestReleaseUnusedColors(t *testing.T) { // Some in use for i := 
parser.firstInUse; i != parser.firstAvailable; i = (i + 1) % len(parser.availableColors) { color := parser.availableColors[i] - found := false - for _, oldColor := range parser.oldColors { - if oldColor == color { - found = true - break - } - } + found := slices.Contains(parser.oldColors, color) if !found { t.Errorf("In testcase:\n%d\t%d\t%d %d =>\n%d\t%d\t%d %d: %d should be available but is not", testcase.availableColors, @@ -164,13 +153,7 @@ func TestReleaseUnusedColors(t *testing.T) { } for i := parser.firstAvailable; i != parser.firstInUse; i = (i + 1) % len(parser.availableColors) { color := parser.availableColors[i] - found := false - for _, oldColor := range parser.oldColors { - if oldColor == color { - found = true - break - } - } + found := slices.Contains(parser.oldColors, color) if found { t.Errorf("In testcase:\n%d\t%d\t%d %d =>\n%d\t%d\t%d %d: %d should not be available but is", testcase.availableColors, @@ -258,8 +241,8 @@ func TestCommitStringParsing(t *testing.T) { for _, test := range tests { t.Run(test.testName, func(t *testing.T) { testString := fmt.Sprintf("%s%s", dataFirstPart, test.commitMessage) - idx := strings.Index(testString, "DATA:") - commit, err := NewCommit(0, 0, []byte(testString[idx+5:])) + _, after, _ := strings.Cut(testString, "DATA:") + commit, err := NewCommit(0, 0, []byte(after)) if err != nil && test.shouldPass { t.Errorf("Could not parse %s", testString) return diff --git a/services/repository/gitgraph/parser.go b/services/repository/gitgraph/parser.go index fcbc666f7e..2331e02207 100644 --- a/services/repository/gitgraph/parser.go +++ b/services/repository/gitgraph/parser.go @@ -44,11 +44,11 @@ func (parser *Parser) Reset() { // AddLineToGraph adds the line as a row to the graph func (parser *Parser) AddLineToGraph(graph *Graph, row int, line []byte) error { - idx := bytes.Index(line, []byte("DATA:")) - if idx < 0 { + before, after, ok := bytes.Cut(line, []byte("DATA:")) + if !ok { parser.ParseGlyphs(line) } else { - 
parser.ParseGlyphs(line[:idx]) + parser.ParseGlyphs(before) } var err error @@ -72,7 +72,7 @@ func (parser *Parser) AddLineToGraph(graph *Graph, row int, line []byte) error { } } commitDone = true - if idx < 0 { + if !ok { if err != nil { err = fmt.Errorf("missing data section on line %d with commit: %s. %w", row, string(line), err) } else { @@ -83,7 +83,7 @@ func (parser *Parser) AddLineToGraph(graph *Graph, row int, line []byte) error { if column < len(parser.oldGlyphs) && parser.oldGlyphs[column] == '|' { graph.continuationAbove[[2]int{row, column}] = true } - err2 := graph.AddCommit(row, column, flowID, line[idx+5:]) + err2 := graph.AddCommit(row, column, flowID, after) if err != nil && err2 != nil { err = fmt.Errorf("%v %w", err2, err) continue diff --git a/services/webhook/deliver_test.go b/services/webhook/deliver_test.go index 13890621c6..303a3d9bb1 100644 --- a/services/webhook/deliver_test.go +++ b/services/webhook/deliver_test.go @@ -281,8 +281,6 @@ func TestWebhookDeliverSpecificTypes(t *testing.T) { require.NoError(t, err) for typ, hc := range cases { - typ := typ - hc := hc t.Run(typ, func(t *testing.T) { t.Parallel() hook := &webhook_model.Webhook{ diff --git a/services/webhook/dingtalk.go b/services/webhook/dingtalk.go index e7dece30d3..96d4c18c11 100644 --- a/services/webhook/dingtalk.go +++ b/services/webhook/dingtalk.go @@ -110,22 +110,22 @@ func (dc dingtalkConvertor) Push(p *api.PushPayload) (DingtalkPayload, error) { title := fmt.Sprintf("[%s:%s] %s", p.Repo.FullName, branchName, commitDesc) - var text string + var text strings.Builder // for each commit, generate attachment text for i, commit := range p.Commits { var authorName string if commit.Author != nil { authorName = " - " + commit.Author.Name } - text += fmt.Sprintf("[%s](%s) %s", commit.ID[:7], commit.URL, - strings.TrimRight(commit.Message, "\r\n")) + authorName + text.WriteString(fmt.Sprintf("[%s](%s) %s", commit.ID[:7], commit.URL, + strings.TrimRight(commit.Message, "\r\n")) + 
authorName) // add linebreak to each commit but the last if i < len(p.Commits)-1 { - text += "\r\n" + text.WriteString("\r\n") } } - return createDingtalkPayload(title, text, linkText, titleLink), nil + return createDingtalkPayload(title, text.String(), linkText, titleLink), nil } // Issue implements PayloadConvertor Issue method diff --git a/services/webhook/discord.go b/services/webhook/discord.go index 2383a1402b..cb7584e157 100644 --- a/services/webhook/discord.go +++ b/services/webhook/discord.go @@ -210,7 +210,7 @@ func (d discordConvertor) Push(p *api.PushPayload) (DiscordPayload, error) { title := fmt.Sprintf("[%s:%s] %s", p.Repo.FullName, branchName, commitDesc) - var text string + var text strings.Builder // for each commit, generate attachment text for i, commit := range p.Commits { // limit the commit message display to just the summary, otherwise it would be hard to read @@ -223,14 +223,14 @@ func (d discordConvertor) Push(p *api.PushPayload) (DiscordPayload, error) { if utf8.RuneCountInString(message) > 50 { message = fmt.Sprintf("%.47s...", message) } - text += fmt.Sprintf("[`%s`](%s) %s \\- %s", commit.ID[:7], commit.URL, message, commit.Author.Name) + fmt.Fprintf(&text, "[`%s`](%s) %s \\- %s", commit.ID[:7], commit.URL, message, commit.Author.Name) // add linebreak to each commit but the last if i < len(p.Commits)-1 { - text += "\n" + text.WriteString("\n") } } - return d.createPayload(p.Sender, title, text, titleLink, greenColor), nil + return d.createPayload(p.Sender, title, text.String(), titleLink, greenColor), nil } // Issue implements PayloadConvertor Issue method diff --git a/services/webhook/feishu.go b/services/webhook/feishu.go index f6ffea9acc..ffbd4eb469 100644 --- a/services/webhook/feishu.go +++ b/services/webhook/feishu.go @@ -95,22 +95,23 @@ func (fc feishuConvertor) Push(p *api.PushPayload) (FeishuPayload, error) { commitDesc string ) - text := fmt.Sprintf("[%s:%s] %s\r\n", p.Repo.FullName, branchName, commitDesc) + var text 
strings.Builder + fmt.Fprintf(&text, "[%s:%s] %s\r\n", p.Repo.FullName, branchName, commitDesc) // for each commit, generate attachment text for i, commit := range p.Commits { var authorName string if commit.Author != nil { authorName = " - " + commit.Author.Name } - text += fmt.Sprintf("[%s](%s) %s", commit.ID[:7], commit.URL, - strings.TrimRight(commit.Message, "\r\n")) + authorName + text.WriteString(fmt.Sprintf("[%s](%s) %s", commit.ID[:7], commit.URL, + strings.TrimRight(commit.Message, "\r\n")) + authorName) // add linebreak to each commit but the last if i < len(p.Commits)-1 { - text += "\r\n" + text.WriteString("\r\n") } } - return newFeishuTextPayload(text), nil + return newFeishuTextPayload(text.String()), nil } // Issue implements PayloadConvertor Issue method diff --git a/services/webhook/matrix.go b/services/webhook/matrix.go index d11933f16a..a4bd774f50 100644 --- a/services/webhook/matrix.go +++ b/services/webhook/matrix.go @@ -206,18 +206,19 @@ func (m matrixConvertor) Push(p *api.PushPayload) (MatrixPayload, error) { } refName := html.EscapeString(git.RefName(p.Ref).ShortName()) - text := fmt.Sprintf("[%s] %s pushed %s to %s:<br>", p.Repo.FullName, p.Pusher.UserName, commitDesc, refName) + var text strings.Builder + fmt.Fprintf(&text, "[%s] %s pushed %s to %s:<br>", p.Repo.FullName, p.Pusher.UserName, commitDesc, refName) // for each commit, generate a new line text for i, commit := range p.Commits { - text += fmt.Sprintf("%s: %s - %s", htmlLinkFormatter(commit.URL, commit.ID[:7]), commit.Message, commit.Author.Name) + fmt.Fprintf(&text, "%s: %s - %s", htmlLinkFormatter(commit.URL, commit.ID[:7]), commit.Message, commit.Author.Name) // add linebreak to each commit but the last if i < len(p.Commits)-1 { - text += "<br>" + text.WriteString("<br>") } } - return m.newPayload(text, p.Commits...) + return m.newPayload(text.String(), p.Commits...) } // PullRequest implements payloadConvertor PullRequest method diff --git a/services/webhook/msteams.go b/services/webhook/msteams.go index 798d7fb5fc..43ab588230 100644 --- a/services/webhook/msteams.go +++ b/services/webhook/msteams.go @@ -154,14 +154,14 @@ func (m msteamsConvertor) Push(p *api.PushPayload) (MSTeamsPayload, error) { title := fmt.Sprintf("[%s:%s] %s", p.Repo.FullName, branchName, commitDesc) - var text string + var text strings.Builder // for each commit, generate attachment text for i, commit := range p.Commits { - text += fmt.Sprintf("[%s](%s) %s - %s", commit.ID[:7], commit.URL, + fmt.Fprintf(&text, "[%s](%s) %s - %s", commit.ID[:7], commit.URL, strings.TrimRight(commit.Message, "\r\n"), commit.Author.Name) // add linebreak to each commit but the last if i < len(p.Commits)-1 { - text += "\n\n" + text.WriteString("\n\n") } } @@ -169,7 +169,7 @@ func (m msteamsConvertor) Push(p *api.PushPayload) (MSTeamsPayload, error) { p.Repo, p.Sender, title, - text, + text.String(), titleLink, greenColor, &MSTeamsFact{"Commit count:", fmt.Sprintf("%d", p.TotalCommits)}, diff --git a/services/webhook/slack.go b/services/webhook/slack.go index 1804c866f4..fe1bfc8aa4 100644 --- a/services/webhook/slack.go +++ b/services/webhook/slack.go @@ -245,13 +245,13 @@ func (s slackConvertor) Push(p *api.PushPayload) (SlackPayload, error) { branchLink := SlackLinkToRef(p.Repo.HTMLURL, p.Ref) text := fmt.Sprintf("[%s:%s] %s pushed by %s", p.Repo.FullName, branchLink, commitString, SlackNameFormatter(p.Pusher.UserName)) - var attachmentText string + var attachmentText strings.Builder // for each commit, generate attachment text for i, commit := range p.Commits { - attachmentText += fmt.Sprintf("%s: %s - %s", SlackLinkFormatter(commit.URL, commit.ID[:7]), SlackShortTextFormatter(commit.Message), SlackNameFormatter(commit.Author.Name)) + fmt.Fprintf(&attachmentText, "%s: %s 
- %s", SlackLinkFormatter(commit.URL, commit.ID[:7]), SlackShortTextFormatter(commit.Message), SlackNameFormatter(commit.Author.Name)) // add linebreak to each commit but the last if i < len(p.Commits)-1 { - attachmentText += "\n" + attachmentText.WriteString("\n") } } @@ -259,7 +259,7 @@ Color: s.Color, Title: p.Repo.HTMLURL, TitleLink: p.Repo.HTMLURL, - Text: attachmentText, + Text: attachmentText.String(), }}), nil } diff --git a/services/webhook/telegram.go b/services/webhook/telegram.go index f8fdea7ae9..47a7514968 100644 --- a/services/webhook/telegram.go +++ b/services/webhook/telegram.go @@ -116,22 +116,22 @@ func (t telegramConvertor) Push(p *api.PushPayload) (TelegramPayload, error) { title := fmt.Sprintf(`[%s:%s] %s`, p.Repo.FullName, branchName, commitDesc) - var text string + var text strings.Builder // for each commit, generate attachment text for i, commit := range p.Commits { var authorName string if commit.Author != nil { authorName = " - " + commit.Author.Name } - text += fmt.Sprintf(`[<a href="%s">%s</a>] %s`, commit.URL, commit.ID[:7], - strings.TrimRight(commit.Message, "\r\n")) + authorName + text.WriteString(fmt.Sprintf(`[<a href="%s">%s</a>] %s`, commit.URL, commit.ID[:7], + strings.TrimRight(commit.Message, "\r\n")) + authorName) // add linebreak to each commit but the last if i < len(p.Commits)-1 { - text += "\n" + text.WriteString("\n") } } - return createTelegramPayload(title + "\n" + text), nil + return createTelegramPayload(title + "\n" + text.String()), nil } // Issue implements PayloadConvertor Issue method diff --git a/services/webhook/wechatwork.go b/services/webhook/wechatwork.go index 01db3f0cd9..b22935be81 100644 --- a/services/webhook/wechatwork.go +++ b/services/webhook/wechatwork.go @@ -104,7 +104,7 @@ func (wc wechatworkConvertor) Push(p *api.PushPayload) (WechatworkPayload, error title := fmt.Sprintf("# %s:%s %s ", p.Repo.FullName, branchName, commitDesc) - var text string + var text 
strings.Builder // for each commit, generate attachment text for i, commit := range p.Commits { var authorName string @@ -113,15 +113,15 @@ func (wc wechatworkConvertor) Push(p *api.PushPayload) (WechatworkPayload, error } message := strings.ReplaceAll(commit.Message, "\n\n", "\r\n") - text += fmt.Sprintf(" > [%s](%s) \r\n >%s \n >%s", commit.ID[:7], commit.URL, + fmt.Fprintf(&text, " > [%s](%s) \r\n >%s \n >%s", commit.ID[:7], commit.URL, message, authorName) // add linebreak to each commit but the last if i < len(p.Commits)-1 { - text += "\n" + text.WriteString("\n") } } - return newWechatworkMarkdownPayload(title + "\r\n\r\n" + text), nil + return newWechatworkMarkdownPayload(title + "\r\n\r\n" + text.String()), nil } // Issue implements PayloadConvertor Issue method diff --git a/services/wiki/wiki_path.go b/services/wiki/wiki_path.go index ca312388af..a7963e9be4 100644 --- a/services/wiki/wiki_path.go +++ b/services/wiki/wiki_path.go @@ -129,8 +129,8 @@ func GitPathToWebPath(s string) (wp WebPath, err error) { func WebPathToUserTitle(s WebPath) (dir, display string) { dir = path.Dir(string(s)) display = path.Base(string(s)) - if strings.HasSuffix(display, ".md") { - display = strings.TrimSuffix(display, ".md") + if before, ok := strings.CutSuffix(display, ".md"); ok { + display = before display, _ = url.PathUnescape(display) } display, _ = unescapeSegment(display) diff --git a/services/wiki/wiki_test.go b/services/wiki/wiki_test.go index 9471904e38..551c251d97 100644 --- a/services/wiki/wiki_test.go +++ b/services/wiki/wiki_test.go @@ -116,9 +116,9 @@ func TestGitPathToWebPath(t *testing.T) { func TestUserWebGitPathConsistency(t *testing.T) { maxLen := 20 b := make([]byte, maxLen) - for i := 0; i < 1000; i++ { + for range 1000 { l := rand.Intn(maxLen) - for j := 0; j < l; j++ { + for j := range l { r := rand.Intn(0x80-0x20) + 0x20 b[j] = byte(r) } diff --git a/tests/integration/actions_trigger_test.go b/tests/integration/actions_trigger_test.go index 
925434204c..4de8d625ab 100644 --- a/tests/integration/actions_trigger_test.go +++ b/tests/integration/actions_trigger_test.go @@ -88,7 +88,7 @@ jobs: labelStr := "/api/v1/repos/user2/repo-pull-request/labels" labelsCount := 2 labels := make([]*api.Label, labelsCount) - for i := 0; i < labelsCount; i++ { + for i := range labelsCount { color := "abcdef" req := NewRequestWithJSON(t, "POST", labelStr, &api.CreateLabelOption{ Name: fmt.Sprintf("label%d", i), diff --git a/tests/integration/api_activitypub_person_inbox_follow_test.go b/tests/integration/api_activitypub_person_inbox_follow_test.go index 7270d46a0c..5a0b452447 100644 --- a/tests/integration/api_activitypub_person_inbox_follow_test.go +++ b/tests/integration/api_activitypub_person_inbox_follow_test.go @@ -48,13 +48,13 @@ func TestActivityPubPersonInboxFollow(t *testing.T) { ctx, _ := contexttest.MockAPIContext(t, localUser2Inbox) // distant follows local - followActivity := []byte(fmt.Sprintf( + followActivity := fmt.Appendf(nil, `{"type":"Follow",`+ `"actor":"%s",`+ `"object":"%s"}`, distantUser15URL, localUser2URL, - )) + ) cf, err := activitypub.NewClientFactoryWithTimeout(60 * time.Second) require.NoError(t, err) c, err := cf.WithKeysDirect(ctx, mock.ApActor.PrivKey, @@ -84,7 +84,7 @@ func TestActivityPubPersonInboxFollow(t *testing.T) { assert.Contains(t, mock.LastPost, "\"type\":\"Accept\"") // distant undoes follow - undoFollowActivity := []byte(fmt.Sprintf( + undoFollowActivity := fmt.Appendf(nil, `{"type":"Undo",`+ `"actor":"%s",`+ `"object":{"type":"Follow",`+ @@ -93,7 +93,7 @@ func TestActivityPubPersonInboxFollow(t *testing.T) { distantUser15URL, distantUser15URL, localUser2URL, - )) + ) c, err = cf.WithKeysDirect(ctx, mock.ApActor.PrivKey, mock.ApActor.KeyID(federatedSrv.URL)) require.NoError(t, err) diff --git a/tests/integration/api_activitypub_person_inbox_useractivity_test.go b/tests/integration/api_activitypub_person_inbox_useractivity_test.go index 4201fd94bf..55fd62a3fe 100644 --- 
a/tests/integration/api_activitypub_person_inbox_useractivity_test.go +++ b/tests/integration/api_activitypub_person_inbox_useractivity_test.go @@ -53,13 +53,13 @@ func TestActivityPubPersonInboxNoteToDistant(t *testing.T) { defer f() // follow (distant follows local) - followActivity := []byte(fmt.Sprintf( + followActivity := fmt.Appendf(nil, `{"type":"Follow",`+ `"actor":"%s",`+ `"object":"%s"}`, distantUser15URL, localUser2URL, - )) + ) ctx, _ := contexttest.MockAPIContext(t, localUser2Inbox) cf, err := activitypub.NewClientFactoryWithTimeout(60 * time.Second) require.NoError(t, err) diff --git a/tests/integration/api_activitypub_repository_test.go b/tests/integration/api_activitypub_repository_test.go index 29221bb682..3d17f9b281 100644 --- a/tests/integration/api_activitypub_repository_test.go +++ b/tests/integration/api_activitypub_repository_test.go @@ -89,13 +89,13 @@ func TestActivityPubRepositoryInboxValid(t *testing.T) { mock.Persons[0].KeyID(federatedSrv.URL)) require.NoError(t, err) - activity1 := []byte(fmt.Sprintf( + activity1 := fmt.Appendf(nil, `{"type":"Like",`+ `"startTime":"%s",`+ `"actor":"%s/api/v1/activitypub/user-id/15",`+ `"object":"%s"}`, timeNow.Format(time.RFC3339), - federatedSrv.URL, u.JoinPath(fmt.Sprintf("/api/v1/activitypub/repository-id/%d", repositoryID)).String())) + federatedSrv.URL, u.JoinPath(fmt.Sprintf("/api/v1/activitypub/repository-id/%d", repositoryID)).String()) t.Logf("activity: %s", activity1) resp, err := c.Post(activity1, localRepoInbox) @@ -107,14 +107,14 @@ func TestActivityPubRepositoryInboxValid(t *testing.T) { unittest.AssertExistsAndLoadBean(t, &user.User{ID: federatedUser.UserID}) // A like activity by a different user of the same federated host. 
- activity2 := []byte(fmt.Sprintf( + activity2 := fmt.Appendf(nil, `{"type":"Like",`+ `"startTime":"%s",`+ `"actor":"%s/api/v1/activitypub/user-id/30",`+ `"object":"%s"}`, // Make sure this activity happens later then the one before timeNow.Add(time.Second).Format(time.RFC3339), - federatedSrv.URL, u.JoinPath(fmt.Sprintf("/api/v1/activitypub/repository-id/%d", repositoryID)).String())) + federatedSrv.URL, u.JoinPath(fmt.Sprintf("/api/v1/activitypub/repository-id/%d", repositoryID)).String()) t.Logf("activity: %s", activity2) resp, err = c.Post(activity2, localRepoInbox) @@ -127,14 +127,14 @@ func TestActivityPubRepositoryInboxValid(t *testing.T) { // The same user sends another like activity otherRepositoryID := 3 otherRepoInboxURL := u.JoinPath(fmt.Sprintf("/api/v1/activitypub/repository-id/%d/inbox", otherRepositoryID)).String() - activity3 := []byte(fmt.Sprintf( + activity3 := fmt.Appendf(nil, `{"type":"Like",`+ `"startTime":"%s",`+ `"actor":"%s/api/v1/activitypub/user-id/30",`+ `"object":"%s"}`, // Make sure this activity happens later then the ones before timeNow.Add(time.Second*2).Format(time.RFC3339), - federatedSrv.URL, u.JoinPath(fmt.Sprintf("/api/v1/activitypub/repository-id/%d", otherRepositoryID)).String())) + federatedSrv.URL, u.JoinPath(fmt.Sprintf("/api/v1/activitypub/repository-id/%d", otherRepositoryID)).String()) t.Logf("activity: %s", activity3) resp, err = c.Post(activity3, otherRepoInboxURL) diff --git a/tests/integration/api_helper_for_declarative_test.go b/tests/integration/api_helper_for_declarative_test.go index ada6a2c311..6f33322253 100644 --- a/tests/integration/api_helper_for_declarative_test.go +++ b/tests/integration/api_helper_for_declarative_test.go @@ -273,7 +273,7 @@ func doAPIMergePullRequestForm(t *testing.T, ctx APITestContext, owner, repo str var req *RequestWrapper var resp *httptest.ResponseRecorder - for i := 0; i < 6; i++ { + for range 6 { req = NewRequestWithJSON(t, http.MethodPost, urlStr, merge).AddTokenAuth(ctx.Token) 
resp = ctx.Session.MakeRequest(t, req, NoExpectedStatus) diff --git a/tests/integration/api_issue_test.go b/tests/integration/api_issue_test.go index 6267065602..7f601fa303 100644 --- a/tests/integration/api_issue_test.go +++ b/tests/integration/api_issue_test.go @@ -228,7 +228,7 @@ func TestAPICreateIssueParallel(t *testing.T) { urlStr := fmt.Sprintf("/api/v1/repos/%s/%s/issues?state=all", owner.Name, repoBefore.Name) var wg sync.WaitGroup - for i := 0; i < 100; i++ { + for i := range 100 { wg.Add(1) go func(parentT *testing.T, i int) { parentT.Run(fmt.Sprintf("ParallelCreateIssue_%d", i), func(t *testing.T) { @@ -499,10 +499,9 @@ func TestAPISearchIssues(t *testing.T) { defer tests.PrepareTestEnv(t)() // as this API was used in the frontend, it uses UI page size - expectedIssueCount := 20 // from the fixtures - if expectedIssueCount > setting.UI.IssuePagingNum { - expectedIssueCount = setting.UI.IssuePagingNum - } + expectedIssueCount := min( + // from the fixtures + 20, setting.UI.IssuePagingNum) link, _ := url.Parse("/api/v1/repos/issues/search") token := getUserToken(t, "user1", auth_model.AccessTokenScopeReadIssue) @@ -603,10 +602,9 @@ func TestAPISearchIssuesWithLabels(t *testing.T) { defer tests.PrepareTestEnv(t)() // as this API was used in the frontend, it uses UI page size - expectedIssueCount := 20 // from the fixtures - if expectedIssueCount > setting.UI.IssuePagingNum { - expectedIssueCount = setting.UI.IssuePagingNum - } + expectedIssueCount := min( + // from the fixtures + 20, setting.UI.IssuePagingNum) link, _ := url.Parse("/api/v1/repos/issues/search") token := getUserToken(t, "user1", auth_model.AccessTokenScopeReadIssue) diff --git a/tests/integration/api_packages_alt_test.go b/tests/integration/api_packages_alt_test.go index a15accdbbd..fe299ac14f 100644 --- a/tests/integration/api_packages_alt_test.go +++ b/tests/integration/api_packages_alt_test.go @@ -181,9 +181,9 @@ enabled=1`, var result ReleaseClassic - lines := strings.Split(resp, "\n") 
+ lines := strings.SplitSeq(resp, "\n") - for _, line := range lines { + for line := range lines { parts := strings.SplitN(line, ": ", 2) if len(parts) < 2 { continue @@ -406,7 +406,7 @@ enabled=1`, if typ == 6 || typ == 8 || typ == 9 { elem := data[offset:] - for j := uint32(0); j < count; j++ { + for range count { strEnd := bytes.IndexByte(elem, 0) if strEnd == -1 { require.NoError(t, err) @@ -420,13 +420,13 @@ enabled=1`, result.Release = string(elem[:strEnd]) case 1004: var summaries []string - for i := uint32(0); i < count; i++ { + for range count { summaries = append(summaries, string(elem[:strEnd])) } result.Summary = summaries case 1005: var descriptions []string - for i := uint32(0); i < count; i++ { + for range count { descriptions = append(descriptions, string(elem[:strEnd])) } result.Description = descriptions @@ -436,7 +436,7 @@ enabled=1`, result.Packager = string(elem[:strEnd]) case 1016: var groups []string - for i := uint32(0); i < count; i++ { + for range count { groups = append(groups, string(elem[:strEnd])) } result.Group = groups @@ -448,49 +448,49 @@ enabled=1`, result.SourceRpm = string(elem[:strEnd]) case 1047: var provideNames []string - for i := uint32(0); i < count; i++ { + for range count { provideNames = append(provideNames, string(elem[:strEnd])) } result.ProvideNames = provideNames case 1049: var requireNames []string - for i := uint32(0); i < count; i++ { + for range count { requireNames = append(requireNames, string(elem[:strEnd])) } result.RequireNames = requireNames case 1050: var requireVersions []string - for i := uint32(0); i < count; i++ { + for range count { requireVersions = append(requireVersions, string(elem[:strEnd])) } result.RequireVersions = requireVersions case 1081: var changeLogNames []string - for i := uint32(0); i < count; i++ { + for range count { changeLogNames = append(changeLogNames, string(elem[:strEnd])) } result.ChangeLogNames = changeLogNames case 1082: var changeLogTexts []string - for i := uint32(0); i < 
count; i++ { + for range count { changeLogTexts = append(changeLogTexts, string(elem[:strEnd])) } result.ChangeLogTexts = changeLogTexts case 1113: var provideVersions []string - for i := uint32(0); i < count; i++ { + for range count { provideVersions = append(provideVersions, string(elem[:strEnd])) } result.ProvideVersions = provideVersions case 1117: var baseNames []string - for i := uint32(0); i < count; i++ { + for range count { baseNames = append(baseNames, string(elem[:strEnd])) } result.BaseNames = baseNames case 1118: var dirNames []string - for i := uint32(0); i < count; i++ { + for range count { dirNames = append(dirNames, string(elem[:strEnd])) } result.DirNames = dirNames @@ -509,7 +509,7 @@ enabled=1`, } } else if typ == 4 { elem := data[offset:] - for j := uint32(0); j < count; j++ { + for range count { val := binary.BigEndian.Uint32(elem) switch tag { case 1006: @@ -518,25 +518,25 @@ enabled=1`, result.Size = int(val) case 1048: var requireFlags []int - for i := uint32(0); i < count; i++ { + for range count { requireFlags = append(requireFlags, int(val)) } result.RequireFlags = requireFlags case 1080: var changeLogTimes []int - for i := uint32(0); i < count; i++ { + for range count { changeLogTimes = append(changeLogTimes, int(val)) } result.ChangeLogTimes = changeLogTimes case 1112: var provideFlags []int - for i := uint32(0); i < count; i++ { + for range count { provideFlags = append(provideFlags, int(val)) } result.ProvideFlags = provideFlags case 1116: var dirIndexes []int - for i := uint32(0); i < count; i++ { + for range count { dirIndexes = append(dirIndexes, int(val)) } result.DirIndexes = dirIndexes diff --git a/tests/integration/api_packages_chef_test.go b/tests/integration/api_packages_chef_test.go index 390ac50688..cadf46b03f 100644 --- a/tests/integration/api_packages_chef_test.go +++ b/tests/integration/api_packages_chef_test.go @@ -182,7 +182,7 @@ nwIDAQAB var data []byte if version == "1.3" { - data = []byte(fmt.Sprintf( + data = 
fmt.Appendf(nil, "Method:%s\nPath:%s\nX-Ops-Content-Hash:%s\nX-Ops-Sign:version=%s\nX-Ops-Timestamp:%s\nX-Ops-UserId:%s\nX-Ops-Server-API-Version:%s", req.Method, path.Clean(req.URL.Path), @@ -191,17 +191,17 @@ nwIDAQAB req.Header.Get("X-Ops-Timestamp"), username, req.Header.Get("X-Ops-Server-Api-Version"), - )) + ) } else { sum := sha1.Sum([]byte(path.Clean(req.URL.Path))) - data = []byte(fmt.Sprintf( + data = fmt.Appendf(nil, "Method:%s\nHashed Path:%s\nX-Ops-Content-Hash:%s\nX-Ops-Timestamp:%s\nX-Ops-UserId:%s", req.Method, base64.StdEncoding.EncodeToString(sum[:]), req.Header.Get("X-Ops-Content-Hash"), req.Header.Get("X-Ops-Timestamp"), username, - )) + ) } for k := range req.Header { diff --git a/tests/integration/api_packages_container_test.go b/tests/integration/api_packages_container_test.go index 6775f3d228..13ce7dc136 100644 --- a/tests/integration/api_packages_container_test.go +++ b/tests/integration/api_packages_container_test.go @@ -873,7 +873,7 @@ func TestPackageContainer(t *testing.T) { url := fmt.Sprintf("%sv2/%s/parallel", setting.AppURL, user.Name) var wg sync.WaitGroup - for i := 0; i < 10; i++ { + for i := range 10 { wg.Add(1) content := []byte{byte(i)} diff --git a/tests/integration/api_packages_maven_test.go b/tests/integration/api_packages_maven_test.go index 74e7883443..2fa43b6b61 100644 --- a/tests/integration/api_packages_maven_test.go +++ b/tests/integration/api_packages_maven_test.go @@ -291,7 +291,7 @@ func TestPackageMavenConcurrent(t *testing.T) { defer tests.PrintCurrentTest(t)() var wg sync.WaitGroup - for i := 0; i < 10; i++ { + for i := range 10 { wg.Add(1) go func(i int) { putFile(t, fmt.Sprintf("/%s/%s.jar", packageVersion, strconv.Itoa(i)), "test", http.StatusCreated) diff --git a/tests/integration/api_repo_topic_test.go b/tests/integration/api_repo_topic_test.go index 69008bbf64..a43e49b0ff 100644 --- a/tests/integration/api_repo_topic_test.go +++ b/tests/integration/api_repo_topic_test.go @@ -31,7 +31,7 @@ func 
TestAPITopicSearchPaging(t *testing.T) { token2 := getUserToken(t, user2.Name, auth_model.AccessTokenScopeWriteRepository) repo2 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1}) repo3 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 2}) - for i := 0; i < 20; i++ { + for i := range 20 { req := NewRequestf(t, "PUT", "/api/v1/repos/%s/%s/topics/paging-topic-%d", user2.Name, repo2.Name, i). AddTokenAuth(token2) MakeRequest(t, req, http.StatusNoContent) diff --git a/tests/integration/cmd_forgejo_actions_test.go b/tests/integration/cmd_forgejo_actions_test.go index 653ff65a9d..088cb5860b 100644 --- a/tests/integration/cmd_forgejo_actions_test.go +++ b/tests/integration/cmd_forgejo_actions_test.go @@ -183,7 +183,7 @@ func TestActions_CmdForgejo_Actions(t *testing.T) { // // Run twice to verify it is idempotent // - for i := 0; i < 2; i++ { + for range 2 { uuid, err := runMainApp("forgejo-cli", cmd...) require.NoError(t, err) if assert.Equal(t, testCase.uuid, uuid) { diff --git a/tests/integration/git_helper_for_declarative_test.go b/tests/integration/git_helper_for_declarative_test.go index 1d8f44c8dc..e1f08509df 100644 --- a/tests/integration/git_helper_for_declarative_test.go +++ b/tests/integration/git_helper_for_declarative_test.go @@ -143,7 +143,7 @@ func doGitInitTestRepository(dstPath string, objectFormat git.ObjectFormat) func // forcibly set default branch to master _, _, err := git.NewCommand(git.DefaultContext, "symbolic-ref", "HEAD", git.BranchPrefix+"master").RunStdString(&git.RunOpts{Dir: dstPath}) require.NoError(t, err) - require.NoError(t, os.WriteFile(filepath.Join(dstPath, "README.md"), []byte(fmt.Sprintf("# Testing Repository\n\nOriginally created in: %s", dstPath)), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(dstPath, "README.md"), fmt.Appendf(nil, "# Testing Repository\n\nOriginally created in: %s", dstPath), 0o644)) require.NoError(t, git.AddChanges(dstPath, true)) signature := git.Signature{ Email: 
"test@example.com", @@ -194,7 +194,7 @@ func doGitAddSomeCommits(dstPath, branch string) func(*testing.T) { return func(t *testing.T) { doGitCheckoutBranch(dstPath, branch)(t) - require.NoError(t, os.WriteFile(filepath.Join(dstPath, fmt.Sprintf("file-%s.txt", branch)), []byte(fmt.Sprintf("file %s", branch)), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(dstPath, fmt.Sprintf("file-%s.txt", branch)), fmt.Appendf(nil, "file %s", branch), 0o644)) require.NoError(t, git.AddChanges(dstPath, true)) signature := git.Signature{ Email: "test@test.test", diff --git a/tests/integration/git_push_test.go b/tests/integration/git_push_test.go index bf8ad8ac12..c7c4f6750e 100644 --- a/tests/integration/git_push_test.go +++ b/tests/integration/git_push_test.go @@ -45,7 +45,7 @@ func testGitPush(t *testing.T, u *url.URL) { forEachObjectFormat(t, func(t *testing.T, objectFormat git.ObjectFormat) { t.Run("Push branches at once", func(t *testing.T) { runTestGitPush(t, u, objectFormat, func(t *testing.T, gitPath string) (pushed, deleted []string) { - for i := 0; i < 10; i++ { + for i := range 10 { branchName := fmt.Sprintf("branch-%d", i) pushed = append(pushed, branchName) doGitCreateBranch(gitPath, branchName)(t) @@ -58,7 +58,7 @@ func testGitPush(t *testing.T, u *url.URL) { t.Run("Push branches exists", func(t *testing.T) { runTestGitPush(t, u, objectFormat, func(t *testing.T, gitPath string) (pushed, deleted []string) { - for i := 0; i < 10; i++ { + for i := range 10 { branchName := fmt.Sprintf("branch-%d", i) if i < 5 { pushed = append(pushed, branchName) @@ -72,7 +72,7 @@ func testGitPush(t *testing.T, u *url.URL) { pushed = pushed[:0] // do some changes for the first 5 branches created above - for i := 0; i < 5; i++ { + for i := range 5 { branchName := fmt.Sprintf("branch-%d", i) pushed = append(pushed, branchName) @@ -93,7 +93,7 @@ func testGitPush(t *testing.T, u *url.URL) { t.Run("Push branches one by one", func(t *testing.T) { runTestGitPush(t, u, objectFormat, 
func(t *testing.T, gitPath string) (pushed, deleted []string) { - for i := 0; i < 10; i++ { + for i := range 10 { branchName := fmt.Sprintf("branch-%d", i) doGitCreateBranch(gitPath, branchName)(t) doGitPushTestRepository(gitPath, "origin", branchName)(t) @@ -108,14 +108,14 @@ func testGitPush(t *testing.T, u *url.URL) { doGitPushTestRepository(gitPath, "origin", "master")(t) // make sure master is the default branch instead of a branch we are going to delete pushed = append(pushed, "master") - for i := 0; i < 10; i++ { + for i := range 10 { branchName := fmt.Sprintf("branch-%d", i) pushed = append(pushed, branchName) doGitCreateBranch(gitPath, branchName)(t) } doGitPushTestRepository(gitPath, "origin", "--all")(t) - for i := 0; i < 10; i++ { + for i := range 10 { branchName := fmt.Sprintf("branch-%d", i) doGitPushTestRepository(gitPath, "origin", "--delete", branchName)(t) deleted = append(deleted, branchName) diff --git a/tests/integration/git_test.go b/tests/integration/git_test.go index f61338e4ee..a5a242b912 100644 --- a/tests/integration/git_test.go +++ b/tests/integration/git_test.go @@ -9,6 +9,7 @@ import ( "encoding/hex" "fmt" "io" + "maps" "net/http" "net/url" "os" @@ -487,9 +488,7 @@ func doProtectBranch(ctx APITestContext, branch string, addParameter ...paramete "rule_name": branch, } if len(addParameter) > 0 { - for k, v := range addParameter[0] { - parameter[k] = v - } + maps.Copy(parameter, addParameter[0]) } // Change branch to protected @@ -1162,15 +1161,15 @@ func doLFSNoAccess(ctx APITestContext, publicKeyID int64, objectFormat git.Objec } func extractRemoteMessages(stderr string) string { - var remoteMsg string + var remoteMsg strings.Builder for line := range strings.SplitSeq(stderr, "\n") { msg, found := strings.CutPrefix(line, "remote: ") if found { - remoteMsg += msg - remoteMsg += "\n" + remoteMsg.WriteString(msg) + remoteMsg.WriteString("\n") } } - return remoteMsg + return remoteMsg.String() } func doTestForkPushMessages(apictx 
APITestContext, dstPath string) func(*testing.T) { diff --git a/tests/integration/integration_test.go b/tests/integration/integration_test.go index 8657af839c..6cde9be4df 100644 --- a/tests/integration/integration_test.go +++ b/tests/integration/integration_test.go @@ -471,14 +471,14 @@ func loginUserMaybeTOTP(t testing.TB, user *user_model.User, useTOTP bool) *Test } // token has to be unique this counter take care of -var tokenCounter int64 +var tokenCounter atomic.Int64 // getTokenForLoggedInUser returns a token for a logged in user. // The scope is an optional list of snake_case strings like the frontend form fields, // but without the "scope_" prefix. func getTokenForLoggedInUser(t testing.TB, session *TestSession, scopes ...auth.AccessTokenScope) string { t.Helper() - accessTokenName := fmt.Sprintf("api-testing-token-%d", atomic.AddInt64(&tokenCounter, 1)) + accessTokenName := fmt.Sprintf("api-testing-token-%d", tokenCounter.Add(1)) createApplicationSettingsToken(t, session, accessTokenName, scopes...) 
token := assertAccessToken(t, session) return token diff --git a/tests/integration/issue_comment_test.go b/tests/integration/issue_comment_test.go index a6c4d6c923..2f72ab766e 100644 --- a/tests/integration/issue_comment_test.go +++ b/tests/integration/issue_comment_test.go @@ -5,6 +5,7 @@ package integration import ( "net/http" + "slices" "strconv" "strings" "testing" @@ -71,13 +72,7 @@ func testIssueCommentChangeEvent(t *testing.T, htmlDoc *HTMLDoc, commentID, badg // Check links (href) issueCommentLink := "#issuecomment-" + commentID - found := false - for _, link := range links { - if link == issueCommentLink { - found = true - break - } - } + found := slices.Contains(links, issueCommentLink) if !found { links = append(links, issueCommentLink) } diff --git a/tests/integration/issue_test.go b/tests/integration/issue_test.go index 19fe59a10a..d764870bab 100644 --- a/tests/integration/issue_test.go +++ b/tests/integration/issue_test.go @@ -114,14 +114,11 @@ func TestViewIssuesSortByType(t *testing.T) { htmlDoc := NewHTMLParser(t, resp.Body) issuesSelection := getIssuesSelection(t, htmlDoc) - expectedNumIssues := unittest.GetCount(t, + expectedNumIssues := min(unittest.GetCount(t, &issues_model.Issue{RepoID: repo.ID, PosterID: user.ID}, unittest.Cond("is_closed=?", false), unittest.Cond("is_pull=?", false), - ) - if expectedNumIssues > setting.UI.IssuePagingNum { - expectedNumIssues = setting.UI.IssuePagingNum - } + ), setting.UI.IssuePagingNum) assert.Equal(t, expectedNumIssues, issuesSelection.Length()) issuesSelection.Each(func(_ int, selection *goquery.Selection) { @@ -891,10 +888,9 @@ func TestSearchIssues(t *testing.T) { session := loginUser(t, "user2") - expectedIssueCount := 20 // from the fixtures - if expectedIssueCount > setting.UI.IssuePagingNum { - expectedIssueCount = setting.UI.IssuePagingNum - } + expectedIssueCount := min( + // from the fixtures + 20, setting.UI.IssuePagingNum) req := NewRequest(t, "GET", "/issues/search") resp := 
session.MakeRequest(t, req, http.StatusOK) @@ -1017,10 +1013,9 @@ func TestSearchIssues(t *testing.T) { func TestSearchIssuesWithLabels(t *testing.T) { defer tests.PrepareTestEnv(t)() - expectedIssueCount := 20 // from the fixtures - if expectedIssueCount > setting.UI.IssuePagingNum { - expectedIssueCount = setting.UI.IssuePagingNum - } + expectedIssueCount := min( + // from the fixtures + 20, setting.UI.IssuePagingNum) session := loginUser(t, "user1") link, _ := url.Parse("/issues/search") diff --git a/tests/integration/org_test.go b/tests/integration/org_test.go index 157ed5dbcd..ddfaf2fa55 100644 --- a/tests/integration/org_test.go +++ b/tests/integration/org_test.go @@ -46,7 +46,7 @@ func TestOrgRepos(t *testing.T) { sel := htmlDoc.doc.Find("a.name") assert.Len(t, repos, len(sel.Nodes)) - for i := 0; i < len(repos); i++ { + for i := range repos { assert.Equal(t, repos[i], strings.TrimSpace(sel.Eq(i).Text())) } } diff --git a/tests/integration/project_test.go b/tests/integration/project_test.go index 955caaf6f7..629793602b 100644 --- a/tests/integration/project_test.go +++ b/tests/integration/project_test.go @@ -45,7 +45,7 @@ func TestMoveRepoProjectColumns(t *testing.T) { err := project_model.NewProject(db.DefaultContext, &project1) require.NoError(t, err) - for i := 0; i < 3; i++ { + for i := range 3 { err = project_model.NewColumn(db.DefaultContext, &project_model.Column{ Title: fmt.Sprintf("column %d", i+1), ProjectID: project1.ID, diff --git a/tests/integration/pull_merge_test.go b/tests/integration/pull_merge_test.go index a987603ce7..14399fc936 100644 --- a/tests/integration/pull_merge_test.go +++ b/tests/integration/pull_merge_test.go @@ -8,6 +8,7 @@ import ( "context" "encoding/base64" "fmt" + "maps" "math/rand/v2" "net/http" "net/http/httptest" @@ -71,9 +72,7 @@ func testPullMergeForm(t *testing.T, session *TestSession, expectedCode int, use link := path.Join(user, repo, "pulls", pullnum, "merge") options := map[string]string{} - for k, v := range 
addOptions { - options[k] = v - } + maps.Copy(options, addOptions) req := NewRequestWithValues(t, "POST", link, options) resp := session.MakeRequest(t, req, expectedCode) diff --git a/tests/integration/quota_use_test.go b/tests/integration/quota_use_test.go index 33d987af36..dd5afd1aed 100644 --- a/tests/integration/quota_use_test.go +++ b/tests/integration/quota_use_test.go @@ -7,6 +7,7 @@ import ( "bytes" "fmt" "io" + "maps" "mime/multipart" "net/http" "net/http/httptest" @@ -656,9 +657,7 @@ func (ctx *quotaWebEnvAsContext) With(opts Context) *quotaWebEnvAsContext { ctx.Repo = opts.Repo } if opts.Payload != nil { - for key, value := range *opts.Payload { - ctx.Payload[key] = value - } + maps.Copy(ctx.Payload, *opts.Payload) } return ctx } diff --git a/tests/integration/release_test.go b/tests/integration/release_test.go index 483822e2ac..b54e921fa4 100644 --- a/tests/integration/release_test.go +++ b/tests/integration/release_test.go @@ -231,7 +231,7 @@ func TestCreateReleasePaging(t *testing.T) { session := loginUser(t, "user2") // Create enough releases to have paging - for i := 0; i < 12; i++ { + for i := range 12 { version := fmt.Sprintf("v0.0.%d", i) createNewRelease(t, session, "/user2/repo1", version, version, false, false) } diff --git a/tests/integration/repo_commits_test.go b/tests/integration/repo_commits_test.go index 47cf7e8534..d2bb4e8849 100644 --- a/tests/integration/repo_commits_test.go +++ b/tests/integration/repo_commits_test.go @@ -146,7 +146,7 @@ func TestRepoCommitsStatusParallel(t *testing.T) { assert.NotEmpty(t, commitURL) var wg sync.WaitGroup - for i := 0; i < 10; i++ { + for i := range 10 { wg.Add(1) go func(parentT *testing.T, i int) { parentT.Run(fmt.Sprintf("ParallelCreateStatus_%d", i), func(t *testing.T) { diff --git a/tests/integration/repo_flags_test.go b/tests/integration/repo_flags_test.go index bb489f678c..279feefe73 100644 --- a/tests/integration/repo_flags_test.go +++ b/tests/integration/repo_flags_test.go @@ -135,7 +135,7 
@@ func TestRepositoryFlagsAPI(t *testing.T) { assert.Empty(t, flags) // Replacing all tags works, twice in a row - for i := 0; i < 2; i++ { + for range 2 { req = NewRequestWithJSON(t, "PUT", fmt.Sprintf(baseURLFmtStr, ""), &api.ReplaceFlagsOption{ Flags: []string{"flag-1", "flag-2", "flag-3"}, }).AddTokenAuth(token) @@ -160,7 +160,7 @@ func TestRepositoryFlagsAPI(t *testing.T) { MakeRequest(t, req, http.StatusNotFound) // We can add the same flag twice - for i := 0; i < 2; i++ { + for range 2 { req = NewRequestf(t, "PUT", baseURLFmtStr, "/brand-new-flag").AddTokenAuth(token) MakeRequest(t, req, http.StatusNoContent) } @@ -170,7 +170,7 @@ func TestRepositoryFlagsAPI(t *testing.T) { MakeRequest(t, req, http.StatusNoContent) // We can delete a flag, twice - for i := 0; i < 2; i++ { + for range 2 { req = NewRequestf(t, "DELETE", baseURLFmtStr, "/flag-3").AddTokenAuth(token) MakeRequest(t, req, http.StatusNoContent) } diff --git a/tests/integration/repo_topic_test.go b/tests/integration/repo_topic_test.go index 0f11d451d6..7331f23218 100644 --- a/tests/integration/repo_topic_test.go +++ b/tests/integration/repo_topic_test.go @@ -62,7 +62,7 @@ func TestTopicSearchPaging(t *testing.T) { token2 := getUserToken(t, user2.Name, auth_model.AccessTokenScopeWriteRepository) repo2 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1}) repo3 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 2}) - for i := 0; i < 20; i++ { + for i := range 20 { req := NewRequestf(t, "PUT", "/api/v1/repos/%s/%s/topics/paging-topic-%d", user2.Name, repo2.Name, i). 
AddTokenAuth(token2) MakeRequest(t, req, http.StatusNoContent) diff --git a/tests/integration/repo_webhook_test.go b/tests/integration/repo_webhook_test.go index 5320c85de1..bbde44892e 100644 --- a/tests/integration/repo_webhook_test.go +++ b/tests/integration/repo_webhook_test.go @@ -4,6 +4,7 @@ package integration import ( + "maps" "net/http" "net/http/httptest" "net/url" @@ -408,9 +409,7 @@ func testWebhookFormsShared(t *testing.T, endpoint, name string, session *TestSe payload := map[string]string{ "events": "send_everything", } - for k, v := range validFields { - payload[k] = v - } + maps.Copy(payload, validFields) for k, v := range invalidPatch { if v == "" { delete(payload, k) @@ -448,9 +447,7 @@ func assertHasFlashMessages(t *testing.T, resp *httptest.ResponseRecorder, expec for key, value := range flash { // the key is itself url-encoded if flash, err := url.ParseQuery(key); err == nil { - for key, value := range flash { - seenKeys[key] = value - } + maps.Copy(seenKeys, flash) } else { seenKeys[key] = value } diff --git a/tests/integration/signing_git_test.go b/tests/integration/signing_git_test.go index 5dee5b4801..2fbdb22b6f 100644 --- a/tests/integration/signing_git_test.go +++ b/tests/integration/signing_git_test.go @@ -433,7 +433,7 @@ func crudActionCreateFile(_ *testing.T, ctx APITestContext, user *user_model.Use Email: user.Email, }, }, - ContentBase64: base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("This is new text for %s", path))), + ContentBase64: base64.StdEncoding.EncodeToString(fmt.Appendf(nil, "This is new text for %s", path)), }, callback...) } diff --git a/tests/integration/signup_test.go b/tests/integration/signup_test.go index e66b193e15..eee022f7ab 100644 --- a/tests/integration/signup_test.go +++ b/tests/integration/signup_test.go @@ -185,10 +185,10 @@ func TestSignupImageCaptcha(t *testing.T) { assert.True(t, ok) assert.Len(t, digits, 6) - digitStr := "" + var digitStr strings.Builder // Convert digits to ASCII digits. 
for _, digit := range digits { - digitStr += string(digit + '0') + digitStr.WriteString(string(digit + '0')) } req = NewRequestWithValues(t, "POST", "/user/sign_up", map[string]string{ @@ -197,7 +197,7 @@ func TestSignupImageCaptcha(t *testing.T) { "password": "examplePassword!1", "retype": "examplePassword!1", "img-captcha-id": idCaptcha, - "img-captcha-response": digitStr, + "img-captcha-response": digitStr.String(), }) MakeRequest(t, req, http.StatusSeeOther) diff --git a/tests/integration/ssh_key_test.go b/tests/integration/ssh_key_test.go index 156bcb137e..747b6c4eb0 100644 --- a/tests/integration/ssh_key_test.go +++ b/tests/integration/ssh_key_test.go @@ -28,7 +28,7 @@ func doCheckRepositoryEmptyStatus(ctx APITestContext, isEmpty bool) func(*testin func doAddChangesToCheckout(dstPath, filename string) func(*testing.T) { return func(t *testing.T) { - require.NoError(t, os.WriteFile(filepath.Join(dstPath, filename), []byte(fmt.Sprintf("# Testing Repository\n\nOriginally created in: %s at time: %v", dstPath, time.Now())), 0o644)) + require.NoError(t, os.WriteFile(filepath.Join(dstPath, filename), fmt.Appendf(nil, "# Testing Repository\n\nOriginally created in: %s at time: %v", dstPath, time.Now()), 0o644)) require.NoError(t, git.AddChanges(dstPath, true)) signature := git.Signature{ Email: "test@example.com", diff --git a/tests/integration/user_test.go b/tests/integration/user_test.go index f1acafbde8..0f7c099c70 100644 --- a/tests/integration/user_test.go +++ b/tests/integration/user_test.go @@ -1030,7 +1030,7 @@ func TestUserRepos(t *testing.T) { sel := htmlDoc.doc.Find("a.name") assert.Len(t, repos, len(sel.Nodes)) - for i := 0; i < len(repos); i++ { + for i := range repos { assert.Equal(t, repos[i], strings.TrimSpace(sel.Eq(i).Text())) } } diff --git a/tests/test_utils.go b/tests/test_utils.go index dff1c283d0..5e5b04c860 100644 --- a/tests/test_utils.go +++ b/tests/test_utils.go @@ -15,6 +15,7 @@ import ( "path" "path/filepath" "runtime" + "slices" 
"strings" "sync/atomic" "testing" @@ -524,23 +525,20 @@ func CreateDeclarativeRepo(t *testing.T, owner *user_model.User, name string, en if enabledUnits != nil { opts.EnabledUnits = optional.Some(enabledUnits) - for _, unitType := range enabledUnits { - if unitType == unit_model.TypePullRequests { - opts.UnitConfig = optional.Some(map[unit_model.Type]convert.Conversion{ - unit_model.TypePullRequests: &repo_model.PullRequestsConfig{ - AllowMerge: true, - AllowRebase: true, - AllowRebaseMerge: true, - AllowSquash: true, - AllowFastForwardOnly: true, - AllowManualMerge: true, - AllowRebaseUpdate: true, - DefaultMergeStyle: repo_model.MergeStyleMerge, - DefaultUpdateStyle: repo_model.UpdateStyleMerge, - }, - }) - break - } + if slices.Contains(enabledUnits, unit_model.TypePullRequests) { + opts.UnitConfig = optional.Some(map[unit_model.Type]convert.Conversion{ + unit_model.TypePullRequests: &repo_model.PullRequestsConfig{ + AllowMerge: true, + AllowRebase: true, + AllowRebaseMerge: true, + AllowSquash: true, + AllowFastForwardOnly: true, + AllowManualMerge: true, + AllowRebaseUpdate: true, + DefaultMergeStyle: repo_model.MergeStyleMerge, + DefaultUpdateStyle: repo_model.UpdateStyleMerge, + }, + }) } } if disabledUnits != nil { From bbbdc3bf67f3b01257fa240e82104f7427af9019 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Thu, 2 Apr 2026 22:10:21 +0200 Subject: [PATCH 16/82] [v15.0/forgejo] enh: add suggestion to document reason for repository archival (#11950) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11375 Fixes #11370 ## Release notes - User Interface features - [PR](https://codeberg.org/forgejo/forgejo/pulls/11375): enh: add suggestion to document reason for repository archival Co-authored-by: Eloy Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11950 Reviewed-by: Beowulf Reviewed-by: Mathieu Fenniak Reviewed-by: Robert Wolff Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- 
options/locale/locale_en-US.ini | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/options/locale/locale_en-US.ini b/options/locale/locale_en-US.ini index 6ec0ba978c..2df0b3a728 100644 --- a/options/locale/locale_en-US.ini +++ b/options/locale/locale_en-US.ini @@ -2569,7 +2569,7 @@ settings.matrix.access_token_helper = It is recommended to setup a dedicated Mat settings.matrix.room_id_helper = The Room ID can be retrieved from the Element web client > Room Settings > Advanced > Internal room ID. Example: %s. settings.archive.button = Archive repo settings.archive.header = Archive this repo -settings.archive.text = Archiving the repo will make it entirely read-only. It will be hidden from the dashboard. Nobody (not even you!) will be able to make new commits, or open any issues or pull requests. +settings.archive.text = Archiving the repo will make it entirely read-only. It will be hidden from the dashboard. Nobody (not even you!) will be able to make new commits, or open any issues or pull requests. Documenting the archival reason is recommended to guide future developers who plan to fork the repository. settings.archive.success = The repo was successfully archived. settings.archive.error = An error occurred while trying to archive the repo. See the log for more details. settings.archive.error_ismirror = You cannot archive a mirrored repo. From 8b81d86c38baea2df92be30507f22cafbf289b85 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Fri, 3 Apr 2026 18:29:31 +0200 Subject: [PATCH 17/82] [v15.0/forgejo] fix: superfluous increment of ActionTask attempt breaks job view (#11964) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11956 https://codeberg.org/forgejo/forgejo/pulls/11750 missed a place where the attempt number is incremented independently. This caused the job view to break when running a reusable workflow with workflow expansion. 
## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes (can be removed for JavaScript changes) - I added test coverage for Go changes... - [ ] in their respective `*_test.go` for unit tests. - [x] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [x] `make pr-go` before pushing ### Tests for JavaScript changes (can be removed for Go changes) - I added test coverage for JavaScript changes... - [ ] in `web_src/js/*.test.js` if it can be unit tested. - [ ] in `tests/e2e/*.test.e2e.js` if it requires interactions with a live Forgejo server (see also the [developer guide for JavaScript testing](https://codeberg.org/forgejo/forgejo/src/branch/forgejo/tests/e2e/README.md#end-to-end-tests)). ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [ ] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [x] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. 
*The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. Co-authored-by: Andreas Ahlenstorf Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11964 Reviewed-by: Andreas Ahlenstorf Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- models/actions/task.go | 4 +--- .../Test_tryHandleWorkflowCallOuterJob/action_run_job.yml | 6 +++++- services/actions/job_emitter.go | 1 - 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/models/actions/task.go b/models/actions/task.go index 1a208dad9d..ed2cab60e8 100644 --- a/models/actions/task.go +++ b/models/actions/task.go @@ -451,9 +451,7 @@ func CreateTaskForRunner(ctx context.Context, runner *ActionRunner, requestKey, } // Placeholder tasks are created when the status/content of an [ActionRunJob] is resolved by Forgejo without dispatch to -// a runner, specifically in the case of a workflow call's outer job. It is the responsibility of the caller to -// increment the job's Attempt field before invoking this method, and to update that field in the database, so that -// reruns can function for placeholder tasks and provide updated outputs. +// a runner, specifically in the case of a workflow call's outer job. 
func CreatePlaceholderTask(ctx context.Context, job *ActionRunJob, outputs map[string]string) (*ActionTask, error) { actionTask := &ActionTask{ JobID: job.ID, diff --git a/services/actions/Test_tryHandleWorkflowCallOuterJob/action_run_job.yml b/services/actions/Test_tryHandleWorkflowCallOuterJob/action_run_job.yml index 03449040ca..f710f4bf2e 100644 --- a/services/actions/Test_tryHandleWorkflowCallOuterJob/action_run_job.yml +++ b/services/actions/Test_tryHandleWorkflowCallOuterJob/action_run_job.yml @@ -1,6 +1,7 @@ # Case 600 -- workflow that is not a workflow call outer job - id: 600 + attempt: 1 status: 1 # success started: 1683636528 workflow_payload: | @@ -18,6 +19,7 @@ # contexts should be considered as those in `on.workflow_call.outputs`. - id: 601 + attempt: 1 run_id: 900 status: 1 # success started: 1683636528 @@ -44,6 +46,7 @@ workflow_call_id: b5a9f46f1f2513d7777fde50b169d323a6519e349cc175484c947ac315a209ed - # inner job of run 601 id: 602 + attempt: 1 run_id: 900 status: 1 # success job_id: outer-job.inner-job @@ -54,7 +57,7 @@ - id: 603 run_id: 901 - attempt: 1 + attempt: 2 status: 1 # success started: 1683636528 needs: ["outer-job.inner-job"] @@ -81,6 +84,7 @@ - # inner job of run 603 id: 604 run_id: 901 + attempt: 2 status: 1 # success job_id: outer-job.inner-job task_id: 101 diff --git a/services/actions/job_emitter.go b/services/actions/job_emitter.go index 02ad9d66f5..b23918e42f 100644 --- a/services/actions/job_emitter.go +++ b/services/actions/job_emitter.go @@ -597,7 +597,6 @@ func tryHandleWorkflowCallOuterJob(ctx context.Context, job *actions_model.Actio ) // Insert a placeholder task with all the computed outputs - job.Attempt++ actionTask, err := actions_model.CreatePlaceholderTask(ctx, job, outputs) if err != nil { return nil, fmt.Errorf("failure to insert placeholder task: %w", err) From 0b0aa6170ff924e2fb10979efd38eb70f9972262 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Fri, 3 Apr 2026 18:45:32 +0200 Subject: [PATCH 
18/82] [v15.0/forgejo] Make label dropdown menu items with .tw-hidden unselectable (#11966) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11858 Fixes https://codeberg.org/forgejo/forgejo/issues/9894. The dropdown menu items are being hidden with `.tw-hidden`. The Fomantic dropdown makes items with `.disabled` and `.filtered` unselectable by default but can be [easily configured](https://fomantic-ui.com/modules/dropdown.html#/settings) to broaden this selector. In the before & after GIFs attached, there is an archived label between "duplicate" and "help wanted". In the before GIF, focus disappears momentarily between the two, which is when the hidden, archived label has been programmatically focused by Fomantic. In the after GIF, focus hops instantaneously between the two selectable labels because of the broader `unselectable` selector. ### Tests for JavaScript changes (can be removed for Go changes) - I added test coverage for JavaScript changes... - [ ] in `web_src/js/*.test.js` if it can be unit tested. - [ ] in `tests/e2e/*.test.e2e.js` if it requires interactions with a live Forgejo server (see also the [developer guide for JavaScript testing](https://codeberg.org/forgejo/forgejo/src/branch/forgejo/tests/e2e/README.md#end-to-end-tests)). ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [ ] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change.
*The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. Co-authored-by: Henry Catalini Smith Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11966 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- web_src/js/features/repo-legacy.js | 3 +++ 1 file changed, 3 insertions(+) diff --git a/web_src/js/features/repo-legacy.js b/web_src/js/features/repo-legacy.js index 4d2f25148c..523ce555c4 100644 --- a/web_src/js/features/repo-legacy.js +++ b/web_src/js/features/repo-legacy.js @@ -90,6 +90,9 @@ export function initRepoCommentForm() { $(`.${selector}`).dropdown({ 'action': 'nothing', // do not hide the menu if user presses Enter fullTextSearch: 'exact', + selector: { + unselectable: '.disabled, .filtered, .tw-hidden', + }, async onHide() { hasUpdateAction = $listMenu.data('action') === 'update'; // Update the var if (hasUpdateAction) { From 7822ed20302afb3b04899a0ad5acd4e9b0b77f0b Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Fri, 3 Apr 2026 18:53:23 +0200 Subject: [PATCH 19/82] [v15.0/forgejo] Add aria-current="page" to active navbar items (#11969) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11887 By setting `aria-current="page"` on the active navbar item we make the information about which one corresponds to the current page available in a non-visual way. Both the attached screen recordings were produced on http://localhost:3000/pulls, so the "Pull requests" link is the active one. In `before.mp4` all the links are announced identically, and in `after.mp4` the "Pull requests" link is announced like this. 
> current page, visited, link, Pull requests ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [ ] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. *The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. Co-authored-by: Henry Catalini Smith Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11969 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- templates/base/head_navbar.tmpl | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/templates/base/head_navbar.tmpl b/templates/base/head_navbar.tmpl index e4627451cd..f7405a743e 100644 --- a/templates/base/head_navbar.tmpl +++ b/templates/base/head_navbar.tmpl @@ -28,21 +28,21 @@ {{/* No links */}} {{else if .IsSigned}} {{if not .UnitIssuesGlobalDisabled}} - {{ctx.Locale.Tr "issues"}} + {{ctx.Locale.Tr "issues"}} {{end}} {{if not .UnitPullsGlobalDisabled}} - {{ctx.Locale.Tr "pull_requests"}} + {{ctx.Locale.Tr "pull_requests"}} {{end}} {{if not (and .UnitIssuesGlobalDisabled .UnitPullsGlobalDisabled)}} {{if .ShowMilestonesDashboardPage}} - {{ctx.Locale.Tr "milestones"}} + {{ctx.Locale.Tr "milestones"}} {{end}} {{end}} - {{ctx.Locale.Tr "explore"}} + {{ctx.Locale.Tr "explore"}} {{else if .IsLandingPageOrganizations}} - {{ctx.Locale.Tr "explore"}} + 
{{ctx.Locale.Tr "explore"}} {{else}} - {{ctx.Locale.Tr "explore"}} + {{ctx.Locale.Tr "explore"}} {{end}} {{template "custom/extra_links" .}} From 6f396c200159f64da6d6c469f09bc0c44917c159 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Fri, 3 Apr 2026 19:12:51 +0200 Subject: [PATCH 20/82] [v15.0/forgejo] Add aria-label="Copy" to copy button (#11970) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11895 This copy button on the pull request page lacks an accessible name. You can hear the screen reader announce it as just "button" in the screen recording `button.mp4`, and then hear the amended version in `copy.mp4` where it's announced as "copy, button". The most relevant WCAG success criterion here is [1.1.1 Non-text content](https://www.w3.org/WAI/WCAG21/Understanding/non-text-content.html). ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [ ] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. *The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead.
Co-authored-by: Henry Catalini Smith Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11970 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- templates/repo/issue/view_content/sidebar/reference.tmpl | 2 +- tests/e2e/issue-sidebar.test.e2e.ts | 4 ++++ 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/templates/repo/issue/view_content/sidebar/reference.tmpl b/templates/repo/issue/view_content/sidebar/reference.tmpl index 5083b97fc2..1b02769ad1 100644 --- a/templates/repo/issue/view_content/sidebar/reference.tmpl +++ b/templates/repo/issue/view_content/sidebar/reference.tmpl @@ -13,7 +13,7 @@
    {{$issueReferenceLink}} - +
    diff --git a/tests/e2e/issue-sidebar.test.e2e.ts b/tests/e2e/issue-sidebar.test.e2e.ts index 6eb44be856..cf3fe607d2 100644 --- a/tests/e2e/issue-sidebar.test.e2e.ts +++ b/tests/e2e/issue-sidebar.test.e2e.ts @@ -378,4 +378,8 @@ test('Issue: Reference', async ({page}) => { await expect(page.locator('.ui.reference .truncate')).toContainText( 'user2/repo1#1', ); + + await page.getByRole('button', {name: 'Copy'}).click(); + const reference = await page.evaluate(() => navigator.clipboard.readText()); + expect(reference).toBe('user2/repo1#1'); }); From 6d67717a21fd9316e832c155bacb84e5fbc9b322 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Fri, 3 Apr 2026 21:20:36 +0200 Subject: [PATCH 21/82] [v15.0/forgejo] Add aria-labels to ensure watch and star buttons always have a text label (#11967) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11878 Fixes https://codeberg.org/forgejo/forgejo/issues/6621. The attached screen recording `before.mp4` demos the problem as described by https://codeberg.org/forgejo/forgejo/issues/6621. And `after.mp4` is the fixed version. ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [ ] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. *The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. 
Co-authored-by: Henry Catalini Smith Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11967 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- templates/repo/star_unstar.tmpl | 2 +- templates/repo/watch_unwatch.tmpl | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/templates/repo/star_unstar.tmpl b/templates/repo/star_unstar.tmpl index 74239fafa4..d483495dd4 100644 --- a/templates/repo/star_unstar.tmpl +++ b/templates/repo/star_unstar.tmpl @@ -1,6 +1,6 @@
    -
    {{if $.PullMirror}} + {{$address := MirrorRemoteAddress $.Context $.PullMirror}}
    {{ctx.Locale.Tr "repo.mirror_from"}} - {{$.PullMirror.RemoteAddress}} + {{$address.Address}} {{if $.PullMirror.UpdatedUnix}}{{ctx.Locale.Tr "repo.mirror_sync"}} {{DateUtils.TimeSince $.PullMirror.UpdatedUnix}}{{end}}
    {{end}} diff --git a/templates/repo/settings/options.tmpl b/templates/repo/settings/options.tmpl index fa25f4630a..137be0f334 100644 --- a/templates/repo/settings/options.tmpl +++ b/templates/repo/settings/options.tmpl @@ -148,9 +148,10 @@ {{else if $isWorkingPullMirror}} + {{$address := MirrorRemoteAddress $.Context .PullMirror}} - {{.PullMirror.RemoteAddress}} + {{$address.Address}} {{ctx.Locale.Tr "repo.settings.mirror_settings.direction.pull"}} {{DateUtils.FullTime .PullMirror.UpdatedUnix}} @@ -176,7 +177,6 @@ - {{$address := MirrorRemoteAddress $.Context .Repository .PullMirror.GetRemoteName}}
    diff --git a/tests/integration/mirror_pull_test.go b/tests/integration/mirror_pull_test.go index 03d4bdcf92..a0de586aaa 100644 --- a/tests/integration/mirror_pull_test.go +++ b/tests/integration/mirror_pull_test.go @@ -5,9 +5,16 @@ package integration import ( + "fmt" "net/http" + "net/url" + "os" + "path" + "strings" "testing" + "time" + "forgejo.org/models/auth" "forgejo.org/models/db" repo_model "forgejo.org/models/repo" "forgejo.org/models/unittest" @@ -15,94 +22,546 @@ import ( "forgejo.org/modules/git" "forgejo.org/modules/gitrepo" "forgejo.org/modules/migration" + "forgejo.org/modules/optional" + "forgejo.org/modules/process" + "forgejo.org/modules/setting" + "forgejo.org/modules/structs" app_context "forgejo.org/services/context" + "forgejo.org/services/forms" + "forgejo.org/services/migrations" mirror_service "forgejo.org/services/mirror" release_service "forgejo.org/services/release" repo_service "forgejo.org/services/repository" + files_service "forgejo.org/services/repository/files" "forgejo.org/tests" + "github.com/PuerkitoBio/goquery" + "github.com/google/uuid" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestMirrorPull(t *testing.T) { - defer tests.PrepareTestEnv(t)() + t.Run("Basic", func(t *testing.T) { + defer tests.PrepareTestEnv(t)() - user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) - repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1}) - repoPath := repo_model.RepoPath(user.Name, repo.Name) + user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) + repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1}) + repoPath := repo_model.RepoPath(user.Name, repo.Name) - opts := migration.MigrateOptions{ - RepoName: "test_mirror", - Description: "Test mirror", - Private: false, - Mirror: true, - CloneAddr: repoPath, - Wiki: true, - Releases: false, - } + opts := migration.MigrateOptions{ + RepoName: "test_mirror", + Description: "Test 
mirror", + Private: false, + Mirror: true, + CloneAddr: repoPath, + Wiki: true, + Releases: false, + } - mirrorRepo, err := repo_service.CreateRepositoryDirectly(db.DefaultContext, user, user, repo_service.CreateRepoOptions{ - Name: opts.RepoName, - Description: opts.Description, - IsPrivate: opts.Private, - IsMirror: opts.Mirror, - Status: repo_model.RepositoryBeingMigrated, + mirrorRepo, err := repo_service.CreateRepositoryDirectly(db.DefaultContext, user, user, repo_service.CreateRepoOptions{ + Name: opts.RepoName, + Description: opts.Description, + IsPrivate: opts.Private, + IsMirror: opts.Mirror, + Status: repo_model.RepositoryBeingMigrated, + }) + require.NoError(t, err) + assert.True(t, mirrorRepo.IsMirror, "expected pull-mirror repo to be marked as a mirror immediately after its creation") + + ctx := t.Context() + + mirror, err := repo_service.MigrateRepositoryGitData(ctx, user, mirrorRepo, opts, nil) + require.NoError(t, err) + + gitRepo, err := gitrepo.OpenRepository(git.DefaultContext, repo) + require.NoError(t, err) + defer gitRepo.Close() + + findOptions := repo_model.FindReleasesOptions{ + IncludeDrafts: true, + IncludeTags: true, + RepoID: mirror.ID, + } + initCount, err := db.Count[repo_model.Release](db.DefaultContext, findOptions) + require.NoError(t, err) + + require.NoError(t, release_service.CreateRelease(gitRepo, &repo_model.Release{ + RepoID: repo.ID, + Repo: repo, + PublisherID: user.ID, + Publisher: user, + TagName: "v0.2", + Target: "master", + Title: "v0.2 is released", + Note: "v0.2 is released", + IsDraft: false, + IsPrerelease: false, + IsTag: true, + }, "", []*release_service.AttachmentChange{})) + + _, err = repo_model.GetMirrorByRepoID(ctx, mirror.ID) + require.NoError(t, err) + + ok := mirror_service.SyncPullMirror(ctx, mirror.ID) + assert.True(t, ok) + + count, err := db.Count[repo_model.Release](db.DefaultContext, findOptions) + require.NoError(t, err) + assert.Equal(t, initCount+1, count) + + release, err := 
repo_model.GetRelease(db.DefaultContext, repo.ID, "v0.2") + require.NoError(t, err) + require.NoError(t, release_service.DeleteReleaseByID(ctx, repo, release, user, true)) + + ok = mirror_service.SyncPullMirror(ctx, mirror.ID) + assert.True(t, ok) + + count, err = db.Count[repo_model.Release](db.DefaultContext, findOptions) + require.NoError(t, err) + assert.Equal(t, initCount, count) }) - require.NoError(t, err) - assert.True(t, mirrorRepo.IsMirror, "expected pull-mirror repo to be marked as a mirror immediately after its creation") - ctx := t.Context() - - mirror, err := repo_service.MigrateRepositoryGitData(ctx, user, mirrorRepo, opts, nil) - require.NoError(t, err) - - gitRepo, err := gitrepo.OpenRepository(git.DefaultContext, repo) - require.NoError(t, err) - defer gitRepo.Close() - - findOptions := repo_model.FindReleasesOptions{ - IncludeDrafts: true, - IncludeTags: true, - RepoID: mirror.ID, + // How will we interact with the pull mirror during this test? + interactionMethod := []struct { + name string + useAPI bool + createPullMirror func(t *testing.T, sourceRepo *repo_model.Repository, authenticate bool) (repoName string) + verifyMirrorDetails func(t *testing.T, sourceRepo *repo_model.Repository, mirrorName string, authenticate bool) + triggerPullMirror func(t *testing.T, mirrorName string) + changePullMirrorCredentials func(t *testing.T, sourceRepo *repo_model.Repository, mirrorName string, authenticate bool) + changePullMirrorAddress func(t *testing.T, sourceRepo *repo_model.Repository, mirrorName string, authenticate bool) + }{ + { + name: "API", + useAPI: true, + createPullMirror: createPullMirrorViaAPI, + triggerPullMirror: triggerPullMirrorViaAPI, + verifyMirrorDetails: func(t *testing.T, sourceRepo *repo_model.Repository, mirrorName string, authenticate bool) { + // API provides no visibility into a repo's mirror settings right now + }, + }, + { + name: "Web", + useAPI: false, + createPullMirror: createPullMirrorViaWeb, + triggerPullMirror: 
triggerPullMirrorViaWeb, + verifyMirrorDetails: verifyPullMirrorViaWeb, + changePullMirrorCredentials: changePullMirrorCredentialsViaWeb, + changePullMirrorAddress: changePullMirrorCredentialsViaWeb, // one endpoint, so same as creds + }, } - initCount, err := db.Count[repo_model.Release](db.DefaultContext, findOptions) + + mirrorConfiguration := []struct { + name string + privateSource bool + }{ + { + name: "HTTP Without Auth", + }, + { + name: "HTTP With Auth", + privateSource: true, + }, + } + + // Not using MockVariableValue due to need to undo `migrations.Init()` + prev := setting.Migrations.AllowedDomains + setting.Migrations.AllowedDomains = "localhost" + migrations.Init() // reinitialize for changed allowList + defer func() { + setting.Migrations.AllowedDomains = prev + migrations.Init() // reinitialize for changed allowList + }() + + onApplicationRun(t, func(t *testing.T, u *url.URL) { + for _, im := range interactionMethod { + for _, mc := range mirrorConfiguration { + t.Run(fmt.Sprintf("%s/%s", im.name, mc.name), func(t *testing.T) { + defer tests.PrintCurrentTest(t)() + + user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) + + // Create the source repository that will be mirrored. 
+ sourceRepo, sourceRepoSha, cleanupSource := tests.CreateDeclarativeRepoWithOptions(t, user2, + tests.DeclarativeRepoOptions{ + IsPrivate: optional.Some(mc.privateSource), + Files: optional.Some([]*files_service.ChangeRepoFile{ + { + Operation: "create", + TreePath: "docs.md", + ContentReader: strings.NewReader("hello, world"), + }, + }), + }, + ) + defer cleanupSource() + require.NotEmpty(t, sourceRepoSha) + + // Create pull mirror + mirror := im.createPullMirror(t, sourceRepo, mc.privateSource) + verifyPullMirrorContents(t, mirror, sourceRepoSha) + verifyPullMirrorConfig(t, mirror, sourceRepo, mc.privateSource) + im.verifyMirrorDetails(t, sourceRepo, mirror, mc.privateSource) + + // Push a change to the source and refresh the mirror + sourceRepoSha = changePullMirrorSource(t, sourceRepo, sourceRepoSha) + im.triggerPullMirror(t, mirror) + waitForPullMirror(t, mirror, sourceRepoSha) + + // Test changing the mirror's authentication method (if available) + if mc.privateSource && im.changePullMirrorCredentials != nil { + sourceRepoSha = changePullMirrorSource(t, sourceRepo, sourceRepoSha) + im.changePullMirrorCredentials(t, sourceRepo, mirror, mc.privateSource) + verifyPullMirrorConfig(t, mirror, sourceRepo, mc.privateSource) + im.verifyMirrorDetails(t, sourceRepo, mirror, mc.privateSource) + im.triggerPullMirror(t, mirror) + waitForPullMirror(t, mirror, sourceRepoSha) + } + + // Test changing the mirror's address (if available) + if im.changePullMirrorAddress != nil { + sourceRepo = renamePullMirrorSourceRepo(t, sourceRepo) + sourceRepoSha = changePullMirrorSource(t, sourceRepo, sourceRepoSha) + im.changePullMirrorAddress(t, sourceRepo, mirror, mc.privateSource) + verifyPullMirrorConfig(t, mirror, sourceRepo, mc.privateSource) + im.verifyMirrorDetails(t, sourceRepo, mirror, mc.privateSource) + im.triggerPullMirror(t, mirror) + waitForPullMirror(t, mirror, sourceRepoSha) + } + }) + } + } + }) + + t.Run("migrate from repo config credentials", func(t *testing.T) { + 
defer tests.PrintCurrentTest(t)() + + user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) + + mirrorRepo, _, cleanupMirror := tests.CreateDeclarativeRepoWithOptions(t, user2, + tests.DeclarativeRepoOptions{}, + ) + defer cleanupMirror() + + // Write to the repo a config file that would have plausibly existed before EncryptedRemoteAddress was + // introduced: + repoPath := mirrorRepo.RepoPath() + err := os.WriteFile(path.Join(repoPath, "config"), []byte(` +[core] + repositoryformatversion = 0 + filemode = true + bare = true +[remote "origin"] + url = https://user:password@example.com/org/repo.git + tagOpt = --no-tags + fetch = +refs/*:refs/* + mirror = true + fetch = +refs/tags/*:refs/tags/* +`), 0o644) + require.NoError(t, err) + + // Create a Mirror record without an EncryptedRemoteAddress: + mirror := &repo_model.Mirror{ + RepoID: mirrorRepo.ID, + Interval: 8 * time.Hour, + EnablePrune: true, + } + _, err = db.GetEngine(t.Context()).Insert(mirror) + require.NoError(t, err) + require.Nil(t, mirror.EncryptedRemoteAddress) + + remoteURL, err := mirror_service.DecryptOrRecoverRemoteAddress(t.Context(), mirror) + require.NoError(t, err) + assert.Equal(t, "https://user:password@example.com/org/repo.git", remoteURL.URL.String()) + + // EncryptedRemoteAddress should now be populated from the recovery: + assert.NotNil(t, mirror.EncryptedRemoteAddress) + maybeDecryptedURL, err := mirror.DecryptRemoteAddress() + require.NoError(t, err) + has, decryptedURL := maybeDecryptedURL.Get() + require.True(t, has) + assert.Equal(t, "https://user:password@example.com/org/repo.git", decryptedURL) + + // SanitizedRemoteAddress can be fetched: + maybeSanitizedURL, err := mirror.SanitizedRemoteAddress() + require.NoError(t, err) + has, sanitizedURL := maybeSanitizedURL.Get() + require.True(t, has) + assert.Equal(t, "https://user@example.com/org/repo.git", sanitizedURL) + + // Database record is updated in the database: + refetchMirror := 
unittest.AssertExistsAndLoadBean(t, &repo_model.Mirror{RepoID: mirrorRepo.ID}) + assert.Equal(t, mirror.EncryptedRemoteAddress, refetchMirror.EncryptedRemoteAddress) + + // Config file is rewritten: + config, err := os.ReadFile(path.Join(repoPath, "config")) + require.NoError(t, err) + assert.Equal(t, ` +[core] + repositoryformatversion = 0 + filemode = true + bare = true +[remote "origin"] + url = https://user@example.com/org/repo.git + tagOpt = --no-tags + fetch = +refs/*:refs/* + mirror = true + fetch = +refs/tags/*:refs/tags/* +`, string(config)) + }) +} + +func createPullMirrorViaWeb(t *testing.T, sourceRepo *repo_model.Repository, authenticate bool) string { + session := loginUser(t, "user2") + + mirrorName := fmt.Sprintf("pullmirror-%s", sourceRepo.Name) + form := &forms.MigrateRepoForm{ + CloneAddr: sourceRepo.CloneLink().HTTPS, + Service: structs.PlainGitService, + UID: 2, + RepoName: mirrorName, + Mirror: true, + } + if authenticate { + form.AuthUsername = "user2" + form.AuthPassword = getTokenForLoggedInUser(t, session, auth.AccessTokenScopeReadRepository) + } + + resp := session.MakeRequest(t, + NewRequestWithJSON(t, "POST", "/repo/migrate", form), + http.StatusSeeOther) + location := resp.Header().Get("Location") + assert.Equal(t, fmt.Sprintf("/user2/pullmirror-%s", sourceRepo.Name), location) + + var lastBody string + if !assert.Eventuallyf(t, + func() bool { + resp := session.MakeRequest(t, + NewRequest(t, "GET", location), + http.StatusOK) + body := resp.Body.String() + lastBody = body + // Looking for the repo page to be fully populated indicating that the migration is complete: + // Check that the first commit message is present: + if !strings.Contains(body, "Initial commit") { + return false + } + // Check that the fork button is present: + if !strings.Contains(body, fmt.Sprintf("/user2/%s/fork", mirrorName)) { + return false + } + return true + }, + 15*time.Second, 1*time.Second, + "expected migration to complete and repo page to render") { + 
t.Logf("last received page body: %s", lastBody) + } + + return mirrorName +} + +func createPullMirrorViaAPI(t *testing.T, sourceRepo *repo_model.Repository, authenticate bool) string { + session := loginUser(t, "user2") + apiToken := getTokenForLoggedInUser(t, session, auth.AccessTokenScopeWriteRepository) + + mirrorName := fmt.Sprintf("pullmirror-%s", sourceRepo.Name) + form := &structs.MigrateRepoOptions{ + CloneAddr: sourceRepo.CloneLink().HTTPS, + Service: "git", + RepoOwner: "user2", + RepoName: mirrorName, + Mirror: true, + } + if authenticate { + form.AuthUsername = "user2" + form.AuthPassword = getTokenForLoggedInUser(t, session, auth.AccessTokenScopeReadRepository) + } + + resp := session.MakeRequest(t, + NewRequestWithJSON(t, "POST", "/api/v1/repos/migrate", form).AddTokenAuth(apiToken), + http.StatusCreated) + var repo structs.Repository + DecodeJSON(t, resp, &repo) + assert.NotNil(t, repo) + assert.True(t, repo.Mirror) + assert.False(t, repo.Empty) + + return mirrorName +} + +func verifyPullMirrorViaWeb(t *testing.T, sourceRepo *repo_model.Repository, mirrorName string, authenticate bool) { + session := loginUser(t, "user2") + resp := session.MakeRequest(t, + NewRequestf(t, "GET", "/user2/%s/settings", mirrorName), + http.StatusOK) + htmlDoc := NewHTMLParser(t, resp.Body) + htmlDoc.AssertAttrEqual(t, "#mirror_address", "value", sourceRepo.CloneLink().HTTPS) + if authenticate { + htmlDoc.AssertAttrEqual(t, "#mirror_username", "value", "user2") + htmlDoc.AssertAttrEqual(t, "#mirror_password", "value", "") + htmlDoc.AssertAttrEqual(t, "#mirror_password", "placeholder", "(Unchanged)") + } else { + htmlDoc.AssertAttrEqual(t, "#mirror_username", "value", "") + htmlDoc.AssertAttrEqual(t, "#mirror_password", "value", "") + htmlDoc.AssertAttrEqual(t, "#mirror_password", "placeholder", "(Unset)") + } + + resp = session.MakeRequest(t, + NewRequestf(t, "GET", "/user2/%s", mirrorName), + http.StatusOK) + htmlDoc = NewHTMLParser(t, resp.Body) + 
htmlDoc.AssertElementPredicate(t, ".fork-flag", func(selection *goquery.Selection) bool { + text := strings.TrimSpace(selection.Text()) + assert.Contains(t, text, "mirror of") + assert.Contains(t, text, sourceRepo.CloneLink().HTTPS) + return true + }) +} + +func triggerPullMirrorViaWeb(t *testing.T, mirrorName string) { + session := loginUser(t, "user2") + + resp := session.MakeRequest(t, + NewRequestWithValues(t, "POST", fmt.Sprintf("/user2/%s/settings", mirrorName), map[string]string{"action": "mirror-sync"}), + http.StatusSeeOther) + location := resp.Header().Get("Location") + assert.Equal(t, fmt.Sprintf("/user2/%s/settings", mirrorName), location) +} + +func triggerPullMirrorViaAPI(t *testing.T, mirrorName string) { + session := loginUser(t, "user2") + apiToken := getTokenForLoggedInUser(t, session, auth.AccessTokenScopeWriteRepository) + + // Trigger sync... + session.MakeRequest(t, + NewRequestf(t, "POST", "/api/v1/repos/user2/%s/mirror-sync", mirrorName).AddTokenAuth(apiToken), + http.StatusOK) +} + +func changePullMirrorCredentialsViaWeb(t *testing.T, sourceRepo *repo_model.Repository, mirrorName string, authenticate bool) { + session := loginUser(t, "user2") + + form := map[string]string{ + "action": "mirror", + "enable_prune": "on", + "interval": "8h0m0s", + "mirror_address": sourceRepo.CloneLink().HTTPS, + } + if authenticate { + form["mirror_username"] = "user2" + form["mirror_password"] = getTokenForLoggedInUser(t, session, auth.AccessTokenScopeReadRepository) + } + + resp := session.MakeRequest(t, + NewRequestWithValues(t, "POST", fmt.Sprintf("/user2/%s/settings", mirrorName), form), + http.StatusSeeOther) + location := resp.Header().Get("Location") + assert.Equal(t, fmt.Sprintf("/user2/%s/settings", mirrorName), location) +} + +func verifyPullMirrorContents(t *testing.T, mirrorName, expectedSha string) { + session := loginUser(t, "user2") + apiToken := getTokenForLoggedInUser(t, session, auth.AccessTokenScopeReadRepository) + resp := 
session.MakeRequest(t, + NewRequest(t, "GET", fmt.Sprintf("/api/v1/repos/user2/%s/commits?sha=main&limit=1", mirrorName)).AddTokenAuth(apiToken), + http.StatusOK) + var commits []*structs.Commit + DecodeJSON(t, resp, &commits) + require.Len(t, commits, 1) + assert.Equal(t, expectedSha, commits[0].SHA) +} + +func waitForPullMirror(t *testing.T, mirrorName, expectedSha string) { + session := loginUser(t, "user2") + apiToken := getTokenForLoggedInUser(t, session, auth.AccessTokenScopeReadRepository) + + var commits []*structs.Commit + if !assert.Eventually(t, + func() bool { + resp := session.MakeRequest(t, + NewRequest(t, "GET", fmt.Sprintf("/api/v1/repos/user2/%s/commits?sha=main&limit=1", mirrorName)).AddTokenAuth(apiToken), + http.StatusOK) + DecodeJSON(t, resp, &commits) + require.Len(t, commits, 1) + return commits[0].SHA == expectedSha + }, + 15*time.Second, 1*time.Second) { + t.Logf("sync was supposed to bring repo to commit %s, but observed commits = %#v", expectedSha, commits) + } +} + +func getGitConfig(t *testing.T, configFile, configPath string) string { + stdout, stderr, err := process.GetManager().Exec("getGitConfig", "git", "config", "get", "--file", configFile, configPath) + require.NoError(t, err, "fetch config %s failed: git stderr: %s", configPath, stderr) + return strings.TrimSpace(stdout) +} + +func verifyPullMirrorConfig(t *testing.T, mirrorName string, sourceRepo *repo_model.Repository, authenticate bool) { + mirrorRepo, err := repo_model.GetRepositoryByOwnerAndName(t.Context(), "user2", mirrorName) require.NoError(t, err) - require.NoError(t, release_service.CreateRelease(gitRepo, &repo_model.Release{ - RepoID: repo.ID, - Repo: repo, - PublisherID: user.ID, - Publisher: user, - TagName: "v0.2", - Target: "master", - Title: "v0.2 is released", - Note: "v0.2 is released", - IsDraft: false, - IsPrerelease: false, - IsTag: true, - }, "", []*release_service.AttachmentChange{})) + repoPath := mirrorRepo.RepoPath() + configPath := path.Join(repoPath, 
"config") - _, err = repo_model.GetMirrorByRepoID(ctx, mirror.ID) + expectedURL := sourceRepo.CloneLink().HTTPS + if authenticate { + expectedURL = strings.Replace(expectedURL, "http://", "http://user2@", 1) + } + assert.Equal(t, expectedURL, getGitConfig(t, configPath, "remote.origin.url")) + assert.Equal(t, "true", getGitConfig(t, configPath, "remote.origin.mirror")) + assert.Equal(t, "+refs/tags/*:refs/tags/*", getGitConfig(t, configPath, "remote.origin.fetch")) +} + +func changePullMirrorSource(t *testing.T, sourceRepo *repo_model.Repository, sourceRepoSha string) string { + user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) + resp, err := files_service.ChangeRepoFiles(git.DefaultContext, sourceRepo, user2, + &files_service.ChangeRepoFilesOptions{ + Files: []*files_service.ChangeRepoFile{ + { + Operation: "update", + TreePath: "docs.md", + ContentReader: strings.NewReader(uuid.NewString()), + }, + }, + Message: "add files", + OldBranch: "main", + NewBranch: "main", + Author: &files_service.IdentityOptions{ + Name: user2.Name, + Email: user2.Email, + }, + Committer: &files_service.IdentityOptions{ + Name: user2.Name, + Email: user2.Email, + }, + Dates: &files_service.CommitDateOptions{ + Author: time.Now(), + Committer: time.Now(), + }, + LastCommitID: sourceRepoSha, + }) require.NoError(t, err) + assert.NotEmpty(t, resp) + return resp.Commit.SHA +} - ok := mirror_service.SyncPullMirror(ctx, mirror.ID) - assert.True(t, ok) +func renamePullMirrorSourceRepo(t *testing.T, sourceRepo *repo_model.Repository) *repo_model.Repository { + session := loginUser(t, "user2") + apiToken := getTokenForLoggedInUser(t, session, auth.AccessTokenScopeWriteRepository) - count, err := db.Count[repo_model.Release](db.DefaultContext, findOptions) - require.NoError(t, err) - assert.Equal(t, initCount+1, count) + newName := uuid.NewString() + session.MakeRequest(t, + NewRequestWithJSON(t, "PATCH", fmt.Sprintf("/api/v1/repos/user2/%s", sourceRepo.Name), + 
&structs.EditRepoOption{ + Name: &newName, + }).AddTokenAuth(apiToken), + http.StatusOK) - release, err := repo_model.GetRelease(db.DefaultContext, repo.ID, "v0.2") - require.NoError(t, err) - require.NoError(t, release_service.DeleteReleaseByID(ctx, repo, release, user, true)) - - ok = mirror_service.SyncPullMirror(ctx, mirror.ID) - assert.True(t, ok) - - count, err = db.Count[repo_model.Release](db.DefaultContext, findOptions) - require.NoError(t, err) - assert.Equal(t, initCount, count) + newRepo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: sourceRepo.ID}) + assert.Equal(t, newRepo.Name, newName) + assert.NotEqual(t, newRepo.CloneLink().HTTPS, sourceRepo.CloneLink().HTTPS) // should have changed to new name + return newRepo } func TestPullMirrorRedactCredentials(t *testing.T) { From 4ca6b703af64cc9bdb457e827a5d16f683c2b052 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Sat, 4 Apr 2026 19:16:35 +0200 Subject: [PATCH 23/82] [v15.0/forgejo] feat: support `timezone` in scheduled workflows (#11986) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11851 GitHub recently added the ability to [specify a time zone for scheduled workflows](https://docs.github.com/en/actions/reference/workflows-and-actions/workflow-syntax#onschedule), thereby making it possible to run scheduled workflows at a certain local time, no matter whether daylight saving time (DST) is currently active or not. Example copied from GitHub's documentation: ```yaml on: schedule: - cron: '30 5 * * 1-5' timezone: "America/New_York" ``` The workflow would run at 05:30 each morning in the America/New_York timezone every Monday through Friday. `timezone` accepts IANA time zone names. If `timezone` is absent, `Etc/UTC` is used. GitHub runs workflows that were scheduled during DST jumps forward, for example, between 2 o'clock and 3 o'clock, directly after the clock jumped forward. In this case, that would be 3 o'clock. 
Forgejo already supports time zones by prepending cron schedules with `TZ=` or `CRON_TZ=`:

```yaml
on:
  schedule:
    - cron: 'CRON_TZ=America/New_York 30 5 * * 1-5'
```

However, that capability is not documented. Workflows that are scheduled to run during DST changes are skipped when the clock jumps forward and run twice when it jumps backward.

This two-part PR adds support for `timezone` to improve compatibility with GitHub. `TZ` and `CRON_TZ` continue working. When both `timezone` and `TZ` or `CRON_TZ` are present, `timezone` takes precedence. When neither `timezone` nor `TZ` nor `CRON_TZ` is present, `Etc/UTC` is used as before.

Because `TZ` and `CRON_TZ` were already supported by Forgejo before GitHub introduced `timezone`, `timezone` behaves during DST changes like previous versions of Forgejo, thereby deviating from GitHub: workflows that are scheduled to run during a DST change are skipped when the clock jumps forward and run twice when it jumps backward. However, it is generally recommended not to schedule workflows at the time of day when DST changes occur.

This part of the PR integrates the [workflow validation and parsing of the `timezone` field](https://code.forgejo.org/forgejo/runner/pulls/1454) supplied by Forgejo Runner.

## Checklist

The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md).

You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org).

### Tests for Go changes (can be removed for JavaScript changes)

- I added test coverage for Go changes...
- [x] in their respective `*_test.go` for unit tests. - [x] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [x] `make pr-go` before pushing ### Tests for JavaScript changes (can be removed for Go changes) - I added test coverage for JavaScript changes... - [ ] in `web_src/js/*.test.js` if it can be unit tested. - [ ] in `tests/e2e/*.test.e2e.js` if it requires interactions with a live Forgejo server (see also the [developer guide for JavaScript testing](https://codeberg.org/forgejo/forgejo/src/branch/forgejo/tests/e2e/README.md#end-to-end-tests)). ### Documentation - [x] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - https://codeberg.org/forgejo/docs/pulls/1853 - [ ] I did not document these changes and I do not expect someone else to do it. ### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. *The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. 
## Release notes - Features - [PR](https://codeberg.org/forgejo/forgejo/pulls/11851): support `timezone` in scheduled workflows Co-authored-by: Andreas Ahlenstorf Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11986 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- go.mod | 2 +- go.sum | 4 +- models/actions/schedule.go | 22 +--- models/actions/schedule_spec.go | 44 ++++++-- models/actions/schedule_spec_test.go | 79 ++++++++++++-- models/actions/schedule_test.go | 102 ++++++++++++++++++ .../v15c_add_schedule_spec_time_zones.go | 31 ++++++ .../action_schedule.yml | 4 - services/actions/notifier_helper.go | 14 ++- services/actions/schedule_tasks_test.go | 30 ++++-- tests/integration/actions_trigger_test.go | 49 ++++++++- 11 files changed, 323 insertions(+), 58 deletions(-) create mode 100644 models/actions/schedule_test.go create mode 100644 models/forgejo_migrations/v15c_add_schedule_spec_time_zones.go diff --git a/go.mod b/go.mod index 9f751ccbb1..b37a7eb424 100644 --- a/go.mod +++ b/go.mod @@ -11,7 +11,7 @@ require ( code.forgejo.org/forgejo/go-rpmutils v1.0.0 code.forgejo.org/forgejo/levelqueue v1.0.0 code.forgejo.org/forgejo/reply v1.0.2 - code.forgejo.org/forgejo/runner/v12 v12.7.3 + code.forgejo.org/forgejo/runner/v12 v12.8.0 code.forgejo.org/go-chi/binding v1.0.1 code.forgejo.org/go-chi/cache v1.0.1 code.forgejo.org/go-chi/captcha v1.0.2 diff --git a/go.sum b/go.sum index 3e4ee526b9..76e13223a8 100644 --- a/go.sum +++ b/go.sum @@ -30,8 +30,8 @@ code.forgejo.org/forgejo/levelqueue v1.0.0 h1:9krYpU6BM+j/1Ntj6m+VCAIu0UNnne1/Uf code.forgejo.org/forgejo/levelqueue v1.0.0/go.mod h1:fmG6zhVuqim2rxSFOoasgXO8V2W/k9U31VVYqLIRLhQ= code.forgejo.org/forgejo/reply v1.0.2 h1:dMhQCHV6/O3L5CLWNTol+dNzDAuyCK88z4J/lCdgFuQ= code.forgejo.org/forgejo/reply v1.0.2/go.mod h1:RyZUfzQLc+fuLIGjTSQWDAJWPiL4WtKXB/FifT5fM7U= -code.forgejo.org/forgejo/runner/v12 v12.7.3 h1:+thSawVfLeAZaWB6sYeUPvLj4lxYjCIDt/ktvkfX5Rs= 
-code.forgejo.org/forgejo/runner/v12 v12.7.3/go.mod h1:OO+Vy9Dww6WNV7GG/6VUWo/0WwXY+ASGlINmAfEA9Ws= +code.forgejo.org/forgejo/runner/v12 v12.8.0 h1:/MqOseYbsGaQ2qzepaZr3VyuqpESvSP/ZnC2aKfmU3g= +code.forgejo.org/forgejo/runner/v12 v12.8.0/go.mod h1:sgDAYfO4NJI1kUzGuD7klHuoFLQzWmZPw0erg7QlbJU= code.forgejo.org/forgejo/ssh v0.0.0-20241211213324-5fc306ca0616 h1:kEZL84+02jY9RxXM4zHBWZ3Fml0B09cmP1LGkDsCfIA= code.forgejo.org/forgejo/ssh v0.0.0-20241211213324-5fc306ca0616/go.mod h1:zpHEXBstFnQYtGnB8k8kQLol82umzn/2/snG7alWVD8= code.forgejo.org/go-chi/binding v1.0.1 h1:coKNI+X1NzRN7X85LlrpvBRqk0TXpJ+ja28vusQWEuY= diff --git a/models/actions/schedule.go b/models/actions/schedule.go index 05c9f15d38..8c410b9d38 100644 --- a/models/actions/schedule.go +++ b/models/actions/schedule.go @@ -5,7 +5,6 @@ package actions import ( "context" - "time" "forgejo.org/models/db" repo_model "forgejo.org/models/repo" @@ -21,7 +20,7 @@ import ( type ActionSchedule struct { ID int64 Title string - Specs []string + Specs []*ActionScheduleSpec `xorm:"-"` RepoID int64 `xorm:"index"` Repo *repo_model.Repository `xorm:"-"` OwnerID int64 `xorm:"index"` @@ -73,25 +72,12 @@ func CreateScheduleTask(ctx context.Context, rows []*ActionSchedule) error { return err } - // Loop through each schedule spec and create a new spec row - now := time.Now() - for _, spec := range row.Specs { - specRow := &ActionScheduleSpec{ - RepoID: row.RepoID, - ScheduleID: row.ID, - Spec: spec, - } - // Parse the spec and check for errors - schedule, err := specRow.Parse() - if err != nil { - continue // skip to the next spec if there's an error - } - - specRow.Next = timeutil.TimeStamp(schedule.Next(now).Unix()) + spec.ScheduleID = row.ID + spec.RepoID = row.RepoID // Insert the new schedule spec row - if err = db.Insert(ctx, specRow); err != nil { + if err = db.Insert(ctx, spec); err != nil { return err } } diff --git a/models/actions/schedule_spec.go b/models/actions/schedule_spec.go index 83bdceb850..bcaee8bd6f 100644 --- 
a/models/actions/schedule_spec.go +++ b/models/actions/schedule_spec.go @@ -10,6 +10,7 @@ import ( "forgejo.org/models/db" repo_model "forgejo.org/models/repo" + "forgejo.org/modules/optional" "forgejo.org/modules/timeutil" "github.com/robfig/cron/v3" @@ -27,13 +28,28 @@ type ActionScheduleSpec struct { // started or this entry's schedule is unsatisfiable Next timeutil.TimeStamp `xorm:"index"` // Prev is the last time this job was run, or the zero time if never. - Prev timeutil.TimeStamp - Spec string + Prev timeutil.TimeStamp + Spec string + TimeZone optional.Option[string] Created timeutil.TimeStamp `xorm:"created"` Updated timeutil.TimeStamp `xorm:"updated"` } +func NewActionScheduleSpec(cron string, tz optional.Option[string], referenceTime time.Time) (*ActionScheduleSpec, error) { + spec := &ActionScheduleSpec{ + Spec: cron, + TimeZone: tz, + } + cronSchedule, err := spec.Parse() + if err != nil { + return nil, err + } + + spec.Next = timeutil.TimeStamp(cronSchedule.Next(referenceTime).Unix()) + return spec, nil +} + // Parse parses the spec and returns a cron.Schedule // Unlike the default cron parser, Parse uses UTC timezone as the default if none is specified. func (s *ActionScheduleSpec) Parse() (cron.Schedule, error) { @@ -43,19 +59,29 @@ func (s *ActionScheduleSpec) Parse() (cron.Schedule, error) { return nil, err } - // If the spec has specified a timezone, use it - if strings.HasPrefix(s.Spec, "TZ=") || strings.HasPrefix(s.Spec, "CRON_TZ=") { - return schedule, nil - } - specSchedule, ok := schedule.(*cron.SpecSchedule) // If it's not a spec schedule, like "@every 5m", timezone is not relevant if !ok { return schedule, nil } - // Set the timezone to UTC - specSchedule.Location = time.UTC + // If `timezone` is not defined in the workflow, but the spec includes a timezone, use it. 
+ if !s.TimeZone.Has() && (strings.HasPrefix(s.Spec, "TZ=") || strings.HasPrefix(s.Spec, "CRON_TZ=")) { + return schedule, nil + } + + var location *time.Location + if present, tz := s.TimeZone.Get(); present { + location, err = time.LoadLocation(tz) + if err != nil { + return nil, err + } + } else { + // UTC is the default time zone. + location = time.UTC + } + + specSchedule.Location = location return specSchedule, nil } diff --git a/models/actions/schedule_spec_test.go b/models/actions/schedule_spec_test.go index 0c26fce4b2..eb3a83d0a6 100644 --- a/models/actions/schedule_spec_test.go +++ b/models/actions/schedule_spec_test.go @@ -7,6 +7,8 @@ import ( "testing" "time" + "forgejo.org/modules/optional" + "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) @@ -21,50 +23,105 @@ func TestActionScheduleSpec_Parse(t *testing.T) { }() time.Local = tz - now, err := time.Parse(time.RFC3339, "2024-07-31T15:47:55+08:00") - require.NoError(t, err) - tests := []struct { - name string - spec string - want string - wantErr assert.ErrorAssertionFunc + name string + refTime time.Time + spec string + timeZone string + want string + wantErr assert.ErrorAssertionFunc }{ { name: "regular", + refTime: time.Date(2024, 7, 31, 15, 47, 55, 0, time.Local), spec: "0 10 * * *", want: "2024-07-31T10:00:00Z", wantErr: assert.NoError, }, { name: "invalid", + refTime: time.Date(2024, 7, 31, 15, 47, 55, 0, time.Local), spec: "0 10 * *", want: "", wantErr: assert.Error, }, { - name: "with timezone", + name: "with TZ in cron schedule", + refTime: time.Date(2024, 7, 31, 15, 47, 55, 0, time.Local), spec: "TZ=America/New_York 0 10 * * *", want: "2024-07-31T14:00:00Z", wantErr: assert.NoError, }, { - name: "timezone irrelevant", + name: "with CRON_TZ in cron schedule", + refTime: time.Date(2024, 7, 31, 15, 47, 55, 0, time.Local), + spec: "CRON_TZ=America/New_York 0 10 * * *", + want: "2024-07-31T14:00:00Z", + wantErr: assert.NoError, + }, + { + name: "with separate time zone", 
+ refTime: time.Date(2024, 7, 31, 15, 47, 55, 0, time.Local), + spec: "0 10 * * *", + timeZone: "America/New_York", + want: "2024-07-31T14:00:00Z", + wantErr: assert.NoError, + }, + { + name: "separate time zone takes precedence over inlined time zone", + refTime: time.Date(2024, 7, 31, 15, 47, 55, 0, time.Local), + spec: "CRON_TZ=Europe/Berlin 0 10 * * *", + timeZone: "America/New_York", + want: "2024-07-31T14:00:00Z", + wantErr: assert.NoError, + }, + { + name: "time zone irrelevant", + refTime: time.Date(2024, 7, 31, 15, 47, 55, 0, time.Local), spec: "@every 5m", want: "2024-07-31T07:52:55Z", wantErr: assert.NoError, }, + { + // The various cron implementations handle the DST jump forwards differently. The most popular approaches + // are (a) scheduling all jobs at 3 o'clock that were supposed to run between 2 and 3 o'clock, or (b) + // skipping the execution on that day because any time between 2 and 3 o'clock never happened. Forgejo uses + // option B because the code it inherited already did that and was exposed to users. + name: "skips execution during DST jump forwards", + refTime: time.Date(2025, 3, 30, 1, 5, 0, 0, time.UTC), + spec: "10 2 * * *", // The clock jumps at 2 o'clock to 3 o'clock. + timeZone: "Europe/Berlin", + want: "2025-03-31T00:10:00Z", + wantErr: assert.NoError, + }, + { + name: "executes a first time before DST jump backwards", + refTime: time.Date(2025, 10, 26, 0, 5, 0, 0, time.UTC), + spec: "10 2 * * *", // The clock jumps at 3 o'clock to 2 o'clock. + timeZone: "Europe/Berlin", + want: "2025-10-26T00:10:00Z", + wantErr: assert.NoError, + }, + { + name: "executes a second time after DST jump backwards", + refTime: time.Date(2025, 10, 26, 1, 5, 0, 0, time.UTC), + spec: "10 2 * * *", // The clock jumps at 3 o'clock to 2 o'clock. 
+ timeZone: "Europe/Berlin", + want: "2025-10-26T01:10:00Z", + wantErr: assert.NoError, + }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { s := &ActionScheduleSpec{ - Spec: tt.spec, + Spec: tt.spec, + TimeZone: optional.FromNonDefault(tt.timeZone), } got, err := s.Parse() tt.wantErr(t, err) if err == nil { - assert.Equal(t, tt.want, got.Next(now).UTC().Format(time.RFC3339)) + assert.Equal(t, tt.want, got.Next(tt.refTime).UTC().Format(time.RFC3339)) } }) } diff --git a/models/actions/schedule_test.go b/models/actions/schedule_test.go new file mode 100644 index 0000000000..016185cb42 --- /dev/null +++ b/models/actions/schedule_test.go @@ -0,0 +1,102 @@ +// Copyright 2026 The Forgejo Authors. All rights reserved. +// SPDX-License-Identifier: GPL-3.0-or-later + +package actions + +import ( + "testing" + "time" + + "forgejo.org/models/db" + "forgejo.org/models/repo" + "forgejo.org/models/unittest" + "forgejo.org/models/user" + "forgejo.org/modules/optional" + "forgejo.org/modules/timeutil" + "forgejo.org/modules/webhook" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestScheduleCreateScheduleTask(t *testing.T) { + require.NoError(t, unittest.PrepareTestDatabase()) + + user2 := unittest.AssertExistsAndLoadBean(t, &user.User{ID: 2}) + repo62 := unittest.AssertExistsAndLoadBean(t, &repo.Repository{ID: 62, Name: "test_workflows", OwnerID: user2.ID}) + + content := ` +on: + push: + schedule: + - cron: "2 13 * * *" + - cron: "03 13 * * *" + timezone: Europe/Paris +jobs: + test: + runs-on: debian + steps: + - run: | + echo "OK" +` + + referenceTime := time.Date(2026, 3, 27, 17, 41, 21, 0, time.UTC) + + specWithoutTZ, err := NewActionScheduleSpec("2 13 * * *", optional.None[string](), referenceTime) + require.NoError(t, err) + + specWithTZ, err := NewActionScheduleSpec("3 13 * * *", optional.Some("Europe/Paris"), referenceTime) + require.NoError(t, err) + + schedule := &ActionSchedule{ + Title: 
".forgejo/workflows/test.yaml", + Specs: []*ActionScheduleSpec{specWithoutTZ, specWithTZ}, + RepoID: repo62.ID, + OwnerID: user2.ID, + WorkflowID: "test.yaml", + WorkflowDirectory: ".forgejo/workflows", + TriggerUserID: -2, + Ref: "main", + CommitSHA: "6af834a5bc97c1a337eb3a21d26903c5cdceca0c", + Event: webhook.HookEventPush, + EventPayload: "{\"action\":\"schedule\"}", + Content: []byte(content), + } + + err = CreateScheduleTask(t.Context(), []*ActionSchedule{schedule}) + require.NoError(t, err) + + schedules, err := db.Find[ActionSchedule](t.Context(), FindScheduleOptions{OwnerID: user2.ID, RepoID: repo62.ID}) + require.NoError(t, err) + require.Len(t, schedules, 1) + + assert.NotZero(t, schedules[0].ID) + assert.Equal(t, ".forgejo/workflows/test.yaml", schedules[0].Title) + assert.Equal(t, "test.yaml", schedules[0].WorkflowID) + assert.Equal(t, ".forgejo/workflows", schedules[0].WorkflowDirectory) + assert.Equal(t, int64(-2), schedules[0].TriggerUserID) + assert.Equal(t, "main", schedules[0].Ref) + assert.Equal(t, "6af834a5bc97c1a337eb3a21d26903c5cdceca0c", schedules[0].CommitSHA) + assert.Equal(t, webhook.HookEventPush, schedules[0].Event) + assert.JSONEq(t, "{\"action\":\"schedule\"}", schedules[0].EventPayload) + assert.Equal(t, []byte(content), schedules[0].Content) + + specs, total, err := FindSpecs(t.Context(), FindSpecOptions{RepoID: repo62.ID}) + require.NoError(t, err) + + assert.Equal(t, int64(2), total) + + assert.NotZero(t, specs[0].ID) + assert.Equal(t, schedules[0].ID, specs[0].ScheduleID) + assert.Equal(t, timeutil.TimeStamp(1774699380), specs[0].Next) + assert.Equal(t, "3 13 * * *", specs[0].Spec) + assert.Equal(t, optional.Some("Europe/Paris"), specs[0].TimeZone) + assert.Zero(t, specs[0].Prev) + + assert.NotZero(t, specs[1].ID) + assert.Equal(t, schedules[0].ID, specs[1].ScheduleID) + assert.Equal(t, timeutil.TimeStamp(1774702920), specs[1].Next) + assert.Equal(t, "2 13 * * *", specs[1].Spec) + assert.Equal(t, optional.None[string](), 
specs[1].TimeZone) + assert.Zero(t, specs[1].Prev) +} diff --git a/models/forgejo_migrations/v15c_add_schedule_spec_time_zones.go b/models/forgejo_migrations/v15c_add_schedule_spec_time_zones.go new file mode 100644 index 0000000000..d72d585725 --- /dev/null +++ b/models/forgejo_migrations/v15c_add_schedule_spec_time_zones.go @@ -0,0 +1,31 @@ +// Copyright 2026 The Forgejo Authors. All rights reserved. +// SPDX-License-Identifier: GPL-3.0-or-later + +package forgejo_migrations + +import ( + "forgejo.org/modules/optional" + + "xorm.io/xorm" +) + +func init() { + registerMigration(&Migration{ + Description: "add time zone support to action_schedule_spec", + Upgrade: addActionScheduleSpecTimeZone, + }) +} + +func addActionScheduleSpecTimeZone(x *xorm.Engine) error { + type ActionScheduleSpec struct { + TimeZone optional.Option[string] + } + + _, err := x.SyncWithOptions(xorm.SyncOptions{IgnoreDropIndices: true}, new(ActionScheduleSpec)) + if err != nil { + return err + } + + _, err = x.Exec("ALTER TABLE action_schedule DROP COLUMN `specs`") + return err +} diff --git a/services/actions/TestServiceActions_startTask/action_schedule.yml b/services/actions/TestServiceActions_startTask/action_schedule.yml index d0e7234475..8102e3f9e3 100644 --- a/services/actions/TestServiceActions_startTask/action_schedule.yml +++ b/services/actions/TestServiceActions_startTask/action_schedule.yml @@ -2,8 +2,6 @@ - id: 1 title: schedule_title1 - specs: - - '* * * * *' repo_id: 4 owner_id: 2 workflow_id: 'workflow1.yml' @@ -23,8 +21,6 @@ - id: 2 title: schedule_title2 - specs: - - '* * * * *' repo_id: 4 owner_id: 2 workflow_id: 'workflow2.yml' diff --git a/services/actions/notifier_helper.go b/services/actions/notifier_helper.go index c6af9b5b82..5e048a83ad 100644 --- a/services/actions/notifier_helper.go +++ b/services/actions/notifier_helper.go @@ -10,6 +10,7 @@ import ( "fmt" "slices" "strings" + "time" actions_model "forgejo.org/models/actions" "forgejo.org/models/db" @@ -24,6 +25,7 @@ 
import ( "forgejo.org/modules/gitrepo" "forgejo.org/modules/json" "forgejo.org/modules/log" + "forgejo.org/modules/optional" "forgejo.org/modules/setting" api "forgejo.org/modules/structs" "forgejo.org/modules/util" @@ -574,6 +576,16 @@ func handleSchedules( continue } + now := time.Now() + specs := make([]*actions_model.ActionScheduleSpec, 0, len(schedules)) + for _, schedule := range schedules { + scheduleSpec, err := actions_model.NewActionScheduleSpec(schedule.Cron, optional.FromNonDefault(schedule.TimeZone), now) + if err != nil { + return err + } + specs = append(specs, scheduleSpec) + } + title := workflow.Name if len(title) < 1 { title = dwf.GetWorkflowPath() @@ -590,7 +602,7 @@ func handleSchedules( CommitSHA: commit.ID.String(), Event: input.Event, EventPayload: string(p), - Specs: schedules, + Specs: specs, Content: dwf.Content, } crons = append(crons, run) diff --git a/services/actions/schedule_tasks_test.go b/services/actions/schedule_tasks_test.go index 9bf964fd90..57ee6b955a 100644 --- a/services/actions/schedule_tasks_test.go +++ b/services/actions/schedule_tasks_test.go @@ -6,12 +6,14 @@ package actions import ( "context" "testing" + "time" actions_model "forgejo.org/models/actions" "forgejo.org/models/db" repo_model "forgejo.org/models/repo" "forgejo.org/models/unit" "forgejo.org/models/unittest" + "forgejo.org/modules/optional" "forgejo.org/modules/test" "forgejo.org/modules/timeutil" webhook_module "forgejo.org/modules/webhook" @@ -29,6 +31,9 @@ func TestServiceActions_startTask(t *testing.T) { // Load fixtures that are corrupted and create one valid scheduled workflow repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 4}) + spec, err := actions_model.NewActionScheduleSpec("* * * * *", optional.None[string](), time.Now()) + require.NoError(t, err) + workflowID := "some.yml" schedules := []*actions_model.ActionSchedule{ { @@ -42,7 +47,7 @@ func TestServiceActions_startTask(t *testing.T) { CommitSHA: "fakeSHA", Event: 
webhook_module.HookEventSchedule, EventPayload: "fakepayload", - Specs: []string{"* * * * *"}, + Specs: []*actions_model.ActionScheduleSpec{spec}, Content: []byte( ` jobs: @@ -57,7 +62,7 @@ jobs: require.Equal(t, 2, unittest.GetCount(t, actions_model.ActionScheduleSpec{})) require.NoError(t, actions_model.CreateScheduleTask(t.Context(), schedules)) require.Equal(t, 3, unittest.GetCount(t, actions_model.ActionScheduleSpec{})) - _, err := db.GetEngine(db.DefaultContext).Exec("UPDATE `action_schedule_spec` SET next = 1") + _, err = db.GetEngine(db.DefaultContext).Exec("UPDATE `action_schedule_spec` SET next = 1") require.NoError(t, err) // After running startTasks an ActionRun row is created for the valid scheduled workflow @@ -291,6 +296,9 @@ func TestServiceActions_DynamicMatrix(t *testing.T) { // Load fixtures that are corrupted and create one valid scheduled workflow repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 4}) + spec, err := actions_model.NewActionScheduleSpec("* * * * *", optional.None[string](), time.Now()) + require.NoError(t, err) + workflowID := "some.yml" schedules := []*actions_model.ActionSchedule{ { @@ -304,7 +312,7 @@ func TestServiceActions_DynamicMatrix(t *testing.T) { CommitSHA: "fakeSHA", Event: webhook_module.HookEventSchedule, EventPayload: "fakepayload", - Specs: []string{"* * * * *"}, + Specs: []*actions_model.ActionScheduleSpec{spec}, Content: []byte( ` jobs: @@ -322,7 +330,7 @@ jobs: require.Equal(t, 2, unittest.GetCount(t, actions_model.ActionScheduleSpec{})) require.NoError(t, actions_model.CreateScheduleTask(t.Context(), schedules)) require.Equal(t, 3, unittest.GetCount(t, actions_model.ActionScheduleSpec{})) - _, err := db.GetEngine(db.DefaultContext).Exec("UPDATE `action_schedule_spec` SET next = 1") + _, err = db.GetEngine(db.DefaultContext).Exec("UPDATE `action_schedule_spec` SET next = 1") require.NoError(t, err) // After running startTasks an ActionRun row is created for the valid scheduled workflow @@ 
-354,6 +362,9 @@ func TestServiceActions_RunsOnNeeds(t *testing.T) { // Load fixtures that are corrupted and create one valid scheduled workflow repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 4}) + spec, err := actions_model.NewActionScheduleSpec("* * * * *", optional.None[string](), time.Now()) + require.NoError(t, err) + workflowID := "some.yml" schedules := []*actions_model.ActionSchedule{ { @@ -366,7 +377,7 @@ func TestServiceActions_RunsOnNeeds(t *testing.T) { CommitSHA: "fakeSHA", Event: webhook_module.HookEventSchedule, EventPayload: "fakepayload", - Specs: []string{"* * * * *"}, + Specs: []*actions_model.ActionScheduleSpec{spec}, Content: []byte( ` jobs: @@ -381,7 +392,7 @@ jobs: require.Equal(t, 2, unittest.GetCount(t, actions_model.ActionScheduleSpec{})) require.NoError(t, actions_model.CreateScheduleTask(t.Context(), schedules)) require.Equal(t, 3, unittest.GetCount(t, actions_model.ActionScheduleSpec{})) - _, err := db.GetEngine(db.DefaultContext).Exec("UPDATE `action_schedule_spec` SET next = 1") + _, err = db.GetEngine(db.DefaultContext).Exec("UPDATE `action_schedule_spec` SET next = 1") require.NoError(t, err) // After running startTasks an ActionRun row is created for the valid scheduled workflow @@ -440,6 +451,9 @@ func TestServiceActions_ExpandReusableWorkflow(t *testing.T) { // Load fixtures that are corrupted and create one valid scheduled workflow repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 4}) + spec, err := actions_model.NewActionScheduleSpec("* * * * *", optional.None[string](), time.Now()) + require.NoError(t, err) + workflowID := "some.yml" schedules := []*actions_model.ActionSchedule{ { @@ -452,7 +466,7 @@ func TestServiceActions_ExpandReusableWorkflow(t *testing.T) { CommitSHA: "fakeSHA", Event: webhook_module.HookEventSchedule, EventPayload: "fakepayload", - Specs: []string{"* * * * *"}, + Specs: []*actions_model.ActionScheduleSpec{spec}, Content: []byte( ` jobs: @@ -467,7 +481,7 @@ 
jobs: require.Equal(t, 2, unittest.GetCount(t, actions_model.ActionScheduleSpec{})) require.NoError(t, actions_model.CreateScheduleTask(t.Context(), schedules)) require.Equal(t, 3, unittest.GetCount(t, actions_model.ActionScheduleSpec{})) - _, err := db.GetEngine(db.DefaultContext).Exec("UPDATE `action_schedule_spec` SET next = 1") + _, err = db.GetEngine(db.DefaultContext).Exec("UPDATE `action_schedule_spec` SET next = 1") require.NoError(t, err) // After running startTasks an ActionRun row is created for the valid scheduled workflow diff --git a/tests/integration/actions_trigger_test.go b/tests/integration/actions_trigger_test.go index 4de8d625ab..914acadf14 100644 --- a/tests/integration/actions_trigger_test.go +++ b/tests/integration/actions_trigger_test.go @@ -7,6 +7,7 @@ import ( "fmt" "net/http" "net/url" + "slices" "strings" "testing" "time" @@ -23,6 +24,7 @@ import ( actions_module "forgejo.org/modules/actions" "forgejo.org/modules/git" "forgejo.org/modules/gitrepo" + "forgejo.org/modules/optional" "forgejo.org/modules/setting" api "forgejo.org/modules/structs" "forgejo.org/modules/test" @@ -1136,13 +1138,18 @@ func TestActionsWorkflowDispatchConcurrencyGroup(t *testing.T) { } func TestActionsScheduledWorkflow(t *testing.T) { + type expectedSpec struct { + cron string + timeZone optional.Option[string] + } + testCases := []struct { name string workflowID string workflowDirectory string workflowContent string expectedWorkflowTitle string - expectedCronSpecs []string + expectedCronSpecs []expectedSpec }{ { name: "GitHub", @@ -1158,7 +1165,7 @@ jobs: - run: echo OK `, expectedWorkflowTitle: ".github/workflows/scheduled.yml", - expectedCronSpecs: []string{"30 5,17 * * *"}, + expectedCronSpecs: []expectedSpec{{cron: "30 5,17 * * *", timeZone: optional.None[string]()}}, }, { name: "Gitea", @@ -1175,7 +1182,28 @@ jobs: - run: echo OK `, expectedWorkflowTitle: "My scheduled workflow", - expectedCronSpecs: []string{"* * * * *"}, + expectedCronSpecs: 
[]expectedSpec{{cron: "* * * * *", timeZone: optional.None[string]()}}, + }, + { + name: "Forgejo with time zone", + workflowID: "tz.yml", + workflowDirectory: ".forgejo/workflows", + workflowContent: ` +on: + schedule: + - cron: "44 10 * * *" + - cron: "25 19 * * *" + timezone: Europe/Madrid +jobs: + test: + steps: + - run: echo OK +`, + expectedWorkflowTitle: ".forgejo/workflows/tz.yml", + expectedCronSpecs: []expectedSpec{ + {cron: "44 10 * * *", timeZone: optional.None[string]()}, + {cron: "25 19 * * *", timeZone: optional.Some("Europe/Madrid")}, + }, }, } onApplicationRun(t, func(t *testing.T, u *url.URL) { @@ -1201,7 +1229,6 @@ jobs: require.Len(t, schedules, 1) assert.Equal(t, testCase.expectedWorkflowTitle, schedules[0].Title) - assert.Equal(t, testCase.expectedCronSpecs, schedules[0].Specs) assert.Equal(t, repo.ID, schedules[0].RepoID) assert.Equal(t, repo.OwnerID, schedules[0].OwnerID) assert.Equal(t, testCase.workflowID, schedules[0].WorkflowID) @@ -1210,6 +1237,20 @@ jobs: assert.Equal(t, sha, schedules[0].CommitSHA) assert.Equal(t, webhook_module.HookEventPush, schedules[0].Event) assert.Equal(t, []byte(testCase.workflowContent), schedules[0].Content) + + specs, total, err := actions_model.FindSpecs(t.Context(), actions_model.FindSpecOptions{RepoID: repo.ID}) + require.NoError(t, err) + + assert.Equal(t, int64(len(testCase.expectedCronSpecs)), total) + + // The query to return cron specs orders by `id DESC`. 
+ slices.Reverse(testCase.expectedCronSpecs) + + for i, expected := range testCase.expectedCronSpecs { + assert.Equal(t, schedules[0].ID, specs[i].ScheduleID) + assert.Equal(t, expected.cron, specs[i].Spec) + assert.Equal(t, expected.timeZone, specs[i].TimeZone) + } }) } }) From 397c8755f2165effbf9696934e30fbcdf5d3ba57 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Sun, 5 Apr 2026 17:30:39 +0200 Subject: [PATCH 24/82] [v15.0/forgejo] perf: bulk load resolvers & reactions on pull request comments (#11995) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11988 Optimize loading pull request review comments, which currently perform separate database queries for each comment in order to load the resolver of the comment, and the reactions on that comment, and the users on each reaction of the comments. I stumbled across this ugly code, which enticed me to look into this: https://codeberg.org/forgejo/forgejo/src/commit/80d840c1284e4f44b9efac208811b9ed26455ade/routers/web/repo/pull.go#L1107-L1120 It appeared to load the attachments from each comment on the pull request review page in separate database queries. It turned out to be a noop, as the attachments are already loaded in bulk: https://codeberg.org/forgejo/forgejo/src/commit/80d840c1284e4f44b9efac208811b9ed26455ade/models/issues/comment_code.go#L120-L122 but the `findCodeComments` method loads the "resolver doer" and the reactions one-by-one for each comment. So I fixed that instead, and removed the ineffective deeply nested for loop. ## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). 
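The N+1 fix summarized above, collecting IDs and issuing one `IN (...)` query instead of a query per comment, can be sketched in plain Go. This is a hedged stand-alone sketch: `Comment`, `fetchUsersByIDs`, and `loadResolveDoers` are illustrative stand-ins, not Forgejo's actual types or methods:

```go
package main

import "fmt"

// Comment is a minimal stand-in for a code-review comment.
type Comment struct {
	ID            int64
	ResolveDoerID int64
	ResolveDoer   string
}

// fetchUsersByIDs stands in for a single `SELECT ... WHERE id IN (...)`
// query against the user table.
func fetchUsersByIDs(ids []int64) map[int64]string {
	db := map[int64]string{1: "alice", 2: "bob"}
	out := make(map[int64]string, len(ids))
	for _, id := range ids {
		if name, ok := db[id]; ok {
			out[id] = name
		}
	}
	return out
}

// loadResolveDoers resolves all resolver users in one batch query and
// maps them back onto the comments, instead of querying per comment.
func loadResolveDoers(comments []*Comment) {
	idSet := make(map[int64]struct{})
	for _, c := range comments {
		if c.ResolveDoerID != 0 {
			idSet[c.ResolveDoerID] = struct{}{}
		}
	}
	ids := make([]int64, 0, len(idSet))
	for id := range idSet {
		ids = append(ids, id)
	}
	users := fetchUsersByIDs(ids) // one query instead of len(comments) queries
	for _, c := range comments {
		if name, ok := users[c.ResolveDoerID]; ok {
			c.ResolveDoer = name
		}
	}
}

func main() {
	comments := []*Comment{{ID: 10, ResolveDoerID: 1}, {ID: 11, ResolveDoerID: 2}, {ID: 12}}
	loadResolveDoers(comments)
	fmt.Println(comments[0].ResolveDoer, comments[1].ResolveDoer) // alice bob
}
```

The same collect-query-map shape applies to reactions and to the users behind each reaction.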
You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes - I added test coverage for Go changes... - [x] in their respective `*_test.go` for unit tests. - [ ] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [x] `make pr-go` before pushing ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [ ] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [x] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11995 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- models/issues/comment.go | 15 ---- models/issues/comment_code.go | 22 +++--- models/issues/comment_list.go | 79 ++++++++++++++++++++ models/issues/comment_list_test.go | 108 +++++++++++++++++++++++++++ models/issues/comment_test.go | 25 ++++++- models/issues/reaction.go | 31 ++++++++ routers/api/v1/repo/issue_comment.go | 5 ++ routers/web/repo/issue.go | 9 ++- routers/web/repo/pull.go | 15 ---- services/convert/issue_comment.go | 6 -- 10 files changed, 261 insertions(+), 54 deletions(-) diff --git a/models/issues/comment.go b/models/issues/comment.go index fd0f595945..bfad3935fb 100644 --- a/models/issues/comment.go +++ b/models/issues/comment.go @@ -663,21 +663,6 @@ func (c *Comment) LoadAssigneeUserAndTeam(ctx context.Context) error { return nil } -// LoadResolveDoer if comment.Type is CommentTypeCode and ResolveDoerID 
not zero, then load resolveDoer -func (c *Comment) LoadResolveDoer(ctx context.Context) (err error) { - if c.ResolveDoerID == 0 || c.Type != CommentTypeCode { - return nil - } - c.ResolveDoer, err = user_model.GetUserByID(ctx, c.ResolveDoerID) - if err != nil { - if user_model.IsErrUserNotExist(err) { - c.ResolveDoer = user_model.NewGhostUser() - err = nil - } - } - return err -} - // IsResolved check if an code comment is resolved func (c *Comment) IsResolved() bool { return c.ResolveDoerID != 0 && c.Type == CommentTypeCode diff --git a/models/issues/comment_code.go b/models/issues/comment_code.go index 3c87a1b41a..800d1e830e 100644 --- a/models/issues/comment_code.go +++ b/models/issues/comment_code.go @@ -133,7 +133,7 @@ func findCodeComments(ctx context.Context, opts FindCommentsOptions, issue *Issu return nil, err } - n := 0 + readyComments := make(CommentList, 0, len(comments)) for _, comment := range comments { if re, ok := reviews[comment.ReviewID]; ok && re != nil { // If the review is pending only the author can see the comments (except if the review is set) @@ -143,17 +143,18 @@ func findCodeComments(ctx context.Context, opts FindCommentsOptions, issue *Issu } comment.Review = re } - comments[n] = comment - n++ + readyComments = append(readyComments, comment) + } - if err := comment.LoadResolveDoer(ctx); err != nil { - return nil, err - } + if err := readyComments.LoadResolveDoers(ctx); err != nil { + return nil, err + } - if err := comment.LoadReactions(ctx, issue.Repo); err != nil { - return nil, err - } + if err := readyComments.LoadReactions(ctx, issue.Repo); err != nil { + return nil, err + } + for _, comment := range readyComments { var err error if comment.RenderedContent, err = markdown.RenderString(&markup.RenderContext{ Ctx: ctx, @@ -165,7 +166,8 @@ func findCodeComments(ctx context.Context, opts FindCommentsOptions, issue *Issu return nil, err } } - return comments[:n], nil + + return readyComments, nil } // FetchCodeConversation fetches the 
code conversation of a given comment (same review, treePath and line number) diff --git a/models/issues/comment_list.go b/models/issues/comment_list.go index 53903bea31..9a5c22244b 100644 --- a/models/issues/comment_list.go +++ b/models/issues/comment_list.go @@ -5,6 +5,7 @@ package issues import ( "context" + "errors" "forgejo.org/models/db" repo_model "forgejo.org/models/repo" @@ -390,6 +391,84 @@ func (comments CommentList) LoadAttachments(ctx context.Context) (err error) { return nil } +func (comments CommentList) LoadResolveDoers(ctx context.Context) (err error) { + relevant := func(c *Comment) bool { + return c.ResolveDoerID != 0 && c.Type == CommentTypeCode + } + userIDs := make(container.Set[int64]) + for _, comment := range comments { + if relevant(comment) { + userIDs.Add(comment.ResolveDoerID) + } + } + + if len(userIDs) == 0 { + return nil + } + + userMap := make(map[int64]*user_model.User) + users, err := user_model.GetUsersByIDs(ctx, userIDs.Slice()) + if err != nil { + return err + } + for _, user := range users { + userMap[user.ID] = user + } + + for _, comment := range comments { + if !relevant(comment) { + continue + } + resolveDoer, ok := userMap[comment.ResolveDoerID] + if !ok { + comment.ResolveDoer = user_model.NewGhostUser() + } else { + comment.ResolveDoer = resolveDoer + } + } + + return nil +} + +func (comments CommentList) LoadReactions(ctx context.Context, repo *repo_model.Repository) (err error) { + loadIssueID := int64(0) + loadCommentIDs := make([]int64, 0, len(comments)) + + for _, comment := range comments { + if loadIssueID == 0 { + loadIssueID = comment.IssueID + } else if loadIssueID != comment.IssueID { + return errors.New("unable to load reactions from comments on different issues than each other") + } + if comment.Reactions == nil { + loadCommentIDs = append(loadCommentIDs, comment.ID) + } + } + + if loadIssueID == 0 { + return nil + } + + reactions, err := getReactionsForComments(ctx, loadIssueID, loadCommentIDs) + if err != 
nil { + return err + } + + allReactions := make(ReactionList, 0, len(reactions)) + for _, comment := range comments { + if comment.Reactions == nil { + comment.Reactions = reactions[comment.ID] + allReactions = append(allReactions, comment.Reactions...) + } + } + + if _, err := allReactions.LoadUsers(ctx, repo); err != nil { + return err + } + + return nil +} + func (comments CommentList) getReviewIDs() []int64 { return container.FilterSlice(comments, func(comment *Comment) (int64, bool) { return comment.ReviewID, comment.ReviewID > 0 diff --git a/models/issues/comment_list_test.go b/models/issues/comment_list_test.go index 062a710b84..12a9144722 100644 --- a/models/issues/comment_list_test.go +++ b/models/issues/comment_list_test.go @@ -84,3 +84,111 @@ func TestCommentListLoadUser(t *testing.T) { }) } } + +func TestCommentListLoadResolveDoers(t *testing.T) { + require.NoError(t, unittest.PrepareTestDatabase()) + + issue := unittest.AssertExistsAndLoadBean(t, &Issue{}) + repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: issue.RepoID}) + doer := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: repo.OwnerID}) + + empty := CommentList{} + require.NoError(t, empty.LoadResolveDoers(t.Context())) + + comment1, err := CreateComment(db.DefaultContext, &CreateCommentOptions{ + Type: CommentTypeCode, + Doer: doer, + Repo: repo, + Issue: issue, + Content: "Hello", + }) + require.NoError(t, err) + require.NoError(t, MarkConversation(t.Context(), comment1, doer, true)) + comment1 = unittest.AssertExistsAndLoadBean(t, &Comment{ID: comment1.ID}) // reload after change + comment1List := CommentList{comment1} + require.NoError(t, comment1List.LoadResolveDoers(t.Context())) + require.NotNil(t, comment1.ResolveDoer) + assert.Equal(t, doer.ID, comment1.ResolveDoer.ID) + + comment2, err := CreateComment(db.DefaultContext, &CreateCommentOptions{ + Type: CommentTypeCode, + Doer: doer, + Repo: repo, + Issue: issue, + Content: "Hello again", + }) + 
require.NoError(t, err) + require.NoError(t, MarkConversation(t.Context(), comment2, user_model.NewGhostUser(), true)) + + // Reload for fresh objects + comment1 = unittest.AssertExistsAndLoadBean(t, &Comment{ID: comment1.ID}) + comment2 = unittest.AssertExistsAndLoadBean(t, &Comment{ID: comment2.ID}) + + comment2List := CommentList{comment1, comment2} + require.NoError(t, comment2List.LoadResolveDoers(t.Context())) + require.NotNil(t, comment1.ResolveDoer) + assert.Equal(t, doer.ID, comment1.ResolveDoer.ID) + require.NotNil(t, comment2.ResolveDoer) + assert.EqualValues(t, -1, comment2.ResolveDoer.ID) +} + +func TestCommentListLoadReactions(t *testing.T) { + require.NoError(t, unittest.PrepareTestDatabase()) + + issue := unittest.AssertExistsAndLoadBean(t, &Issue{}) + repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: issue.RepoID}) + doer := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: repo.OwnerID}) + + empty := CommentList{} + require.NoError(t, empty.LoadReactions(t.Context(), repo)) + + comment1, err := CreateComment(db.DefaultContext, &CreateCommentOptions{ + Type: CommentTypeCode, + Doer: doer, + Repo: repo, + Issue: issue, + Content: "Hello", + }) + require.NoError(t, err) + _, err = CreateReaction(t.Context(), &ReactionOptions{ + Type: "eyes", + DoerID: doer.ID, + IssueID: issue.ID, + CommentID: comment1.ID, + }) + require.NoError(t, err) + + comment1 = unittest.AssertExistsAndLoadBean(t, &Comment{ID: comment1.ID}) // reload after change + comment1List := CommentList{comment1} + require.NoError(t, comment1List.LoadReactions(t.Context(), repo)) + require.Len(t, comment1.Reactions, 1) + assert.Equal(t, "eyes", comment1.Reactions[0].Type) + assert.NotNil(t, comment1.Reactions[0].User) + + comment2, err := CreateComment(db.DefaultContext, &CreateCommentOptions{ + Type: CommentTypeCode, + Doer: doer, + Repo: repo, + Issue: issue, + Content: "Hello again", + }) + require.NoError(t, err) + _, err = CreateReaction(t.Context(), 
&ReactionOptions{ + Type: "rocket", + DoerID: doer.ID, + IssueID: issue.ID, + CommentID: comment2.ID, + }) + require.NoError(t, err) + + // Reload for fresh objects + comment1 = unittest.AssertExistsAndLoadBean(t, &Comment{ID: comment1.ID}) + comment2 = unittest.AssertExistsAndLoadBean(t, &Comment{ID: comment2.ID}) + + comment2List := CommentList{comment1, comment2} + require.NoError(t, comment2List.LoadReactions(t.Context(), repo)) + require.Len(t, comment1.Reactions, 1) + require.Len(t, comment2.Reactions, 1) + assert.Equal(t, "rocket", comment2.Reactions[0].Type) + assert.NotNil(t, comment2.Reactions[0].User) +} diff --git a/models/issues/comment_test.go b/models/issues/comment_test.go index c7adf6f62e..da87b8ec2f 100644 --- a/models/issues/comment_test.go +++ b/models/issues/comment_test.go @@ -52,12 +52,29 @@ func TestFetchCodeConversations(t *testing.T) { issue := unittest.AssertExistsAndLoadBean(t, &issues_model.Issue{ID: 2}) user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1}) + _, err := issues_model.CreateReaction(t.Context(), &issues_model.ReactionOptions{ + Type: "eyes", + DoerID: 2, + IssueID: issue.ID, + CommentID: 4, + }) + require.NoError(t, err) + require.NoError(t, issues_model.MarkConversation(t.Context(), + unittest.AssertExistsAndLoadBean(t, &issues_model.Comment{ID: 4}), + user, true)) + res, err := issues_model.FetchCodeConversations(db.DefaultContext, issue, user, false) require.NoError(t, err) - assert.Contains(t, res, "README.md") - assert.Contains(t, res["README.md"], int64(4)) - assert.Len(t, res["README.md"][4], 1) - assert.Equal(t, int64(4), res["README.md"][4][0][0].ID) + require.Contains(t, res, "README.md") + require.Contains(t, res["README.md"], int64(4)) + require.Len(t, res["README.md"][4], 1) + require.Len(t, res["README.md"][4][0], 1) + comment := res["README.md"][4][0][0] + assert.Equal(t, int64(4), comment.ID) + assert.NotNil(t, comment.ResolveDoer) + require.Len(t, comment.Reactions, 1) + r := 
comment.Reactions[0] + assert.NotNil(t, r.User) user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2}) res, err = issues_model.FetchCodeConversations(db.DefaultContext, issue, user2, false) diff --git a/models/issues/reaction.go b/models/issues/reaction.go index 522040c022..9a277a8c12 100644 --- a/models/issues/reaction.go +++ b/models/issues/reaction.go @@ -176,6 +176,37 @@ func FindReactions(ctx context.Context, opts FindReactionsOptions) (ReactionList return reactions, count, err } +func getReactionsForComments(ctx context.Context, issueID int64, commentIDs []int64) (map[int64]ReactionList, error) { + reactions := make(map[int64]ReactionList, len(commentIDs)) + left := len(commentIDs) + for left > 0 { + limit := min(left, db.DefaultMaxInSize) + rows, err := db.GetEngine(ctx). + Where(builder.Eq{"issue_id": issueID}). + In("reaction.`type`", setting.UI.Reactions). + In("comment_id", commentIDs[:limit]). + Rows(&Reaction{}) + if err != nil { + return nil, err + } + + for rows.Next() { + var reaction Reaction + err = rows.Scan(&reaction) + if err != nil { + _ = rows.Close() + return nil, err + } + reactions[reaction.CommentID] = append(reactions[reaction.CommentID], &reaction) + } + + _ = rows.Close() + left -= limit + commentIDs = commentIDs[limit:] + } + return reactions, nil +} + func createReaction(ctx context.Context, opts *ReactionOptions) (*Reaction, error) { reaction := &Reaction{ Type: opts.Type, diff --git a/routers/api/v1/repo/issue_comment.go b/routers/api/v1/repo/issue_comment.go index ceebb41f9e..3f58e4e271 100644 --- a/routers/api/v1/repo/issue_comment.go +++ b/routers/api/v1/repo/issue_comment.go @@ -215,6 +215,11 @@ func ListIssueCommentsAndTimeline(ctx *context.APIContext) { return } + if err := comments.LoadResolveDoers(ctx); err != nil { + ctx.Error(http.StatusInternalServerError, "LoadResolveDoers", err) + return + } + var apiComments []*api.TimelineComment for _, comment := range comments { if comment.Type != 
issues_model.CommentTypeCode && isXRefCommentAccessible(ctx, ctx.Doer, comment, issue.RepoID, ctx.Reducer) { diff --git a/routers/web/repo/issue.go b/routers/web/repo/issue.go index 3852c4c18f..bb4185f785 100644 --- a/routers/web/repo/issue.go +++ b/routers/web/repo/issue.go @@ -1663,6 +1663,11 @@ func ViewIssue(ctx *context.Context) { return } + if err := issue.Comments.LoadResolveDoers(ctx); err != nil { + ctx.ServerError("LoadResolveDoers", err) + return + } + for commentIdx, comment = range issue.Comments { comment.Issue = issue metas := ctx.Repo.Repository.ComposeMetas(ctx) @@ -1801,10 +1806,6 @@ func ViewIssue(ctx *context.Context) { } } } - if err = comment.LoadResolveDoer(ctx); err != nil { - ctx.ServerError("LoadResolveDoer", err) - return - } } else if comment.Type == issues_model.CommentTypePullRequestPush { participants = addParticipant(comment.Poster, participants) if err = comment.LoadPushCommits(ctx); err != nil { diff --git a/routers/web/repo/pull.go b/routers/web/repo/pull.go index c4b8583e2d..e9cdfc5ac7 100644 --- a/routers/web/repo/pull.go +++ b/routers/web/repo/pull.go @@ -1104,21 +1104,6 @@ func viewPullFiles(ctx *context.Context, specifiedStartCommit, specifiedEndCommi return } - for _, file := range diff.Files { - for _, section := range file.Sections { - for _, line := range section.Lines { - for _, comments := range line.Conversations { - for _, comment := range comments { - if err := comment.LoadAttachments(ctx); err != nil { - ctx.ServerError("LoadAttachments", err) - return - } - } - } - } - } - } - pb, err := git_model.GetFirstMatchProtectedBranchRule(ctx, pull.BaseRepoID, pull.BaseBranch) if err != nil { ctx.ServerError("LoadProtectedBranch", err) diff --git a/services/convert/issue_comment.go b/services/convert/issue_comment.go index 9ea315aee6..2f9af9be7c 100644 --- a/services/convert/issue_comment.go +++ b/services/convert/issue_comment.go @@ -43,12 +43,6 @@ func ToTimelineComment(ctx context.Context, repo *repo_model.Repository, c 
*issu return nil } - err = c.LoadResolveDoer(ctx) - if err != nil { - log.Error("LoadResolveDoer: %v", err) - return nil - } - err = c.LoadDepIssueDetails(ctx) if err != nil { log.Error("LoadDepIssueDetails: %v", err) From 1176b58f28a73c0f78359919c2c6026d47163728 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Sun, 5 Apr 2026 22:53:13 +0200 Subject: [PATCH 25/82] [v15.0/forgejo] refactor: reduce code duplication when accessing `DefaultMaxInSize` (#12000) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11999 `DefaultMaxInSize` is an internal parameter for limiting the size of `field IN (...)` clauses in DB queries, which is a reasonable thing to do -- in addition to the errors noted when [originally introduced](https://github.com/go-gitea/gitea/pull/4594), there are technical limits that apply to each of PostgreSQL, MySQL, and SQLite which would prevent an unbounded size for a query like this. However: the size is incredibly small at 50, and, the implementation of `DefaultMaxInSize` is really wasteful with copy-and-paste coding. 
This PR: - introduces `GetByIDs` which fetches a `map[int64]*Model` from the database for an array of ID values, while respecting `IN` clause size limits - introduces `GetByFieldIn` which fetches a `map[int64][]*Model` from the database for an array of field values, while respecting `IN` clause size limits - uses `slices.Chunk` for other locations where queries are too complex for these implementations - bumps the `DefaultMaxInSize` parameter from 50 to 500, a conservative increase well under known limits, but 10x the current value: - PostgreSQL supports up to 1GB query text size with 65,535 parameters, but I've experienced performance degradation at high value counts - MySQL supports 64MB query text size without known limits of parameter count - SQLite supports 32,766 parameters in a query ## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes - I added test coverage for Go changes... - [x] in their respective `*_test.go` for unit tests. - Refactored functions are assumed to be covered by existing tests to some extent; that assumption is probably wrong but the changes here are relatively easily reviewed for correctness as well. - [ ] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - I ran... - [x] `make pr-go` before pushing ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. 
- [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [ ] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [x] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/12000 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- models/activities/notification_list.go | 112 ++--------------- models/db/context.go | 86 +++++++++++++ models/db/list.go | 2 +- models/issues/comment_list.go | 168 ++++--------------------- models/issues/issue_list.go | 151 ++++------------------ models/issues/reaction.go | 10 +- tests/integration/db_query_test.go | 64 ++++++++++ 7 files changed, 215 insertions(+), 378 deletions(-) create mode 100644 tests/integration/db_query_test.go diff --git a/models/activities/notification_list.go b/models/activities/notification_list.go index bf6356021e..3f3a48eaa5 100644 --- a/models/activities/notification_list.go +++ b/models/activities/notification_list.go @@ -210,31 +210,9 @@ func (nl NotificationList) LoadRepos(ctx context.Context) (repo_model.Repository } repoIDs := nl.getPendingRepoIDs() - repos := make(map[int64]*repo_model.Repository, len(repoIDs)) - left := len(repoIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("id", repoIDs[:limit]). 
- Rows(new(repo_model.Repository)) - if err != nil { - return nil, nil, err - } - - for rows.Next() { - var repo repo_model.Repository - err = rows.Scan(&repo) - if err != nil { - rows.Close() - return nil, nil, err - } - - repos[repo.ID] = &repo - } - _ = rows.Close() - - left -= limit - repoIDs = repoIDs[limit:] + repos, err := db.GetByIDs(ctx, "id", repoIDs, &repo_model.Repository{}) + if err != nil { + return nil, nil, err } failed := []int{} @@ -281,31 +259,9 @@ func (nl NotificationList) LoadIssues(ctx context.Context) ([]int, error) { } issueIDs := nl.getPendingIssueIDs() - issues := make(map[int64]*issues_model.Issue, len(issueIDs)) - left := len(issueIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("id", issueIDs[:limit]). - Rows(new(issues_model.Issue)) - if err != nil { - return nil, err - } - - for rows.Next() { - var issue issues_model.Issue - err = rows.Scan(&issue) - if err != nil { - rows.Close() - return nil, err - } - - issues[issue.ID] = &issue - } - _ = rows.Close() - - left -= limit - issueIDs = issueIDs[limit:] + issues, err := db.GetByIDs(ctx, "id", issueIDs, &issues_model.Issue{}) + if err != nil { + return nil, err } failures := []int{} @@ -373,31 +329,9 @@ func (nl NotificationList) LoadUsers(ctx context.Context) ([]int, error) { } userIDs := nl.getUserIDs() - users := make(map[int64]*user_model.User, len(userIDs)) - left := len(userIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("id", userIDs[:limit]). 
- Rows(new(user_model.User)) - if err != nil { - return nil, err - } - - for rows.Next() { - var user user_model.User - err = rows.Scan(&user) - if err != nil { - rows.Close() - return nil, err - } - - users[user.ID] = &user - } - _ = rows.Close() - - left -= limit - userIDs = userIDs[limit:] + users, err := db.GetByIDs(ctx, "id", userIDs, &user_model.User{}) + if err != nil { + return nil, err } failures := []int{} @@ -421,31 +355,9 @@ func (nl NotificationList) LoadComments(ctx context.Context) ([]int, error) { } commentIDs := nl.getPendingCommentIDs() - comments := make(map[int64]*issues_model.Comment, len(commentIDs)) - left := len(commentIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("id", commentIDs[:limit]). - Rows(new(issues_model.Comment)) - if err != nil { - return nil, err - } - - for rows.Next() { - var comment issues_model.Comment - err = rows.Scan(&comment) - if err != nil { - rows.Close() - return nil, err - } - - comments[comment.ID] = &comment - } - _ = rows.Close() - - left -= limit - commentIDs = commentIDs[limit:] + comments, err := db.GetByIDs(ctx, "id", commentIDs, &issues_model.Comment{}) + if err != nil { + return nil, err } failures := []int{} diff --git a/models/db/context.go b/models/db/context.go index f098b40a32..18237bb2a2 100644 --- a/models/db/context.go +++ b/models/db/context.go @@ -8,6 +8,7 @@ import ( "database/sql" "errors" "fmt" + "slices" "xorm.io/builder" "xorm.io/xorm" @@ -270,6 +271,91 @@ func GetByID[T any](ctx context.Context, id int64) (object *T, exist bool, err e return &bean, true, nil } +// Retrieves multiple objects with database queries similar to an xorm `.In(idField, idList)`. idField must be a unique +// field on the database table, as a map[id]obj is returned and the usage of a non-unique field would result in objects +// being overwritten in the map. 
+// +// The length of the IN list is constrained to DefaultMaxInSize for each database query, resulting in multiple database +// queries if the length of the idList exceeds that setting; this constraint prevents exceeding bind parameter +// limitations or query length limitations in the database engine. +func GetByIDs[Bean any, Id comparable](ctx context.Context, idField string, idList []Id, bean *Bean) (map[Id]*Bean, error) { + retval := make(map[Id]*Bean, len(idList)) + if len(idList) == 0 { + return retval, nil + } + + table, err := TableInfo(bean) + if err != nil { + return nil, fmt.Errorf("unable to fetch table info for bean %v: %w", bean, err) + } + + var structFieldName string + for _, c := range table.Columns() { + if c.Name == idField { + structFieldName = c.FieldName + break + } + } + if structFieldName == "" { + return nil, fmt.Errorf("unable to identify struct field for id field %s", idField) + } + + for idChunk := range slices.Chunk(idList, DefaultMaxInSize) { + beans := make([]*Bean, 0, len(idChunk)) + if err := GetEngine(ctx).In(idField, idChunk).Find(&beans); err != nil { + return nil, err + } + for _, bean := range beans { + retval[extractFieldValue(bean, structFieldName).(Id)] = bean + } + } + + return retval, nil +} + +// Retrieves multiple objects with database queries similar to an xorm `.In(field, valueList)`. Similar to GetByIDs, +// except that a map[Id][]*Bean is returned as the field value is not assumed to be a unique value -- if there are +// multiple rows in the table for each value, all of them are returned. +// +// The length of the IN list is constrained to DefaultMaxInSize for each database query, resulting in multiple database +// queries if the length of the idList exceeds that setting; this constraint prevents exceeding bind parameter +// limitations or query length limitations in the database engine. 
+func GetByFieldIn[Bean any, Id comparable](ctx context.Context, field string, valueList []Id, bean *Bean) (map[Id][]*Bean, error) { + retval := make(map[Id][]*Bean, len(valueList)) + if len(valueList) == 0 { + return retval, nil + } + + table, err := TableInfo(bean) + if err != nil { + return nil, fmt.Errorf("unable to fetch table info for bean %v: %w", bean, err) + } + + var structFieldName string + for _, c := range table.Columns() { + if c.Name == field { + structFieldName = c.FieldName + break + } + } + if structFieldName == "" { + return nil, fmt.Errorf("unable to identify struct field for field %s", field) + } + + for idChunk := range slices.Chunk(valueList, DefaultMaxInSize) { + beans := make([]*Bean, 0, len(idChunk)) + if err := GetEngine(ctx).In(field, idChunk).Find(&beans); err != nil { + return nil, err + } + for _, bean := range beans { + fieldValue := extractFieldValue(bean, structFieldName).(Id) + retval[fieldValue] = append(retval[fieldValue], bean) + } + } + + return retval, nil +} + func Exist[T any](ctx context.Context, cond builder.Cond) (bool, error) { if !cond.IsValid() { panic("cond is invalid in db.Exist(ctx, cond). This should not be possible.") diff --git a/models/db/list.go b/models/db/list.go index 057221936c..71e9a0b1d2 100644 --- a/models/db/list.go +++ b/models/db/list.go @@ -14,7 +14,7 @@ import ( const ( // DefaultMaxInSize represents default variables number on IN () in SQL - DefaultMaxInSize = 50 + DefaultMaxInSize = 500 defaultFindSliceSize = 10 ) diff --git a/models/issues/comment_list.go b/models/issues/comment_list.go index 9a5c22244b..b218f11dfa 100644 --- a/models/issues/comment_list.go +++ b/models/issues/comment_list.go @@ -52,29 +52,9 @@ func (comments CommentList) loadLabels(ctx context.Context) error { } labelIDs := comments.getLabelIDs() - commentLabels := make(map[int64]*Label, len(labelIDs)) - left := len(labelIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). 
- In("id", labelIDs[:limit]). - Rows(new(Label)) - if err != nil { - return err - } - - for rows.Next() { - var label Label - err = rows.Scan(&label) - if err != nil { - _ = rows.Close() - return err - } - commentLabels[label.ID] = &label - } - _ = rows.Close() - left -= limit - labelIDs = labelIDs[limit:] + commentLabels, err := db.GetByIDs(ctx, "id", labelIDs, &Label{}) + if err != nil { + return err } for _, comment := range comments { @@ -99,18 +79,9 @@ func (comments CommentList) loadMilestones(ctx context.Context) error { return nil } - milestones := make(map[int64]*Milestone, len(milestoneIDs)) - left := len(milestoneIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - err := db.GetEngine(ctx). - In("id", milestoneIDs[:limit]). - Find(&milestones) - if err != nil { - return err - } - left -= limit - milestoneIDs = milestoneIDs[limit:] + milestones, err := db.GetByIDs(ctx, "id", milestoneIDs, &Milestone{}) + if err != nil { + return err } for _, comment := range comments { @@ -135,18 +106,9 @@ func (comments CommentList) loadOldMilestones(ctx context.Context) error { return nil } - milestones := make(map[int64]*Milestone, len(milestoneIDs)) - left := len(milestoneIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - err := db.GetEngine(ctx). - In("id", milestoneIDs[:limit]). - Find(&milestones) - if err != nil { - return err - } - left -= limit - milestoneIDs = milestoneIDs[limit:] + milestones, err := db.GetByIDs(ctx, "id", milestoneIDs, &Milestone{}) + if err != nil { + return err } for _, comment := range comments { @@ -167,31 +129,9 @@ func (comments CommentList) loadAssignees(ctx context.Context) error { } assigneeIDs := comments.getAssigneeIDs() - assignees := make(map[int64]*user_model.User, len(assigneeIDs)) - left := len(assigneeIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("id", assigneeIDs[:limit]). 
- Rows(new(user_model.User)) - if err != nil { - return err - } - - for rows.Next() { - var user user_model.User - err = rows.Scan(&user) - if err != nil { - rows.Close() - return err - } - - assignees[user.ID] = &user - } - _ = rows.Close() - - left -= limit - assigneeIDs = assigneeIDs[limit:] + assignees, err := db.GetByIDs(ctx, "id", assigneeIDs, &user_model.User{}) + if err != nil { + return err } for _, comment := range comments { @@ -232,31 +172,9 @@ func (comments CommentList) LoadIssues(ctx context.Context) error { } issueIDs := comments.getIssueIDs() - issues := make(map[int64]*Issue, len(issueIDs)) - left := len(issueIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("id", issueIDs[:limit]). - Rows(new(Issue)) - if err != nil { - return err - } - - for rows.Next() { - var issue Issue - err = rows.Scan(&issue) - if err != nil { - rows.Close() - return err - } - - issues[issue.ID] = &issue - } - _ = rows.Close() - - left -= limit - issueIDs = issueIDs[limit:] + issues, err := db.GetByIDs(ctx, "id", issueIDs, &Issue{}) + if err != nil { + return err } for _, comment := range comments { @@ -281,33 +199,10 @@ func (comments CommentList) loadDependentIssues(ctx context.Context) error { return nil } - e := db.GetEngine(ctx) issueIDs := comments.getDependentIssueIDs() - issues := make(map[int64]*Issue, len(issueIDs)) - left := len(issueIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := e. - In("id", issueIDs[:limit]). 
- Rows(new(Issue)) - if err != nil { - return err - } - - for rows.Next() { - var issue Issue - err = rows.Scan(&issue) - if err != nil { - _ = rows.Close() - return err - } - - issues[issue.ID] = &issue - } - _ = rows.Close() - - left -= limit - issueIDs = issueIDs[limit:] + issues, err := db.GetByIDs(ctx, "id", issueIDs, &Issue{}) + if err != nil { + return err } for _, comment := range comments { @@ -358,31 +253,10 @@ func (comments CommentList) LoadAttachments(ctx context.Context) (err error) { return nil } - attachments := make(map[int64][]*repo_model.Attachment, len(comments)) commentsIDs := comments.getAttachmentCommentIDs() - left := len(commentsIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("comment_id", commentsIDs[:limit]). - Rows(new(repo_model.Attachment)) - if err != nil { - return err - } - - for rows.Next() { - var attachment repo_model.Attachment - err = rows.Scan(&attachment) - if err != nil { - _ = rows.Close() - return err - } - attachments[attachment.CommentID] = append(attachments[attachment.CommentID], &attachment) - } - - _ = rows.Close() - left -= limit - commentsIDs = commentsIDs[limit:] + attachments, err := db.GetByFieldIn(ctx, "comment_id", commentsIDs, &repo_model.Attachment{}) + if err != nil { + return err } for _, comment := range comments { diff --git a/models/issues/issue_list.go b/models/issues/issue_list.go index 34cfe35475..e4fd9eef2b 100644 --- a/models/issues/issue_list.go +++ b/models/issues/issue_list.go @@ -6,6 +6,7 @@ package issues import ( "context" "fmt" + "slices" "forgejo.org/models/db" project_model "forgejo.org/models/project" @@ -40,18 +41,9 @@ func (issues IssueList) LoadRepositories(ctx context.Context) (repo_model.Reposi } repoIDs := issues.getRepoIDs() - repoMaps := make(map[int64]*repo_model.Repository, len(repoIDs)) - left := len(repoIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - err := db.GetEngine(ctx). - In("id", repoIDs[:limit]). 
- Find(&repoMaps) - if err != nil { - return nil, fmt.Errorf("find repository: %w", err) - } - left -= limit - repoIDs = repoIDs[limit:] + repoMaps, err := db.GetByIDs(ctx, "id", repoIDs, &repo_model.Repository{}) + if err != nil { + return nil, fmt.Errorf("find repository: %w", err) } for _, issue := range issues { @@ -93,18 +85,9 @@ func (issues IssueList) LoadPosters(ctx context.Context) error { } func getPostersByIDs(ctx context.Context, posterIDs []int64) (map[int64]*user_model.User, error) { - posterMaps := make(map[int64]*user_model.User, len(posterIDs)) - left := len(posterIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - err := db.GetEngine(ctx). - In("id", posterIDs[:limit]). - Find(&posterMaps) - if err != nil { - return nil, err - } - left -= limit - posterIDs = posterIDs[limit:] + posterMaps, err := db.GetByIDs(ctx, "id", posterIDs, &user_model.User{}) + if err != nil { + return nil, err } return posterMaps, nil } @@ -129,18 +112,15 @@ func (issues IssueList) LoadLabels(ctx context.Context) error { issueLabels := make(map[int64][]*Label, len(issues)*3) issueIDs := issues.getIssueIDs() - left := len(issueIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) + for issueIDChunk := range slices.Chunk(issueIDs, db.DefaultMaxInSize) { rows, err := db.GetEngine(ctx).Table("label"). Join("LEFT", "issue_label", "issue_label.label_id = label.id"). - In("issue_label.issue_id", issueIDs[:limit]). + In("issue_label.issue_id", issueIDChunk). Asc("label.name"). 
Rows(new(LabelIssue)) if err != nil { return err } - for rows.Next() { var labelIssue LabelIssue err = rows.Scan(&labelIssue) @@ -157,8 +137,6 @@ func (issues IssueList) LoadLabels(ctx context.Context) error { if err1 := rows.Close(); err1 != nil { return fmt.Errorf("IssueList.LoadLabels: Close: %w", err1) } - left -= limit - issueIDs = issueIDs[limit:] } for _, issue := range issues { @@ -180,18 +158,9 @@ func (issues IssueList) LoadMilestones(ctx context.Context) error { return nil } - milestoneMaps := make(map[int64]*Milestone, len(milestoneIDs)) - left := len(milestoneIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - err := db.GetEngine(ctx). - In("id", milestoneIDs[:limit]). - Find(&milestoneMaps) - if err != nil { - return err - } - left -= limit - milestoneIDs = milestoneIDs[limit:] + milestoneMaps, err := db.GetByIDs(ctx, "id", milestoneIDs, &Milestone{}) + if err != nil { + return err } for _, issue := range issues { @@ -204,22 +173,19 @@ func (issues IssueList) LoadMilestones(ctx context.Context) error { func (issues IssueList) LoadProjects(ctx context.Context) error { issueIDs := issues.getIssueIDs() projectMaps := make(map[int64]*project_model.Project, len(issues)) - left := len(issueIDs) type projectWithIssueID struct { *project_model.Project `xorm:"extends"` IssueID int64 } - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - - projects := make([]*projectWithIssueID, 0, limit) + for issueIDChunk := range slices.Chunk(issueIDs, db.DefaultMaxInSize) { + projects := make([]*projectWithIssueID, 0, len(issueIDChunk)) err := db.GetEngine(ctx). Table("project"). Select("project.*, project_issue.issue_id"). Join("INNER", "project_issue", "project.id = project_issue.project_id"). - In("project_issue.issue_id", issueIDs[:limit]). + In("project_issue.issue_id", issueIDChunk). 
Find(&projects) if err != nil { return err @@ -227,8 +193,6 @@ func (issues IssueList) LoadProjects(ctx context.Context) error { for _, project := range projects { projectMaps[project.IssueID] = project.Project } - left -= limit - issueIDs = issueIDs[limit:] } for _, issue := range issues { @@ -249,12 +213,10 @@ func (issues IssueList) LoadAssignees(ctx context.Context) error { assignees := make(map[int64][]*user_model.User, len(issues)) issueIDs := issues.getIssueIDs() - left := len(issueIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) + for issueIDChunk := range slices.Chunk(issueIDs, db.DefaultMaxInSize) { rows, err := db.GetEngine(ctx).Table("issue_assignees"). Join("INNER", "`user`", "`user`.id = `issue_assignees`.assignee_id"). - In("`issue_assignees`.issue_id", issueIDs[:limit]).OrderBy(user_model.GetOrderByName()). + In("`issue_assignees`.issue_id", issueIDChunk).OrderBy(user_model.GetOrderByName()). Rows(new(AssigneeIssue)) if err != nil { return err @@ -275,8 +237,6 @@ func (issues IssueList) LoadAssignees(ctx context.Context) error { if err1 := rows.Close(); err1 != nil { return fmt.Errorf("IssueList.loadAssignees: Close: %w", err1) } - left -= limit - issueIDs = issueIDs[limit:] } for _, issue := range issues { @@ -306,33 +266,9 @@ func (issues IssueList) LoadPullRequests(ctx context.Context) error { return nil } - pullRequestMaps := make(map[int64]*PullRequest, len(issuesIDs)) - left := len(issuesIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("issue_id", issuesIDs[:limit]). 
- Rows(new(PullRequest)) - if err != nil { - return err - } - - for rows.Next() { - var pr PullRequest - err = rows.Scan(&pr) - if err != nil { - if err1 := rows.Close(); err1 != nil { - return fmt.Errorf("IssueList.loadPullRequests: Close: %w", err1) - } - return err - } - pullRequestMaps[pr.IssueID] = &pr - } - if err1 := rows.Close(); err1 != nil { - return fmt.Errorf("IssueList.loadPullRequests: Close: %w", err1) - } - left -= limit - issuesIDs = issuesIDs[limit:] + pullRequestMaps, err := db.GetByIDs(ctx, "issue_id", issuesIDs, &PullRequest{}) + if err != nil { + return err } for _, issue := range issues { @@ -350,34 +286,10 @@ func (issues IssueList) LoadAttachments(ctx context.Context) (err error) { return nil } - attachments := make(map[int64][]*repo_model.Attachment, len(issues)) issuesIDs := issues.getIssueIDs() - left := len(issuesIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - rows, err := db.GetEngine(ctx). - In("issue_id", issuesIDs[:limit]). - Rows(new(repo_model.Attachment)) - if err != nil { - return err - } - - for rows.Next() { - var attachment repo_model.Attachment - err = rows.Scan(&attachment) - if err != nil { - if err1 := rows.Close(); err1 != nil { - return fmt.Errorf("IssueList.loadAttachments: Close: %w", err1) - } - return err - } - attachments[attachment.IssueID] = append(attachments[attachment.IssueID], &attachment) - } - if err1 := rows.Close(); err1 != nil { - return fmt.Errorf("IssueList.loadAttachments: Close: %w", err1) - } - left -= limit - issuesIDs = issuesIDs[limit:] + attachments, err := db.GetByFieldIn(ctx, "issue_id", issuesIDs, &repo_model.Attachment{}) + if err != nil { + return err } for _, issue := range issues { @@ -394,12 +306,10 @@ func (issues IssueList) loadComments(ctx context.Context, cond builder.Cond) (er comments := make(map[int64][]*Comment, len(issues)) issuesIDs := issues.getIssueIDs() - left := len(issuesIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) + for issueIDChunk := 
range slices.Chunk(issuesIDs, db.DefaultMaxInSize) { rows, err := db.GetEngine(ctx).Table("comment"). Join("INNER", "issue", "issue.id = comment.issue_id"). - In("issue.id", issuesIDs[:limit]). + In("issue.id", issueIDChunk). Where(cond). Rows(new(Comment)) if err != nil { @@ -420,8 +330,6 @@ func (issues IssueList) loadComments(ctx context.Context, cond builder.Cond) (er if err1 := rows.Close(); err1 != nil { return fmt.Errorf("IssueList.loadComments: Close: %w", err1) } - left -= limit - issuesIDs = issuesIDs[limit:] } for _, issue := range issues { @@ -457,15 +365,12 @@ func (issues IssueList) loadTotalTrackedTimes(ctx context.Context) (err error) { } } - left := len(ids) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) - + for idChunk := range slices.Chunk(ids, db.DefaultMaxInSize) { // select issue_id, sum(time) from tracked_time where issue_id in () group by issue_id rows, err := db.GetEngine(ctx).Table("tracked_time"). Where("deleted = ?", false). Select("issue_id, sum(time) as time"). - In("issue_id", ids[:limit]). + In("issue_id", idChunk). GroupBy("issue_id"). 
Rows(new(totalTimesByIssue)) if err != nil { @@ -486,8 +391,6 @@ func (issues IssueList) loadTotalTrackedTimes(ctx context.Context) (err error) { if err1 := rows.Close(); err1 != nil { return fmt.Errorf("IssueList.loadTotalTrackedTimes: Close: %w", err1) } - left -= limit - ids = ids[limit:] } for _, issue := range issues { diff --git a/models/issues/reaction.go b/models/issues/reaction.go index 9a277a8c12..21975c6b00 100644 --- a/models/issues/reaction.go +++ b/models/issues/reaction.go @@ -7,6 +7,7 @@ import ( "bytes" "context" "fmt" + "slices" "forgejo.org/models/db" repo_model "forgejo.org/models/repo" @@ -178,13 +179,12 @@ func FindReactions(ctx context.Context, opts FindReactionsOptions) (ReactionList func getReactionsForComments(ctx context.Context, issueID int64, commentIDs []int64) (map[int64]ReactionList, error) { reactions := make(map[int64]ReactionList, len(commentIDs)) - left := len(commentIDs) - for left > 0 { - limit := min(left, db.DefaultMaxInSize) + + for commentIDChunk := range slices.Chunk(commentIDs, db.DefaultMaxInSize) { rows, err := db.GetEngine(ctx). Where(builder.Eq{"issue_id": issueID}). In("reaction.`type`", setting.UI.Reactions). - In("comment_id", commentIDs[:limit]). + In("comment_id", commentIDChunk). Rows(&Reaction{}) if err != nil { return nil, err @@ -201,8 +201,6 @@ func getReactionsForComments(ctx context.Context, issueID int64, commentIDs []in } _ = rows.Close() - left -= limit - commentIDs = commentIDs[limit:] } return reactions, nil } diff --git a/tests/integration/db_query_test.go b/tests/integration/db_query_test.go new file mode 100644 index 0000000000..799d3219e8 --- /dev/null +++ b/tests/integration/db_query_test.go @@ -0,0 +1,64 @@ +// Copyright 2026 The Forgejo Authors. All rights reserved. 
+// SPDX-License-Identifier: GPL-3.0-or-later + +package integration + +import ( + "fmt" + "testing" + + actions_model "forgejo.org/models/actions" + "forgejo.org/models/db" + "forgejo.org/tests" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +// These are basically unit tests, but by running them in the integration test suite they are tested against all +// supported database types. + +func TestDatabaseDefaultMaxInSize(t *testing.T) { + defer tests.PrepareTestEnv(t)() + + // Ensure there are more than db.DefaultMaxInSize objects in a table: + targetCount := db.DefaultMaxInSize * 2 + for i := range targetCount { + _, err := actions_model.InsertVariable(t.Context(), 2, 2, fmt.Sprintf("VAR_%d", i), fmt.Sprintf("Value %d", i)) + require.NoError(t, err) + } + + t.Run("GetByIDs", func(t *testing.T) { + defer tests.PrintCurrentTest(t)() + + allActionVariables := make([]*actions_model.ActionVariable, 0, targetCount) + err := db.GetEngine(t.Context()).Find(&allActionVariables) + require.NoError(t, err) + + allIDs := make([]int64, len(allActionVariables)) + for i := range allActionVariables { + allIDs[i] = allActionVariables[i].ID + } + + allActionVariablesAgain, err := db.GetByIDs(t.Context(), "id", allIDs, &actions_model.ActionVariable{}) + require.NoError(t, err) + assert.Len(t, allActionVariablesAgain, len(allActionVariables)) + }) + + t.Run("GetByFieldIn", func(t *testing.T) { + defer tests.PrintCurrentTest(t)() + + allActionVariables := make([]*actions_model.ActionVariable, 0, targetCount) + err := db.GetEngine(t.Context()).Find(&allActionVariables) + require.NoError(t, err) + + allIDs := make([]int64, len(allActionVariables)) + for i := range allActionVariables { + allIDs[i] = allActionVariables[i].ID + } + + allActionVariablesAgain, err := db.GetByFieldIn(t.Context(), "id", allIDs, &actions_model.ActionVariable{}) + require.NoError(t, err) + assert.Len(t, allActionVariablesAgain, len(allActionVariables)) + }) +} From 
d4f7b536bc11515feb553de0dd29e9020f20879b Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Mon, 6 Apr 2026 02:44:05 +0200 Subject: [PATCH 26/82] [v15.0/forgejo] fix: missing syntax dialog rounded corners (#11987) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11945 Fixes #11299 Co-authored-by: grangelouis Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11987 Reviewed-by: 0ko <0ko@noreply.codeberg.org> Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- web_src/css/modules/dialog.css | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/web_src/css/modules/dialog.css b/web_src/css/modules/dialog.css index 897eb838cf..3f9a1a44bf 100644 --- a/web_src/css/modules/dialog.css +++ b/web_src/css/modules/dialog.css @@ -88,6 +88,11 @@ dialog .actions { padding: 1rem; } +dialog footer { + border-bottom-left-radius: var(--border-radius); + border-bottom-right-radius: var(--border-radius); +} + /* positive/negative action buttons */ dialog .actions .ui.button { display: inline-flex; From a4ec71067c817d9db0605b6a2f99ebbbcaf80580 Mon Sep 17 00:00:00 2001 From: Mathieu Fenniak Date: Mon, 6 Apr 2026 02:45:39 +0200 Subject: [PATCH 27/82] [v15.0/forgejo] chore(deps): bump xorm to v1.3.9-forgejo.10 (#11992) (#11996) Backport #11992. As I'm intending to fix the intermittently failing test (#11968), I'd like to backport that so we don't have an intermittent failing test in the LTS, and this is a requirement. Brings [deadlock error type](https://code.forgejo.org/xorm/xorm/pulls/95), which should allow fixing #11968. 
(cherry picked from commit 15b4c5efe81ba47b6415a9c188981813aec7f775) Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11992 Reviewed-by: Andreas Ahlenstorf Co-authored-by: Mathieu Fenniak Co-committed-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11996 Reviewed-by: Otto --- go.mod | 4 ++-- go.sum | 8 ++++---- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/go.mod b/go.mod index b37a7eb424..1bc6acec46 100644 --- a/go.mod +++ b/go.mod @@ -75,7 +75,7 @@ require ( github.com/klauspost/cpuid/v2 v2.2.11 github.com/markbates/goth v1.82.0 github.com/mattn/go-isatty v0.0.20 - github.com/mattn/go-sqlite3 v1.14.37 + github.com/mattn/go-sqlite3 v1.14.40 github.com/meilisearch/meilisearch-go v0.36.0 github.com/mholt/archives v0.1.5 github.com/microcosm-cc/bluemonday v1.0.27 @@ -274,4 +274,4 @@ replace github.com/gliderlabs/ssh => code.forgejo.org/forgejo/ssh v0.0.0-2024121 replace git.sr.ht/~mariusor/go-xsd-duration => code.forgejo.org/forgejo/go-xsd-duration v0.0.0-20220703122237-02e73435a078 -replace xorm.io/xorm v1.3.9 => code.forgejo.org/xorm/xorm v1.3.9-forgejo.9 +replace xorm.io/xorm v1.3.9 => code.forgejo.org/xorm/xorm v1.3.9-forgejo.10 diff --git a/go.sum b/go.sum index 76e13223a8..54ee91755c 100644 --- a/go.sum +++ b/go.sum @@ -42,8 +42,8 @@ code.forgejo.org/go-chi/captcha v1.0.2 h1:vyHDPXkpjDv8bLO9NqtWzZayzstD/WpJ5xwEkA code.forgejo.org/go-chi/captcha v1.0.2/go.mod h1:lxiPLcJ76UCZHoH31/Wbum4GUi2NgjfFZLrJkKv1lLE= code.forgejo.org/go-chi/session v1.0.3 h1:ByJ9c/UC0AU57hxiGl53TXh+NdBOBwK/bhZ9jyadEwE= code.forgejo.org/go-chi/session v1.0.3/go.mod h1:xzGtFrV/agCJoZCUhFDlqAr1he6BrAdqlaprKOB1W90= -code.forgejo.org/xorm/xorm v1.3.9-forgejo.9 h1:hzEXDa53opdp5nrGG4F6y8HzFzrGXd5GIvFyUHcvGmI= -code.forgejo.org/xorm/xorm v1.3.9-forgejo.9/go.mod h1:A7sFd3BFmRp20h6drSsCXgQRQdF8Vz8HuCSrzFS3m90= +code.forgejo.org/xorm/xorm v1.3.9-forgejo.10 h1:DCProZz7GP10ue7NoVr1vreuADhH9tEImYFye2+aDG8= +code.forgejo.org/xorm/xorm 
v1.3.9-forgejo.10/go.mod h1:ly5tUt9l3b+y7HdXDM1UucQXuS58ahNxB9tPM5/6LfM= code.gitea.io/sdk/gitea v0.21.0 h1:69n6oz6kEVHRo1+APQQyizkhrZrLsTLXey9142pfkD4= code.gitea.io/sdk/gitea v0.21.0/go.mod h1:tnBjVhuKJCn8ibdyyhvUyxrR1Ca2KHEoTWoukNhXQPA= code.superseriousbusiness.org/exif-terminator v0.11.1 h1:qnujLH4/Yk/CFtFMmtjozbdV6Ry5G3Q/E/mLlWm/gQI= @@ -520,8 +520,8 @@ github.com/mattn/go-runewidth v0.0.17 h1:78v8ZlW0bP43XfmAfPsdXcoNCelfMHsDmd/pkEN github.com/mattn/go-runewidth v0.0.17/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= github.com/mattn/go-shellwords v1.0.12 h1:M2zGm7EW6UQJvDeQxo4T51eKPurbeFbe8WtebGE2xrk= github.com/mattn/go-shellwords v1.0.12/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y= -github.com/mattn/go-sqlite3 v1.14.37 h1:3DOZp4cXis1cUIpCfXLtmlGolNLp2VEqhiB/PARNBIg= -github.com/mattn/go-sqlite3 v1.14.37/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y= +github.com/mattn/go-sqlite3 v1.14.40 h1:f7+saIsbq4EF86mUqe0uiecQOJYMOdfi5uATADmUG94= +github.com/mattn/go-sqlite3 v1.14.40/go.mod h1:pjEuOr8IwzLJP2MfGeTb0A35jauH+C2kbHKBr7yXKVQ= github.com/meilisearch/meilisearch-go v0.36.0 h1:N1etykTektXt5KPcSbhBO0d5Xx5NaKj4pJWEM7WA5dI= github.com/meilisearch/meilisearch-go v0.36.0/go.mod h1:HBfHzKMxcSbTOvqdfuRA/yf6Vk9IivcwKocWRuW7W78= github.com/mholt/acmez/v3 v3.1.2 h1:auob8J/0FhmdClQicvJvuDavgd5ezwLBfKuYmynhYzc= From 825c2a174491ec4dda4146f8c6f7b1cbff76d170 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Mon, 6 Apr 2026 03:39:53 +0200 Subject: [PATCH 28/82] [v15.0/forgejo] test: fix intermittent test failure in TestPackageDebianConcurrent (#11998) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/11997 Fixes #11968. Adds deadlocks to the package `RetryTx` operations, and bumps the attempt count to 3. 
Technically this affects production code, not just test code, but the resulting failure is only likely to occur in highly concurrent operations when uploading packages to the debian registry for the first time for a user, which is more of a test artifact than a production likelihood. Manually tested by modifying the `Makefile` to add the `-test.count=25` option to the test command. This failed consistently on my dev system before this change, failed consistently after the deadlock err was added, and then succeeded consistently (multiple runs) after both changes were combined, giving me confidence that the intermittent failure is squashed. ## Checklist The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org). ### Tests for Go changes - I added test coverage for Go changes... - [ ] in their respective `*_test.go` for unit tests. - [x] in the `tests/integration` directory if it involves interactions with a live Forgejo server. - Fixing a test failure, so no new tests added, but they already failed. - I ran... - [x] `make pr-go` before pushing ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [ ] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. 
- [x] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/11998 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- services/packages/debian/repository.go | 7 ++++--- services/packages/packages.go | 21 ++++++++++++--------- 2 files changed, 16 insertions(+), 12 deletions(-) diff --git a/services/packages/debian/repository.go b/services/packages/debian/repository.go index ab8c4fdc45..e84ae45ebe 100644 --- a/services/packages/debian/repository.go +++ b/services/packages/debian/repository.go @@ -318,9 +318,10 @@ func buildReleaseFiles(ctx context.Context, ownerID int64, repoVersion *packages // way. var priv string err = db.RetryTx(ctx, db.RetryConfig{ - // A single retry is sufficient the user/org's key pair would have been created by the first successful tx. - AttemptCount: 2, - ErrorIs: []error{xorm.ErrUniqueConstraintViolation}, + // A single retry is sufficient the user/org's key pair would have been created by the first successful tx; an + // additional retry may be necessary if a deadlock occurs with concurrent updates. + AttemptCount: 3, + ErrorIs: []error{xorm.ErrUniqueConstraintViolation, xorm.ErrDeadlock}, }, func(ctx context.Context) error { priv, _, err = GetOrCreateKeyPair(ctx, ownerID) return err diff --git a/services/packages/packages.go b/services/packages/packages.go index a1772cc1d9..c75b4559c8 100644 --- a/services/packages/packages.go +++ b/services/packages/packages.go @@ -95,9 +95,10 @@ func createPackageAndAddFile(ctx context.Context, pvci *PackageCreationInfo, pfc // causes such a recovery from error to panic. So, we retry the entire modification transaction if // ErrUniqueConstraintViolation is encountered. 
err := db.RetryTx(ctx, db.RetryConfig{ - // A single retry is sufficient as any package index that was concurrently modified should now be present: - AttemptCount: 2, - ErrorIs: []error{xorm.ErrUniqueConstraintViolation}, + // A single retry is sufficient as any package index that was concurrently modified should now be present; an + // additional retry may be necessary if a deadlock occurs with concurrent updates. + AttemptCount: 3, + ErrorIs: []error{xorm.ErrUniqueConstraintViolation, xorm.ErrDeadlock}, }, func(ctx context.Context) error { var err error var pb *packages_model.PackageBlob @@ -237,9 +238,10 @@ func addFileToPackageWrapper(ctx context.Context, fn func(ctx context.Context) ( // See comment in createPackageAndAddFile which explains why RetryTx is used with ErrUniqueConstraintViolation. err := db.RetryTx(ctx, db.RetryConfig{ - // A single retry is sufficient as any package index that was concurrently modified should now be present: - AttemptCount: 2, - ErrorIs: []error{xorm.ErrUniqueConstraintViolation}, + // A single retry is sufficient as any package index that was concurrently modified should now be present; an + // additional retry may be necessary if a deadlock occurs with concurrent updates. + AttemptCount: 3, + ErrorIs: []error{xorm.ErrUniqueConstraintViolation, xorm.ErrDeadlock}, }, func(ctx context.Context) error { var err error var blobCreated bool @@ -468,9 +470,10 @@ func GetOrCreateInternalPackageVersion(ctx context.Context, ownerID int64, packa // See comment in createPackageAndAddFile which explains why RetryTx is used with ErrUniqueConstraintViolation. 
return pv, db.RetryTx(ctx, db.RetryConfig{ - // A single retry is sufficient as any package index that was concurrently modified should now be present: - AttemptCount: 2, - ErrorIs: []error{xorm.ErrUniqueConstraintViolation}, + // A single retry is sufficient as any package index that was concurrently modified should now be present; an + // additional retry may be necessary if a deadlock occurs with concurrent updates. + AttemptCount: 3, + ErrorIs: []error{xorm.ErrUniqueConstraintViolation, xorm.ErrDeadlock}, }, func(ctx context.Context) error { p := &packages_model.Package{ OwnerID: ownerID, From 72c9acee107ac7cc0e253fb50606e9dd4c94a5d5 Mon Sep 17 00:00:00 2001 From: Renovate Bot Date: Wed, 8 Apr 2026 15:34:17 +0200 Subject: [PATCH 29/82] Update dependency go to v1.26.2 (v15.0/forgejo) (#12029) Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/12029 Reviewed-by: Michael Kriese Co-authored-by: Renovate Bot Co-committed-by: Renovate Bot --- go.mod | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/go.mod b/go.mod index 1bc6acec46..886854c713 100644 --- a/go.mod +++ b/go.mod @@ -2,7 +2,7 @@ module forgejo.org go 1.25.0 -toolchain go1.26.1 +toolchain go1.26.2 require ( code.forgejo.org/f3/gof3/v3 v3.11.15 From 437aa7f4a169c12878e7b53e7fdda02c288148c5 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Wed, 8 Apr 2026 17:08:17 +0200 Subject: [PATCH 30/82] [v15.0/forgejo] fix: prevent actions workflows from generating OIDC tokens if not authorized in workflow (#12038) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/12030 When using Forgejo's `enable-openid-connect: true`, a URL is generated into the actions under `$ACTIONS_ID_TOKEN_REQUEST_URL` that can be used to generate a JWT for accessing third-party resources authenticated as the action executing in this server on this repo. 
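A token endpoint like this typically authorizes the caller by checking a scope claim carried in the authorization token. The core of the check this patch adds — `strings.Split` on the space-separated `scp` claim plus `slices.Contains` — can be sketched as follows (the non-`generate_id_token` claim values are made up for illustration):

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// hasScope reports whether a space-separated "scp" claim grants the
// required scope, as the patched /idtoken endpoint now requires.
func hasScope(scpClaim, required string) bool {
	return slices.Contains(strings.Split(scpClaim, " "), required)
}

func main() {
	// The required scope is derived from the task's run and job IDs.
	required := fmt.Sprintf("generate_id_token:%d:%d", 7, 42)
	fmt.Println(hasScope("runner:read generate_id_token:7:42", required)) // true
	fmt.Println(hasScope("runner:read", required))                        // false
}
```

Because the comparison is an exact match against whole space-separated entries, a token scoped to a different run or job cannot satisfy the check by prefix.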
However, the endpoint of that URL (`.../idtoken`) was unintentionally missing a `return` after an internal server error, and was missing a check that the action actually had `enable-openid-connect: true` on it. As a result, it was possible to generate a JWT for accessing third-party resources from an action that wasn't expected to be generating JWTs.

In terms of real-world vulnerability, the most likely risk is that the JWT could be generated from a forked pull request. By not using the `$ACTIONS_ID_TOKEN_REQUEST_URL` and instead going directly to the `.../idtoken` endpoint, and parsing a generated JWT response that will be mixed with an error response, it's possible to retrieve a JWT in a forked pull request. It would require a slight misconfiguration on a third-party system to allow that JWT access, but it's a plausible risk.

As this is a feature in Forgejo 15 that hasn't been released, it will be fixed in public.

## Checklist

The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org).

### Tests for Go changes (can be removed for JavaScript changes)

- I added test coverage for Go changes...
  - [ ] in their respective `*_test.go` for unit tests.
  - [x] in the `tests/integration` directory if it involves interactions with a live Forgejo server.
- I ran...
  - [x] `make pr-go` before pushing

### Documentation

- [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change.
- [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [ ] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [x] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. - Feature is not yet released. Co-authored-by: Mathieu Fenniak Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/12038 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- routers/api/actions/id_token.go | 9 +++++++++ tests/integration/api_actions_id_token_test.go | 9 +++++++++ 2 files changed, 18 insertions(+) diff --git a/routers/api/actions/id_token.go b/routers/api/actions/id_token.go index 5eacc77fc9..ab9b2c8d22 100644 --- a/routers/api/actions/id_token.go +++ b/routers/api/actions/id_token.go @@ -6,6 +6,7 @@ package actions import ( "fmt" "net/http" + "slices" "strings" "time" @@ -72,6 +73,7 @@ func IDTokenContexter() func(next http.Handler) http.Handler { if err != nil { log.Error("Error runner api parsing custom claims: %v", err) ctx.Error(http.StatusInternalServerError, "Error runner api parsing custom claims") + return } task, err := actions.GetTaskByID(req.Context(), authorizationTokenClaims.TaskID) @@ -99,6 +101,13 @@ func IDTokenContexter() func(next http.Handler) http.Handler { return } + generateIDTokenScp := fmt.Sprintf("generate_id_token:%d:%d", task.Job.RunID, task.Job.ID) + scp := strings.Split(authorizationTokenClaims.Scp, " ") + if !slices.Contains(scp, generateIDTokenScp) { + ctx.Error(http.StatusForbidden, "missing scp generate_id_token") + return + } + audience := req.URL.Query().Get("audience") if audience == "" { // Default to organization that owns the repo if no audience is provided diff --git a/tests/integration/api_actions_id_token_test.go 
b/tests/integration/api_actions_id_token_test.go index d5c7301ad1..53a0a5b269 100644 --- a/tests/integration/api_actions_id_token_test.go +++ b/tests/integration/api_actions_id_token_test.go @@ -47,6 +47,8 @@ func TestActionsIDToken(t *testing.T) { token, err := actions_service.CreateAuthorizationToken(task, gitCtx, true) require.NoError(t, err) + tokenWithoutOIDCAccess, err := actions_service.CreateAuthorizationToken(task, gitCtx, false) + require.NoError(t, err) // get JWKs information req := NewRequest(t, "GET", "/api/actions/.well-known/keys") @@ -118,6 +120,13 @@ func TestActionsIDToken(t *testing.T) { doAssertions("testingAud", claims) }) + t.Run("with token that doesn't support OIDC", func(t *testing.T) { + req = NewRequest(t, "GET", "/api/actions/_apis/pipelines/workflows/792/idtoken?placeholder=true").AddTokenAuth(tokenWithoutOIDCAccess) + resp = MakeRequest(t, req, http.StatusInternalServerError) + assert.Contains(t, resp.Body.String(), "Error runner api parsing custom claims") + assert.NotContains(t, resp.Body.String(), "value") // must not leak an actual `getTokenResponse` + }) + t.Run("with no auth header", func(t *testing.T) { req = NewRequest(t, "GET", "/api/actions/_apis/pipelines/workflows/792/idtoken?placeholder=true&audience=testingAud") resp = MakeRequest(t, req, http.StatusUnauthorized) From 3e17afc2663ae8ecd20600d182f1e5520aad4e78 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Wed, 8 Apr 2026 21:09:40 +0200 Subject: [PATCH 31/82] [v15.0/forgejo] fix(doctor): remove broken mergebase check (#12040) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/12023 Fixes https://codeberg.org/forgejo/forgejo/issues/6163 Fixes https://codeberg.org/forgejo/forgejo/issues/3343 The merge base doctor check & fix was broken and could introduce irreversible "fixes" to wrong merge bases for PRs using the `fast-forward` and `rebase-and-merge` strategies. 
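As a minimal illustration of why a recomputed merge base is wrong after a fast-forward merge (a demo in a throwaway repository, not Forgejo code): once the base branch fast-forwards onto the PR head, `git merge-base` returns the PR head itself rather than the commit the PR originally branched from.

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git checkout -q -b main
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m base
fork_point=$(git rev-parse HEAD)   # the PR's true original merge base
git checkout -q -b feature
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m work
git checkout -q main
git merge -q --ff-only feature     # fast-forward: no merge commit recorded
# Recomputing now yields the feature tip, not $fork_point:
git merge-base main feature
git rev-parse feature
```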
The mergebase fix was originally introduced in a migration [0] to fix an existing issue [1] in the merge code in 2020. It was later added as a doctor command without explanation [2].

We decided to remove this check, as there is no apparent reason for it to still be necessary, nor for any PR merge base state to be out of sync with the current implementation. It does more harm to keep the code in, and there is no way to fix `fast-forward` and `rebase-and-merge` PRs, due to their merge implementation.

`fast-forward`: The git state inherently cannot reconstruct a merge base in this scenario by design.

`rebase-and-merge`: The PR is rebased on a temporary repository clone and thus might receive a different merge base, depending on how far the target branch is ahead.

[0]: https://codeberg.org/forgejo/forgejo/commit/4a2b76d9c8e00769d81eed2f149b6da20cc2d339
[1]: https://codeberg.org/forgejo/forgejo/commit/4a2b76d9c8e00769d81eed2f149b6da20cc2d339
[2]: https://codeberg.org/forgejo/forgejo/commit/d26885e2bf0a53af1c5c97c9d062f329250d8d20#diff-84d6d60112991392d6ba2cae4cd919fb3ee8afb8

## Checklist

The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org).

### Documentation

- [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change.
- [x] I did not document these changes and I do not expect someone else to do it.
### Release notes - [x] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [ ] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. *The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. Co-authored-by: Saibotk Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/12040 Reviewed-by: Mathieu Fenniak Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- services/doctor/mergebase.go | 114 ----------------------------------- 1 file changed, 114 deletions(-) delete mode 100644 services/doctor/mergebase.go diff --git a/services/doctor/mergebase.go b/services/doctor/mergebase.go deleted file mode 100644 index bebde30bee..0000000000 --- a/services/doctor/mergebase.go +++ /dev/null @@ -1,114 +0,0 @@ -// Copyright 2020 The Gitea Authors. All rights reserved. 
-// SPDX-License-Identifier: MIT - -package doctor - -import ( - "context" - "fmt" - "strings" - - "forgejo.org/models/db" - issues_model "forgejo.org/models/issues" - repo_model "forgejo.org/models/repo" - "forgejo.org/modules/git" - "forgejo.org/modules/log" - - "xorm.io/builder" -) - -func iteratePRs(ctx context.Context, repo *repo_model.Repository, each func(*repo_model.Repository, *issues_model.PullRequest) error) error { - return db.Iterate( - ctx, - builder.Eq{"base_repo_id": repo.ID}, - func(ctx context.Context, bean *issues_model.PullRequest) error { - return each(repo, bean) - }, - ) -} - -func checkPRMergeBase(ctx context.Context, logger log.Logger, autofix bool) error { - numRepos := 0 - numPRs := 0 - numPRsUpdated := 0 - err := iterateRepositories(ctx, func(repo *repo_model.Repository) error { - numRepos++ - return iteratePRs(ctx, repo, func(repo *repo_model.Repository, pr *issues_model.PullRequest) error { - numPRs++ - pr.BaseRepo = repo - repoPath := repo.RepoPath() - - oldMergeBase := pr.MergeBase - - if !pr.HasMerged { - var err error - pr.MergeBase, _, err = git.NewCommand(ctx, "merge-base").AddDashesAndList(pr.BaseBranch, pr.GetGitRefName()).RunStdString(&git.RunOpts{Dir: repoPath}) - if err != nil { - var err2 error - pr.MergeBase, _, err2 = git.NewCommand(ctx, "rev-parse").AddDynamicArguments(git.BranchPrefix + pr.BaseBranch).RunStdString(&git.RunOpts{Dir: repoPath}) - if err2 != nil { - logger.Warn("Unable to get merge base for PR ID %d, #%d onto %s in %s/%s. Error: %v & %v", pr.ID, pr.Index, pr.BaseBranch, pr.BaseRepo.OwnerName, pr.BaseRepo.Name, err, err2) - return nil - } - } - } else { - parentsString, _, err := git.NewCommand(ctx, "rev-list", "--parents", "-n", "1").AddDynamicArguments(pr.MergedCommitID).RunStdString(&git.RunOpts{Dir: repoPath}) - if err != nil { - logger.Warn("Unable to get parents for merged PR ID %d, #%d onto %s in %s/%s. 
Error: %v", pr.ID, pr.Index, pr.BaseBranch, pr.BaseRepo.OwnerName, pr.BaseRepo.Name, err) - return nil - } - parents := strings.Split(strings.TrimSpace(parentsString), " ") - if len(parents) < 2 { - return nil - } - - refs := append([]string{}, parents[1:]...) - refs = append(refs, pr.GetGitRefName()) - cmd := git.NewCommand(ctx, "merge-base").AddDashesAndList(refs...) - pr.MergeBase, _, err = cmd.RunStdString(&git.RunOpts{Dir: repoPath}) - if err != nil { - logger.Warn("Unable to get merge base for merged PR ID %d, #%d onto %s in %s/%s. Error: %v", pr.ID, pr.Index, pr.BaseBranch, pr.BaseRepo.OwnerName, pr.BaseRepo.Name, err) - return nil - } - } - pr.MergeBase = strings.TrimSpace(pr.MergeBase) - if pr.MergeBase != oldMergeBase { - if autofix { - if err := pr.UpdateCols(ctx, "merge_base"); err != nil { - logger.Critical("Failed to update merge_base. ERROR: %v", err) - return fmt.Errorf("Failed to update merge_base. ERROR: %w", err) - } - } else { - logger.Info("#%d onto %s in %s/%s: MergeBase should be %s but is %s", pr.Index, pr.BaseBranch, pr.BaseRepo.OwnerName, pr.BaseRepo.Name, oldMergeBase, pr.MergeBase) - } - numPRsUpdated++ - } - return nil - }) - }) - - if autofix { - logger.Info("%d PR mergebases updated of %d PRs total in %d repos", numPRsUpdated, numPRs, numRepos) - } else { - if numPRsUpdated == 0 { - logger.Info("All %d PRs in %d repos have a correct mergebase", numPRs, numRepos) - } else if err == nil { - logger.Critical("%d PRs with incorrect mergebases of %d PRs total in %d repos", numPRsUpdated, numPRs, numRepos) - return fmt.Errorf("%d PRs with incorrect mergebases of %d PRs total in %d repos", numPRsUpdated, numPRs, numRepos) - } else { - logger.Warn("%d PRs with incorrect mergebases of %d PRs total in %d repos", numPRsUpdated, numPRs, numRepos) - } - } - - return err -} - -func init() { - Register(&Check{ - Title: "Recalculate merge bases", - Name: "recalculate-merge-bases", - IsDefault: false, - Run: checkPRMergeBase, - Priority: 7, - }) -} 
From 50a30eb54f4a8bbbc279088a7f31f9fa3cece2b5 Mon Sep 17 00:00:00 2001 From: forgejo-backport-action Date: Wed, 8 Apr 2026 21:12:56 +0200 Subject: [PATCH 32/82] [v15.0/forgejo] fix: incorrect identification of outdated run attempts (#12044) **Backport:** https://codeberg.org/forgejo/forgejo/pulls/12021

Since https://codeberg.org/forgejo/forgejo/pulls/11750, the attempt number of a Forgejo Actions job is set eagerly. When a job is ultimately not run, for example because its `needs` weren't satisfied, the completed attempts end up with discontinuous attempt numbers, which the component for viewing action logs could not handle. This has been rectified by actually determining the number of the last attempt.

Resolves https://codeberg.org/forgejo/forgejo/issues/11994.

## Checklist

The [contributor guide](https://forgejo.org/docs/next/contributor/) contains information that will be helpful to first time contributors. All work and communication must conform to Forgejo's [AI Agreement](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md). There also are a few [conditions for merging Pull Requests in Forgejo repositories](https://codeberg.org/forgejo/governance/src/branch/main/PullRequestsAgreement.md). You are also welcome to join the [Forgejo development chatroom](https://matrix.to/#/#forgejo-development:matrix.org).

### Tests for Go changes (can be removed for JavaScript changes)

- I added test coverage for Go changes...
  - [ ] in their respective `*_test.go` for unit tests.
  - [ ] in the `tests/integration` directory if it involves interactions with a live Forgejo server.
- I ran...
  - [x] `make pr-go` before pushing

### Tests for JavaScript changes (can be removed for Go changes)

- I added test coverage for JavaScript changes...
  - [x] in `web_src/js/*.test.js` if it can be unit tested.
- [ ] in `tests/e2e/*.test.e2e.js` if it requires interactions with a live Forgejo server (see also the [developer guide for JavaScript testing](https://codeberg.org/forgejo/forgejo/src/branch/forgejo/tests/e2e/README.md#end-to-end-tests)). ### Documentation - [ ] I created a pull request [to the documentation](https://codeberg.org/forgejo/docs) to explain to Forgejo users how to use this change. - [x] I did not document these changes and I do not expect someone else to do it. ### Release notes - [ ] This change will be noticed by a Forgejo user or admin (feature, bug fix, performance, etc.). I suggest to include a release note for this change. - [x] This change is not visible to a Forgejo user or admin (refactor, dependency upgrade, etc.). I think there is no need to add a release note for this change. *The decision if the pull request will be shown in the release notes is up to the mergers / release team.* The content of the `release-notes/.md` file will serve as the basis for the release notes. If the file does not exist, the title of the pull request will be used instead. 
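The core of the fix — comparing against the newest attempt's actual number rather than the count of attempts — can be sketched outside the Vue component like this (the `Attempt` type and helper are hypothetical illustrations, not Forgejo code):

```go
package main

import "fmt"

// Attempt mirrors the shape the view receives: attempts arrive ordered
// newest-first, and their numbers may be discontinuous when a job was
// skipped (e.g. because its `needs` were not satisfied).
type Attempt struct {
	Number int
}

// mostRecentAttemptNumber returns the number of the newest attempt,
// or 0 when the list is empty.
func mostRecentAttemptNumber(attempts []Attempt) int {
	if len(attempts) == 0 {
		return 0
	}
	return attempts[0].Number
}

func main() {
	// Attempt 2 never ran, so the numbers jump from 1 to 3.
	attempts := []Attempt{{Number: 3}, {Number: 1}}

	// The previous logic compared the viewed attempt against
	// len(attempts) == 2 and would wrongly flag attempt 3 as
	// out of date; the fix compares against the newest number.
	fmt.Println(mostRecentAttemptNumber(attempts)) // prints 3
}
```

The same reasoning is what the diff below applies to `currentlyViewingMostRecentAttempt()`: it reads `allAttempts[0].number` instead of `allAttempts.length`.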
Co-authored-by: Andreas Ahlenstorf Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/12044 Reviewed-by: Andreas Ahlenstorf Co-authored-by: forgejo-backport-action Co-committed-by: forgejo-backport-action --- web_src/js/components/RepoActionView.test.js | 13 ++++++------ web_src/js/components/RepoActionView.vue | 22 +++++++++++--------- 2 files changed, 19 insertions(+), 16 deletions(-) diff --git a/web_src/js/components/RepoActionView.test.js b/web_src/js/components/RepoActionView.test.js index 12c201916f..fb6af25dd0 100644 --- a/web_src/js/components/RepoActionView.test.js +++ b/web_src/js/components/RepoActionView.test.js @@ -183,7 +183,8 @@ function configureForMultipleAttemptTests({viewHistorical}) { }, ], allAttempts: [ - {number: 2, time_since_started_html: 'yesterday', status: 'success', status_diagnostics: ['Success']}, + {number: 3, time_since_started_html: 'yesterday', status: 'success', status_diagnostics: ['Success']}, + // Omit one attempt to simulate the case when a job isn't run because a `needs:` failed. {number: 1, time_since_started_html: 'two days ago', status: 'failure', status_diagnostics: ['Failure']}, ], }, @@ -218,7 +219,7 @@ function configureForMultipleAttemptTests({viewHistorical}) { props: { ...defaultTestProps, runIndex: '123', - attemptNumber: viewHistorical ? '1' : '2', + attemptNumber: viewHistorical ? '1' : '3', actionsURL: toAbsoluteUrl('/user1/repo2/actions'), initialJobData: {...minimalInitialJobData, state: myJobState}, }, @@ -243,7 +244,7 @@ test('display baseline with most-recent attempt', async () => { // Attempt selector dropdown... 
expect(wrapper.findAll('.job-attempt-dropdown').length).toEqual(1); expect(wrapper.findAll('.job-attempt-dropdown .svg.octicon-check-circle-fill.text.green').length).toEqual(1); - expect(wrapper.get('.job-attempt-dropdown .ui.dropdown').text()).toEqual('Run attempt 2 yesterday'); + expect(wrapper.get('.job-attempt-dropdown .ui.dropdown').text()).toEqual('Run attempt 3 yesterday'); // Attempt status expect(wrapper.get('.job-info-header h3').text()).toEqual('test'); @@ -302,7 +303,7 @@ test('historical attempt dropdown interactions', async () => { const attemptsExpanded = () => { expect(wrapper.findAll('.job-attempt-dropdown .action-job-menu').length).toEqual(1); expect(wrapper.get('.job-attempt-dropdown .action-job-menu').isVisible()).toBe(true); - expect(wrapper.findAll('.job-attempt-dropdown .action-job-menu a').filter((a) => a.text() === 'Run attempt 2 yesterday').length).toEqual(1); + expect(wrapper.findAll('.job-attempt-dropdown .action-job-menu a').filter((a) => a.text() === 'Run attempt 3 yesterday').length).toEqual(1); expect(wrapper.findAll('.job-attempt-dropdown .action-job-menu a').filter((a) => a.text() === 'Run attempt 1 two days ago').length).toEqual(1); }; attemptsExpanded(); @@ -332,8 +333,8 @@ test('historical attempt dropdown interactions', async () => { attemptsExpanded(); // Click on the other option in the dropdown to verify it navigates to the target attempt - wrapper.findAll('.job-attempt-dropdown .action-job-menu a').find((a) => a.text() === 'Run attempt 2 yesterday').trigger('click'); - expect(window.location.href).toEqual(toAbsoluteUrl('/user1/repo2/actions/runs/123/jobs/1/attempt/2')); + wrapper.findAll('.job-attempt-dropdown .action-job-menu a').find((a) => a.text() === 'Run attempt 3 yesterday').trigger('click'); + expect(window.location.href).toEqual(toAbsoluteUrl('/user1/repo2/actions/runs/123/jobs/1/attempt/3')); }); test('run approval interaction', async () => { diff --git a/web_src/js/components/RepoActionView.vue 
b/web_src/js/components/RepoActionView.vue index e42ae52fe7..5c00eecc6b 100644 --- a/web_src/js/components/RepoActionView.vue +++ b/web_src/js/components/RepoActionView.vue @@ -124,10 +124,10 @@ export default { ], // All available attempts for the job we're currently viewing. // - // initial value here is configured so that currentingViewingMostRecentAttempt() -> true on the default `data()`, so that the + // initial value here is configured so that currentlyViewingMostRecentAttempt() -> true on the default `data()`, so that the // initial render (before `loadJob`'s first execution is complete) doesn't display "You are viewing an // out-of-date run..." - allAttempts: new Array(parseInt(this.attemptNumber)).fill({index: 0, time_since_started_html: '', status: 'success', status_diagnostics: []}), + allAttempts: [], }, }; }, @@ -138,19 +138,19 @@ export default { }, displayOtherJobs() { - return this.currentingViewingMostRecentAttempt; + return this.currentlyViewingMostRecentAttempt; }, canApprove() { - return this.currentingViewingMostRecentAttempt && this.run.canApprove; + return this.currentlyViewingMostRecentAttempt && this.run.canApprove; }, canCancel() { - return this.currentingViewingMostRecentAttempt && this.run.canCancel; + return this.currentlyViewingMostRecentAttempt && this.run.canCancel; }, canRerun() { - return this.currentingViewingMostRecentAttempt && this.run.canRerun; + return this.currentlyViewingMostRecentAttempt && this.run.canRerun; }, viewingAttemptNumber() { @@ -167,11 +167,13 @@ export default { return attempt || fallback; }, - currentingViewingMostRecentAttempt() { - if (!this.currentJob.allAttempts) { + currentlyViewingMostRecentAttempt() { + if (!this.currentJob.allAttempts || this.currentJob.allAttempts.length === 0) { return true; } - return this.viewingAttemptNumber === this.currentJob.allAttempts.length; + + const mostRecentAttemptNumber = this.currentJob.allAttempts[0].number; + return this.viewingAttemptNumber === 
mostRecentAttemptNumber; }, displayGearDropdown() { @@ -452,7 +454,7 @@ export default {