Compare commits

..

44 commits

Author SHA1 Message Date
784c53032e gitea.nulo.in: Use NPM cache
2023-01-18 15:15:27 -03:00
aa4cf41247 gitea.nulo.in: Use APK cache 2023-01-18 15:15:27 -03:00
a677d56548 gitea.nulo.in: Use script to upgrade 2023-01-18 15:15:27 -03:00
5c23058f0a gitea.nulo.in: Use Alpine 3.16 2023-01-18 15:15:27 -03:00
ab14c47087 Add .woodpecker.yml for gitea.nulo.in 2023-01-18 15:15:27 -03:00
John Olheiser
6992e72647
chore: changelog 1.18.1 (#22471)
Signed-off-by: jolheiser <john.olheiser@gmail.com>
2023-01-17 10:40:47 -06:00
KN4CK3R
1bbf490926
Update github.com/zeripath/zapx/v15 (#22485)
Fixes #22481

_Originally posted by @zeripath in
https://github.com/go-gitea/gitea/issues/22481#issuecomment-1385188703_
2023-01-17 14:51:24 +00:00
Yarden Shoham
45bdeac730
Fix pull request API field closed_at always being null (#22482) (#22483)
Backport #22482

Fix #22480
2023-01-17 11:41:43 +00:00
Haruo Kinoshita
a32700d0fd
Fix migration from GitBucket (#22465)
Migration from GitBucket does not work because accessing the "Reviews" API
on GitBucket returns a 404 response.
This PR makes the following changes:
1. Stop accessing the Reviews API while migrating from GitBucket.
2. Add support for custom URLs (e.g.
`http://example.com/gitbucket/owner/repository`)
3. Accept git checkout URLs
(`http://example.com/git/owner/repository.git`)

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-01-17 16:57:17 +08:00
John Olheiser
a9400ba7a3
Fix container blob mount (#22226) (#22476)
Backport #22226

Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
2023-01-17 14:50:45 +08:00
zeripath
9a6d78eaa8
Fix error when calculate the repository size (#22392) (#22474)
Backport #22392

Fix #22386

`GetDirectorySize` was moved and renamed to `getDirectorySize` because it has
become a specialized function that should not live in `util`.

Co-authored-by: Jason Song <i@wolfogre.com>
2023-01-16 16:07:06 -06:00
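For reference, a condensed, self-contained sketch of the directory walker this backport introduces; the `getDirectorySize` and `notRegularFileMode` names match the diff further down, but treat this as illustrative rather than the exact source:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// Modes that mark an entry as something other than a regular file.
const notRegularFileMode = os.ModeSymlink | os.ModeNamedPipe | os.ModeSocket | os.ModeDevice | os.ModeCharDevice | os.ModeIrregular

// getDirectorySize sums the sizes of regular files under root, ignoring
// files that disappear while the walk is in progress.
func getDirectorySize(root string) (int64, error) {
	var size int64
	err := filepath.WalkDir(root, func(_ string, d fs.DirEntry, err error) error {
		if err != nil {
			if os.IsNotExist(err) {
				return nil // the file may have been deleted during traversal
			}
			return err
		}
		if d.IsDir() {
			return nil
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		if info.Mode()&notRegularFileMode == 0 {
			size += info.Size()
		}
		return nil
	})
	return size, err
}

func main() {
	size, err := getDirectorySize(".")
	fmt.Println(size, err)
}
```

Non-regular files (symlinks, pipes, sockets, devices) are skipped so that the reported size reflects actual repository data on disk.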
zeripath
af8151cbb9
Fix Operator does not exist bug on explore page with ONLY_SHOW_RELEVANT_REPOS (#22454) (#22472)
Backport #22454

There is a mistake in the code for SearchRepositoryCondition where it
tests `topics` as a string. This is incorrect for Postgres, where `topics` is
stored as json, so `topics` needs to be cast to text for the comparison to
work. (For some reason JSON_ARRAY_LENGTH does not work, so I have taken
the simplest solution of casting to text and doing a string comparison.)

Ref https://github.com/go-gitea/gitea/pull/21962#issuecomment-1379584057

Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-01-16 14:17:22 -06:00
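A minimal sketch of the condition change using the xorm `builder` package the project already depends on; the `usePostgreSQL` variable here stands in for `setting.Database.UsePostgreSQL`, and the real code lives in `SearchRepositoryCondition` (see the diff further down):

```go
package main

import (
	"fmt"

	"xorm.io/builder"
)

func main() {
	usePostgreSQL := true // stands in for setting.Database.UsePostgreSQL

	var topicCond builder.Cond
	if usePostgreSQL {
		// Postgres stores topics as json, so cast to text before comparing.
		topicCond = builder.And(builder.NotNull{"topics"}, builder.Neq{"(topics)::text": "[]"})
	} else {
		topicCond = builder.And(builder.Neq{"topics": "null"}, builder.Neq{"topics": "[]"})
	}

	sql, args, err := builder.ToSQL(topicCond)
	fmt.Println(sql, args, err)
}
```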
zeripath
ee37edc465
Fix environments for KaTeX and error reporting (#22453) (#22473)
Backport #22453

In #22447 it was noticed that display environments were not working
correctly. This was due to the setting displayMode not being set.

Further it was noticed that the error was not being displayed correctly.

This PR fixes both of these issues by forcibly setting the displayMode
setting and by correcting an error in displayError.

Fix #22447

Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-01-16 13:34:50 -06:00
wxiaoguang
29bbfcc118
Remove the netgo tag for Windows build (#22467) (#22468)
Backport #22467

Fix #22370 and more.

Before Go 1.19, the `netgo` tag did nothing for Windows builds.

But Go 1.19 rewrote the net package code for Windows DNS, and it has a
bug:

* https://github.com/golang/go/issues/57757

This PR just removes the `netgo` tag from the Windows build, so that Gitea
for Windows keeps the old DNS behavior.
2023-01-16 13:05:12 +00:00
zeripath
f430050d24
Fix leaving organization bug on user settings -> orgs (#21983) (#22438)
Backport #21983

Fix #21772

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>

Co-authored-by: 花墨 <shanee@live.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
2023-01-16 01:29:27 +02:00
Jimmy Praet
510c811574
Restore previous official review when an official review is deleted (#22449) (#22460)
Backport #22449

Co-authored-by: Lauris BH <lauris@nix.lv>
2023-01-15 20:47:54 +01:00
zeripath
f93522ddae
Prevent panic on looking at api "git" endpoints for empty repos (#22457) (#22458)
Backport #22457

The API endpoints for "git" can panic if they are called on an empty
repo. We can simply allow empty repos for these endpoints without worry
as they should just work.

Fix #22452

Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-01-15 14:35:56 +00:00
zeripath
10c9f96a1e
Fixed colour transparency regex matching in project board sorting (#22092) (#22437)
Backport #22092

As described in the linked issue (#22091), semi-transparent UI elements
would cause JS errors because the CSS `backgroundColor` property was being
matched against the pattern `^rgb\((\d+),\s*(\d+),\s*(\d+)\)$`, which does
not take the alpha channel into account.

I changed the pattern to `^rgba?\((\d+),\s*(\d+),\s*(\d+).*\)$`. This
new pattern accepts both `rgb` and `rgba` tuples, and ignores the alpha
channel (that little `.*` at the end) from the sorting criteria. The
reason why I chose to ignore alpha is because when it comes to kanban
colour sorting, only the hue is important; the order of the panels
should stay the same, even if some of them are transparent.

Alternative solutions were discussed in the bug report and are included
here for completeness:
1. Change the regex from `^rgb\((\d+),\s*(\d+),\s*(\d+)\)$` to
`^rgba?\((\d+),\s*(\d+),\s*(\d+)(,\s*(\d+(\.\d+)?))?\)$` (the alpha channel is
a float or NaN in the 5th group) and include the alpha channel in the
sorting criteria.
2. Rethink why colours are being read out of the CSS in the first
place, then rework this sorting procedure.

Fix #22091

Co-authored-by: MisterCavespider <deler.urist@tutanota.de>
2023-01-15 12:05:04 +00:00
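The sorting code itself is JavaScript; the Go snippet below is only an illustration of how the old and new patterns behave on opaque versus semi-transparent colours:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	oldPattern := regexp.MustCompile(`^rgb\((\d+),\s*(\d+),\s*(\d+)\)$`)
	newPattern := regexp.MustCompile(`^rgba?\((\d+),\s*(\d+),\s*(\d+).*\)$`)

	for _, c := range []string{"rgb(255, 0, 0)", "rgba(255, 0, 0, 0.5)"} {
		// The old pattern rejects rgba() values, which is what broke the sorting.
		fmt.Printf("%-22s old=%-5v new=%v\n", c, oldPattern.MatchString(c), newPattern.MatchString(c))
	}
}
```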
Jonathan Tran
7b60d47c3c
Log STDERR of external renderer when it fails (#22442) (#22444)
Backport #22442.
2023-01-14 23:14:27 +00:00
zeripath
265d438a6e
fix: PR status layout on mobile (#21547) (#22441)
Backport #21547

This PR fixes the layout of the PR status list on mobile. For longer
status context names, or on very small screens, the text would overflow
and push the "Details" and "Required" badges out of the container.

Before:

![Screen Shot 2022-10-22 at 12 27 46](https://user-images.githubusercontent.com/13721712/197335454-e4decf09-4778-43e8-be88-9188fabbec23.png)

After:

![Screen Shot 2022-10-22 at 12 53 24](https://user-images.githubusercontent.com/13721712/197335449-2c731a6c-7fd6-4b97-be0e-704a99fd3d32.png)

Co-authored-by: kolaente <k@knt.li>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-01-14 16:56:44 +08:00
zeripath
93e907de41
Fix wechatwork webhook sends empty content in PR review (#21762) (#22440)
Backport #21762

Wechatwork webhook is sending the following string for pull request
reviews:

``` markdown
>
```

This commit fixes this problem.

Co-authored-by: Jim Kirisame <jim@lotlab.org>
2023-01-14 11:37:18 +08:00
zeripath
f3034b1fd9
Remove duplicate "Actions" label in mobile view (#21974) (#22439)
Backport #21974

Closes #21973.

The "Actions" button on the commit view page is labelled twice in mobile
view. No other buttons on the page have a `mobile-only` extra label, so
this PR removes it.

Before:


![before](https://user-images.githubusercontent.com/6496999/204540002-75baa08a-6c06-4b39-847b-34272e09d71e.PNG)

After:


![after](https://user-images.githubusercontent.com/6496999/204539991-a0607765-d5e2-4b1a-84c9-a3e16cbc674e.PNG)

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: Mark Ormesher <me@markormesher.co.uk>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
2023-01-13 23:23:39 +00:00
zeripath
d0c74dd2d2
Prepend refs/heads/ to issue template refs (#20461) (#22427)
Backport #20461

Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-01-13 16:33:35 -06:00
zeripath
2f91a12143
Continue GCing other repos on error in one repo (#22422) (#22425)
Backport #22422

The current code propagates all errors up to the iteration step meaning
that a single malformed repo will prevent GC of other repos.

This PR simply stops that propagation.

Fix #21605

Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-01-13 15:29:16 -06:00
zeripath
3ad62127df
Correctly handle select on multiple channels in Queues (#22146) (#22428)
Backport #22146

There are a few places in FlushQueueWithContext which make an incorrect
assumption about how `select` on multiple channels works.

The problem is best expressed by looking at the following example:

```go
package main

import "fmt"

func main() {
    closedChan := make(chan struct{})
    close(closedChan)
    toClose := make(chan struct{})
    count := 0

    for {
        select {
        case <-closedChan:
            count++
            fmt.Println(count)
            if count == 2 {
                close(toClose)
            }
        case <-toClose:
            return
        }
    }
}
```

This PR double-checks that the contexts are closed outside of checking
if there is data in the dataChan. It also rationalises the WorkerPool
FlushWithContext because the previous implementation failed to handle
pausing correctly. This will probably fix the underlying problem in #22145.

Fix #22145

Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-01-13 20:42:42 +00:00
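A minimal sketch of the precheck pattern applied to the toy program above; the backported FlushWithContext does the same thing for the paused and context channels before reading from dataChan:

```go
package main

import "fmt"

func main() {
	closedChan := make(chan struct{})
	close(closedChan)
	toClose := make(chan struct{})
	count := 0

	for {
		// Precheck: select picks a ready case at random, so without this
		// the loop could keep handling data after toClose has been closed.
		select {
		case <-toClose:
			return
		default:
		}

		select {
		case <-closedChan:
			count++
			fmt.Println(count)
			if count == 2 {
				close(toClose)
			}
		case <-toClose:
			return
		}
	}
}
```

With the precheck in place the program reliably stops after printing 1 and 2, instead of sometimes continuing to count once both channels are ready.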
Lunny Xiao
37e23c982f
Remove test session cache to reduce possible concurrent problem (#22199) (#22429)
backport #22199
2023-01-13 18:54:58 +00:00
zeripath
421d87933b
Restore function to "Show more" buttons (#22399) (#22426)
Backport #22399

There was a serious regression in #21012 which broke the Show More
button on the diff page; the Show More button in the file tree was broken
too.

This PR fixes this by resetting pageData.diffFiles as the Vue
watched value and reattaching a function to the Show More button outside
of the file tree view.

Fix #22380

Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-01-13 17:29:10 +08:00
Lunny Xiao
426c0ad14c
Allow HOST has no port (#22280) (#22409)
Fix #22274
Backport #22280 

This PR allows `HOST` without a port; a default port is then supplied in a
later step.
2023-01-12 09:57:03 +08:00
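A condensed sketch of the parsing behaviour this backport adds; `splitMailerHost` is a name invented here for illustration, as in the source the logic lives inline in the mailer setting parser (see the diff and `TestParseMailerConfig` further down):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// splitMailerHost accepts HOST values without a port and falls back to
// defaults instead of failing outright.
func splitMailerHost(givenHost string) (addr, port string, err error) {
	addr, port, err = net.SplitHostPort(givenHost)
	if err != nil && strings.Contains(err.Error(), "missing port in address") {
		// No port given: keep the host and let a later step pick a default port.
		addr, err = givenHost, nil
	}
	if addr == "" {
		addr = "127.0.0.1"
	}
	return addr, port, err
}

func main() {
	for _, host := range []string{"smtp.mydomain.com", "smtp.mydomain.com:123", ":123"} {
		addr, port, err := splitMailerHost(host)
		fmt.Println(host, "->", addr, port, err)
	}
}
```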
John Olheiser
41a06d2e82
fix: omit avatar_url in discord payload when empty (#22393) (#22394)
Backport #22393

Signed-off-by: jolheiser <john.olheiser@gmail.com>
2023-01-10 13:44:18 -06:00
Yarden Shoham
885082f7a7
Don't display stop watch top bar icon when disabled and hidden when click other place (#22374) (#22387)
Backport #22374

Fix #22286

When time tracking is disabled, the stopwatch icon in the top bar should be
hidden. When the stopwatch recording popup is shown, there should be a way
to dismiss it; now clicking anywhere on the page hides the popup window.
2023-01-10 09:21:29 +00:00
Lunny Xiao
32999e2511
Don't lookup mail server when using sendmail (#22300) (#22383)
Fix #22287
backport #22300
2023-01-09 12:18:03 -05:00
Lunny Xiao
16d7596635
Fix set system setting failure once it cached (#22334)
backport #22333
2023-01-09 10:04:44 +08:00
isla w
adc0bcaebb
Update Emoji dataset to Unicode 14 (#22342) (#22343)
Backport of #22342 to release/v1.18 as requested
2023-01-04 12:45:18 -06:00
Lunny Xiao
0cca1e079b
fix gravatar disable bug (#22337) 2023-01-04 21:17:59 +08:00
John Olheiser
55c6433fac
fix: update settings table on install (#22326) (#22327)
Backport #22326

Signed-off-by: jolheiser <john.olheiser@gmail.com>
2023-01-03 23:19:57 +01:00
Kyle D
5b8763476a
Add deprecated warning for DISABLE_GRAVATAR and ENABLE_FEDERATED_AVATAR (#22324)
Backport https://github.com/go-gitea/gitea/pull/22318
2023-01-03 11:11:00 -05:00
Jason Song
09c667eb45
Fix sitemap (#22272) (#22320)
Backport #22272.

Fix #22270.

Related to #18407.

The old code treated both the sitemap and the sitemap index as the same format:

```xml
...
<url>
  <loc>http://localhost:3000/explore/users/sitemap-1.xml</loc>
</url>
...
```

Actually, that is incorrect for a sitemap index; it should be:

```xml
...
<sitemap>
  <loc>http://localhost:3000/explore/users/sitemap-1.xml</loc>
</sitemap>
...
```

See https://www.sitemaps.org/protocol.html

Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-01-03 22:03:56 +08:00
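A short Go sketch, using only `encoding/xml` rather than the actual Gitea sitemap types, showing that the fix boils down to emitting `<sitemap>` child elements instead of `<url>` ones in the index; the `entry` and `sitemapIndex` type names are invented for this example:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

type entry struct {
	Loc string `xml:"loc"`
}

// sitemapIndex wraps its entries in <sitemap> elements, which is what the
// sitemaps.org protocol requires for an index (a plain sitemap uses <url>).
type sitemapIndex struct {
	XMLName  xml.Name `xml:"sitemapindex"`
	Xmlns    string   `xml:"xmlns,attr"`
	Sitemaps []entry  `xml:"sitemap"`
}

func main() {
	idx := sitemapIndex{
		Xmlns:    "http://www.sitemaps.org/schemas/sitemap/0.9",
		Sitemaps: []entry{{Loc: "http://localhost:3000/explore/users/sitemap-1.xml"}},
	}
	out, _ := xml.MarshalIndent(idx, "", "  ")
	fmt.Println(xml.Header + string(out))
}
```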
Lunny Xiao
791f290c26
Display error log when a modified template has an error so that it can recover once the error is fixed (#22261) (#22321)
backport #22261 

A drawback is that the previously generated template has been cached, so you
cannot see the error in the UI but only in the log.

Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: delvh <dev.lh@web.de>
2023-01-03 19:39:58 +08:00
John Olheiser
58e642c1d6
fix: code search title translation (#22285) (#22316)
Backport #22285

Signed-off-by: jolheiser <john.olheiser@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
2023-01-03 11:33:55 +08:00
Yarden Shoham
72d1f9e63e
Fix due date rendering the wrong date in issue (#22302) (#22306)
Backport #22302

Previously, the last minute of the chosen date caused bad timezone
rendering.

For example, I chose January 4th, 2023.

### Before
```html
<time data-format="date" datetime="Wed, 04 Jan 2023 23:59:59 +0000">January 5, 2023</time>
```

### After
```html
<time data-format="date" datetime="2023-01-04">January 4, 2023</time>
```

---

Closes #21999

Signed-off-by: Yarden Shoham <hrsi88@gmail.com>
2023-01-02 20:42:39 +08:00
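A Go sketch (not the actual rendering code, which runs in the browser) of why a full UTC timestamp for the last second of the chosen day can display as the following day, while a bare date stays stable:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Old behaviour: the attribute carried the last second of the chosen day in UTC.
	deadline := time.Date(2023, time.January, 4, 23, 59, 59, 0, time.UTC)

	shanghai, err := time.LoadLocation("Asia/Shanghai")
	if err != nil {
		panic(err)
	}
	// In any timezone east of UTC this rolls over to the next day.
	fmt.Println(deadline.In(shanghai).Format("January 2, 2006")) // January 5, 2023

	// New behaviour: only the date is carried, so the chosen day is preserved.
	fmt.Println(deadline.Format("2006-01-02")) // 2023-01-04
}
```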
Lunny Xiao
0697075547
Fix get system setting bug when enabled redis cache (#22298)
backport #22295, fix #22281

Co-authored-by: Lauris BH <lauris@nix.lv>
2023-01-01 23:24:01 +08:00
Lunny Xiao
f1e07d8c87
Fix bug of DisableGravatar default value (#22297)
backport #22296

Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
2023-01-01 20:20:04 +08:00
Chongyi Zheng
443fd27a90
Add sync_on_commit option for push mirrors api (#22271) (#22292)
Backport of #22271
2022-12-31 19:46:14 +08:00
Gusted
75f128ebf8
Fix key signature error page (#22229) (#22230)
- Backport of #22229
- When the GPG key contains an error, such as an invalid signature or an
email address that does not match the user, a page is shown saying that you
must provide a signature for the token.
- This page had two errors: one used the wrong translation key and the
other tried to use an undefined variable
[`.PaddedKeyID`](e81ccc406b/models/asymkey/gpg_key.go (L65-L72)),
which is a function implemented on the `GPGKey` struct. Given that we
don't have that here, we use
[`KeyID`](e81ccc406b/routers/web/user/setting/keys.go (L102)),
which is [the fingerprint of the
public key](https://pkg.go.dev/golang.org/x/crypto/openpgp/packet#PublicKey.KeyIdString)
and is a valid way for OpenPGP to refer to a key.
2022-12-30 12:53:23 +08:00
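A standalone sketch of the padding helper referenced above; it mirrors the `PaddedKeyID` function visible in the gpg_key.go diff further down (the lowercase `paddedKeyID` name is only for this example):

```go
package main

import "fmt"

// paddedKeyID left-pads a key ID to the 16 hex characters users expect.
func paddedKeyID(keyID string) string {
	if len(keyID) > 15 {
		return keyID
	}
	zeros := "0000000000000000"
	return zeros[0:16-len(keyID)] + keyID
}

func main() {
	fmt.Println(paddedKeyID("ABCDEF12"))         // 00000000ABCDEF12
	fmt.Println(paddedKeyID("1234567890ABCDEF")) // already 16 chars, unchanged
}
```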
73 changed files with 1371 additions and 713 deletions

View file

@ -4,6 +4,47 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).

## [1.18.1](https://github.com/go-gitea/gitea/releases/tag/v1.18.1) - 2023-01-17
* API
  * Add `sync_on_commit` option for push mirrors api (#22271) (#22292)
* BUGFIXES
  * Update `github.com/zeripath/zapx/v15` (#22485)
  * Fix pull request API field `closed_at` always being `null` (#22482) (#22483)
  * Fix container blob mount (#22226) (#22476)
  * Fix error when calculating repository size (#22392) (#22474)
  * Fix Operator does not exist bug on explore page with ONLY_SHOW_RELEVANT_REPOS (#22454) (#22472)
  * Fix environments for KaTeX and error reporting (#22453) (#22473)
  * Remove the netgo tag for Windows build (#22467) (#22468)
  * Fix migration from GitBucket (#22477) (#22465)
  * Prevent panic on looking at api "git" endpoints for empty repos (#22457) (#22458)
  * Fix PR status layout on mobile (#21547) (#22441)
  * Fix wechatwork webhook sends empty content in PR review (#21762) (#22440)
  * Remove duplicate "Actions" label in mobile view (#21974) (#22439)
  * Fix leaving organization bug on user settings -> orgs (#21983) (#22438)
  * Fixed colour transparency regex matching in project board sorting (#22092) (#22437)
  * Correctly handle select on multiple channels in Queues (#22146) (#22428)
  * Prepend refs/heads/ to issue template refs (#20461) (#22427)
  * Restore function to "Show more" buttons (#22399) (#22426)
  * Continue GCing other repos on error in one repo (#22422) (#22425)
  * Allow HOST has no port (#22280) (#22409)
  * Fix omit avatar_url in discord payload when empty (#22393) (#22394)
  * Don't display stop watch top bar icon when disabled and hidden when click other place (#22374) (#22387)
  * Don't lookup mail server when using sendmail (#22300) (#22383)
  * Fix gravatar disable bug (#22337)
  * Fix update settings table on install (#22326) (#22327)
  * Fix sitemap (#22272) (#22320)
  * Fix code search title translation (#22285) (#22316)
  * Fix due date rendering the wrong date in issue (#22302) (#22306)
  * Fix get system setting bug when enabled redis cache (#22298)
  * Fix bug of DisableGravatar default value (#22297)
  * Fix key signature error page (#22229) (#22230)
* TESTING
  * Remove test session cache to reduce possible concurrent problem (#22199) (#22429)
* MISC
  * Restore previous official review when an official review is deleted (#22449) (#22460)
  * Log STDERR of external renderer when it fails (#22442) (#22444)

## [1.18.0](https://github.com/go-gitea/gitea/releases/tag/1.18.0) - 2022-12-22
* SECURITY

View file

@ -740,9 +740,9 @@ $(DIST_DIRS):
.PHONY: release-windows
release-windows: | $(DIST_DIRS)
CGO_CFLAGS="$(CGO_CFLAGS)" $(GO) run $(XGO_PACKAGE) -go $(XGO_VERSION) -buildmode exe -dest $(DIST)/binaries -tags 'netgo osusergo $(TAGS)' -ldflags '-linkmode external -extldflags "-static" $(LDFLAGS)' -targets 'windows/*' -out gitea-$(VERSION) .
CGO_CFLAGS="$(CGO_CFLAGS)" $(GO) run $(XGO_PACKAGE) -go $(XGO_VERSION) -buildmode exe -dest $(DIST)/binaries -tags 'osusergo $(TAGS)' -ldflags '-linkmode external -extldflags "-static" $(LDFLAGS)' -targets 'windows/*' -out gitea-$(VERSION) .
ifeq (,$(findstring gogit,$(TAGS)))
CGO_CFLAGS="$(CGO_CFLAGS)" $(GO) run $(XGO_PACKAGE) -go $(XGO_VERSION) -buildmode exe -dest $(DIST)/binaries -tags 'netgo osusergo gogit $(TAGS)' -ldflags '-linkmode external -extldflags "-static" $(LDFLAGS)' -targets 'windows/*' -out gitea-$(VERSION)-gogit .
CGO_CFLAGS="$(CGO_CFLAGS)" $(GO) run $(XGO_PACKAGE) -go $(XGO_VERSION) -buildmode exe -dest $(DIST)/binaries -tags 'osusergo gogit $(TAGS)' -ldflags '-linkmode external -extldflags "-static" $(LDFLAGS)' -targets 'windows/*' -out gitea-$(VERSION)-gogit .
endif
ifeq ($(CI),true)
cp /build/* $(DIST)/binaries

assets/emoji.json (generated)

File diff suppressed because one or more lines are too long

View file

@ -26,7 +26,7 @@ import (
const (
gemojiURL = "https://raw.githubusercontent.com/github/gemoji/master/db/emoji.json"
maxUnicodeVersion = 12
maxUnicodeVersion = 14
)
var flagOut = flag.String("o", "modules/emoji/emoji_data.go", "out")

View file

@ -735,9 +735,9 @@ and
- `GRAVATAR_SOURCE`: **gravatar**: Can be `gravatar`, `duoshuo` or anything like
`http://cn.gravatar.com/avatar/`.
- `DISABLE_GRAVATAR`: **false**: Enable this to use local avatars only.
- `DISABLE_GRAVATAR`: **false**: Enable this to use local avatars only. **DEPRECATED [v1.18+]** moved to database. Use admin panel to configure.
- `ENABLE_FEDERATED_AVATAR`: **false**: Enable support for federated avatars (see
[http://www.libravatar.org](http://www.libravatar.org)).
[http://www.libravatar.org](http://www.libravatar.org)). **DEPRECATED [v1.18+]** moved to database. Use admin panel to configure.
- `AVATAR_STORAGE_TYPE`: **default**: Storage type defined in `[storage.xxx]`. Default is `default` which will read `[storage]` if no section `[storage]` will be a type `local`.
- `AVATAR_UPLOAD_PATH`: **data/avatars**: Path to store user avatar image files.

go.mod
View file

@ -302,7 +302,7 @@ replace github.com/shurcooL/vfsgen => github.com/lunny/vfsgen v0.0.0-20220105142
replace github.com/satori/go.uuid v1.2.0 => github.com/gofrs/uuid v4.2.0+incompatible
replace github.com/blevesearch/zapx/v15 v15.3.6 => github.com/zeripath/zapx/v15 v15.3.6-alignment-fix
replace github.com/blevesearch/zapx/v15 v15.3.6 => github.com/zeripath/zapx/v15 v15.3.6-alignment-fix-2
exclude github.com/gofrs/uuid v3.2.0+incompatible

go.sum
View file

@ -1482,8 +1482,8 @@ github.com/yuin/goldmark-highlighting/v2 v2.0.0-20220924101305-151362477c87/go.m
github.com/yuin/goldmark-meta v1.1.0 h1:pWw+JLHGZe8Rk0EGsMVssiNb/AaPMHfSRszZeUeiOUc=
github.com/yuin/goldmark-meta v1.1.0/go.mod h1:U4spWENafuA7Zyg+Lj5RqK/MF+ovMYtBvXi1lBb2VP0=
github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q=
github.com/zeripath/zapx/v15 v15.3.6-alignment-fix h1:fKZ9OxEDoJKgM0KBXRbSb5IgKUEXis6C3zEIiMtzzQ0=
github.com/zeripath/zapx/v15 v15.3.6-alignment-fix/go.mod h1:5DbhhDTGtuQSns1tS2aJxJLPc91boXCvjOMeCLD1saM=
github.com/zeripath/zapx/v15 v15.3.6-alignment-fix-2 h1:IRB+69BV7fTT5ccw35ca7TCBe2b7dm5Q5y5tUMQmCvU=
github.com/zeripath/zapx/v15 v15.3.6-alignment-fix-2/go.mod h1:5DbhhDTGtuQSns1tS2aJxJLPc91boXCvjOMeCLD1saM=
github.com/ziutek/mymysql v1.5.4/go.mod h1:LMSpPZ6DbqWFxNCHW77HeMg9I646SAhApZ/wKdgO/C0=
github.com/zmap/rc2 v0.0.0-20131011165748-24b9757f5521/go.mod h1:3YZ9o3WnatTIZhuOtot4IcUfzoKVjUHqu6WALIyI0nE=
github.com/zmap/zcertificate v0.0.0-20180516150559-0e3d58b1bac4/go.mod h1:5iU54tB79AMBcySS0R2XIyZBAVmeHranShAFELYx7is=

View file

@ -68,8 +68,16 @@ func (key *GPGKey) PaddedKeyID() string {
if len(key.KeyID) > 15 {
return key.KeyID
}
return PaddedKeyID(key.KeyID)
}
// PaddedKeyID show KeyID padded to 16 characters
func PaddedKeyID(keyID string) string {
if len(keyID) > 15 {
return keyID
}
zeros := "0000000000000000"
return zeros[0:16-len(key.KeyID)] + key.KeyID
return zeros[0:16-len(keyID)] + keyID
}
// ListGPGKeys returns a list of public keys belongs to given user.

View file

@ -154,8 +154,7 @@ func generateEmailAvatarLink(email string, size int, final bool) string {
return DefaultAvatarLink()
}
enableFederatedAvatarSetting, _ := system_model.GetSetting(system_model.KeyPictureEnableFederatedAvatar)
enableFederatedAvatar := enableFederatedAvatarSetting.GetValueBool()
enableFederatedAvatar := system_model.GetSettingBool(system_model.KeyPictureEnableFederatedAvatar)
var err error
if enableFederatedAvatar && system_model.LibravatarService != nil {
@ -176,9 +175,7 @@ func generateEmailAvatarLink(email string, size int, final bool) string {
return urlStr
}
disableGravatarSetting, _ := system_model.GetSetting(system_model.KeyPictureDisableGravatar)
disableGravatar := disableGravatarSetting.GetValueBool()
disableGravatar := system_model.GetSettingBool(system_model.KeyPictureDisableGravatar)
if !disableGravatar {
// copy GravatarSourceURL, because we will modify its Path.
avatarURLCopy := *system_model.GravatarSourceURL

View file

@ -24,7 +24,7 @@
fork_id: 0
is_template: false
template_id: 0
size: 0
size: 6708
is_fsck_enabled: true
close_issues_via_commit_in_any_branch: false

View file

@ -742,17 +742,9 @@ func RemoveReviewRequest(issue *Issue, reviewer, doer *user_model.User) (*Commen
if err != nil {
return nil, err
} else if official {
// recalculate the latest official review for reviewer
review, err := GetReviewByIssueIDAndUserID(ctx, issue.ID, reviewer.ID)
if err != nil && !IsErrReviewNotExist(err) {
if err := restoreLatestOfficialReview(ctx, issue.ID, reviewer.ID); err != nil {
return nil, err
}
if review != nil {
if _, err := db.Exec(ctx, "UPDATE `review` SET official=? WHERE id=?", true, review.ID); err != nil {
return nil, err
}
}
}
comment, err := CreateCommentCtx(ctx, &CreateCommentOptions{
@ -770,6 +762,22 @@ func RemoveReviewRequest(issue *Issue, reviewer, doer *user_model.User) (*Commen
return comment, committer.Commit()
}
// Recalculate the latest official review for reviewer
func restoreLatestOfficialReview(ctx context.Context, issueID, reviewerID int64) error {
review, err := GetReviewByIssueIDAndUserID(ctx, issueID, reviewerID)
if err != nil && !IsErrReviewNotExist(err) {
return err
}
if review != nil {
if _, err := db.Exec(ctx, "UPDATE `review` SET official=? WHERE id=?", true, review.ID); err != nil {
return err
}
}
return nil
}
// AddTeamReviewRequest add a review request from one team
func AddTeamReviewRequest(issue *Issue, reviewer *organization.Team, doer *user_model.User) (*Comment, error) {
ctx, committer, err := db.TxContext()
@ -988,6 +996,12 @@ func DeleteReview(r *Review) error {
return err
}
if r.Official {
if err := restoreLatestOfficialReview(ctx, r.IssueID, r.ReviewerID); err != nil {
return err
}
}
return committer.Commit()
}

View file

@ -201,3 +201,38 @@ func TestDismissReview(t *testing.T) {
assert.False(t, requestReviewExample.Dismissed)
assert.True(t, approveReviewExample.Dismissed)
}
func TestDeleteReview(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
issue := unittest.AssertExistsAndLoadBean(t, &issues_model.Issue{ID: 2})
user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1})
review1, err := issues_model.CreateReview(db.DefaultContext, issues_model.CreateReviewOptions{
Content: "Official rejection",
Type: issues_model.ReviewTypeReject,
Official: false,
Issue: issue,
Reviewer: user,
})
assert.NoError(t, err)
review2, err := issues_model.CreateReview(db.DefaultContext, issues_model.CreateReviewOptions{
Content: "Official approval",
Type: issues_model.ReviewTypeApprove,
Official: true,
Issue: issue,
Reviewer: user,
})
assert.NoError(t, err)
assert.NoError(t, issues_model.DeleteReview(review2))
_, err = issues_model.GetReviewByID(db.DefaultContext, review2.ID)
assert.Error(t, err)
assert.True(t, issues_model.IsErrReviewNotExist(err), "IsErrReviewNotExist")
review1, err = issues_model.GetReviewByID(db.DefaultContext, review1.ID)
assert.NoError(t, err)
assert.True(t, review1.Official)
}

View file

@ -26,6 +26,7 @@ type BlobSearchOptions struct {
Digest string
Tag string
IsManifest bool
Repository string
}
func (opts *BlobSearchOptions) toConds() builder.Cond {
@ -54,6 +55,15 @@ func (opts *BlobSearchOptions) toConds() builder.Cond {
cond = cond.And(builder.In("package_file.id", builder.Select("package_property.ref_id").Where(propsCond).From("package_property")))
}
if opts.Repository != "" {
var propsCond builder.Cond = builder.Eq{
"package_property.ref_type": packages.PropertyTypePackage,
"package_property.name": container_module.PropertyRepository,
"package_property.value": opts.Repository,
}
cond = cond.And(builder.In("package.id", builder.Select("package_property.ref_id").Where(propsCond).From("package_property")))
}
return cond
}

View file

@ -15,6 +15,7 @@ import (
"code.gitea.io/gitea/models/unit"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/container"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/util"
@ -498,8 +499,12 @@ func SearchRepositoryCondition(opts *SearchRepoOptions) builder.Cond {
// Only show a repo that either has a topic or description.
subQueryCond := builder.NewCond()
// Topic checking. Topics is non-null.
subQueryCond = subQueryCond.Or(builder.And(builder.Neq{"topics": "null"}, builder.Neq{"topics": "[]"}))
// Topic checking. Topics are present.
if setting.Database.UsePostgreSQL { // postgres stores the topics as json and not as text
subQueryCond = subQueryCond.Or(builder.And(builder.NotNull{"topics"}, builder.Neq{"(topics)::text": "[]"}))
} else {
subQueryCond = subQueryCond.Or(builder.And(builder.Neq{"topics": "null"}, builder.Neq{"topics": "[]"}))
}
// Description checking. Description not empty.
subQueryCond = subQueryCond.Or(builder.Neq{"description": ""})

View file

@ -185,7 +185,7 @@ func ChangeRepositoryName(doer *user_model.User, repo *Repository, newRepoName s
return committer.Commit()
}
// UpdateRepoSize updates the repository size, calculating it using util.GetDirectorySize
// UpdateRepoSize updates the repository size, calculating it using getDirectorySize
func UpdateRepoSize(ctx context.Context, repoID, size int64) error {
_, err := db.GetEngine(ctx).ID(repoID).Cols("size").NoAutoTime().Update(&Repository{
Size: size,

View file

@ -13,7 +13,7 @@ import (
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/modules/cache"
"code.gitea.io/gitea/modules/setting"
setting_module "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/timeutil"
"strk.kbt.io/projects/go/libravatar"
@ -89,17 +89,17 @@ func GetSettingNoCache(key string) (*Setting, error) {
if len(v) == 0 {
return nil, ErrSettingIsNotExist{key}
}
return v[key], nil
return v[strings.ToLower(key)], nil
}
// GetSetting returns the setting value via the key
func GetSetting(key string) (*Setting, error) {
return cache.Get(genSettingCacheKey(key), func() (*Setting, error) {
func GetSetting(key string) (string, error) {
return cache.GetString(genSettingCacheKey(key), func() (string, error) {
res, err := GetSettingNoCache(key)
if err != nil {
return nil, err
return "", err
}
return res, nil
return res.SettingValue, nil
})
}
@ -107,7 +107,8 @@ func GetSetting(key string) (*Setting, error) {
// none existing keys and errors are ignored and result in false
func GetSettingBool(key string) bool {
s, _ := GetSetting(key)
return s.GetValueBool()
v, _ := strconv.ParseBool(s)
return v
}
// GetSettings returns specific settings
@ -131,7 +132,7 @@ func GetSettings(keys []string) (map[string]*Setting, error) {
type AllSettings map[string]*Setting
func (settings AllSettings) Get(key string) Setting {
if v, ok := settings[key]; ok {
if v, ok := settings[strings.ToLower(key)]; ok {
return *v
}
return Setting{}
@ -184,14 +185,17 @@ func SetSettingNoVersion(key, value string) error {
// SetSetting updates a users' setting for a specific key
func SetSetting(setting *Setting) error {
_, err := cache.Set(genSettingCacheKey(setting.SettingKey), func() (*Setting, error) {
return setting, upsertSettingValue(strings.ToLower(setting.SettingKey), setting.SettingValue, setting.Version)
})
if err != nil {
if err := upsertSettingValue(strings.ToLower(setting.SettingKey), setting.SettingValue, setting.Version); err != nil {
return err
}
setting.Version++
cc := cache.GetCache()
if cc != nil {
return cc.Put(genSettingCacheKey(setting.SettingKey), setting.SettingValue, setting_module.CacheService.TTLSeconds())
}
return nil
}
@ -243,7 +247,7 @@ func Init() error {
var disableGravatar bool
disableGravatarSetting, err := GetSettingNoCache(KeyPictureDisableGravatar)
if IsErrSettingIsNotExist(err) {
disableGravatar = setting.GetDefaultDisableGravatar()
disableGravatar = setting_module.GetDefaultDisableGravatar()
disableGravatarSetting = &Setting{SettingValue: strconv.FormatBool(disableGravatar)}
} else if err != nil {
return err
@ -254,7 +258,7 @@ func Init() error {
var enableFederatedAvatar bool
enableFederatedAvatarSetting, err := GetSettingNoCache(KeyPictureEnableFederatedAvatar)
if IsErrSettingIsNotExist(err) {
enableFederatedAvatar = setting.GetDefaultEnableFederatedAvatar(disableGravatar)
enableFederatedAvatar = setting_module.GetDefaultEnableFederatedAvatar(disableGravatar)
enableFederatedAvatarSetting = &Setting{SettingValue: strconv.FormatBool(enableFederatedAvatar)}
} else if err != nil {
return err
@ -262,20 +266,20 @@ func Init() error {
enableFederatedAvatar = disableGravatarSetting.GetValueBool()
}
if setting.OfflineMode {
if setting_module.OfflineMode {
disableGravatar = true
enableFederatedAvatar = false
}
if disableGravatar || !enableFederatedAvatar {
if enableFederatedAvatar || !disableGravatar {
var err error
GravatarSourceURL, err = url.Parse(setting.GravatarSource)
GravatarSourceURL, err = url.Parse(setting_module.GravatarSource)
if err != nil {
return fmt.Errorf("Failed to parse Gravatar URL(%s): %w", setting.GravatarSource, err)
return fmt.Errorf("Failed to parse Gravatar URL(%s): %w", setting_module.GravatarSource, err)
}
}
if enableFederatedAvatarSetting.GetValueBool() {
if GravatarSourceURL != nil && enableFederatedAvatarSetting.GetValueBool() {
LibravatarService = libravatar.New()
if GravatarSourceURL.Scheme == "https" {
LibravatarService.SetUseHTTPS(true)

View file

@ -34,10 +34,14 @@ func TestSettings(t *testing.T) {
assert.EqualValues(t, newSetting.SettingValue, settings[strings.ToLower(keyName)].SettingValue)
// updated setting
updatedSetting := &system.Setting{SettingKey: keyName, SettingValue: "100", Version: newSetting.Version}
updatedSetting := &system.Setting{SettingKey: keyName, SettingValue: "100", Version: settings[strings.ToLower(keyName)].Version}
err = system.SetSetting(updatedSetting)
assert.NoError(t, err)
value, err := system.GetSetting(keyName)
assert.NoError(t, err)
assert.EqualValues(t, updatedSetting.SettingValue, value)
// get all settings
settings, err = system.GetAllSettings()
assert.NoError(t, err)

View file

@ -68,9 +68,7 @@ func (u *User) AvatarLinkWithSize(size int) string {
useLocalAvatar := false
autoGenerateAvatar := false
disableGravatarSetting, _ := system_model.GetSetting(system_model.KeyPictureDisableGravatar)
disableGravatar := disableGravatarSetting.GetValueBool()
disableGravatar := system_model.GetSettingBool(system_model.KeyPictureDisableGravatar)
switch {
case u.UseCustomAvatar:

View file

@ -54,13 +54,13 @@ func genSettingCacheKey(userID int64, key string) string {
}
// GetSetting returns the setting value via the key
func GetSetting(uid int64, key string) (*Setting, error) {
return cache.Get(genSettingCacheKey(uid, key), func() (*Setting, error) {
func GetSetting(uid int64, key string) (string, error) {
return cache.GetString(genSettingCacheKey(uid, key), func() (string, error) {
res, err := GetSettingNoCache(uid, key)
if err != nil {
return nil, err
return "", err
}
return res, nil
return res.SettingValue, nil
})
}
@ -155,7 +155,7 @@ func SetUserSetting(userID int64, key, value string) error {
return err
}
_, err := cache.Set(genSettingCacheKey(userID, key), func() (string, error) {
_, err := cache.GetString(genSettingCacheKey(userID, key), func() (string, error) {
return value, upsertUserSettingValue(userID, key, value)
})

View file

@ -46,39 +46,6 @@ func GetCache() mc.Cache {
return conn
}
// Get returns the key value from cache with callback when no key exists in cache
func Get[V interface{}](key string, getFunc func() (V, error)) (V, error) {
if conn == nil || setting.CacheService.TTL == 0 {
return getFunc()
}
cached := conn.Get(key)
if value, ok := cached.(V); ok {
return value, nil
}
value, err := getFunc()
if err != nil {
return value, err
}
return value, conn.Put(key, value, setting.CacheService.TTLSeconds())
}
// Set updates and returns the key value in the cache with callback. The old value is only removed if the updateFunc() is successful
func Set[V interface{}](key string, valueFunc func() (V, error)) (V, error) {
if conn == nil || setting.CacheService.TTL == 0 {
return valueFunc()
}
value, err := valueFunc()
if err != nil {
return value, err
}
return value, conn.Put(key, value, setting.CacheService.TTLSeconds())
}
// GetString returns the key value from cache with callback when no key exists in cache
func GetString(key string, getFunc func() (string, error)) (string, error) {
if conn == nil || setting.CacheService.TTL == 0 {

View file

@ -1087,6 +1087,9 @@ func (ctx *Context) IssueTemplatesErrorsFromDefaultBranch() ([]*api.IssueTemplat
if it, err := template.UnmarshalFromEntry(entry, dirName); err != nil {
invalidFiles[fullName] = err
} else {
if !strings.HasPrefix(it.Ref, "refs/") { // Assume that the ref intended is always a branch - for tags users should use refs/tags/<ref>
it.Ref = git.BranchPrefix + it.Ref
}
issueTemplates = append(issueTemplates, it)
}
}

View file

@ -89,6 +89,10 @@ func ToAPIPullRequest(ctx context.Context, pr *issues_model.PullRequest, doer *u
},
}
if pr.Issue.ClosedUnix != 0 {
apiPullRequest.Closed = pr.Issue.ClosedUnix.AsTimePtr()
}
gitRepo, err := git.OpenRepository(ctx, pr.BaseRepo.RepoPath())
if err != nil {
log.Error("OpenRepository[%s]: %v", pr.BaseRepo.RepoPath(), err)

File diff suppressed because it is too large

View file

@ -100,6 +100,9 @@ func RefURL(repoURL, ref string) string {
return repoURL + "/src/branch/" + refName
case strings.HasPrefix(ref, TagPrefix):
return repoURL + "/src/tag/" + refName
case !IsValidSHAPattern(ref):
// assume they mean a branch
return repoURL + "/src/branch/" + refName
default:
return repoURL + "/src/commit/" + refName
}

View file

@ -5,6 +5,7 @@
package external
import (
"bytes"
"fmt"
"io"
"os"
@ -133,11 +134,13 @@ func (p *Renderer) Render(ctx *markup.RenderContext, input io.Reader, output io.
if !p.IsInputFile {
cmd.Stdin = input
}
var stderr bytes.Buffer
cmd.Stdout = output
cmd.Stderr = &stderr
process.SetSysProcAttribute(cmd)
if err := cmd.Run(); err != nil {
return fmt.Errorf("%s render run command %s %v failed: %w", p.Name(), commands[0], args, err)
return fmt.Errorf("%s render run command %s %v failed: %w\nStderr: %s", p.Name(), commands[0], args, err, stderr.String())
}
return nil
}

View file

@ -110,32 +110,6 @@ func (q *ChannelQueue) Flush(timeout time.Duration) error {
return q.FlushWithContext(ctx)
}
// FlushWithContext is very similar to CleanUp but it will return as soon as the dataChan is empty
func (q *ChannelQueue) FlushWithContext(ctx context.Context) error {
log.Trace("ChannelQueue: %d Flush", q.qid)
paused, _ := q.IsPausedIsResumed()
for {
select {
case <-paused:
return nil
case data, ok := <-q.dataChan:
if !ok {
return nil
}
if unhandled := q.handle(data); unhandled != nil {
log.Error("Unhandled Data whilst flushing queue %d", q.qid)
}
atomic.AddInt64(&q.numInQueue, -1)
case <-q.baseCtx.Done():
return q.baseCtx.Err()
case <-ctx.Done():
return ctx.Err()
default:
return nil
}
}
}
// Shutdown processing from this queue
func (q *ChannelQueue) Shutdown() {
q.lock.Lock()

View file

@ -9,7 +9,6 @@ import (
"fmt"
"runtime/pprof"
"sync"
"sync/atomic"
"time"
"code.gitea.io/gitea/modules/container"
@ -168,35 +167,6 @@ func (q *ChannelUniqueQueue) Flush(timeout time.Duration) error {
return q.FlushWithContext(ctx)
}
// FlushWithContext is very similar to CleanUp but it will return as soon as the dataChan is empty
func (q *ChannelUniqueQueue) FlushWithContext(ctx context.Context) error {
log.Trace("ChannelUniqueQueue: %d Flush", q.qid)
paused, _ := q.IsPausedIsResumed()
for {
select {
case <-paused:
return nil
default:
}
select {
case data, ok := <-q.dataChan:
if !ok {
return nil
}
if unhandled := q.handle(data); unhandled != nil {
log.Error("Unhandled Data whilst flushing queue %d", q.qid)
}
atomic.AddInt64(&q.numInQueue, -1)
case <-q.baseCtx.Done():
return q.baseCtx.Err()
case <-ctx.Done():
return ctx.Err()
default:
return nil
}
}
}
// Shutdown processing from this queue
func (q *ChannelUniqueQueue) Shutdown() {
log.Trace("ChannelUniqueQueue: %s Shutting down", q.name)

View file

@ -464,13 +464,43 @@ func (p *WorkerPool) IsEmpty() bool {
return atomic.LoadInt64(&p.numInQueue) == 0
}
// contextError returns either ctx.Done(), the base context's error or nil
func (p *WorkerPool) contextError(ctx context.Context) error {
select {
case <-p.baseCtx.Done():
return p.baseCtx.Err()
case <-ctx.Done():
return ctx.Err()
default:
return nil
}
}
// FlushWithContext is very similar to CleanUp but it will return as soon as the dataChan is empty
// NB: The worker will not be registered with the manager.
func (p *WorkerPool) FlushWithContext(ctx context.Context) error {
log.Trace("WorkerPool: %d Flush", p.qid)
paused, _ := p.IsPausedIsResumed()
for {
// Because select will return any case that is satisified at random we precheck here before looking at dataChan.
select {
case data := <-p.dataChan:
case <-paused:
// Ensure that even if paused that the cancelled error is still sent
return p.contextError(ctx)
case <-p.baseCtx.Done():
return p.baseCtx.Err()
case <-ctx.Done():
return ctx.Err()
default:
}
select {
case <-paused:
return p.contextError(ctx)
case data, ok := <-p.dataChan:
if !ok {
return nil
}
if unhandled := p.handle(data); unhandled != nil {
log.Error("Unhandled Data whilst flushing queue %d", p.qid)
}
@ -496,6 +526,7 @@ func (p *WorkerPool) doWork(ctx context.Context) {
paused, _ := p.IsPausedIsResumed()
data := make([]Data, 0, p.batchLength)
for {
// Because select will return any case that is satisified at random we precheck here before looking at dataChan.
select {
case <-paused:
log.Trace("Worker for Queue %d Pausing", p.qid)
@ -516,8 +547,19 @@ func (p *WorkerPool) doWork(ctx context.Context) {
log.Trace("Worker shutting down")
return
}
case <-ctx.Done():
if len(data) > 0 {
log.Trace("Handling: %d data, %v", len(data), data)
if unhandled := p.handle(data...); unhandled != nil {
log.Error("Unhandled Data in queue %d", p.qid)
}
atomic.AddInt64(&p.numInQueue, -1*int64(len(data)))
}
log.Trace("Worker shutting down")
return
default:
}
select {
case <-paused:
// go back around

View file

@ -9,6 +9,7 @@ import (
"fmt"
"os"
"path"
"path/filepath"
"strings"
"code.gitea.io/gitea/models"
@ -286,9 +287,36 @@ func CreateRepository(doer, u *user_model.User, opts CreateRepoOptions) (*repo_m
return repo, nil
}
// UpdateRepoSize updates the repository size, calculating it using util.GetDirectorySize
const notRegularFileMode = os.ModeSymlink | os.ModeNamedPipe | os.ModeSocket | os.ModeDevice | os.ModeCharDevice | os.ModeIrregular
// getDirectorySize returns the disk consumption for a given path
func getDirectorySize(path string) (int64, error) {
var size int64
err := filepath.WalkDir(path, func(_ string, info os.DirEntry, err error) error {
if err != nil {
if os.IsNotExist(err) { // ignore the error because the file maybe deleted during traversing.
return nil
}
return err
}
if info.IsDir() {
return nil
}
f, err := info.Info()
if err != nil {
return err
}
if (f.Mode() & notRegularFileMode) == 0 {
size += f.Size()
}
return err
})
return size, err
}
// UpdateRepoSize updates the repository size, calculating it using getDirectorySize
func UpdateRepoSize(ctx context.Context, repo *repo_model.Repository) error {
size, err := util.GetDirectorySize(repo.RepoPath())
size, err := getDirectorySize(repo.RepoPath())
if err != nil {
return fmt.Errorf("updateSize: %w", err)
}

View file

@ -169,3 +169,13 @@ func TestUpdateRepositoryVisibilityChanged(t *testing.T) {
assert.NoError(t, err)
assert.True(t, act.IsPrivate)
}
func TestGetDirectorySize(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
repo, err := repo_model.GetRepositoryByID(1)
assert.NoError(t, err)
size, err := getDirectorySize(repo.RepoPath())
assert.NoError(t, err)
assert.EqualValues(t, size, repo.Size)
}

View file

@ -13,6 +13,7 @@ import (
"code.gitea.io/gitea/modules/log"
shellquote "github.com/kballard/go-shellquote"
ini "gopkg.in/ini.v1"
)
// Mailer represents mail service.
@ -50,8 +51,8 @@ type Mailer struct {
// MailService the global mailer
var MailService *Mailer
func newMailService() {
sec := Cfg.Section("mailer")
func parseMailerConfig(rootCfg *ini.File) {
sec := rootCfg.Section("mailer")
// Check mailer setting.
if !sec.Key("ENABLED").MustBool() {
return
@ -71,9 +72,14 @@ func newMailService() {
if sec.HasKey("HOST") && !sec.HasKey("SMTP_ADDR") {
givenHost := sec.Key("HOST").String()
addr, port, err := net.SplitHostPort(givenHost)
if err != nil {
if err != nil && strings.Contains(err.Error(), "missing port in address") {
addr = givenHost
} else if err != nil {
log.Fatal("Invalid mailer.HOST (%s): %v", givenHost, err)
}
if addr == "" {
addr = "127.0.0.1"
}
sec.Key("SMTP_ADDR").MustString(addr)
sec.Key("SMTP_PORT").MustString(port)
}
@ -173,20 +179,34 @@ func newMailService() {
default:
log.Error("unable to infer unspecified mailer.PROTOCOL from mailer.SMTP_PORT = %q, assume using smtps", MailService.SMTPPort)
MailService.Protocol = "smtps"
if MailService.SMTPPort == "" {
MailService.SMTPPort = "465"
}
}
}
}
// we want to warn if users use SMTP on a non-local IP;
// we might as well take the opportunity to check that it has an IP at all
ips := tryResolveAddr(MailService.SMTPAddr)
if MailService.Protocol == "smtp" {
for _, ip := range ips {
if !ip.IsLoopback() {
log.Warn("connecting over insecure SMTP protocol to non-local address is not recommended")
break
// This check is not needed for sendmail
switch MailService.Protocol {
case "sendmail":
var err error
MailService.SendmailArgs, err = shellquote.Split(sec.Key("SENDMAIL_ARGS").String())
if err != nil {
log.Error("Failed to parse Sendmail args: '%s' with error %v", sec.Key("SENDMAIL_ARGS").String(), err)
}
case "smtp", "smtps", "smtp+starttls", "smtp+unix":
ips := tryResolveAddr(MailService.SMTPAddr)
if MailService.Protocol == "smtp" {
for _, ip := range ips {
if !ip.IsLoopback() {
log.Warn("connecting over insecure SMTP protocol to non-local address is not recommended")
break
}
}
}
case "dummy": // just mention and do nothing
}
if MailService.From != "" {
@ -215,14 +235,6 @@ func newMailService() {
MailService.EnvelopeFrom = parsed.Address
}
if MailService.Protocol == "sendmail" {
var err error
MailService.SendmailArgs, err = shellquote.Split(sec.Key("SENDMAIL_ARGS").String())
if err != nil {
log.Error("Failed to parse Sendmail args: %s with error %v", CustomConf, err)
}
}
log.Info("Mail Service Enabled")
}

View file

@ -0,0 +1,43 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package setting
import (
"testing"
"github.com/stretchr/testify/assert"
ini "gopkg.in/ini.v1"
)
func TestParseMailerConfig(t *testing.T) {
iniFile := ini.Empty()
kases := map[string]*Mailer{
"smtp.mydomain.com": {
SMTPAddr: "smtp.mydomain.com",
SMTPPort: "465",
},
"smtp.mydomain.com:123": {
SMTPAddr: "smtp.mydomain.com",
SMTPPort: "123",
},
":123": {
SMTPAddr: "127.0.0.1",
SMTPPort: "123",
},
}
for host, kase := range kases {
t.Run(host, func(t *testing.T) {
iniFile.DeleteSection("mailer")
sec := iniFile.Section("mailer")
sec.NewKey("ENABLED", "true")
sec.NewKey("HOST", host)
// Check mailer setting
parseMailerConfig(iniFile)
assert.EqualValues(t, kase.SMTPAddr, MailService.SMTPAddr)
assert.EqualValues(t, kase.SMTPPort, MailService.SMTPPort)
})
}
}

View file

@ -69,7 +69,7 @@ func newPictureService() {
}
func GetDefaultDisableGravatar() bool {
return !OfflineMode
return OfflineMode
}
func GetDefaultEnableFederatedAvatar(disableGravatar bool) bool {

View file

@ -1334,7 +1334,7 @@ func NewServices() {
newCacheService()
newSessionService()
newCORSService()
newMailService()
parseMailerConfig(Cfg)
newRegisterMailService()
newNotifyMailService()
newProxyService()
@ -1351,5 +1351,5 @@ func NewServices() {
// NewServicesForInstall initializes the services for install
func NewServicesForInstall() {
newService()
newMailService()
parseMailerConfig(Cfg)
}

View file

@ -12,48 +12,62 @@ import (
"time"
)
// sitemapFileLimit contains the maximum size of a sitemap file
const sitemapFileLimit = 50 * 1024 * 1024
const (
sitemapFileLimit = 50 * 1024 * 1024 // the maximum size of a sitemap file
urlsLimit = 50000
// Url represents a single sitemap entry
schemaURL = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlsetName = "urlset"
sitemapindexName = "sitemapindex"
)
// URL represents a single sitemap entry
type URL struct {
URL string `xml:"loc"`
LastMod *time.Time `xml:"lastmod,omitempty"`
}
// SitemapUrl represents a sitemap
// Sitemap represents a sitemap
type Sitemap struct {
XMLName xml.Name
Namespace string `xml:"xmlns,attr"`
URLs []URL `xml:"url"`
URLs []URL `xml:"url"`
Sitemaps []URL `xml:"sitemap"`
}
// NewSitemap creates a sitemap
func NewSitemap() *Sitemap {
return &Sitemap{
XMLName: xml.Name{Local: "urlset"},
Namespace: "http://www.sitemaps.org/schemas/sitemap/0.9",
XMLName: xml.Name{Local: urlsetName},
Namespace: schemaURL,
}
}
// NewSitemap creates a sitemap index.
// NewSitemapIndex creates a sitemap index.
func NewSitemapIndex() *Sitemap {
return &Sitemap{
XMLName: xml.Name{Local: "sitemapindex"},
Namespace: "http://www.sitemaps.org/schemas/sitemap/0.9",
XMLName: xml.Name{Local: sitemapindexName},
Namespace: schemaURL,
}
}
// Add adds a URL to the sitemap
func (s *Sitemap) Add(u URL) {
s.URLs = append(s.URLs, u)
if s.XMLName.Local == sitemapindexName {
s.Sitemaps = append(s.Sitemaps, u)
} else {
s.URLs = append(s.URLs, u)
}
}
// Write writes the sitemap to a response
// WriteTo writes the sitemap to a response
func (s *Sitemap) WriteTo(w io.Writer) (int64, error) {
if len(s.URLs) > 50000 {
return 0, fmt.Errorf("The sitemap contains too many URLs: %d", len(s.URLs))
if l := len(s.URLs); l > urlsLimit {
return 0, fmt.Errorf("The sitemap contains %d URLs, but only %d are allowed", l, urlsLimit)
}
if l := len(s.Sitemaps); l > urlsLimit {
return 0, fmt.Errorf("The sitemap contains %d sub-sitemaps, but only %d are allowed", l, urlsLimit)
}
buf := bytes.NewBufferString(xml.Header)
if err := xml.NewEncoder(buf).Encode(s); err != nil {
@ -63,7 +77,7 @@ func (s *Sitemap) WriteTo(w io.Writer) (int64, error) {
return 0, err
}
if buf.Len() > sitemapFileLimit {
return 0, fmt.Errorf("The sitemap is too big: %d", buf.Len())
return 0, fmt.Errorf("The sitemap has %d bytes, but only %d are allowed", buf.Len(), sitemapFileLimit)
}
return buf.WriteTo(w)
}

View file

@ -7,7 +7,6 @@ package sitemap
import (
"bytes"
"encoding/xml"
"fmt"
"strings"
"testing"
"time"
@ -15,63 +14,154 @@ import (
"github.com/stretchr/testify/assert"
)
func TestOk(t *testing.T) {
testReal := func(s *Sitemap, name string, urls []URL, expected string) {
for _, url := range urls {
s.Add(url)
}
buf := &bytes.Buffer{}
_, err := s.WriteTo(buf)
assert.NoError(t, nil, err)
assert.Equal(t, xml.Header+"<"+name+" xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">"+expected+"</"+name+">\n", buf.String())
}
test := func(urls []URL, expected string) {
testReal(NewSitemap(), "urlset", urls, expected)
testReal(NewSitemapIndex(), "sitemapindex", urls, expected)
}
func TestNewSitemap(t *testing.T) {
ts := time.Unix(1651322008, 0).UTC()
test(
[]URL{},
"",
)
test(
[]URL{
{URL: "https://gitea.io/test1", LastMod: &ts},
tests := []struct {
name string
urls []URL
want string
wantErr string
}{
{
name: "empty",
urls: []URL{},
want: xml.Header + `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">` +
"" +
"</urlset>\n",
},
"<url><loc>https://gitea.io/test1</loc><lastmod>2022-04-30T12:33:28Z</lastmod></url>",
)
test(
[]URL{
{URL: "https://gitea.io/test2", LastMod: nil},
{
name: "regular",
urls: []URL{
{URL: "https://gitea.io/test1", LastMod: &ts},
},
want: xml.Header + `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">` +
"<url><loc>https://gitea.io/test1</loc><lastmod>2022-04-30T12:33:28Z</lastmod></url>" +
"</urlset>\n",
},
"<url><loc>https://gitea.io/test2</loc></url>",
)
test(
[]URL{
{URL: "https://gitea.io/test1", LastMod: &ts},
{URL: "https://gitea.io/test2", LastMod: nil},
{
name: "without lastmod",
urls: []URL{
{URL: "https://gitea.io/test1"},
},
want: xml.Header + `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">` +
"<url><loc>https://gitea.io/test1</loc></url>" +
"</urlset>\n",
},
{
name: "multiple",
urls: []URL{
{URL: "https://gitea.io/test1", LastMod: &ts},
{URL: "https://gitea.io/test2", LastMod: nil},
},
want: xml.Header + `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">` +
"<url><loc>https://gitea.io/test1</loc><lastmod>2022-04-30T12:33:28Z</lastmod></url>" +
"<url><loc>https://gitea.io/test2</loc></url>" +
"</urlset>\n",
},
{
name: "too many urls",
urls: make([]URL, 50001),
wantErr: "The sitemap contains 50001 URLs, but only 50000 are allowed",
},
{
name: "too big file",
urls: []URL{
{URL: strings.Repeat("b", 50*1024*1024+1)},
},
wantErr: "The sitemap has 52428932 bytes, but only 52428800 are allowed",
},
"<url><loc>https://gitea.io/test1</loc><lastmod>2022-04-30T12:33:28Z</lastmod></url>"+
"<url><loc>https://gitea.io/test2</loc></url>",
)
}
func TestTooManyURLs(t *testing.T) {
s := NewSitemap()
for i := 0; i < 50001; i++ {
s.Add(URL{URL: fmt.Sprintf("https://gitea.io/test%d", i)})
}
buf := &bytes.Buffer{}
_, err := s.WriteTo(buf)
assert.EqualError(t, err, "The sitemap contains too many URLs: 50001")
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := NewSitemap()
for _, url := range tt.urls {
s.Add(url)
}
buf := &bytes.Buffer{}
_, err := s.WriteTo(buf)
if tt.wantErr != "" {
assert.EqualError(t, err, tt.wantErr)
} else {
assert.NoError(t, err)
assert.Equalf(t, tt.want, buf.String(), "NewSitemap()")
}
})
}
}
func TestSitemapTooBig(t *testing.T) {
s := NewSitemap()
s.Add(URL{URL: strings.Repeat("b", sitemapFileLimit)})
buf := &bytes.Buffer{}
_, err := s.WriteTo(buf)
assert.EqualError(t, err, "The sitemap is too big: 52428931")
func TestNewSitemapIndex(t *testing.T) {
ts := time.Unix(1651322008, 0).UTC()
tests := []struct {
name string
urls []URL
want string
wantErr string
}{
{
name: "empty",
urls: []URL{},
want: xml.Header + `<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">` +
"" +
"</sitemapindex>\n",
},
{
name: "regular",
urls: []URL{
{URL: "https://gitea.io/test1", LastMod: &ts},
},
want: xml.Header + `<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">` +
"<sitemap><loc>https://gitea.io/test1</loc><lastmod>2022-04-30T12:33:28Z</lastmod></sitemap>" +
"</sitemapindex>\n",
},
{
name: "without lastmod",
urls: []URL{
{URL: "https://gitea.io/test1"},
},
want: xml.Header + `<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">` +
"<sitemap><loc>https://gitea.io/test1</loc></sitemap>" +
"</sitemapindex>\n",
},
{
name: "multiple",
urls: []URL{
{URL: "https://gitea.io/test1", LastMod: &ts},
{URL: "https://gitea.io/test2", LastMod: nil},
},
want: xml.Header + `<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">` +
"<sitemap><loc>https://gitea.io/test1</loc><lastmod>2022-04-30T12:33:28Z</lastmod></sitemap>" +
"<sitemap><loc>https://gitea.io/test2</loc></sitemap>" +
"</sitemapindex>\n",
},
{
name: "too many sitemaps",
urls: make([]URL, 50001),
wantErr: "The sitemap contains 50001 sub-sitemaps, but only 50000 are allowed",
},
{
name: "too big file",
urls: []URL{
{URL: strings.Repeat("b", 50*1024*1024+1)},
},
wantErr: "The sitemap has 52428952 bytes, but only 52428800 are allowed",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := NewSitemapIndex()
for _, url := range tt.urls {
s.Add(url)
}
buf := &bytes.Buffer{}
_, err := s.WriteTo(buf)
if tt.wantErr != "" {
assert.EqualError(t, err, tt.wantErr)
} else {
assert.NoError(t, err)
assert.Equalf(t, tt.want, buf.String(), "NewSitemapIndex()")
}
})
}
}

View file

@ -10,6 +10,7 @@ type CreatePushMirrorOption struct {
RemoteUsername string `json:"remote_username"`
RemotePassword string `json:"remote_password"`
Interval string `json:"interval"`
SyncOnCommit bool `json:"sync_on_commit"`
}
// PushMirror represents information of a push mirror
@ -22,4 +23,5 @@ type PushMirror struct {
LastUpdateUnix string `json:"last_update"`
LastError string `json:"last_error"`
Interval string `json:"interval"`
SyncOnCommit bool `json:"sync_on_commit"`
}

View file

@ -76,8 +76,15 @@ func HTMLRenderer(ctx context.Context) (context.Context, *render.Render) {
compilingTemplates = false
if !setting.IsProd {
watcher.CreateWatcher(ctx, "HTML Templates", &watcher.CreateWatcherOpts{
PathsCallback: walkTemplateFiles,
BetweenCallback: renderer.CompileTemplates,
PathsCallback: walkTemplateFiles,
BetweenCallback: func() {
defer func() {
if err := recover(); err != nil {
log.Error("PANIC: %v\n%s", err, log.Stack(2))
}
}()
renderer.CompileTemplates()
},
})
}
return context.WithValue(ctx, rendererKey, renderer), renderer

View file

@ -23,20 +23,6 @@ func EnsureAbsolutePath(path, absoluteBase string) string {
return filepath.Join(absoluteBase, path)
}
const notRegularFileMode os.FileMode = os.ModeSymlink | os.ModeNamedPipe | os.ModeSocket | os.ModeDevice | os.ModeCharDevice | os.ModeIrregular
// GetDirectorySize returns the disk consumption for a given path
func GetDirectorySize(path string) (int64, error) {
var size int64
err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
if info != nil && (info.Mode()&notRegularFileMode) == 0 {
size += info.Size()
}
return err
})
return size, err
}
// IsDir returns true if given path is a directory,
// or returns false when it's a file or does not exist.
func IsDir(dir string) (bool, error) {

View file

@ -497,6 +497,7 @@ team_not_exist = The team does not exist.
last_org_owner = You cannot remove the last user from the 'owners' team. There must be at least one owner for an organization.
cannot_add_org_to_team = An organization cannot be added as a team member.
duplicate_invite_to_team = The user was already invited as a team member.
organization_leave_success = You have successfully left the organization %s.
invalid_ssh_key = Can not verify your SSH key: %s
invalid_gpg_key = Can not verify your GPG key: %s

@@ -34,6 +34,60 @@ func saveAsPackageBlob(hsr packages_module.HashedSizeReader, pi *packages_servic
contentStore := packages_module.NewContentStore()
uploadVersion, err := getOrCreateUploadVersion(pi)
if err != nil {
return nil, err
}
err = db.WithTx(func(ctx context.Context) error {
pb, exists, err = packages_model.GetOrInsertBlob(ctx, pb)
if err != nil {
log.Error("Error inserting package blob: %v", err)
return err
}
// FIXME: Workaround to be removed in v1.20
// https://github.com/go-gitea/gitea/issues/19586
if exists {
err = contentStore.Has(packages_module.BlobHash256Key(pb.HashSHA256))
if err != nil && (errors.Is(err, util.ErrNotExist) || errors.Is(err, os.ErrNotExist)) {
log.Debug("Package registry inconsistent: blob %s does not exist on file system", pb.HashSHA256)
exists = false
}
}
if !exists {
if err := contentStore.Save(packages_module.BlobHash256Key(pb.HashSHA256), hsr, hsr.Size()); err != nil {
log.Error("Error saving package blob in content store: %v", err)
return err
}
}
return createFileForBlob(ctx, uploadVersion, pb)
})
if err != nil {
if !exists {
if err := contentStore.Delete(packages_module.BlobHash256Key(pb.HashSHA256)); err != nil {
log.Error("Error deleting package blob from content store: %v", err)
}
}
return nil, err
}
return pb, nil
}
// mountBlob mounts the specific blob to a different package
func mountBlob(pi *packages_service.PackageInfo, pb *packages_model.PackageBlob) error {
uploadVersion, err := getOrCreateUploadVersion(pi)
if err != nil {
return err
}
return db.WithTx(func(ctx context.Context) error {
return createFileForBlob(ctx, uploadVersion, pb)
})
}
func getOrCreateUploadVersion(pi *packages_service.PackageInfo) (*packages_model.PackageVersion, error) {
var uploadVersion *packages_model.PackageVersion
// FIXME: Replace usage of mutex with database transaction
@@ -84,66 +138,35 @@ func saveAsPackageBlob(hsr packages_module.HashedSizeReader, pi *packages_servic
return nil
})
uploadVersionMutex.Unlock()
if err != nil {
return nil, err
return uploadVersion, err
}
func createFileForBlob(ctx context.Context, pv *packages_model.PackageVersion, pb *packages_model.PackageBlob) error {
filename := strings.ToLower(fmt.Sprintf("sha256_%s", pb.HashSHA256))
pf := &packages_model.PackageFile{
VersionID: pv.ID,
BlobID: pb.ID,
Name: filename,
LowerName: filename,
CompositeKey: packages_model.EmptyFileKey,
}
var err error
if pf, err = packages_model.TryInsertFile(ctx, pf); err != nil {
if err == packages_model.ErrDuplicatePackageFile {
return nil
}
log.Error("Error inserting package file: %v", err)
return err
}
err = db.WithTx(func(ctx context.Context) error {
pb, exists, err = packages_model.GetOrInsertBlob(ctx, pb)
if err != nil {
log.Error("Error inserting package blob: %v", err)
return err
}
// FIXME: Workaround to be removed in v1.20
// https://github.com/go-gitea/gitea/issues/19586
if exists {
err = contentStore.Has(packages_module.BlobHash256Key(pb.HashSHA256))
if err != nil && (errors.Is(err, util.ErrNotExist) || errors.Is(err, os.ErrNotExist)) {
log.Debug("Package registry inconsistent: blob %s does not exist on file system", pb.HashSHA256)
exists = false
}
}
if !exists {
if err := contentStore.Save(packages_module.BlobHash256Key(pb.HashSHA256), hsr, hsr.Size()); err != nil {
log.Error("Error saving package blob in content store: %v", err)
return err
}
}
filename := strings.ToLower(fmt.Sprintf("sha256_%s", pb.HashSHA256))
pf := &packages_model.PackageFile{
VersionID: uploadVersion.ID,
BlobID: pb.ID,
Name: filename,
LowerName: filename,
CompositeKey: packages_model.EmptyFileKey,
}
if pf, err = packages_model.TryInsertFile(ctx, pf); err != nil {
if err == packages_model.ErrDuplicatePackageFile {
return nil
}
log.Error("Error inserting package file: %v", err)
return err
}
if _, err := packages_model.InsertProperty(ctx, packages_model.PropertyTypeFile, pf.ID, container_module.PropertyDigest, digestFromPackageBlob(pb)); err != nil {
log.Error("Error setting package file property: %v", err)
return err
}
return nil
})
if err != nil {
if !exists {
if err := contentStore.Delete(packages_module.BlobHash256Key(pb.HashSHA256)); err != nil {
log.Error("Error deleting package blob from content store: %v", err)
}
}
return nil, err
if _, err := packages_model.InsertProperty(ctx, packages_model.PropertyTypeFile, pf.ID, container_module.PropertyDigest, digestFromPackageBlob(pb)); err != nil {
log.Error("Error setting package file property: %v", err)
return err
}
return pb, nil
return nil
}
func deleteBlob(ownerID int64, image, digest string) error {

@@ -196,10 +196,15 @@ func InitiateUploadBlob(ctx *context.Context) {
from := ctx.FormTrim("from")
if mount != "" {
blob, _ := workaroundGetContainerBlob(ctx, &container_model.BlobSearchOptions{
Image: from,
Digest: mount,
Repository: from,
Digest: mount,
})
if blob != nil {
if err := mountBlob(&packages_service.PackageInfo{Owner: ctx.Package.Owner, Name: image}, blob.Blob); err != nil {
apiError(ctx, http.StatusInternalServerError, err)
return
}
setResponseHeaders(ctx.Resp, &containerHeaders{
Location: fmt.Sprintf("/v2/%s/%s/blobs/%s", ctx.Package.Owner.LowerName, image, mount),
ContentDigest: mount,

@@ -1042,7 +1042,7 @@ func Routes(ctx gocontext.Context) *web.Route {
m.Get("/blobs/{sha}", repo.GetBlob)
m.Get("/tags/{sha}", repo.GetAnnotatedTag)
m.Get("/notes/{sha}", repo.GetNote)
}, context.ReferencesGitRepo(), reqRepoReader(unit.TypeCode))
}, context.ReferencesGitRepo(true), reqRepoReader(unit.TypeCode))
m.Post("/diffpatch", reqRepoWriter(unit.TypeCode), reqToken(), bind(api.ApplyDiffPatchFileOptions{}), repo.ApplyDiffPatch)
m.Group("/contents", func() {
m.Get("", repo.GetContentsList)

@@ -346,10 +346,11 @@ func CreatePushMirror(ctx *context.APIContext, mirrorOption *api.CreatePushMirro
}
pushMirror := &repo_model.PushMirror{
RepoID: repo.ID,
Repo: repo,
RemoteName: fmt.Sprintf("remote_mirror_%s", remoteSuffix),
Interval: interval,
RepoID: repo.ID,
Repo: repo,
RemoteName: fmt.Sprintf("remote_mirror_%s", remoteSuffix),
Interval: interval,
SyncOnCommit: mirrorOption.SyncOnCommit,
}
if err = repo_model.InsertPushMirror(ctx, pushMirror); err != nil {

@@ -149,8 +149,8 @@ func Install(ctx *context.Context) {
// Server and other services settings
form.OfflineMode = setting.OfflineMode
form.DisableGravatar = false // when installing, there is no database connection so that given a default value
form.EnableFederatedAvatar = false // when installing, there is no database connection so that given a default value
form.DisableGravatar = setting.DisableGravatar // when installing, there is no database connection so that given a default value
form.EnableFederatedAvatar = setting.EnableFederatedAvatar // when installing, there is no database connection so that given a default value
form.EnableOpenIDSignIn = setting.Service.EnableOpenIDSignIn
form.EnableOpenIDSignUp = setting.Service.EnableOpenIDSignUp
@@ -443,10 +443,13 @@ func SubmitInstall(ctx *context.Context) {
cfg.Section("server").Key("OFFLINE_MODE").SetValue(fmt.Sprint(form.OfflineMode))
// if you are reinstalling, this maybe not right because of missing version
if err := system_model.SetSettingNoVersion(system_model.KeyPictureDisableGravatar, strconv.FormatBool(form.DisableGravatar)); err != nil {
ctx.RenderWithErr(ctx.Tr("install.secret_key_failed", err), tplInstall, &form)
ctx.RenderWithErr(ctx.Tr("install.save_config_failed", err), tplInstall, &form)
return
}
if err := system_model.SetSettingNoVersion(system_model.KeyPictureEnableFederatedAvatar, strconv.FormatBool(form.EnableFederatedAvatar)); err != nil {
ctx.RenderWithErr(ctx.Tr("install.save_config_failed", err), tplInstall, &form)
return
}
cfg.Section("picture").Key("ENABLE_FEDERATED_AVATAR").SetValue(fmt.Sprint(form.EnableFederatedAvatar))
cfg.Section("openid").Key("ENABLE_OPENID_SIGNIN").SetValue(fmt.Sprint(form.EnableOpenIDSignIn))
cfg.Section("openid").Key("ENABLE_OPENID_SIGNUP").SetValue(fmt.Sprint(form.EnableOpenIDSignUp))
cfg.Section("service").Key("DISABLE_REGISTRATION").SetValue(fmt.Sprint(form.DisableRegistration))

@@ -108,13 +108,20 @@ func MembersAction(ctx *context.Context) {
}
case "leave":
err = models.RemoveOrgUser(org.ID, ctx.Doer.ID)
if organization.IsErrLastOrgOwner(err) {
if err == nil {
ctx.Flash.Success(ctx.Tr("form.organization_leave_success", org.DisplayName()))
ctx.JSON(http.StatusOK, map[string]interface{}{
"redirect": "", // keep the user stay on current page, in case they want to do other operations.
})
} else if organization.IsErrLastOrgOwner(err) {
ctx.Flash.Error(ctx.Tr("form.last_org_owner"))
ctx.JSON(http.StatusOK, map[string]interface{}{
"redirect": ctx.Org.OrgLink + "/members",
})
return
} else {
log.Error("RemoveOrgUser(%d,%d): %v", org.ID, ctx.Doer.ID, err)
}
return
}
if err != nil {

@@ -784,6 +784,10 @@ func setTemplateIfExists(ctx *context.Context, ctxDataKey string, possibleFiles
}
}
}
}
if !strings.HasPrefix(template.Ref, "refs/") { // Assume that the ref intended is always a branch - for tags users should use refs/tags/<ref>
template.Ref = git.BranchPrefix + template.Ref
}
ctx.Data["HasSelectedLabel"] = len(labelIDs) > 0
ctx.Data["label_ids"] = strings.Join(labelIDs, ",")

@@ -27,7 +27,7 @@ func CodeSearch(ctx *context.Context) {
ctx.Data["IsPackageEnabled"] = setting.Packages.Enabled
ctx.Data["IsRepoIndexerEnabled"] = setting.Indexer.RepoIndexerEnabled
ctx.Data["Title"] = ctx.Tr("code.title")
ctx.Data["Title"] = ctx.Tr("explore.code")
ctx.Data["ContextUser"] = ctx.ContextUser
language := ctx.FormTrim("l")

@@ -100,14 +100,18 @@ func KeysPost(ctx *context.Context) {
loadKeysData(ctx)
ctx.Data["Err_Content"] = true
ctx.Data["Err_Signature"] = true
ctx.Data["KeyID"] = err.(asymkey_model.ErrGPGInvalidTokenSignature).ID
keyID := err.(asymkey_model.ErrGPGInvalidTokenSignature).ID
ctx.Data["KeyID"] = keyID
ctx.Data["PaddedKeyID"] = asymkey_model.PaddedKeyID(keyID)
ctx.RenderWithErr(ctx.Tr("settings.gpg_invalid_token_signature"), tplSettingsKeys, &form)
case asymkey_model.IsErrGPGNoEmailFound(err):
loadKeysData(ctx)
ctx.Data["Err_Content"] = true
ctx.Data["Err_Signature"] = true
ctx.Data["KeyID"] = err.(asymkey_model.ErrGPGNoEmailFound).ID
keyID := err.(asymkey_model.ErrGPGNoEmailFound).ID
ctx.Data["KeyID"] = keyID
ctx.Data["PaddedKeyID"] = asymkey_model.PaddedKeyID(keyID)
ctx.RenderWithErr(ctx.Tr("settings.gpg_no_key_email_found"), tplSettingsKeys, &form)
default:
ctx.ServerError("AddPublicKey", err)
@@ -139,7 +143,9 @@ func KeysPost(ctx *context.Context) {
loadKeysData(ctx)
ctx.Data["VerifyingID"] = form.KeyID
ctx.Data["Err_Signature"] = true
ctx.Data["KeyID"] = err.(asymkey_model.ErrGPGInvalidTokenSignature).ID
keyID := err.(asymkey_model.ErrGPGInvalidTokenSignature).ID
ctx.Data["KeyID"] = keyID
ctx.Data["PaddedKeyID"] = asymkey_model.PaddedKeyID(keyID)
ctx.RenderWithErr(ctx.Tr("settings.gpg_invalid_token_signature"), tplSettingsKeys, &form)
default:
ctx.ServerError("VerifyGPG", err)

@@ -18,7 +18,6 @@ import (
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/notification"
"code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/modules/util"
)
// NewIssue creates new issue with labels for repository.
@@ -201,7 +200,7 @@ func GetRefEndNamesAndURLs(issues []*issues_model.Issue, repoLink string) (map[i
for _, issue := range issues {
if issue.Ref != "" {
issueRefEndNames[issue.ID] = git.RefEndName(issue.Ref)
issueRefURLs[issue.ID] = git.RefURL(repoLink, util.PathEscapeSegments(issue.Ref))
issueRefURLs[issue.ID] = git.RefURL(repoLink, issue.Ref)
}
}
return issueRefEndNames, issueRefURLs

@@ -34,10 +34,14 @@ func (f *GitBucketDownloaderFactory) New(ctx context.Context, opts base.MigrateO
return nil, err
}
baseURL := u.Scheme + "://" + u.Host
fields := strings.Split(u.Path, "/")
oldOwner := fields[1]
oldName := strings.TrimSuffix(fields[2], ".git")
if len(fields) < 2 {
return nil, fmt.Errorf("invalid path: %s", u.Path)
}
baseURL := u.Scheme + "://" + u.Host + strings.TrimSuffix(strings.Join(fields[:len(fields)-2], "/"), "/git")
oldOwner := fields[len(fields)-2]
oldName := strings.TrimSuffix(fields[len(fields)-1], ".git")
log.Trace("Create GitBucket downloader. BaseURL: %s RepoOwner: %s RepoName: %s", baseURL, oldOwner, oldName)
return NewGitBucketDownloader(ctx, baseURL, opts.AuthUsername, opts.AuthPassword, opts.AuthToken, oldOwner, oldName), nil
@@ -72,6 +76,7 @@ func (g *GitBucketDownloader) ColorFormat(s fmt.State) {
func NewGitBucketDownloader(ctx context.Context, baseURL, userName, password, token, repoOwner, repoName string) *GitBucketDownloader {
githubDownloader := NewGithubDownloaderV3(ctx, baseURL, userName, password, token, repoOwner, repoName)
githubDownloader.SkipReactions = true
githubDownloader.SkipReviews = true
return &GitBucketDownloader{
githubDownloader,
}

@@ -76,6 +76,7 @@ type GithubDownloaderV3 struct {
curClientIdx int
maxPerPage int
SkipReactions bool
SkipReviews bool
}
// NewGithubDownloaderV3 creates a github Downloader via github v3 API
@@ -805,6 +806,9 @@ func (g *GithubDownloaderV3) convertGithubReviewComments(cs []*github.PullReques
// GetReviews returns pull requests review
func (g *GithubDownloaderV3) GetReviews(reviewable base.Reviewable) ([]*base.Review, error) {
allReviews := make([]*base.Review, 0, g.maxPerPage)
if g.SkipReviews {
return allReviews, nil
}
opt := &github.ListOptions{
PerPage: g.maxPerPage,
}

@@ -73,32 +73,8 @@ func GitGcRepos(ctx context.Context, timeout time.Duration, args ...git.CmdArg)
return db.ErrCancelledf("before GC of %s", repo.FullName())
default:
}
log.Trace("Running git gc on %v", repo)
command := git.NewCommand(ctx, args...).
SetDescription(fmt.Sprintf("Repository Garbage Collection: %s", repo.FullName()))
var stdout string
var err error
stdout, _, err = command.RunStdString(&git.RunOpts{Timeout: timeout, Dir: repo.RepoPath()})
if err != nil {
log.Error("Repository garbage collection failed for %v. Stdout: %s\nError: %v", repo, stdout, err)
desc := fmt.Sprintf("Repository garbage collection failed for %s. Stdout: %s\nError: %v", repo.RepoPath(), stdout, err)
if err = system_model.CreateRepositoryNotice(desc); err != nil {
log.Error("CreateRepositoryNotice: %v", err)
}
return fmt.Errorf("Repository garbage collection failed in repo: %s: Error: %w", repo.FullName(), err)
}
// Now update the size of the repository
if err := repo_module.UpdateRepoSize(ctx, repo); err != nil {
log.Error("Updating size as part of garbage collection failed for %v. Stdout: %s\nError: %v", repo, stdout, err)
desc := fmt.Sprintf("Updating size as part of garbage collection failed for %s. Stdout: %s\nError: %v", repo.RepoPath(), stdout, err)
if err = system_model.CreateRepositoryNotice(desc); err != nil {
log.Error("CreateRepositoryNotice: %v", err)
}
return fmt.Errorf("Updating size as part of garbage collection failed in repo: %s: Error: %w", repo.FullName(), err)
}
// we can ignore the error here because it will be logged in GitGCRepo
_ = GitGcRepo(ctx, repo, timeout, args)
return nil
},
); err != nil {
@@ -109,6 +85,37 @@ func GitGcRepos(ctx context.Context, timeout time.Duration, args ...git.CmdArg)
return nil
}
// GitGcRepo calls 'git gc' to remove unnecessary files and optimize the local repository
func GitGcRepo(ctx context.Context, repo *repo_model.Repository, timeout time.Duration, args []git.CmdArg) error {
log.Trace("Running git gc on %-v", repo)
command := git.NewCommand(ctx, args...).
SetDescription(fmt.Sprintf("Repository Garbage Collection: %s", repo.FullName()))
var stdout string
var err error
stdout, _, err = command.RunStdString(&git.RunOpts{Timeout: timeout, Dir: repo.RepoPath()})
if err != nil {
log.Error("Repository garbage collection failed for %-v. Stdout: %s\nError: %v", repo, stdout, err)
desc := fmt.Sprintf("Repository garbage collection failed for %s. Stdout: %s\nError: %v", repo.RepoPath(), stdout, err)
if err := system_model.CreateRepositoryNotice(desc); err != nil {
log.Error("CreateRepositoryNotice: %v", err)
}
return fmt.Errorf("Repository garbage collection failed in repo: %s: Error: %w", repo.FullName(), err)
}
// Now update the size of the repository
if err := repo_module.UpdateRepoSize(ctx, repo); err != nil {
log.Error("Updating size as part of garbage collection failed for %-v. Stdout: %s\nError: %v", repo, stdout, err)
desc := fmt.Sprintf("Updating size as part of garbage collection failed for %s. Stdout: %s\nError: %v", repo.RepoPath(), stdout, err)
if err := system_model.CreateRepositoryNotice(desc); err != nil {
log.Error("CreateRepositoryNotice: %v", err)
}
return fmt.Errorf("Updating size as part of garbage collection failed in repo: %s: Error: %w", repo.FullName(), err)
}
return nil
}
func gatherMissingRepoRecords(ctx context.Context) ([]*repo_model.Repository, error) {
repos := make([]*repo_model.Repository, 0, 10)
if err := db.Iterate(
@@ -162,7 +169,7 @@ func DeleteMissingRepositories(ctx context.Context, doer *user_model.User) error
}
log.Trace("Deleting %d/%d...", repo.OwnerID, repo.ID)
if err := models.DeleteRepository(doer, repo.OwnerID, repo.ID); err != nil {
log.Error("Failed to DeleteRepository %s [%d]: Error: %v", repo.FullName(), repo.ID, err)
log.Error("Failed to DeleteRepository %-v: Error: %v", repo, err)
if err2 := system_model.CreateRepositoryNotice("Failed to DeleteRepository %s [%d]: Error: %v", repo.FullName(), repo.ID, err); err2 != nil {
log.Error("CreateRepositoryNotice: %v", err)
}

@@ -55,7 +55,7 @@ type (
Wait bool `json:"wait"`
Content string `json:"content"`
Username string `json:"username"`
AvatarURL string `json:"avatar_url"`
AvatarURL string `json:"avatar_url,omitempty"`
TTS bool `json:"tts"`
Embeds []DiscordEmbed `json:"embeds"`
}

@@ -139,7 +139,7 @@ func (f *WechatworkPayload) PullRequest(p *api.PullRequestPayload) (api.Payloade
func (f *WechatworkPayload) Review(p *api.PullRequestPayload, event webhook_model.HookEventType) (api.Payloader, error) {
var text, title string
switch p.Action {
case api.HookIssueSynchronized:
case api.HookIssueReviewed:
action, err := parseHookPullRequestEventType(event)
if err != nil {
return nil, err

@@ -77,6 +77,7 @@
</div>
{{else if .IsSigned}}
<div class="right stackable menu">
{{if EnableTimetracking}}
<a class="active-stopwatch-trigger item ui label {{if not .ActiveStopwatch}}hidden{{end}}" href="{{.ActiveStopwatch.IssueLink}}">
<span class="text">
<span class="fitted item">
@@ -115,6 +116,7 @@
</form>
</div>
</div>
{{end}}
<a href="{{AppSubUrl}}/notifications" class="item tooltip not-mobile" data-content="{{.locale.Tr "notifications"}}" aria-label="{{.locale.Tr "notifications"}}">
<span class="text">

@@ -51,7 +51,7 @@
{{.locale.Tr "packages.settings.delete"}}
</div>
<div class="content">
<div class="ui warning message text left">
<div class="ui warning message text left word-break">
{{.locale.Tr "packages.settings.delete.notice" .PackageDescriptor.Package.Name .PackageDescriptor.Version.Version}}
</div>
<form class="ui form" action="{{.Link}}" method="post">

@@ -27,7 +27,7 @@
</a>
{{if and ($.Permission.CanWrite $.UnitTypeCode) (not $.Repository.IsArchived) (not .IsDeleted)}}{{- /* */ -}}
<div class="ui primary tiny floating dropdown icon button">{{.locale.Tr "repo.commit.actions"}}
{{svg "octicon-triangle-down" 14 "dropdown icon"}}<span class="sr-mobile-only">{{.locale.Tr "repo.commit.actions"}}</span>
{{svg "octicon-triangle-down" 14 "dropdown icon"}}
<div class="menu">
<div class="ui header">{{.locale.Tr "repo.commit.actions"}}</div>
<div class="divider"></div>

@@ -143,7 +143,7 @@
{{$.locale.Tr "repo.diff.file_suppressed_line_too_long"}}
{{else}}
{{$.locale.Tr "repo.diff.file_suppressed"}}
<a class="ui basic tiny button diff-show-more-button" data-href="{{$.Link}}?file-only=true&files={{$file.Name}}&files={{$file.OldName}}">{{$.locale.Tr "repo.diff.load"}}</a>
<a class="ui basic tiny button diff-load-button" data-href="{{$.Link}}?file-only=true&files={{$file.Name}}&files={{$file.OldName}}">{{$.locale.Tr "repo.diff.load"}}</a>
{{end}}
{{else}}
{{$.locale.Tr "repo.diff.bin_not_shown"}}

@@ -413,7 +413,7 @@
<div class="df sb ac">
<div class="due-date tooltip {{if .Issue.IsOverdue}}text red{{end}}" {{if .Issue.IsOverdue}}data-content="{{.locale.Tr "repo.issues.due_date_overdue"}}"{{end}}>
{{svg "octicon-calendar" 16 "mr-3"}}
<time data-format="date" datetime="{{.Issue.DeadlineUnix.FormatLong}}">{{.Issue.DeadlineUnix.FormatDate}}</time>
<time data-format="date" datetime="{{.Issue.DeadlineUnix.FormatDate}}">{{.Issue.DeadlineUnix.FormatDate}}</time>
</div>
<div>
{{if and .HasIssuesOrPullsWritePermission (not .Repository.IsArchived)}}

@@ -18,14 +18,16 @@
{{end}}
{{range $.LatestCommitStatuses}}
<div class="ui attached segment">
<span>{{template "repo/commit_status" .}}</span>
<span class="ui">{{.Context}} <span class="text grey">{{.Description}}</span></span>
<div class="ui right">
{{if $.is_context_required}}
{{if (call $.is_context_required .Context)}}<div class="ui label">{{$.locale.Tr "repo.pulls.status_checks_requested"}}</div>{{end}}
{{end}}
<span class="ui">{{if .TargetURL}}<a href="{{.TargetURL}}">{{$.locale.Tr "repo.pulls.status_checks_details"}}</a>{{end}}</span>
<div class="ui attached segment pr-status">
{{template "repo/commit_status" .}}
<div class="status-context">
<span>{{.Context}} <span class="text grey">{{.Description}}</span></span>
<div class="ui status-details">
{{if $.is_context_required}}
{{if (call $.is_context_required .Context)}}<div class="ui label">{{$.locale.Tr "repo.pulls.status_checks_requested"}}</div>{{end}}
{{end}}
<span class="ui">{{if .TargetURL}}<a href="{{.TargetURL}}">{{$.locale.Tr "repo.pulls.status_checks_details"}}</a>{{end}}</span>
</div>
</div>
</div>
{{end}}

@@ -14831,6 +14831,10 @@
"remote_username": {
"type": "string",
"x-go-name": "RemoteUsername"
},
"sync_on_commit": {
"type": "boolean",
"x-go-name": "SyncOnCommit"
}
},
"x-go-package": "code.gitea.io/gitea/modules/structs"
@@ -18011,6 +18015,10 @@
"repo_name": {
"type": "string",
"x-go-name": "RepoName"
},
"sync_on_commit": {
"type": "boolean",
"x-go-name": "SyncOnCommit"
}
},
"x-go-package": "code.gitea.io/gitea/modules/structs"

@@ -18,11 +18,11 @@
<p>{{.locale.Tr "settings.gpg_token_required"}}</p>
</div>
<div class="field">
<label for="token">{{.locale.Tr "setting.gpg_token"}}
<label for="token">{{.locale.Tr "settings.gpg_token"}}
<input readonly="" value="{{.TokenToSign}}">
<div class="help">
<p>{{.locale.Tr "settings.gpg_token_help"}}</p>
<p><code>{{$.locale.Tr "settings.gpg_token_code" .TokenToSign .PaddedKeyID}}</code></p>
<p><code>{{$.locale.Tr "settings.gpg_token_code" .TokenToSign .KeyID}}</code></p>
</div>
</div>
<div class="field">

@@ -17,9 +17,13 @@
{{range .Orgs}}
<div class="item">
<div class="right floated content">
<form method="post" action="{{.OrganisationLink}}/members/action/leave">
<form>
{{$.CsrfTokenHtml}}
<button type="submit" class="ui primary small button" name="uid" value="{{.ID}}">{{$.locale.Tr "org.members.leave"}}</button>
<button class="ui red button delete-button" data-modal-id="leave-organization"
data-url="{{.OrganisationLink}}/members/action/leave" data-datauid="{{$.SignedUser.ID}}"
data-name="{{$.SignedUser.DisplayName}}"
data-data-organization-name="{{.DisplayName}}">{{$.locale.Tr "org.members.leave"}}
</button>
</form>
</div>
{{avatar . 28 "mini"}}
@@ -36,4 +40,14 @@
</div>
</div>
</div>
<div class="ui small basic delete modal" id="leave-organization">
<div class="ui icon header">
{{svg "octicon-x" 16 "close inside"}}
{{$.locale.Tr "org.members.leave"}}
</div>
<div class="content">
<p>{{$.locale.Tr "org.members.leave.detail" `<span class="dataOrganizationName"></span>` | Safe}}</p>
</div>
{{template "base/delete_modal_actions" .}}
</div>
{{template "base/footer" .}}

@@ -257,6 +257,32 @@ func TestPackageContainer(t *testing.T) {
})
})
t.Run("UploadBlob/Mount", func(t *testing.T) {
defer tests.PrintCurrentTest(t)()
req := NewRequest(t, "POST", fmt.Sprintf("%s/blobs/uploads?mount=%s", url, unknownDigest))
addTokenAuthHeader(req, userToken)
MakeRequest(t, req, http.StatusAccepted)
req = NewRequest(t, "POST", fmt.Sprintf("%s/blobs/uploads?mount=%s", url, blobDigest))
addTokenAuthHeader(req, userToken)
resp := MakeRequest(t, req, http.StatusCreated)
assert.Equal(t, fmt.Sprintf("/v2/%s/%s/blobs/%s", user.Name, image, blobDigest), resp.Header().Get("Location"))
assert.Equal(t, blobDigest, resp.Header().Get("Docker-Content-Digest"))
req = NewRequest(t, "POST", fmt.Sprintf("%s/blobs/uploads?mount=%s&from=%s", url, unknownDigest, "unknown/image"))
addTokenAuthHeader(req, userToken)
MakeRequest(t, req, http.StatusAccepted)
req = NewRequest(t, "POST", fmt.Sprintf("%s/blobs/uploads?mount=%s&from=%s/%s", url, blobDigest, user.Name, image))
addTokenAuthHeader(req, userToken)
resp = MakeRequest(t, req, http.StatusCreated)
assert.Equal(t, fmt.Sprintf("/v2/%s/%s/blobs/%s", user.Name, image, blobDigest), resp.Header().Get("Location"))
assert.Equal(t, blobDigest, resp.Header().Get("Docker-Content-Digest"))
})
for _, tag := range tags {
t.Run(fmt.Sprintf("[Tag:%s]", tag), func(t *testing.T) {
t.Run("UploadManifest", func(t *testing.T) {
@@ -445,21 +471,6 @@ func TestPackageContainer(t *testing.T) {
assert.Equal(t, indexManifestDigest, pd.Files[0].Properties.GetByName(container_module.PropertyDigest))
})
t.Run("UploadBlob/Mount", func(t *testing.T) {
defer tests.PrintCurrentTest(t)()
req := NewRequest(t, "POST", fmt.Sprintf("%s/blobs/uploads?mount=%s", url, unknownDigest))
addTokenAuthHeader(req, userToken)
MakeRequest(t, req, http.StatusAccepted)
req = NewRequest(t, "POST", fmt.Sprintf("%s/blobs/uploads?mount=%s", url, blobDigest))
addTokenAuthHeader(req, userToken)
resp := MakeRequest(t, req, http.StatusCreated)
assert.Equal(t, fmt.Sprintf("/v2/%s/%s/blobs/%s", user.Name, image, blobDigest), resp.Header().Get("Location"))
assert.Equal(t, blobDigest, resp.Header().Get("Docker-Content-Digest"))
})
t.Run("HeadBlob", func(t *testing.T) {
defer tests.PrintCurrentTest(t)()

@@ -209,8 +209,6 @@ func (s *TestSession) MakeRequestNilResponseHashSumRecorder(t testing.TB, req *h
const userPassword = "password"
var loginSessionCache = make(map[string]*TestSession, 10)
func emptyTestSession(t testing.TB) *TestSession {
t.Helper()
jar, err := cookiejar.New(nil)
@@ -225,12 +223,8 @@ func getUserToken(t testing.TB, userName string) string {
func loginUser(t testing.TB, userName string) *TestSession {
t.Helper()
if session, ok := loginSessionCache[userName]; ok {
return session
}
session := loginUserWithPassword(t, userName, userPassword)
loginSessionCache[userName] = session
return session
return loginUserWithPassword(t, userName, userPassword)
}
func loginUserWithPassword(t testing.TB, userName, password string) *TestSession {

@@ -22,7 +22,4 @@ func TestSignOut(t *testing.T) {
// try to view a private repo, should fail
req = NewRequest(t, "GET", "/user2/repo2")
session.MakeRequest(t, req, http.StatusNotFound)
// invalidate cached cookies for user2, for subsequent tests
delete(loginSessionCache, "user2")
}

@@ -8,7 +8,7 @@
<DiffFileTreeItem v-for="item in fileTree" :key="item.name" :item="item" />
</div>
<div v-if="isIncomplete" id="diff-too-many-files-stats" class="pt-2">
<span>{{ tooManyFilesMessage }}</span><a :class="['ui', 'basic', 'tiny', 'button', isLoadingNewData === true ? 'disabled' : '']" id="diff-show-more-files-stats" @click.stop="loadMoreData">{{ showMoreMessage }}</a>
<span class="mr-2">{{ tooManyFilesMessage }}</span><a :class="['ui', 'basic', 'tiny', 'button', isLoadingNewData === true ? 'disabled' : '']" id="diff-show-more-files-stats" @click.stop="loadMoreData">{{ showMoreMessage }}</a>
</div>
</div>
</template>
@@ -98,6 +98,9 @@ export default {
mounted() {
// ensure correct buttons when we are mounted to the dom
this.adjustToggleButton(this.fileTreeIsVisible);
// replace the pageData.diffFileInfo.files with our watched data so we get updates
pageData.diffFileInfo.files = this.files;
document.querySelector('.diff-toggle-file-tree-button').addEventListener('click', this.toggleVisibility);
},
unmounted() {

@@ -119,26 +119,47 @@ function onShowMoreFiles() {
export function doLoadMoreFiles(link, diffEnd, callback) {
const url = `${link}?skip-to=${diffEnd}&file-only=true`;
loadMoreFiles(url, callback);
}
function loadMoreFiles(url, callback) {
const $target = $('a#diff-show-more-files');
if ($target.hasClass('disabled')) {
callback();
return;
}
$target.addClass('disabled');
$.ajax({
type: 'GET',
url,
}).done((resp) => {
if (!resp) {
$target.removeClass('disabled');
callback(resp);
return;
}
$('#diff-incomplete').replaceWith($(resp).find('#diff-file-boxes').children());
// By simply rerunning the script we add the new data to our existing
// pagedata object. this triggers vue and the filetree and filelist will
// render the new elements.
$('body').append($(resp).find('script#diff-data-script'));
onShowMoreFiles();
callback(resp);
}).fail(() => {
$target.removeClass('disabled');
callback();
});
}
export function initRepoDiffShowMore() {
$(document).on('click', 'a.diff-show-more-button', (e) => {
$(document).on('click', 'a#diff-show-more-files', (e) => {
e.preventDefault();
const $target = $(e.target);
loadMoreFiles($target.data('href'), () => {});
});
$(document).on('click', 'a.diff-load-button', (e) => {
e.preventDefault();
const $target = $(e.target);

@@ -200,7 +200,7 @@ function getRelativeColor(color) {
}
function rgbToHex(rgb) {
rgb = rgb.match(/^rgb\((\d+),\s*(\d+),\s*(\d+)\)$/);
rgb = rgb.match(/^rgba?\((\d+),\s*(\d+),\s*(\d+).*\)$/);
return `#${hex(rgb[1])}${hex(rgb[2])}${hex(rgb[3])}`;
}

@@ -24,6 +24,7 @@ export function initStopwatch() {
trigger: 'click',
maxWidth: 'none',
interactive: true,
hideOnClick: true,
});
// global stop watch (in the head_navbar), it should always work in any case either the EventSource or the PeriodicPoller is used.

@@ -1,6 +1,6 @@
function displayError(el, err) {
const target = targetElement(el);
target.remove('is-loading');
target.classList.remove('is-loading');
const errorNode = document.createElement('div');
errorNode.setAttribute('class', 'ui message error markup-block-error mono');
errorNode.textContent = err.str || err.message || String(err);
@@ -23,12 +23,16 @@ export async function renderMath() {
for (const el of els) {
const source = el.textContent;
const options = {display: el.classList.contains('display')};
const displayMode = el.classList.contains('display');
const nodeName = displayMode ? 'p' : 'span';
try {
const markup = katex.renderToString(source, options);
const tempEl = document.createElement(options.display ? 'p' : 'span');
tempEl.innerHTML = markup;
const tempEl = document.createElement(nodeName);
katex.render(source, tempEl, {
maxSize: 25,
maxExpand: 50,
displayMode,
});
targetElement(el).replaceWith(tempEl);
} catch (error) {
displayError(el, error);

@@ -1665,6 +1665,9 @@
background-color: var(--color-teal);
}
}
.button {
padding: 8px 12px;
}
}
.diff-box .header:not(.resolved-placeholder) {
@@ -3491,3 +3494,41 @@ td.blob-excerpt {
max-width: 165px;
}
}
.pr-status {
padding: 0 !important; // To clear fomantic's padding on .ui.segment elements
display: flex;
align-items: center;
.commit-status {
margin: 1em;
flex-shrink: 0;
}
.status-context {
display: flex;
justify-content: space-between;
width: 100%;
> span {
padding: 1em 0;
}
}
.status-details {
display: flex;
padding-right: .5em;
align-items: center;
justify-content: flex-end;
@media @mediaSm {
flex-direction: column;
align-items: flex-end;
justify-content: center;
}
> span {
padding-right: .5em; // To match the alignment with the "required" label
}
}
}