2024-07-01 02:12:59 I am really amazed, ccl takes 6 minutes to build sbcl, while ecl (which is what we're using now) takes 22 minutes 2024-07-01 02:13:38 Too bad ccl is x86_64-only 2024-07-01 02:13:51 as is clisp 2024-07-01 02:19:10 Now i'm curious how fast clisp will be 2024-07-01 02:28:52 It's almost 8 minutes now, so clisp is slower than ccl 2024-07-01 02:41:08 20 minutes now, so clisp may actually be slower than ecl 2024-07-01 02:56:01 clisp takes 26 minutes.. 2024-07-01 10:41:05 +14 :( 2024-07-01 14:15:44 algitbot: retry master 2024-07-01 14:36:53 algitbot: retry master 2024-07-01 14:38:18 algitbot: retry master 2024-07-01 15:18:47 +20 MRs today 2024-07-01 15:27:12 Oh well 2024-07-01 15:28:08 I'm going AFK so no more MRs from me 2024-07-01 16:05:38 Any idea why build-edge-loongarch64 can't find abuild-apk? 2024-07-01 18:45:33 celie: It should be installed through an install_if, but it does not for some reason 2024-07-01 18:47:02 May have to do with: WARNING: opening /home/buildozer/packages/main: UNTRUSTED signature 2024-07-01 19:22:38 socket path too long, that's the first time I've encountered that 2024-07-01 22:08:21 wat 2024-07-01 22:09:05 ah, loongarch 2024-07-02 00:55:07 ikke: That's very likely the case, as i believe either huajingyun or znley fixed libseccomp by uploading directly to dev.a.o/~loongarch/edge (the MR is still open, the last time i checked) 2024-07-02 00:56:58 This may also have something to do with an observation i made last week (when build-edge-loongarch64 first came online), that main/alpine-keys was one of the packages it built 2024-07-02 01:00:58 Aha! !61715 i believe main/alpine-keys had the Loongarch key before build-edge-loongarch64 overwrote it when main/ was unblocked after libseccomp 2.5.5-r1 was uploaded manually 2024-07-02 01:03:12 main/alpine-keys in dev.a.o/~loongarch/edge, that is 2024-07-02 01:03:26 Other archs obviously didn't have it as the MR is still open 2024-07-02 01:06:58 s/didn't/don't/ 2024-07-02 02:35:47 algitbot: retry master 2024-07-02 07:21:14 algitbot: retry master 2024-07-02 07:22:19 algitbot: retry master 2024-07-02 07:22:47 algitbot: retry master 2024-07-02 19:56:19 fatal error: qwayland-plasma-shell.h: No such file or directory 2024-07-02 20:12:21 ikke: oh, that's the same problem we had for a while with another package 2024-07-02 20:14:20 do you remember the solution? 2024-07-03 04:59:57 hehe 2024-07-03 05:00:01 I was just about to merge that 2024-07-03 05:02:10 :) 2024-07-03 05:02:32 I was wondering about the patching in https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/68541/ though 2024-07-03 05:03:39 Hmm, yes, suddenly having so many patches being added does make people wonder why they're needed 2024-07-03 05:04:36 It seems to be mostly format security and curl 2024-07-03 05:05:08 Yes, but also for example, why remove the check to see if the curl version is newer then quite an old version? 2024-07-03 05:07:17 Format security: https://github.com/Ettercap/ettercap/commit/85d717e6a 2024-07-03 05:09:26 Curl version fix: https://github.com/Ettercap/ettercap/commit/4053466204 2024-07-03 05:10:22 Ok, but that is a different patch then included, where the version check is removed completely. 
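An aside on the Ettercap patches being discussed: if the fixes are meant to track the upstream commits linked just above, they can be pulled straight from GitHub as patch files, which also makes it obvious later which patches can be dropped once a release contains them. A rough sketch only; the local file names are made up.

```sh
# Sketch: fetch the two upstream Ettercap fixes referenced above as patch
# files (GitHub serves any commit as a patch when ".patch" is appended to
# its URL). Output file names here are illustrative.
curl -fsSL -o 0001-format-security.patch \
    https://github.com/Ettercap/ettercap/commit/85d717e6a.patch
curl -fsSL -o 0002-dont-require-minimum-curl-version.patch \
    https://github.com/Ettercap/ettercap/commit/4053466204.patch
```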
But probably moot since it's just a build-time check 2024-07-03 05:13:49 Yeah, i guess what Zewei needs to be told is that we prefer to have patches made from the diff to upstream master, so when that master is realeased as the next version, it is easier to see what patches are no longer needed 2024-07-03 05:13:59 released* 2024-07-03 05:54:04 chromium incoming for 3.20 2024-07-03 09:50:13 algitbot: retry master 2024-07-03 10:28:22 Hmm, it seems ppc64le didn't build findutils 4.10.0? 2024-07-03 10:28:45 algitbot: retry master 2024-07-03 10:28:58 Now it's building 2024-07-03 13:14:35 Hello 2024-07-03 13:15:57 There seems to be a bad signature for "dbus-daemon-launch-helper" on build-edge-loongarch64 2024-07-03 13:16:42 algitbot: retry master 2024-07-03 13:36:46 Anyone can use "algitbot: retry master" btw, and you can do it on any channel algitbot is on (including #alpine-loongarch) 2024-07-03 13:36:51 algitbot: retry master 2024-07-03 14:05:44 algitbot: retry master 2024-07-03 14:11:10 algitbot: retry master 2024-07-03 14:13:24 algitbot: retry master 2024-07-03 14:20:53 algitbot: retry master 2024-07-03 14:24:07 algitbot: retry master 2024-07-03 14:26:23 algitbot: retry master 2024-07-03 14:32:11 algitbot: retry master 2024-07-03 14:34:33 algitbot: retry master 2024-07-03 14:41:57 algitbot: retry master 2024-07-03 14:49:06 algitbot: retry master 2024-07-03 14:53:31 algitbot: retry master 2024-07-03 14:59:32 algitbot: retry master 2024-07-03 15:05:56 ah, go upgrade 2024-07-03 15:08:39 Yes 2024-07-03 15:08:51 Though build-edge-loongarch64 seems to keep building py3-rdflib instead 2024-07-03 15:10:11 It's committed 2024-07-03 15:11:05 Ah, now it builds Go 2024-07-03 15:11:13 ACTION keeps fingers crossed 2024-07-03 15:29:58 algitbot: retry master 2024-07-03 15:35:49 i/o timeout 2024-07-03 15:36:23 github-cli: "failed to create TUF client: tuf refresh failed: failed to persist metadata: %!w() 2024-07-03 15:36:25 " 2024-07-03 15:36:43 I guess the connectivity concerns you had is justified now 2024-07-03 15:37:02 github-cli hasn't been upgraded for many versions IIRC 2024-07-03 15:39:06 but it's built successfully on x86 2024-07-03 15:39:09 algitbot: retry master 2024-07-03 15:40:10 "%!w()" is sus 2024-07-03 15:46:52 algitbot: retry master 2024-07-03 15:49:22 algitbot: retry master 2024-07-03 15:52:11 algitbot: retry master 2024-07-03 15:56:16 algitbot: retry master 2024-07-03 16:01:32 algitbot: retry master 2024-07-03 16:04:51 algitbot: retry master 2024-07-03 16:06:47 algitbot: retry master 2024-07-03 16:13:41 algitbot: retry master 2024-07-03 16:13:57 Loongarch, please build Perl successfully :) 2024-07-03 16:14:17 or rather, a Perl aport 2024-07-03 16:15:21 :) 2024-07-03 16:16:42 algitbot: retry master 2024-07-03 16:17:09 algitbot: retry master 2024-07-03 16:17:31 Wow, Go's use of "net" is really triggering the connectivity issues 2024-07-03 16:17:39 Oh wait 2024-07-03 16:17:41 lol 2024-07-03 16:17:46 Loki was aarch64 2024-07-03 16:18:37 "warning: Not a git repository. Use --no-index to compare two paths outside a working tree" 2024-07-03 16:19:24 Hmm, after that the build continues 2024-07-03 16:19:27 possibly due to https://gitlab.alpinelinux.org/alpine/abuild/-/blob/master/abuild.in#L704? 
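For context on the "Not a git repository" warning quoted above, a minimal standalone illustration, independent of anything abuild itself does; the scratch files are obviously made up.

```sh
# git emits that warning when `git diff` is handed two plain paths while
# running outside a git work tree; --no-index is the explicit form for
# comparing two arbitrary paths.
cd "$(mktemp -d)"
printf 'old\n' > a
printf 'new\n' > b
git diff a b             # triggers the "Not a git repository" warning
git diff --no-index a b  # explicit two-path diff; exits 1 when the files differ
```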
2024-07-03 16:19:31 Then it fails a test 2024-07-03 16:19:35 yup 2024-07-03 16:20:27 Loongarch keeps building that 2024-07-03 16:20:30 algitbot: retry master 2024-07-03 16:22:08 algitbot: retry master 2024-07-03 16:27:05 algitbot: retry master 2024-07-03 16:33:03 Rdflib again 2024-07-03 16:33:04 algitbot: retry master 2024-07-03 16:37:35 and again 2024-07-03 16:37:37 algitbot: retry master 2024-07-03 16:39:50 algitbot: retry master 2024-07-03 16:42:46 celeste: maybe disable tests for py3-rdflib on loongarch? and file issue 2024-07-03 16:43:38 I think it's a network connectivity issue 2024-07-03 16:44:29 So, maybe it can wait for Loongarch office hours, and see if they can let the connection go through 2024-07-03 19:19:05 algitbot: retry master 2024-07-03 22:37:06 algitbot: retry master 2024-07-03 22:48:46 algitbot: retry master 2024-07-03 22:49:24 algitbot: retry master 2024-07-03 22:50:41 algitbot: retry master 2024-07-03 22:54:39 algitbot: retry master 2024-07-03 23:17:06 algitbot: retry master 2024-07-03 23:24:00 algitbot: retry master 2024-07-03 23:25:42 algitbot: retry master 2024-07-03 23:33:19 algitbot: retry master 2024-07-04 00:29:46 algitbot: retry master 2024-07-04 00:42:15 algitbot: retry master 2024-07-04 00:55:17 algitbot: retry master 2024-07-04 01:14:10 algitbot: retry master 2024-07-04 01:19:58 algitbot: retry master 2024-07-04 01:27:04 Hmm, it seems elastic-beats is failing only on x86_64? 2024-07-04 01:28:42 8.14.1-r0 built on 25 Jun, i think that was before x86_64 was moved to the new builder, so maybe it has something to do with that? 2024-07-04 02:02:41 algitbot: retry master 2024-07-04 02:04:19 For opa: https://github.com/open-policy-agent/opa/issues/6777 2024-07-04 02:04:51 It seems the opa devs haven't gotten around to updating the wasmtime-go version they are using due to problems on Windows 2024-07-04 04:51:46 algitbot: retry master 2024-07-04 05:03:09 algitbot: retry master 2024-07-04 05:21:59 algitbot: retry master 2024-07-04 05:39:44 algitbot: retry master 2024-07-04 05:46:37 algitbot: retry master 2024-07-04 05:56:26 I wonder if Opa should just be disabled, since upstream is having issues making it compatible with new wasmtime (and during wasmtime upgrades, we don't also build opa and ensure it succeeds first before upgrading wasmtime) 2024-07-04 06:09:35 algitbot: retry master 2024-07-04 06:11:46 algitbot: retry master 2024-07-04 06:15:38 algitbot: retry master 2024-07-04 08:12:29 celeste: let's disable opa for !61506 2024-07-04 08:25:02 Thanks 2024-07-04 08:25:43 Now there's elastic-beats (which succeeds on aarch64) 2024-07-04 08:25:59 I wonder if it could be someone on the new x86_64 builder? 2024-07-04 08:26:37 (but then i remember that it should be a container, so the whole container was probably moved from old builder host to the new one) 2024-07-04 08:36:31 Yes, that's why i think it may be failing due to something on the builder 2024-07-04 10:48:03 Is build-edge-riscv64 stuck on filebrowser, and build-edge-loongarch64 on py3-tika? 2024-07-04 10:52:51 first one looks like yes 2024-07-04 10:53:53 algitbot: kick build-edge-riscv64 2024-07-04 10:55:13 that no longer works 2024-07-04 11:00:53 ikke: We now have community/pfetch and testing/pfetch with 2 different maintainers 2024-07-04 11:03:10 😒 2024-07-04 11:06:12 and now that may be about to happen again with !68704 2024-07-04 11:09:43 cely: thanks, closed that MR 2024-07-04 11:10:14 You're welcome 2024-07-04 11:10:25 Unfortunately things like this happen.. 
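A rough sketch of the kind of duplicate check that could have caught the pfetch situation above before merge -- not an existing tool or aports-qa-bot feature, just the shape of the idea. Normalizing '.' to '-' would also catch near-identical names.

```sh
# Hypothetical pre-merge check, run from the aports root: does an aport
# with this (normalized) name already exist in any repo? "pfetch" is just
# the example package name here.
new="pfetch"
norm=$(printf '%s' "$new" | tr '.' '-')
for dir in main/*/ community/*/ testing/*/; do
    name=$(basename "$dir" | tr '.' '-')
    if [ "$name" = "$norm" ]; then
        echo "aport $(basename "$dir") already exists in ${dir%%/*}"
    fi
done
```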
2024-07-04 11:12:24 Maybe there could be a bot that looks for first time contributions of new aports that do not go to testing 2024-07-04 11:12:25 yup 2024-07-04 11:20:51 could be added to aports-qa-bot 2024-07-04 11:47:01 and in any case if a package already exists 2024-07-04 11:49:56 Yeah, that too 2024-07-04 11:51:06 Though i remember the case of community/py3-ruamel.yaml and testing/py3-ruamel-yaml 2024-07-04 11:51:52 a4f3ecf594f33d23512ce0d4ef230ff6b198b96e 2024-07-04 11:51:55 Slightly different names, but still the same aport, and the one in testing/ had to be removed later on 2024-07-04 11:52:02 Yeah 2024-07-04 11:52:38 yeah, it will not be perfect, but at least catch the obvious cases 2024-07-04 11:52:52 :) 2024-07-04 11:58:24 Bye, elastic-beats 2024-07-04 11:59:05 Well, hopefully someone who actually uses that notices and comes to take up maintainership 2024-07-04 12:00:22 Not sure if it can, it requires a fork of go 2024-07-04 12:00:27 oh sorry 2024-07-04 12:00:33 that was cloudeflared 2024-07-04 12:00:37 Yeah 2024-07-04 12:00:42 I was just recalling the name 2024-07-04 12:00:48 trying to recall* 2024-07-04 12:17:04 +22 MRs already today 2024-07-04 12:17:21 wow 2024-07-04 12:17:48 Everyone has been busyb 2024-07-04 12:17:50 lol 2024-07-04 12:17:57 I think i almost typed busybox 2024-07-04 12:18:01 0haha 2024-07-04 12:18:02 hahaha 2024-07-04 12:46:00 ^ commit message could be better 2024-07-04 12:52:13 Oops, sorry 2024-07-04 12:52:17 What would you suggest? 2024-07-04 12:54:16 cely: at least add context about what is being fixed / why 2024-07-04 12:55:56 Ok 2024-07-04 13:04:34 Hmm, apparently there are still some aports in testing/ that have not been rebuilt against Python 3.12: py3-certauth and py3-wsgiproxy (found by !68726) 2024-07-04 13:06:24 Too bad upstream for both seems to be inactive 2024-07-04 14:45:01 hmz, amount of open MRs only increases 2024-07-04 17:08:21 Riscv64 is the first?! 2024-07-04 17:08:33 Oh, it has !check 2024-07-04 17:11:59 Cheating! 2024-07-04 17:15:21 and the thing is, the line of shell to disable check is (from memory): `[ "$CARCH" != "riscv64" ] || options="!check"` 2024-07-04 17:16:18 First time i've seen the logic expressed this way 2024-07-04 17:16:56 double negative, I'd use [ "$CARCH" 2024-07-04 17:17:02 My impression is that it's usually `[ "$CARCH" = "riscv64" ] && options="!check"` 2024-07-04 17:17:05 yes 2024-07-04 17:17:30 It's a double negative, so a bit harder to understand 2024-07-04 17:17:51 Yes, so "cheating" and an unusual way of expressing the logic :D 2024-07-04 19:49:00 algitbot: retry master 2024-07-04 19:49:05 please build please build please build 2024-07-04 20:07:57 algitbot: retry master 2024-07-05 01:18:40 algitbot: retry master 2024-07-05 02:17:30 algitbot: retry master 2024-07-05 06:45:37 algitbot: retry master 2024-07-05 06:48:37 Hmm, i wonder why it fails to find pthread.h 2024-07-05 06:48:39 algitbot: retry master 2024-07-05 07:16:02 Hmm 2024-07-05 07:17:04 Hopefully this version of Synapse doesn't need any newer Python modules not yet available in 3.20-stable 2024-07-05 10:20:04 Hmm, what has happened 2024-07-05 10:22:33 Ah, i think the kernel's pkgrel was not reset to 0 2024-07-05 10:23:41 or rather, the KBUILD_BUILD_TIMESTAMP commit set pkgrel to 1, and then the upgrade commit immediately after that didn't set it back to 0 2024-07-05 13:57:36 :/ 2024-07-05 13:58:01 I thought the ARM issue was properly fixed in the MR 2024-07-05 14:21:00 algitbot: retry master 2024-07-05 15:11:34 What?! 
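The two spellings from the per-arch !check exchange above (2024-07-04 17:15 onward), side by side; riscv64 is only the example arch and this is plain APKBUILD-level shell, nothing more.

```sh
# Double-negative form quoted from memory above: "either $CARCH is not
# riscv64, or disable the tests" -- works, but reads backwards.
[ "$CARCH" != "riscv64" ] || options="!check"

# The more usual way to express the same thing:
[ "$CARCH" = "riscv64" ] && options="!check"
```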
2024-07-05 15:11:45 algitbot: retry master 2024-07-05 15:11:52 I think those arches were still building 2024-07-05 15:12:27 Such flaky tests :( 2024-07-05 15:13:34 flaky tests are flaky 2024-07-05 15:39:12 If nushell fails again, i'll probably disable that new failing test again 2024-07-05 15:39:19 but i'll stop there 2024-07-05 15:39:38 If it fails again someone else will have to think of something 2024-07-05 15:40:31 In the back of my mind, i sort of expect that as more tests are disabled, more failing tests will be uncovered 2024-07-05 16:09:13 signal 11, fun 2024-07-05 16:09:19 algitbot: retry master 2024-07-05 16:21:38 Finally! 2024-07-05 17:43:20 algitbot: retry 3.20-stable 2024-07-06 02:35:52 Hmm, "Operation timed out" while connecting to "objects.githubusercontent.com" 2024-07-06 02:35:54 algitbot: retry master 2024-07-06 02:51:17 It seems that we need to add /home/buildozer/packages/main to /etc/apk/repositories 2024-07-06 02:52:01 To fix the indi-3rdparty issue? 2024-07-06 02:53:56 No, just because the dependency package libindi-dev is missing 2024-07-06 02:53:59 Since there is only main in /etc/apk/repositories currently, I just added community 2024-07-06 02:56:35 Need to retry master 2024-07-06 03:00:19 ^ my solution to the indi-3rdparty build failure 2024-07-06 03:00:46 (tried it out in CI, and it works; it was failing for all archs before that) 2024-07-06 03:23:19 rdflib again 2024-07-06 04:12:35 03:32:04 if someone with access is reading this please reset the riscv64 builder queues; the risc-v build is clogged for a few days https://build.alpinelinux.org/ stuck at testing/kompose-1.31.2-r5 and testing/filebrowser-2.27.0-r6. 2024-07-06 04:12:47 i guess they're right? 2024-07-06 04:13:02 They're right 2024-07-06 04:13:19 but i think something may be wrong with Node on riscv64 2024-07-06 04:30:34 Ah, filebrowser is newly enabled on riscv64 2024-07-06 04:32:02 algitbot: retry master 2024-07-06 04:36:33 algitbot: retry master 2024-07-06 04:44:25 algitbot: retry master 2024-07-06 04:48:53 algitbot: retry master 2024-07-06 04:56:09 algitbot: retry master 2024-07-06 05:38:48 algitbot: retry master 2024-07-06 15:18:20 algitbot: retry master 2024-07-06 17:14:25 algitbot: retry master 2024-07-06 17:14:46 kig should hopefully be a temporary failure 2024-07-06 18:04:32 algitbot: retry master 2024-07-06 20:25:18 algitbot: retry master 2024-07-07 02:28:47 algitbot: retry master 2024-07-07 02:34:56 algitbot: retry master 2024-07-07 05:58:04 I think s390x is having some issues contacting crates.io 2024-07-07 05:58:16 algitbot: retry master 2024-07-07 05:59:11 Hmm, but was jaq the only Rust aport i merged (at least that's enabled for s390x)? 2024-07-07 06:11:28 algitbot: retry master 2024-07-07 06:14:56 Ok, seems like jaq has built on s390x now 2024-07-07 07:14:27 algitbot: retry master 2024-07-07 13:01:06 algitbot: retry master 2024-07-07 13:01:29 algitbot: retry 3.20-stable 2024-07-07 13:02:39 ^ affected by the s390x crates.io network issue is 2024-07-07 13:02:46 issue* 2024-07-07 13:02:52 algitbot: retry master 2024-07-07 14:24:37 algitbot: retry master 2024-07-07 14:25:34 Finally! 
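Circling back to the opa/wasmtime point from 2024-07-04: a crude way to list which aports build against wasmtime, so they can be test-built together with a wasmtime upgrade instead of breaking afterwards. Just a sketch, run from an aports checkout.

```sh
# Aports whose APKBUILD mentions wasmtime (build deps, pinned versions,
# etc.) are candidates to rebuild and test alongside a wasmtime upgrade.
grep -l wasmtime main/*/APKBUILD community/*/APKBUILD testing/*/APKBUILD
```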
2024-07-07 16:57:34 algitbot: retry master 2024-07-07 16:59:54 algitbot: retry 3.20-stable 2024-07-07 17:00:54 Hmm, build-3-20-riscv64 immediately retried lego again 2024-07-07 17:01:05 Hopefully, it's just flaky tests 2024-07-07 17:02:10 algitbot: retry master 2024-07-07 17:13:38 algitbot: retry master 2024-07-07 17:17:15 algitbot: retry master 2024-07-07 17:25:14 algitbot: retry master 2024-07-07 19:16:00 algitbot: retry master 2024-07-07 20:43:51 algitbot: retry master 2024-07-08 01:26:47 algitbot: retry master 2024-07-08 04:19:28 algitbot: retry 3.20-stable 2024-07-08 04:20:01 Hmm, build-3-20-riscv64 didn't get retried 2024-07-08 04:20:18 Together with build-edge-riscv64 being stuck for so long on filebrowser 2024-07-08 04:20:47 I guess the riscv64 builder has been stopped or something 2024-07-08 06:25:20 algitbot: retry master 2024-07-08 07:15:36 algitbot: retry master 2024-07-08 08:58:19 algitbot: retry 3.20-stable 2024-07-08 09:06:16 algitbot: retry 3.20-stable 2024-07-08 09:36:18 algitbot: retry 3.20-stable 2024-07-08 09:47:27 algitbot: retry 3.20-stable 2024-07-08 10:01:37 algitbot: retry master 2024-07-08 10:42:05 algitbot: retry 3.20-stable 2024-07-08 12:30:12 algitbot: retry master 2024-07-08 12:43:50 algitbot: retry master 2024-07-08 12:44:56 algitbot: retry 3.20-stable 2024-07-08 12:54:58 algitbot: retry master 2024-07-08 13:13:20 algitbot: retry 3.20-stable 2024-07-08 13:19:41 algitbot: retry 3.20-stable 2024-07-08 13:32:41 algitbot: retry 3.20-stable 2024-07-08 13:46:30 algitbot: retry 3.20-stable 2024-07-08 14:01:26 algitbot: retry master 2024-07-08 14:40:37 algitbot: retry 3.20-stable 2024-07-08 14:51:02 algitbot: retry master 2024-07-08 14:51:06 algitbot: retry 3.20-stable 2024-07-08 14:56:58 algitbot: retry master 2024-07-08 16:18:31 algitbot: retry 3.20-stable 2024-07-08 16:20:38 algitbot: retry 3.20-stable 2024-07-08 16:23:24 algitbot: retry 3.20-stable 2024-07-08 16:27:55 algitbot: retry 3.20-stable 2024-07-08 16:37:42 algitbot: retry 3.20-stable 2024-07-08 16:37:46 algitbot: retry master 2024-07-08 16:50:29 algitbot: retry 3.20-stable 2024-07-08 16:59:14 algitbot: retry 3.20-stable 2024-07-08 17:09:10 algitbot: retry 3.20-stable 2024-07-08 17:22:43 algitbot: retry master 2024-07-08 17:41:35 algitbot: retry master 2024-07-08 19:47:27 algitbot: retry 3.20-stable 2024-07-08 19:48:40 algitbot: retry master 2024-07-09 01:14:22 algitbot: retry 3.20-stable 2024-07-09 01:14:27 algitbot: retry master 2024-07-09 01:46:25 algitbot: retry 3.20-stable 2024-07-09 01:51:21 x86_64 is first this time :) 2024-07-09 01:51:51 algitbot: retry master 2024-07-09 03:01:08 algitbot: retry master 2024-07-09 03:06:03 algitbot: retry master 2024-07-09 03:22:24 algitbot: retry master 2024-07-09 03:22:46 It seems every Rust aport merged now requires multiple retries before succeeding on s390x due to network issues 2024-07-09 03:23:29 algitbot: retry master 2024-07-09 03:29:48 algitbot: retry master 2024-07-09 03:44:24 algitbot: retry 3.20-stable 2024-07-09 03:51:49 algitbot: retry 3.20-stable 2024-07-09 04:07:40 algitbot: retry 3.20-stable 2024-07-09 04:20:28 algitbot: retry 3.20-stable 2024-07-09 04:29:00 algitbot: retry 3.20-stable 2024-07-09 04:44:29 algitbot: retry 3.20-stable 2024-07-09 04:58:46 algitbot: retry 3.20-stable 2024-07-09 05:07:10 algitbot: retry 3.20-stable 2024-07-09 05:16:57 algitbot: retry 3.20-stable 2024-07-09 05:25:32 algitbot: retry 3.20-stable 2024-07-09 10:53:56 algitbot: retry 3.20-stable 2024-07-09 11:35:27 Now Github is very slow for me :/ 2024-07-09 13:51:42 \o/ 
2024-07-09 13:51:47 Finally uploaded 2024-07-09 14:34:29 Hmm, it seems traefik finally passed on 3.20-stable 2024-07-09 16:27:43 algitbot: retry master 2024-07-09 16:30:39 algitbot: retry master 2024-07-09 16:59:17 algitbot: retry master 2024-07-09 19:33:12 oof, that's an ancient version of github.com/cilium/ebpf 2024-07-09 19:35:22 there's a MR available though 2024-07-09 21:16:37 algitbot: retry master 2024-07-09 23:00:59 algitbot: retry master 2024-07-10 00:53:02 algitbot: retry master 2024-07-10 01:14:23 algitbot: retry master 2024-07-10 01:34:56 fcolista: Hi fcolista, can you help review this !68984? gsa has blocked the builder 2024-07-10 03:00:44 algitbot: retry master 2024-07-10 03:05:57 algitbot: retry master 2024-07-10 03:17:29 algitbot: retry master 2024-07-10 06:34:42 bogofilter: "/usr/include/fortify/sys/select.h:26:10: fatal error: ../fortify-headers.h: No such file or directory" 2024-07-10 07:44:43 ^ probably something else that needs a workaround for fortify-headers 2.3.1 2024-07-10 07:50:21 yyjson also needed a workaround, and its maintainer added it, but i think then resigned in frustration (frustration can be seen in !64292) 2024-07-10 07:54:01 algitbot: retry master 2024-07-10 08:49:28 MRs under alpine/aports have gone below 500 :) 2024-07-10 08:50:09 ACTION waits to see who will create the 500th open MR 2024-07-10 08:50:32 Didn't need to wait long :D 2024-07-10 08:51:25 algitbot: retry master 2024-07-10 08:52:17 algitbot: retry master 2024-07-10 09:55:45 algitbot: retry master 2024-07-10 13:57:56 algitbot: retry master 2024-07-10 13:58:15 ACTION hopes that backport fixes iwd 2024-07-10 13:58:46 Ok it does 2024-07-10 13:58:54 Oh wait 2024-07-10 13:58:57 I spoke too soon 2024-07-10 15:14:35 algitbot: retry master 2024-07-11 00:28:28 algitbot: retry master 2024-07-11 00:29:52 algitbot: retry master 2024-07-11 01:04:57 algitbot: retry master 2024-07-11 01:24:56 algitbot: retry master 2024-07-11 01:33:23 algitbot: retry master 2024-07-11 01:42:12 algitbot: retry master 2024-07-11 02:10:19 algitbot: retry master 2024-07-11 04:20:49 We're at 494 MRs now under alpine/aports 2024-07-11 04:20:55 Finally it is below 500 again 2024-07-11 05:06:46 algitbot: retry master 2024-07-11 05:46:50 algitbot: retry master 2024-07-11 09:28:17 algitbot: retry master 2024-07-11 17:07:20 algitbot: retry master 2024-07-11 17:09:07 keydb probably needs update_config_*, and maybe macchina is broken due to new Rust or something 2024-07-11 17:10:08 algitbot: retry master 2024-07-12 01:06:25 algitbot: retry master 2024-07-12 04:22:33 algitbot: retry master 2024-07-12 04:23:14 algitbot: retry master 2024-07-12 08:18:57 algitbot: retry master 2024-07-12 15:41:04 algitbot: retry master 2024-07-12 15:42:18 algitbot: retry master 2024-07-12 15:44:10 algitbot: retry master 2024-07-12 16:26:16 algitbot: retry master 2024-07-12 18:19:36 algitbot: retry master 2024-07-13 02:03:59 algitbot: retry master 2024-07-13 06:53:24 algitbot: retry master 2024-07-13 07:39:29 algitbot: retry master 2024-07-13 11:45:11 algitbot: retry master 2024-07-13 11:51:04 ACTION takes a look at what's on MQTT 2024-07-13 11:51:10 13 aports left :) 2024-07-13 13:13:17 librewolf failure was lucily a fluke 2024-07-13 14:04:25 nice 2024-07-13 14:25:29 algitbot: retry master 2024-07-13 15:59:50 I have tested gleam in CI, and it passes, however maybe there is still some concern about riscv64 (it passed, but took 40 minutes in CI), so if it fails and i'm not around then someone please just disable it again 2024-07-13 16:00:08 Argh 2024-07-13 
16:00:15 Silly libc thing 2024-07-13 16:05:32 Same thing with loongarch64, but hopefully it will build successfully, considering even riscv64 does (in CI), and the original reason for disabling (ring crate) doesn't seem to apply any longer 2024-07-14 02:15:12 :) 2024-07-14 03:13:08 Wish me luck 2024-07-14 03:54:41 algitbot: retry master 2024-07-14 05:05:04 algitbot: retry master 2024-07-14 05:10:41 Hmm 2024-07-14 05:10:43 it passed in the end 2024-07-14 06:38:53 algitbot: retry master 2024-07-14 06:41:51 algitbot: retry master 2024-07-14 06:45:07 Ok, back to being disabled 2024-07-14 11:16:49 Hmm, it seems pueue only ran on the Scaleway CI runners 2024-07-14 11:17:46 algitbot: retry master 2024-07-14 11:41:04 Hmm, it seems a different test fails now (client::integration::follow::default::case_2) 2024-07-14 11:41:48 or at least, the error message is different from what i remember from before 2024-07-14 11:41:50 algitbot: retry master 2024-07-14 11:42:40 Let's hope third time's the charm 2024-07-14 12:00:57 nope 2024-07-14 13:12:45 Fourth time 2024-07-14 14:09:58 oh lol 2024-07-15 00:11:32 algitbot: retry master 2024-07-15 00:19:44 algitbot: retry master 2024-07-15 00:24:22 algitbot: retry master 2024-07-15 01:03:32 algitbot: retry master 2024-07-15 01:06:14 algitbot: retry master 2024-07-15 03:44:46 algitbot: retry master 2024-07-15 03:58:07 algitbot: retry master 2024-07-15 08:14:38 Fetch failed 2024-07-15 08:14:40 algitbot: retry master 2024-07-15 14:20:48 algitbot: retry master 2024-07-15 14:36:21 algitbot: retry master 2024-07-15 14:47:46 algitbot: retry master 2024-07-15 14:48:54 algitbot: retry master 2024-07-15 14:50:00 algitbot: retry master 2024-07-15 14:50:05 Come on, build py3-ytmusicapi 2024-07-15 14:50:21 \o/ 2024-07-15 14:51:46 Well, i guess i'll just come back to look at the "blocked by uvicorn" aports in testing/ tomorrow then 2024-07-15 14:51:58 Hopefully kdiff3 has built by then 2024-07-15 15:24:06 algitbot: retry master 2024-07-15 15:42:51 algitbot: retry master 2024-07-15 17:16:13 algitbot: retry master 2024-07-15 17:16:22 i love it when INTERNAL_ERROR 2024-07-16 00:18:45 algitbot: retry master 2024-07-16 00:45:48 algitbot: retry master 2024-07-16 01:04:49 algitbot: retry master 2024-07-16 03:03:47 Hmm 2024-07-16 03:33:11 Ugh, command line history fail 2024-07-16 03:33:25 The cloud-init commit should've been "enable on loongarch64" 2024-07-16 03:34:01 but i hit the up key and edited that message instead :/ 2024-07-16 05:07:20 algitbot: retry master 2024-07-16 05:07:50 I suppose that's the libc crate that needs to be updated? 
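If an outdated libc crate really is what is breaking the build, a common stopgap until upstream bumps it is to refresh just that one pin in the lockfile. A sketch only, assuming a cargo-based aport whose source is already unpacked and carries a Cargo.lock.

```sh
# Typically placed in prepare() after default_prepare: bump only the libc
# crate in Cargo.lock so newer platform definitions are picked up.
cargo update --package libc
```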
2024-07-16 05:15:32 cely: thanks :) 2024-07-16 05:23:35 You're welcome 2024-07-16 09:28:29 algitbot: retry master 2024-07-16 09:36:43 algitbot: retry maste 2024-07-16 09:36:58 s/maste/master/ 2024-07-16 09:37:09 algitbot: retry master 2024-07-17 02:12:57 algitbot: retry master 2024-07-17 11:33:21 Hmm, it seems the kernel upgrade was pushed before yt-dlp 2024-07-17 11:33:39 but some builders are working on yt-dlp first 2024-07-17 11:34:01 algitbot: retry master 2024-07-17 11:34:07 Oh 2024-07-17 11:34:08 lol 2024-07-17 11:34:12 Raspberry Pi kernel :D 2024-07-17 11:35:41 algitbot: retry master 2024-07-17 11:37:35 algitbot: retry master 2024-07-17 11:39:09 algitbot: retry master 2024-07-17 11:42:23 algitbot: retry master 2024-07-17 11:46:19 algitbot: retry master 2024-07-17 13:29:13 algitbot: retry master 2024-07-17 13:30:02 Yes, i think something is wrong with Plasma on 32-bit 2024-07-17 14:12:09 algitbot: retry master 2024-07-17 14:13:40 algitbot: retry master 2024-07-17 14:26:30 algitbot: retry master 2024-07-17 14:34:10 algitbot: retry master 2024-07-17 15:13:27 algitbot: retry master 2024-07-17 16:36:51 algitbot: retry master 2024-07-17 16:42:40 algitbot: retry master 2024-07-17 16:45:28 algitbot: retry master 2024-07-17 16:54:44 algitbot: retry master 2024-07-17 21:16:43 anyone investigating the libplasma crashes? 2024-07-17 21:17:35 at this point i'm tempted to just grep for "TODO"s in plasma code 2024-07-17 21:17:43 ptrc: nope 2024-07-17 21:18:14 I also saw another MR failing on the same arches due to illegal instructions 2024-07-17 21:19:15 the downside to doing it locally is that i'd have to also build all the other stuff 2024-07-17 21:19:40 ( as in, plasma-activities, kwayland, kio, etc. ) 2024-07-17 21:20:58 other way is temporary disable with dependent aports to allow upload 2024-07-17 21:22:15 it's an annoying amount of stuff to disable tbh. I'll just disable that one test on those arches for now 2024-07-17 21:22:31 oh wait, drkonqi fails the same way? 2024-07-17 21:22:33 nvm then... 2024-07-17 21:25:20 it's gotta be something in kio again 2024-07-17 21:25:26 maybe something with qttest? 2024-07-17 21:44:53 (gdb) bt 2024-07-17 21:44:53 #0 0xf73472be in ??? () at /usr/lib/libKF6CoreAddons.so.6 2024-07-17 21:45:27 no debug symbols for those.. 2024-07-17 22:08:54 sigh 2024-07-17 22:08:56 i rebuild kcoreaddons 2024-07-17 22:09:00 and then it magically works locally 2024-07-17 22:09:03 let's try that on the builders 2024-07-17 22:12:32 algitbot: retry master 2024-07-17 22:12:36 > error: Xvfb failed to start 2024-07-17 22:14:36 algitbot: retry master 2024-07-17 22:16:21 okay, x86 building community/libplasma-6.1.3-r0 2024-07-17 22:16:29 fingers crossed 2024-07-17 22:17:01 and succeeded! 2024-07-17 22:17:02 lol 2024-07-17 22:17:12 guess it was some weird miscompile? 
2024-07-17 22:29:25 https://ptrc.gay/ABkswuMm 2024-07-17 22:29:26 oh well 2024-07-17 22:29:28 that looks like a double-free 2024-07-17 22:30:01 and i bet it goes away when i bump libkservice 2024-07-17 22:31:32 Segmentation fault (core dumped) 2024-07-17 22:31:32 ~/aports/community/kservice/src/kservice-6.4.0 $ meinproc6 --stylesheet /usr/share/kf6/kdoctool 2024-07-17 22:31:32 ook 2024-07-17 22:31:32 s/customization/kde-include-man.xsl --check ./po/fr/docs/kbuildsycoca6/man-kbuildsycoca6.8.docb 2024-07-17 22:31:36 er 2024-07-17 22:31:38 bad formatting but 2024-07-17 22:31:39 this is bad 2024-07-17 22:32:14 https://ptrc.gay/UZdYzOBg 2024-07-17 22:32:17 wtf is with qt on x86 2024-07-17 22:33:46 PureTryOut: ^ 2024-07-17 22:34:23 kdoctools segfaults on doing pretty much anything with the docs 2024-07-17 22:34:53 you can reproduce it with `docker run --rm -it i386/alpine:edge` and just `meinproc6 xml` 2024-07-17 22:53:39 algitbot: retry master 2024-07-17 22:53:41 ah 2024-07-17 22:53:42 no 2024-07-17 22:53:53 that was fixed by the rebuild in e311b55eb8fb4dd7e1219b1523062e87329f3001 2024-07-17 22:54:00 it just didn't make it to the builders yet 2024-07-17 23:00:02 https://ptrc.gay/FxNlyqzp 2024-07-17 23:00:04 okay yeah 2024-07-17 23:00:10 this is segfaulting in Qt6 itself 2024-07-17 23:00:16 i'll leave this to someone else 2024-07-17 23:20:31 algitbot: retry master 2024-07-17 23:51:06 algitbot: retry master 2024-07-18 00:58:44 algitbot: retry master 2024-07-18 01:03:43 algitbot: retry master 2024-07-18 01:14:14 algitbot: retry master 2024-07-18 02:00:26 Hmm 2024-07-18 02:00:56 algitbot: retry master 2024-07-18 02:08:01 algitbot: retry master 2024-07-18 08:20:54 algitbot: retry master 2024-07-18 08:44:30 ncopa: ^ 2024-07-18 08:52:04 fixed i think. thanks! 2024-07-18 17:32:10 algitbot: retry master 2024-07-18 21:20:05 algitbot: retry master 2024-07-19 01:32:13 algitbot: retry master 2024-07-19 01:36:28 algitbot: retry master 2024-07-19 06:42:46 algitbot: retry master 2024-07-19 10:18:48 algitbot: retry master 2024-07-19 12:34:08 openjdk8 failed again 2024-07-20 04:08:18 algitbot: retry master 2024-07-20 05:01:10 algitbot: retry master 2024-07-20 05:30:50 algitbot: retry master 2024-07-20 05:43:28 algitbot: retry master 2024-07-20 05:56:32 algitbot: retry master 2024-07-20 08:22:05 algitbot: retry master 2024-07-20 08:41:17 algitbot: retry master 2024-07-20 16:53:05 algitbot: retry master 2024-07-20 16:55:38 algitbot: retry master 2024-07-20 16:58:13 algitbot: retry master 2024-07-20 17:02:21 algitbot: retry master 2024-07-20 19:03:11 https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/69454#note_422628 2024-07-21 02:58:23 Could it be that build-3-20-armv7 has an old Go installed? 
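A quick way to confirm the suspicion raised just above about build-3-20-armv7 (commands only; go and go-bootstrap are the actual Alpine package names):

```sh
# On the builder: is the real go package installed, is go-bootstrap still
# hanging around, and what does the toolchain on PATH actually report?
apk info -e go            # prints "go" only if installed
apk info -e go-bootstrap  # prints "go-bootstrap" if the bootstrap compiler is still installed
go version
```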
2024-07-21 02:58:39 algitbot: retry 3.20-stable 2024-07-21 02:59:09 I don't see Go being installed in the build log 2024-07-21 03:00:40 That's not good news 2024-07-21 03:01:27 It would mean that every Go rebuild on 3.20-stable armv7 was actually rebuilding against the old Go (1.22.2, according to the community/reader build log) 2024-07-21 08:58:42 Now jellyfin is also failing for some reason 2024-07-21 09:00:18 algitbot: retry master 2024-07-21 09:01:32 algitbot: retry master 2024-07-21 09:04:46 algitbot: retry master 2024-07-21 09:06:12 algitbot: retry master 2024-07-21 09:08:20 algitbot: retry master 2024-07-21 09:10:02 algitbot: retry master 2024-07-21 09:12:18 algitbot: retry master 2024-07-21 09:14:57 algitbot: retry master 2024-07-21 10:07:11 ikke: i think build-3-20-armv7 has old Go installed (see history of this channel from about 7 hours ago) 2024-07-21 10:07:24 algitbot: retry 3.20-stable 2024-07-21 10:07:35 ^ 2024-07-21 10:12:22 apparently it still had go-bootstrap 2024-07-21 10:12:26 algitbot: retry master 2024-07-21 10:14:08 That means all the Go rebuilds that were done on build-3-20-armv7 2024-07-21 10:14:17 all just went down the drain :( 2024-07-21 10:14:20 algitbot: retry 3.20-stable 2024-07-21 10:14:50 :( 2024-07-21 10:45:06 we could remove those related packages from that builder so that only that builder would need to rebuild it and invalidate the dl-cdn cache for those 2024-07-21 10:46:06 Does $BOOTSTRAP work for this? 2024-07-21 10:46:39 I'm thinking something like if $BOOTSTRAP is not set then add !go-bootstrap to makedepends 2024-07-21 10:46:40 Not sure how that could help 2024-07-21 10:47:35 Hmm, ok probably not, go-bootstrap is used for upgrades too 2024-07-21 10:47:49 It's a tragedy that just when we had an APKBUILD for go that does not require bootstrap, go changes it's model that invalidates that 2024-07-21 10:48:08 Not just when bootstrapping from scratch, it's just that when bootstrapping a new release, you have to install go-bootstrap by hand 2024-07-21 10:49:08 cely: yes, indeed 2024-07-21 10:51:52 Hmm, i wonder if we could have something tied to tagging a release then, something that checks for all *-bootstrap aports 2024-07-21 10:52:11 I think when a release is ready to be tagged, none of them should still be installed 2024-07-22 02:35:37 algitbot: retry master 2024-07-22 02:40:28 algitbot: retry master 2024-07-22 02:42:02 algitbot: retry master 2024-07-22 02:44:07 algitbot: retry master 2024-07-22 02:48:11 algitbot: retry master 2024-07-22 02:50:16 algitbot: retry master 2024-07-22 02:52:55 algitbot: retry master 2024-07-22 02:55:01 algitbot: retry master 2024-07-22 03:09:42 algitbot: retry master 2024-07-22 05:36:03 Unsquashed *sighs* 2024-07-22 05:41:09 cely: yup, sorry 2024-07-22 05:42:43 No problem, and no need for sorry :) 2024-07-22 05:43:21 It's just that i did a `git log` after a `git pull` and wondered i was looking at the right repo 2024-07-22 05:43:22 lol 2024-07-22 05:43:29 if i was* 2024-07-22 06:53:55 algitbot: retry master 2024-07-22 08:54:14 algitbot: retry master 2024-07-22 13:18:10 algitbot: retry master 2024-07-22 13:19:03 algitbot: retry master 2024-07-22 13:26:25 algitbot: retry master 2024-07-22 13:27:49 algitbot: retry master 2024-07-22 13:36:22 algitbot: retry master 2024-07-22 13:57:51 algitbot: retry master 2024-07-22 17:43:04 algitbot: retry master 2024-07-22 17:48:05 algitbot: retry master 2024-07-22 17:54:18 Simon Zeni mentioned there is an update present for jellyfin that could fix this issue 2024-07-22 18:06:12 algitbot: 
retry master 2024-07-22 18:19:22 algitbot: retry master 2024-07-22 18:23:23 regal: "failed to send diagnostic", "timed out waiting for file diagnostics to be sent" 2024-07-22 18:23:33 i wonder if it's a network issue again 2024-07-22 18:24:27 Also, did the new jellyfin just error out again on aarch64? 2024-07-22 18:24:37 algitbot: retry master 2024-07-22 18:26:07 algitbot: retry master 2024-07-22 18:27:23 I hope not, it passed in CI 2024-07-22 18:27:25 I wonder if some dotnet cache needs to be cleared on the builders 2024-07-22 18:27:34 for jellyfin 2024-07-22 18:27:40 if it fails again, I will 2024-07-22 18:31:40 armhf in the meantime thinks, what is all this fuss about 2024-07-22 18:32:54 Hehe 2024-07-22 18:33:22 That's the old jellyfin 2024-07-22 18:33:25 algitbot: retry master 2024-07-22 19:28:22 nooo 2024-07-22 19:28:24 that's the new version 2024-07-22 19:30:14 algitbot: retry master 2024-07-22 19:32:29 PureTryOut: any chance you could have a look at kwidgetaddons on 32-bits? 2024-07-22 19:41:32 sadly I don't know what's wrong. xvfb-run fails to start 2024-07-22 19:42:00 which seems unrelated to kwidgetsaddons itself, besides the fact that it's needed for tests to run 2024-07-22 19:43:29 maybe https://gitlab.freedesktop.org/ofourdan/xwayland-run can be useful, unrelated from this particular problem 2024-07-22 19:43:41 oh, I see what's going on 2024-07-22 19:44:54 algitbot: retry master 2024-07-22 19:52:22 Received signal 11 (SIGSEGV), code 128, for address 0x00000000 2024-07-22 19:53:14 algitbot: retry master 2024-07-23 04:58:31 algitbot: retry master 2024-07-23 05:13:00 algitbot: retry master 2024-07-23 05:14:55 algitbot: retry master 2024-07-23 07:28:35 Clojure finally uploaded :) 2024-07-23 07:50:01 algitbot: retry master 2024-07-23 07:52:41 algitbot: retry master 2024-07-23 07:59:22 algitbot: retry master 2024-07-23 08:14:59 algitbot: retry master 2024-07-23 08:22:07 algitbot: retry master 2024-07-23 08:38:33 fetch failed 2024-07-23 08:38:53 I have noticed such things with the tar.xz 2024-07-23 08:39:09 Maybe tar.gz will work, but i don't know 2024-07-23 08:39:33 So, let's upload both kstars and nodejs-current source archives to distfiles.a.o? 2024-07-23 08:40:42 I wonder if aarch64 will make it 2024-07-23 08:42:35 algitbot: retry master 2024-07-23 08:42:56 idk, this shit happens with firefox sources quite often as well 2024-07-23 08:43:05 and it always succeeds *at some point* 2024-07-23 08:45:05 i guess not this time.. 
2024-07-23 08:48:29 algitbot: retry master 2024-07-23 09:09:51 algitbot: retry master 2024-07-23 09:11:25 algitbot: retry master 2024-07-23 09:12:56 algitbot: retry master 2024-07-23 09:20:38 algitbot: retry master 2024-07-23 09:45:55 algitbot: retry master 2024-07-23 10:02:22 algitbot: retry master 2024-07-23 11:37:07 algitbot: retry master 2024-07-24 03:48:31 algitbot: retry master 2024-07-24 03:50:11 algitbot: retry master 2024-07-24 03:50:44 algitbot: retry master 2024-07-24 03:53:19 algitbot: retry master 2024-07-24 03:54:24 algitbot: retry master 2024-07-24 03:55:39 algitbot: retry master 2024-07-24 03:56:30 algitbot: retry master 2024-07-24 04:01:10 algitbot: retry master 2024-07-24 04:05:39 algitbot: retry master 2024-07-24 04:07:29 algitbot: retry master 2024-07-24 04:11:26 algitbot: retry master 2024-07-24 04:28:54 algitbot: retry master 2024-07-24 04:33:58 algitbot: retry master 2024-07-24 04:57:57 algitbot: retry master 2024-07-24 05:17:59 #16309 2024-07-24 09:54:15 :/ 2024-07-24 09:54:21 Apparently, it didn't work 2024-07-24 09:55:16 algitbot: retry master 2024-07-24 09:55:27 but i think the error message for kglobalacceld is different now 2024-07-24 09:55:46 Maybe some files in $HOME need to be cleared 2024-07-24 09:56:18 algitbot: retry master 2024-07-24 09:58:59 Anyway, kglobalacceld builds and tests successfully for me locally in an x86 container 2024-07-24 09:59:33 I have a hunch that setting $HOME to some other directory while running tests may allow them to pass 2024-07-24 09:59:43 s/some other/an empty/ 2024-07-24 10:01:53 but it seems the segfault is still there, so probably something else also needs a rebuild 2024-07-24 10:05:07 Trying things out in CI 2024-07-24 10:07:52 I might have done something else in my container 2024-07-24 10:22:20 Ugh, maybe all the KDE frameworks packages should be rebuilt to let lua-aports work out the correct rebuild order 2024-07-24 10:22:43 but i'll give it another try 2024-07-24 10:28:01 I accidentally rebuilt kglobalaccel too in my container, mistaking it for kglobalacceld 2024-07-24 10:28:16 So that may be the difference that allows kglobalacceld tests to pass for me 2024-07-24 10:29:55 algitbot: retry master 2024-07-24 10:30:32 Did we find out what happened to lua-aports, which had the result of allowing the apache-arrow rebuild to be green in CI, but with no actual rebuilds? 2024-07-24 10:31:38 If the build order changed, maybe that could be causing issues here 2024-07-24 10:31:47 Anywa, kglobalacceld finally passed on armv7 2024-07-24 10:32:02 Anyway* 2024-07-24 10:32:48 but probably there will be other segfaults (like the last message from algitbot: kdeplasma-addons) 2024-07-24 10:37:21 cely: no, and I didn't have the time to find out either 2024-07-24 10:37:30 ncopa did update lua-aports recently 2024-07-24 10:42:25 It's frustrating that the errors seem to appear on specific archs 2024-07-24 10:43:01 hi, what broke? 2024-07-24 10:43:09 Hi ncopa 2024-07-24 10:43:17 i updated lua-aports to lua 5.4 2024-07-24 10:43:28 I'm trying to unblock the Qt/KDE stuff on 32-bit 2024-07-24 10:43:42 is the build order wrong? 2024-07-24 10:44:03 What broke was probably this (unrelated to Qt/KDE): !69454 2024-07-24 10:44:20 It seems the CI succeeded without doing anything 2024-07-24 10:44:34 despite pkgrels being bumped 2024-07-24 10:45:37 >>> No packages found to be built. 
2024-07-24 10:45:47 Not sure about build order, that was just a hypothesis, but it would be difficult to detect it was true 2024-07-24 10:49:37 Btw, is the riscv64 builder down or something? 2024-07-24 10:50:18 I seem to have not seen anything building on it today 2024-07-24 12:10:35 cely: yes i rebooted both but one is failing 2024-07-24 12:10:43 see infra channel 2024-07-24 12:14:31 Ok 2024-07-24 12:46:34 https://github.com/qt/qtwebengine/commit/2c80ff478bcc :/ 2024-07-24 13:06:05 algitbot: retry master 2024-07-24 13:06:09 meh 2024-07-24 13:06:43 It went out of memory? 2024-07-24 13:09:22 yeah 2024-07-24 13:09:27 usually what bad_alloc means 2024-07-24 13:11:30 Ok 2024-07-24 14:22:33 Ugh 2024-07-24 14:22:54 Oh ok 2024-07-24 14:23:03 Normal xvfb-run without -a error 2024-07-24 14:24:11 I think with that, i have very likely fixed all of the segfaults 2024-07-24 14:24:21 What's left are dependency resolution issues 2024-07-24 14:24:38 in plasma-desktop (and by extension, plasma-desktop-meta) 2024-07-24 14:25:20 algitbot: retry master 2024-07-24 14:27:51 algitbot: retry master 2024-07-24 14:28:09 algitbot: retry master 2024-07-24 14:28:36 Hmm, there seems to be something else besides plasma-desktop/plasma-desktop-meta on x86.. 2024-07-24 14:28:38 algitbot: retry master 2024-07-24 14:28:52 algitbot: retry master 2024-07-24 14:29:11 algitbot: retry master 2024-07-24 14:29:58 algitbot: retry master 2024-07-24 14:30:10 algitbot: retry master 2024-07-24 14:30:26 Ok, it's nextcloud 2024-07-24 14:33:48 I'll let someone else look into the dependency problem, i think plasma-desktop on 32-bit will have to go the way of riscv64, and be built without kaccounts-integration 2024-07-24 15:39:45 algitbot: retry master 2024-07-24 15:40:02 algitbot: retry master 2024-07-24 15:40:16 algitbot: retry master 2024-07-24 15:40:28 algitbot: retry master 2024-07-24 17:10:39 algitbot: retry master 2024-07-24 17:15:08 algitbot: retry 3.20-stable 2024-07-24 20:44:08 algitbot: retry master 2024-07-24 21:44:50 andypost[m]: thanks 2024-07-24 21:45:13 np) 2024-07-25 01:37:08 algitbot: retry master 2024-07-25 01:37:21 algitbot: retry master 2024-07-25 01:37:34 algitbot: retry master 2024-07-25 01:39:50 algitbot: retry master 2024-07-25 01:41:23 algitbot: retry master 2024-07-25 05:02:39 algitbot: retry master 2024-07-25 05:02:55 algitbot: retry master 2024-07-25 05:03:07 algitbot: retry master 2024-07-25 05:05:22 I never cease to be amazed by OCaml's speed, maybe it would be even faster with a shared OPAMROOT 2024-07-25 05:06:33 algitbot: retry master 2024-07-25 05:06:46 algitbot: retry master 2024-07-25 05:10:38 cely: it seems that qt6-qtwebengine no longer provides the lib that it complains about 2024-07-25 05:11:42 https://tpaste.us/kXP7 2024-07-25 05:12:16 So I suppose signon-ui needs to be rebuilt 2024-07-25 05:12:36 No 2024-07-25 05:13:10 The Github link i posted yesterday reveals that QtWebEngine Quick is not built on 32-bit any longer 2024-07-25 05:13:13 Upstream's decision 2024-07-25 05:13:22 ah 2024-07-25 05:13:35 that's a bummer 2024-07-25 05:13:42 and their reason is Node.js doesn't build the Chromium stuff properly on 32-bit 2024-07-25 05:13:54 something like that 2024-07-25 05:14:10 Could you create an issue for that? 2024-07-25 05:14:18 if there isn't already 2024-07-25 05:14:26 Ok 2024-07-25 05:15:15 And I suppose #16309 can be closed? 
2024-07-25 05:18:19 Very likely, yes 2024-07-25 05:18:33 The segfaults have been solved, and the 2 packages left are probably: plasma-desktop and plasma-desktop-meta 2024-07-25 05:18:55 correct 2024-07-25 05:24:33 ok, so it was xvfb failing that caused segfaults / sigill? 2024-07-25 05:25:42 It's anoying that xvfb keeps running 2024-07-25 05:30:35 #16313 2024-07-25 05:31:19 cely: thank you 2024-07-25 05:45:46 ikke: You're welcome. No it wasn't xvfb, i rebuilt 3 more packages: kservice, kglobalaccel, and sonnet 2024-07-25 05:46:19 and that seems to have cleared up the segfaults 2024-07-25 05:47:20 Earlier on, kdoctools and kcoreaddons were rebuilt (maybe there were more even earlier on), and that solved some of the segfaults 2024-07-25 05:51:52 yeah, I saw later 2024-07-25 06:16:10 algitbot: retry master 2024-07-25 06:16:26 algitbot: retry master 2024-07-25 06:16:38 algitbot: retry master 2024-07-25 07:59:50 algitbot: retry master 2024-07-25 08:09:28 algitbot: retry master 2024-07-25 08:10:30 algitbot: retry master 2024-07-25 08:12:46 algitbot: retry master 2024-07-25 13:43:27 OOM on a 256G system is an achievement on its own 2024-07-25 13:54:53 ikke: thats really impressive if its not a mem leak 2024-07-25 13:55:53 On the host, I did not see it go below 240G memory available, so if it used up all the memory, it must have been really quickly 2024-07-25 13:58:26 hnm 2024-07-25 13:59:09 at least, within 5m 2024-07-25 14:16:24 algitbot: retry 3.18-stable 2024-07-25 16:49:40 algitbot: retry 3.18-stable 2024-07-26 02:17:19 algitbot: retry master 2024-07-26 02:17:33 cely: thanks 2024-07-26 02:17:45 You're welcome 2024-07-26 03:03:16 algitbot: retry master 2024-07-26 04:51:51 algitbot: retry master 2024-07-26 05:13:26 cely: do you think it's a matter for reinstating that patch for qt6-qtwebengine to fix this? 2024-07-26 05:16:20 Probably 2024-07-26 05:17:36 However, there's the question of why it was removed in the first place, and if we should be supporting something upstream doesn't want to support 2024-07-26 05:37:08 cely: it means we loose some software on 32-bits arches, some of which I assume is used on phones 2024-07-26 05:37:18 lose* 2024-07-26 05:43:58 Hmm, not sure i want to deal with Chromium though 2024-07-26 05:44:15 Anyway, i still see quite a number of comments that say armv7 is blocked due to qt6-qtwebengine 2024-07-26 05:48:26 Apparently, 127fc6550f8225 additionally enabled armhf :( 2024-07-26 05:49:55 So, while the signon-ui error is only seen on armv7 and x86, actually armhf also has a broken qt6-qtwebengine 2024-07-26 05:50:05 I didn't realize this before 2024-07-26 05:51:05 Anyway, i think i shouldn't touch qt6-qtwebengine without knowing the full story behind what's happening, sorry 2024-07-26 07:04:29 algitbot: retry 3.18-stable 2024-07-26 15:58:17 qt6-qtwebengine seems to be building for 2.5 hours on 32-bit archs now 2024-07-26 15:59:05 I wonder if it's still building 2024-07-26 16:07:02 checking 2024-07-26 16:14:28 Thanks 2024-07-26 16:19:49 no log output on x86 atm 2024-07-26 16:30:13 algitbot: retry master 2024-07-26 16:30:40 :/ 2024-07-26 16:30:43 What happened? 
2024-07-26 16:30:49 I happened 2024-07-26 16:31:30 Ah ok, that's what you meant by "no log output" 2024-07-26 16:31:45 That nothing new is being logged 2024-07-26 16:31:48 ahuh 2024-07-26 18:23:48 algitbot: retry master 2024-07-26 18:35:52 Something is wrong, i think i'll disable erlang-ls on these archs that failed tests for now, and look into what's wrong later on 2024-07-27 17:02:53 Hmm 2024-07-27 17:03:34 I wonder why this new version of ruby-tcxread takes so long 2024-07-28 10:21:54 algitbot: retry master 2024-07-28 21:08:52 algitbot: retry master 2024-07-28 21:25:48 ooh, the librewolf failure doesn't feel good 2024-07-28 23:00:43 algitbot: retry master 2024-07-29 00:54:36 I hope it's not Rust 1.80.0 that's causing the Firefox issue 2024-07-29 00:55:05 but i think we've already built Firefox 128 at least 2 times with that without issues 2024-07-29 03:27:53 "terminate called after throwing an instance of 'std::bad_alloc'" hmm 2024-07-29 03:28:07 Clang running out of memory again? 2024-07-29 03:28:14 Clang/LLVM* 2024-07-29 03:28:18 algitbot: retry master 2024-07-29 03:57:55 Ok, Firefox developer edition finally built on x86_64 2024-07-29 04:13:04 and aarch64 too 2024-07-29 13:33:39 algitbot: retry master 2024-07-29 14:50:12 Chromium has been building for 4 hours now, hopefully it will finish soon.. 2024-07-29 15:18:45 It's building 2 at the same time 🙃 2024-07-29 15:34:32 there goes aarch64 2024-07-29 16:04:32 Apparently, setting `arch="noarch"`, then using `::all` (or even `::$CARCH`) in subpackages= results in automatic dep tracing being turned off 2024-07-29 17:15:02 It's only since recent that it's even possible to use ::arch with noarch 2024-07-29 17:15:47 What happened before that? 2024-07-29 17:16:17 I think the only thing that uses `:$CARCH` is main/lua-openrc 2024-07-29 17:21:42 iirc, aports-build would keep trying to build the package 2024-07-29 17:21:52 Ok 2024-07-29 17:23:48 https://gitlab.alpinelinux.org/alpine/abuild/-/merge_requests/241 2024-07-29 19:52:45 algitbot: retry master 2024-07-29 19:53:44 hmm, mesa-rusticl broken? 2024-07-30 01:45:00 Hmm 2024-07-30 02:31:29 So, this spirv-libclc-mesa thing is solved on 64-bit now, 32-bit is still waiting on qt6-qtwebengine 2024-07-30 05:27:37 It seems the pioneer2 CI is back online 2024-07-30 05:31:10 already since yesterday morning 2024-07-30 05:35:22 Ok 2024-07-30 05:41:57 algitbot: retry master 2024-07-30 05:42:58 all 32-bits seems to be hanging on qt6-qtwebengine :/ 2024-07-30 05:43:36 algitbot: retry master 2024-07-30 05:52:56 It probably has a few more things to build now besides qt6-qtwebengine 2024-07-30 05:54:18 at least spirv* is in main, so that will get fixed 2024-07-30 06:00:58 Well, actually the spirv* upgrade didn't get built, so mesa wasn't broken 2024-07-30 06:01:44 but it's all upgraded and rebuilt now 2024-07-30 06:12:18 x86 is on qt6-qtwebengine again now 2024-07-30 06:33:22 Should be the usual test flakiness 2024-07-30 06:33:24 algitbot: retry master 2024-07-30 07:03:35 All 3 archs are building qt6-qtwebengine again 2024-07-30 07:05:30 The last time it built for Qt 6.6.3 it took about 2 hours (on x86) 2024-07-30 07:06:31 I think x86 has been building it for an hour now 2024-07-30 07:06:39 So, one more hour left, if it works 2024-07-30 10:47:56 They're still not finished :/ 2024-07-30 11:55:54 !69918 2024-07-30 11:56:37 uhm... why is build-edge-riscv64 idle? 
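Illustrating the arch="noarch" plus ::arch subpackage pattern mentioned above on 2024-07-29; the package and subpackage names are made up, only the shape matters.

```sh
# A noarch package that pins one subpackage to a specific arch set via the
# third ":"-separated field of a subpackages entry (name:splitfunc:arch).
# Per the observation above, this combination appears to switch off
# automatic dependency tracing, so depends= may need to be filled in by hand.
arch="noarch"
subpackages="$pkgname-openrc:openrc:all"
```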
2024-07-30 11:57:52 I'm sorry, I've been at this way longer than I can allow myself to, I'll be back later 2024-07-30 13:27:35 omni: The riscv64 builder has been down for quite a while now 2024-07-30 13:29:32 The kernel of the builder host is unstable. clandmeter is trying to get it fixed 2024-07-30 13:31:03 Ok 2024-07-30 14:46:23 cely: which builder is down atm? 2024-07-30 14:47:09 Isn't it riscv64? 2024-07-30 14:47:22 and all three 32-bit builders are stuck on qt6-qtwebengine 2024-07-30 14:47:37 both pioneer boxes are online 2024-07-30 14:47:48 Yes, but the riscv64 builder isn't building 2024-07-30 14:47:52 at least, according to build.a.o 2024-07-30 14:47:59 which one? 2024-07-30 14:48:06 edge? 2024-07-30 14:48:11 Yes 2024-07-30 14:48:26 its bulding 2024-07-30 14:48:33 Oh ok 2024-07-30 14:48:44 atleast all cores were at 100% 2024-07-30 14:48:50 Then perhaps build.a.o isn't being updated 2024-07-30 14:48:55 kisfft 2024-07-30 14:49:10 kissfft 2024-07-30 14:49:18 ah thats ci 2024-07-30 14:49:19 or could the 100% be from CI? 2024-07-30 14:50:41 ah the service did not start 2024-07-30 14:50:45 or failed 2024-07-30 14:51:09 rebooting container 2024-07-30 14:51:32 Thanks 2024-07-30 14:51:40 you are welcome 2024-07-30 14:51:42 omni: py3-qt6 also failed to build on 3.20 riscv64: https://gitlab.alpinelinux.org/Celeste/aports/-/jobs/1468361 2024-07-30 14:51:48 its a bit confusing both boxes are doing CI 2024-07-30 14:53:09 So, something is definitely up with riscv64, hopefully we can find out what it is before the 3.21 builders come online 2024-07-30 14:53:18 or else there will be lots of aports failing on riscv64 2024-07-30 14:53:27 cely where in the build does it fail? 2024-07-30 14:53:51 it looks like abuild build works 2024-07-30 14:53:56 "sip-build: '/usr/lib/qt6/bin/qmake -recursive PyQt6.pro' failed returning -11" 2024-07-30 14:54:04 is that in build or in check? 2024-07-30 14:54:09 In build 2024-07-30 14:54:20 This was a pkgrel bump on 3.20 2024-07-30 14:54:23 So, not an upgrade 2024-07-30 14:54:35 and the CI ran on "pioneer1" 2024-07-30 14:55:21 strange as i cloned your repo and build it and it works 2024-07-30 14:55:29 at least in abuild build 2024-07-30 14:55:55 Some other failures i've noticed: !69404 (disabled), !68002 (stuck in draft) 2024-07-30 14:56:22 and there's also pixman tests from !66689, which have been disabled on riscv64 now i think 2024-07-30 14:58:23 and digging through history, !67406 (also disabled, together with dependencies) 2024-07-30 15:00:03 Also, my branch for py3-qt6 is named "3.20-upgrade-qbs" as its reused 2024-07-30 15:00:10 >>> py3-qt6*: Create py3-qt6-6.7.1-r0.apk 2024-07-30 15:00:21 qbs also fails to build (i tried an upgrade, didn't try rebuild) 2024-07-30 15:00:31 So, something is wrong with the CI 2024-07-30 15:00:38 the riscv64 CI 2024-07-30 15:00:46 its the same box 2024-07-30 15:00:50 just in docker 2024-07-30 15:00:54 that reminds me 2024-07-30 15:01:04 i was trying croc on rv64 and it crashes 2024-07-30 15:03:27 the only difference for my build is that im using ccache 2024-07-30 15:04:12 Since you're around and trying things, perhaps you could try one of my aports, it's in the branch "try-ocaml5-riscv64" 2024-07-30 15:04:44 It's one of the aports that we have never managed to get it to pass on the riscv64 CI 2024-07-30 15:04:50 The native compiler of OCaml, that is 2024-07-30 15:05:05 do you have a container on one of the pioneers? 
2024-07-30 15:05:07 So, what's enabled in repo now is the bytecode compiler 2024-07-30 15:05:11 No 2024-07-30 15:05:20 can i provide you with one? 2024-07-30 15:05:42 I have observed the process before, it seems to involve IPv6? 2024-07-30 15:05:52 no 2024-07-30 15:06:12 it means you need wg, thats about it 2024-07-30 15:06:18 and an ssh key :) 2024-07-30 15:06:51 Ok, wireguard, what do i need to get that set up? 2024-07-30 15:07:26 i am not sure for your env, i can provide you with a basic config 2024-07-30 15:07:39 it will provide you with an 172x address 2024-07-30 15:07:50 and i give you the container address 2024-07-30 15:08:14 Alright 2024-07-30 15:10:21 and you need to give me your public wg key 2024-07-30 15:19:53 -17 MRs, that's the spirit 2024-07-30 15:20:21 Haha 2024-07-30 15:22:53 One more down 2024-07-30 15:23:36 momentum's strong today ^^ 2024-07-30 15:24:24 good to see the count back off the 500 threshold 2024-07-30 15:25:02 yup 2024-07-30 16:49:57 Wow, the OCaml native compiler thing is bad, even compiling a simple hello world example on riscv64 with the native compiler segfaults :/ 2024-07-30 18:27:03 cely: thanks 2024-07-31 02:25:04 fetch failed 2024-07-31 02:25:04 algitbot: retry master 2024-07-31 03:01:14 another fetch failed 2024-07-31 03:01:15 algitbot: retry master 2024-07-31 04:40:27 algitbot: retry master 2024-07-31 04:40:31 Wow 2024-07-31 04:40:46 Do you think it will succeed this time? 2024-07-31 04:41:22 I don't have much hope 2024-07-31 04:41:39 :( 2024-07-31 04:42:07 I guess we should try to figure out why it's hanging 2024-07-31 04:43:44 ncopa mentioned a patch from SUSE in #16321 2024-07-31 04:45:47 I'm now building mesa 24.2.0_rc2 in the riscv64 lxc container to see if it'll fix gtksourceview5 tests 2024-07-31 04:46:35 Seems a relevant change is included in 24.2.0: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26018 2024-07-31 04:46:38 That patch seems to address an OOM, not hanging 2024-07-31 04:46:50 Ok 2024-07-31 04:47:31 Does it always hang around the same place? 2024-07-31 04:48:48 seems so: [30024/30294] STAMP obj/tools/v8_context_snapshot/v8_context_snapshot.stamp 2024-07-31 04:50:01 Not sure if that line is relevant, but it appears alway to be the last line 2024-07-31 04:54:06 Could be relevant: https://forums.gentoo.org/viewtopic-t-1164017.html 2024-07-31 04:56:58 So, based on what was said there, it seems we need nodejs18 2024-07-31 05:02:11 And guess what we no longer have 2024-07-31 05:02:25 I'm a bit astounded it depends on nodejs in the first place 2024-07-31 05:07:33 what if we would try nodejs-current? 2024-07-31 05:11:02 You could try it 2024-07-31 05:11:38 but i think qtwebengine uses an older branch of chromium 2024-07-31 05:12:29 meaning not compattible with nodejs22? 
2024-07-31 05:13:13 Likely, but i'm not sure 2024-07-31 05:47:35 algitbot: retry master 2024-07-31 08:33:19 i think i finally have a fix for qt6-qtwebengine 2024-07-31 10:21:27 ncopa: oh, interesting 2024-07-31 10:44:58 seems like it built on armv7 2024-07-31 10:51:31 ncopa: oh, so it was the gentoo fix 2024-07-31 10:51:42 that celeste linked to earlier 2024-07-31 13:28:29 No, it was the SUSE fix 2024-07-31 13:28:42 Gentoo "fix" was probably "use nodejs18" ;) 2024-07-31 13:46:51 I think someone may need to run gdb on plasma-desktop to see what needs a rebuild to fix its tests 2024-07-31 13:47:27 The builder has built too much without uploading, so doing that locally will probably not be accurate anymore 2024-07-31 13:49:39 armhf has uploaded community/, but plasma-desktop is not enabled there 2024-07-31 15:16:08 Yes, i think specifically, if anyone wants to try to build plasma-desktop locally, they would have to built qt6-qtwebengine too, as plasma-desktop was failing on that before 2024-07-31 15:16:16 s/built/build/ 2024-07-31 15:19:31 How would one use gdb to find out what needs to be rebuilt? 2024-07-31 15:19:46 So, most likely the solution would be to run gdb on the builder to find out which package specifically to rebuild, or just do a full rebuild of KDE against Qt 6.7.2 and hope that fixes things 2024-07-31 15:19:47 "This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem." 2024-07-31 15:20:19 Run the test binary under gdb and look at the stack trace 2024-07-31 15:22:56 but it is very perplexing that only 32-bit encounters this issue 2024-07-31 15:23:52 Looking at the git log, kde-frameworks 6.4.0 was committed on 13th July, qt 6.7.2 on the 16th, and plasma 6.1.3 on the 17th 2024-07-31 15:26:50 https://tpaste.us/yYMe 2024-07-31 15:28:02 So, what has been rebuilt so far? sonnet, kglobalaccel, kservice, kcoreaddons, kdoctools...all from the kde-frameworks 6.4.0 commit 2024-07-31 15:31:05 You'll probably have to run gdb under dbus-run-session and xvfb-run like what's done in check() 2024-07-31 15:32:23 Wait, looking at the build log, it is foldermodeltest that segfaults 2024-07-31 15:32:59 screenmappertest still shows "Passed" 2024-07-31 15:35:20 right, with the same invocation as in check(), I get 2 segfaults now 2024-07-31 15:36:49 Another test also segfaults now? 2024-07-31 15:37:06 Besides foldermodeltest 2024-07-31 15:37:11 4 - positionertest (SEGFAULT) 2024-07-31 15:38:19 That's excluded in check() 2024-07-31 15:43:28 I have to wait for the builder host to be reachable again 2024-07-31 15:46:32 What happened? 2024-07-31 15:50:58 no idea 2024-07-31 15:53:08 back 2024-07-31 15:58:40 I get ptrace: permission denied, even after I removed dropping that capability from the container config and restarting the container :/ 2024-07-31 15:58:55 3915896.843460] ptrace attach of "bin/foldermodeltest"[33378] was attempted by "gdb --nx --batch -ex thread apply all bt --pid 567"[33384] 2024-07-31 16:00:29 Maybe if it's too difficult to pinpoint the exact package to rebuild, we should just rebuild all packages from the kde-frameworks 6.4.0 commit 2024-07-31 16:03:04 ran it as root 2024-07-31 16:03:37 Ouch 2024-07-31 16:03:40 https://tpaste.us/YYKQ 2024-07-31 16:05:19 qt6-qtbase-x11? 
2024-07-31 16:05:34 Probably 2024-07-31 16:05:50 It's just that and libQt6Core/Test in the backtrace 2024-07-31 16:06:06 qt6-qtbase was patched yesterday for a CVE 2024-07-31 16:06:32 Wait 2024-07-31 16:06:36 qt6-qtbase-x11 is a subpackage of qt6-qtbase 2024-07-31 16:06:45 Which brings us back to square one :/ 2024-07-31 16:11:30 hmm, I don't see a segfault now 2024-07-31 16:11:53 How can I use dbus-run-session -- xvfb-run together with gth? 2024-07-31 16:11:55 gdb* 2024-07-31 16:12:35 I think you just chain them together 2024-07-31 16:12:41 I tried both ways 2024-07-31 16:12:43 dbus-run-session -- xvfb-run -a gdb .. 2024-07-31 16:13:30 I did that, but then it complains it cannot connect to display 2024-07-31 16:13:51 Hmm 2024-07-31 16:14:44 oh, now I see the segfault 2024-07-31 16:15:08 yeah, it was there, sorry 2024-07-31 16:17:59 https://tpaste.us/5Xmj 2024-07-31 16:23:33 I think it's beginning to look like something that can't be fixed with rebuilds 2024-07-31 16:25:16 Too bad due to un-uploaded dependencies, probably the only place this issue can be reproduced is on the builder, as we can't use the CI for this 2024-07-31 16:26:53 Maybe since the positionertest that you found to also segfault is disabled, we can just do the same for foldermodeltest for now 2024-07-31 16:27:41 I think the bigger question to be answered here is why only 32-bit 2024-07-31 16:29:24 Could it be due to 64-bit having finished all the upgrades/rebuilds before the commit related to this issue was pushed.. 2024-07-31 16:30:58 Maybe it's a bit too late to answer that now, and we should've tried if 64-bit also had such issues before rebuilding those packages from kde-frameworks 2024-07-31 17:05:22 algitbot: retry master 2024-07-31 17:10:07 algitbot: retry master 2024-07-31 17:20:20 algitbot: retry master 2024-07-31 17:35:28 algitbot: retry master 2024-07-31 17:38:42 algitbot: retry master 2024-07-31 17:43:55 algitbot: retry master
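Pulling the pieces of the gdb discussion above into one place: a sketch of running one of the failing test binaries under the same wrappers check() uses, so the crash produces a readable backtrace. The binary path is illustrative, and note that attaching to an already-running process with --pid additionally needs ptrace permission inside the container, which is why it had to be run as root earlier.

```sh
# Run a single failing test under gdb inside a session bus and a throwaway
# X display, dumping all thread backtraces when it crashes. Install the
# relevant -dbg subpackages first where they exist, otherwise frames show
# up as "??" like in the earlier paste.
dbus-run-session -- xvfb-run -a \
    gdb --nx --batch -ex run -ex 'thread apply all bt' ./bin/foldermodeltest
```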