2024-06-01 02:12:59 algitbot: retry master
2024-06-01 13:32:09 I don't see a reply :(
2024-06-01 13:32:09 Hmmmmm!
2024-06-01 13:32:09 algitbot: hi
2024-06-01 13:32:09 How do we ping J0WI :)
2024-06-01 13:32:09 Anyway, fix is available in !63191, has been available for quite some time now
2024-06-01 13:32:37 Now i see it
2024-06-01 13:32:50 algitbot: hi
2024-06-01 13:57:01 algitbot: retry master
2024-06-01 14:25:46 algitbot: retry master
2024-06-01 15:02:44 algitbot: hi
2024-06-01 15:06:38 algitbot: retry master
2024-06-01 15:47:00 J0WI will probably need to rebase !67046 to include the cargo-crev p384 stack overflow fix (if it works, that is)
2024-06-01 15:51:21 ...
2024-06-01 15:52:34 Hmm, probably build-edge-armhf didn't get the fix yet
2024-06-01 15:52:49 algitbot: retry master
2024-06-01 15:53:17 Whatever, now i'm confused
2024-06-01 15:53:52 If it doesn't work again, probably the higher value from !63191 will
2024-06-01 15:55:10 I used this lower value, because it worked for things like community/cargo-outdated and community/himalaya that i fixed for 3.20
2024-06-01 15:57:04 aarch64 has finally finished building git-annex, i was getting worried
2024-06-01 15:59:53 and the lower value of RUST_MIN_STACK has worked for x86
2024-06-01 16:02:48 Failed again!
2024-06-01 16:03:30 Hold on, i'm increasing RUST_MIN_STACK to the value used in !63191
2024-06-01 16:03:40 👍
2024-06-01 16:04:51 Wait a minute
2024-06-01 16:04:53 it isn't p384
2024-06-01 16:05:14 It's some other error
2024-06-01 16:05:16 :/
2024-06-01 16:08:14 This has happened before, fix p384 only to find some other failure
2024-06-01 16:11:37 The latest cargo-crev has libc 0.2.155 in Cargo.lock
2024-06-01 16:19:53 !63191 uses master as source branch, so i've opened !67051
2024-06-01 16:21:25 May need to be fast tracked to avoid cargo-crev blocking armhf (that is, if the new version doesn't have the same problem)
2024-06-01 16:32:56 and it seems the new cargo-crev passes the armhf CI
2024-06-01 22:33:31 algitbot: retry master
2024-06-02 02:08:56 algitbot: retry master
2024-06-02 06:49:26 I wonder if armv7 is stuck on dotnet6
2024-06-02 06:50:06 It hasn't even gotten to cargo-crev
2024-06-02 06:50:18 So, maybe armv7 also needs to be disabled
2024-06-02 06:51:10 (if, like armhf, the p384 stack increase is not enough to fix its issues)
2024-06-02 06:54:16 There's very little activity, so I suppose so
2024-06-02 06:54:32 algitbot: retry master
2024-06-02 13:03:00 :O
2024-06-02 13:03:13 algitbot: retry master
2024-06-02 13:31:46 :(
2024-06-02 13:54:31 I may have a fix for that, but i am a bit tired of this issue
2024-06-02 13:55:31 So, i won't re-enable armhf. someone less weighed down by this can attempt that
2024-06-02 15:24:21 Ugh, my connection to gitlab.a.o has stopped working
2024-06-02 15:27:37 After some retries..
2024-06-02 15:27:51 thanks!
2024-06-02 15:28:17 You're welcome
2024-06-02 15:28:46 cely: is this with your internet connection a temporary thing or do you know when it gets fixed?
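
As an aside on the RUST_MIN_STACK workaround discussed above (2024-06-01): a minimal sketch of how such an override might look in an APKBUILD, assuming the value and cargo flags shown here, which are illustrative rather than the ones from !63191:

    build() {
        # Larger thread stacks so p384's const-heavy code doesn't overflow
        # on 32-bit arches; 8 MiB is an assumed example value, not the MR's.
        export RUST_MIN_STACK=$((8*1024*1024))
        cargo build --frozen --release
    }
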
2024-06-02 15:29:54 The last time i heard a fix was coming soon, so hopefully by mid-June it should be fixed
2024-06-02 15:31:43 that would be nice
2024-06-02 15:31:55 good luck
2024-06-02 15:32:14 Sure, thanks
2024-06-02 17:43:13 ^ Everyone who IRCs from Matrix now has a new aport to try out
2024-06-03 18:36:10 algitbot: retry 3.20-stable
2024-06-04 09:56:02 Hmm, test failure (Exec format error), but it passed in CI
2024-06-04 09:56:21 Hopefully a retry will solve it
2024-06-04 09:56:22 algitbot: retry master
2024-06-04 09:57:04 ACTION goes AFK
2024-06-04 22:32:53 sigh
2024-06-04 22:33:34 at least i managed to not mess up ESR
2024-06-05 00:46:58 algitbot: retry master
2024-06-05 00:46:58 algitbot: retry master
2024-06-05 00:47:00 algitbot: retry master
2024-06-05 00:47:04 ...
2024-06-05 00:47:32 algitbot: retry master
2024-06-05 07:36:27 Check failed
2024-06-05 07:36:28 algitbot: retry master
2024-06-05 13:47:53 andypost:matrix.org: Aha! https://github.com/astral-sh/uv/pull/3833 gives a clue about why libgit2 is no longer needed
2024-06-05 13:48:49 Though i wonder if that means uv now needs `depends="git"`
2024-06-05 17:08:13 algitbot: retry master
2024-06-05 17:31:50 algitbot: retry master
2024-06-05 17:53:08 I made an MR that should help traefik build for s390x, but it never gets in
2024-06-05 17:53:13 it fails just because it's slow
2024-06-05 18:17:11 celeste: as I get it, uv is using only the bzip2 lib, and I just attempted to reuse zlib/zstd but it did not work
2024-06-06 01:26:16 andypost[m]: i'm not really sure about that, sorry
2024-06-06 04:45:22 algitbot: retry master
2024-06-06 06:27:42 algitbot: retry 3.20-stable
2024-06-06 18:10:52 algitbot: retry 3.20-master
2024-06-06 18:25:59 algitbot: retry 3.20-master
2024-06-07 03:06:38 algitbot: retry 3.20-stable
2024-06-07 10:01:10 algitbot: retry master
2024-06-07 10:03:23 algitbot: retry master
2024-06-09 10:55:26 Oops
2024-06-09 10:55:44 I think marianbu's accessment of x86 failing due to disk space was not exactly accurate
2024-06-09 10:55:52 failing in CI*
2024-06-09 11:08:55 assessment*
2024-06-09 16:29:00 oh-no
2024-06-09 19:11:59 algitbot: retry master
2024-06-10 03:42:00 Alright, now 7 archs are at the first hurdle the perl 5.40.0 rebuild has to go through: postgresql16
2024-06-10 03:42:21 and aarch64 has gone through that :)
2024-06-10 03:43:58 armv7 also, but only to head straight into the next hurdle: nginx
2024-06-10 03:46:12 nginx has passed now
2024-06-10 03:46:32 onto freeradius for aarch64 and apparmor for armv7
2024-06-10 03:46:48 and apparmor has passed
2024-06-10 03:46:55 freeradius too
2024-06-10 04:02:42 Hmm, didn't expect it would be s390x and armhf to be the first ones to make it past 50% of main
2024-06-10 04:02:58 50% of main for the perl rebuild, that is
2024-06-10 04:14:15 now aarch64 and armv7 have also passed 50%
2024-06-10 04:27:26 armhf has just 5 aports to go :)
2024-06-10 04:27:28 for main
2024-06-10 04:27:49 Oh now it's reached postgresql15, that will take some time
2024-06-10 04:39:33 Now all archs except riscv64 have reached at least 50%
2024-06-10 04:46:15 and s390x has completed main
2024-06-10 04:46:28 and aarch64 too
2024-06-10 04:47:08 It was 101 vs 100 aports for aarch64 and s390x in main, but now it's 125 vs 117 for community
2024-06-10 04:47:41 So s390x has 8 aports more disabled than aarch64
2024-06-10 04:48:09 and armhf has 118 aports for community, armv7 120
2024-06-10 04:49:24 I think a few of those that are now disabled for armhf/v7 can be enabled after this, as i've also enabled -Duse64bitint for Perl together with the 5.40.0 upgrade
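
For context on the -Duse64bitint change just mentioned, a hedged sketch of what enabling 64-bit integers looks like at Perl's Configure step; the surrounding flags are illustrative and the real main/perl APKBUILD passes more options than shown here:

    build() {
        # -Duse64bitint gives 64-bit IVs/UVs even on 32-bit arches
        ./Configure -des \
            -Dprefix=/usr \
            -Dusethreads \
            -Duse64bitint
        make
    }
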
2024-06-10 04:49:38 but that's probably something for another day
2024-06-10 05:11:25 2 more aports for ppc64le main
2024-06-10 05:11:34 Last aport now
2024-06-10 05:12:00 So, 5 archs will be working on community soon
2024-06-10 05:12:16 Ok
2024-06-10 05:12:34 Now it's working on community
2024-06-10 05:12:44 121 aports for ppc64le
2024-06-10 05:18:22 32-bit ARM is now 50% through the Perl rebuilds in community
2024-06-10 05:19:36 sounds like sport commentary :D
2024-06-10 05:19:47 Hehe
2024-06-10 05:33:27 12 aports left for armhf community
2024-06-10 05:36:10 Hmm
2024-06-10 05:36:12 algitbot: retry master
2024-06-10 05:36:47 and now it has passed
2024-06-10 05:36:54 Flaky tests will be flaky
2024-06-10 05:37:20 and now for the final 60 aports in testing for armhf
2024-06-10 05:38:27 Hmm
2024-06-10 05:41:20 https://build.alpinelinux.org/buildlogs/build-edge-armv7/community/biber2.19/biber2.19-2.19.1-r1.log
2024-06-10 05:41:33 https://build.alpinelinux.org/buildlogs/build-edge-armv7/community/biber2.19/biber2.19-2.19-r1.log *
2024-06-10 05:43:43 I wonder if it tried to build that because of the `provides="biber=$pkgver-r$pkgrel"`
2024-06-10 05:44:25 because i didn't bump pkgrel for that
2024-06-10 05:46:22 and due to that, armv7 now shows "28/27" for aports built
2024-06-10 05:46:31 and now it's onto testing, 60 aports, just like armhf
2024-06-10 05:49:51 armhf has just 6 aports left :)
2024-06-10 05:50:53 and it is done \o/
2024-06-10 05:51:00 Oh wait, one last aport
2024-06-10 05:51:46 Hopefully, the other 2 ARM builders get a speed boost after this
2024-06-10 05:54:15 and x86_64 has 126 aports in community, x86 120
2024-06-10 05:55:04 s390x is now onto the last 61 aports in testing
2024-06-10 05:55:27 and aarch64 the last 65
2024-06-10 06:15:54 All ARMs have completed the Perl 5.40 rebuild, and s390x should follow shortly \o/
2024-06-10 06:56:03 ppc64le is now onto the last 65 aports in testing from the rebuild :)
2024-06-10 07:06:13 And, expected to come in last is...riscv64, which has just begun slogging through the 117 aports it has in community
2024-06-10 15:42:15 I guess after the Perl rebuild, we'll be seeing a Go rebuild next
2024-06-10 16:01:55 ikke: is strace somehow installed on the build-edge-armv7 builder?
2024-06-10 16:05:09 Does something fail when that is installed?
2024-06-10 16:05:33 yea, it runs additional tests then that are normally skipped
2024-06-10 16:05:37 Oh, probably it enables some tests that wouldn't be enabled if strace wasn't available
2024-06-10 16:05:40 yea
2024-06-10 16:05:43 Right :)
2024-06-10 16:08:54 I've come across a few things being installed on the builders which then caused build issues (openjdk on build-3-20-aarch64, and perl on build-3-20-x86_64), so i wouldn't be surprised
2024-06-10 17:21:16 nmeum: removed it
2024-06-10 17:30:53 ikke: thanks!
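
The strace situation above (2024-06-10 16:05) is the common pattern of a test suite probing the build environment; a hypothetical sketch of that kind of gating, not taken from the affected aport:

    # Extra cases run only if strace happens to be installed on the builder.
    if command -v strace >/dev/null 2>&1; then
        run_strace_tests=yes
    fi
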
2024-06-10 17:30:56 algitbot: retry master
2024-06-10 20:16:08 I'll remove tillitis-key1-apps
2024-06-10 23:35:52 algitbot: retry master
2024-06-10 23:40:45 algitbot: retry master
2024-06-10 23:47:08 algitbot: retry master
2024-06-10 23:49:18 algitbot: retry master
2024-06-11 00:05:57 algitbot: retry master
2024-06-11 00:10:27 algitbot: retry master
2024-06-11 00:13:32 algitbot: retry master
2024-06-11 00:16:44 algitbot: retry master
2024-06-11 00:18:40 algitbot: retry master
2024-06-11 00:35:02 algitbot: retry master
2024-06-11 00:45:57 * algitbot: retry master
2024-06-11 00:49:13 * algitbot: retry master
2024-06-11 00:58:36 * algitbot: retry master
2024-06-11 01:02:08 algitbot: retry master
2024-06-11 01:21:38 algitbot: retry master
2024-06-11 01:30:14 algitbot: retry master
2024-06-11 01:33:02 algitbot: retry master
2024-06-11 02:15:26 Wow, that's unexpected, riscv64 has moved on to testing, while the rest (besides 32-bit ARM) are still stuck on community
2024-06-11 02:40:13 algitbot: retry master
2024-06-11 02:44:39 Finally
2024-06-11 02:46:17 Now for the around 200 aports in testing
2024-06-11 03:20:01 algitbot: retry master
2024-06-11 05:57:30 3 more aports from the Go rebuild for aarch64
2024-06-11 06:03:41 Oh no
2024-06-11 06:03:58 I remember testing/simpleiot
2024-06-11 06:04:53 It's the aport that requires Elm to build
2024-06-11 06:05:38 and i think the only reason it builds is because it relies on some cached Elm on the builder
2024-06-11 06:06:02 but now it's failing some tests
2024-06-11 06:06:44 I see it's failing while opening some sqlite db
2024-06-11 06:07:11 Hopefully, a retry will let it pass
2024-06-11 06:07:16 algitbot: retry master
2024-06-11 06:07:33 aarch64 is now on the last aport of the Go rebuild :)
2024-06-11 06:08:00 and now that last aport (fcitx5-bamboo) has passed
2024-06-11 06:20:03 Hmm, is this another aport with flaky tests
2024-06-11 06:20:51 Hmm, it even checks for fuse support through /proc/modules, and removes a test if there's no support
2024-06-11 06:20:56 algitbot: retry master
2024-06-11 12:25:41 x86_64 seems to be on the last aport of the Go rebuild :)
2024-06-11 12:26:14 Hmm, so i guess simpleiot tests passed after enough retries
2024-06-11 16:22:38 algitbot: retry master
2024-06-11 16:24:14 Err
2024-06-11 16:24:22 I think you haven't merged the fix for mozjs115?
2024-06-11 16:32:52 oh
2024-06-11 16:32:52 right
2024-06-11 16:35:58 algitbot: retry master
2024-06-11 16:36:09 Now it will get the fix
2024-06-11 16:52:27 algitbot: retry master
2024-06-12 05:20:47 algitbot: retry 3.19-stable
2024-06-12 06:41:24 Hmm, is it normal for clang16 to be built on ppc64le before fortify-headers?
2024-06-12 06:42:00 (other archs build fortify-headers first)
2024-06-12 06:45:59 Oh now x86 is also building clang16 before fortify-headers
2024-06-12 06:46:36 ACTION waits for riscv64
2024-06-12 06:51:09 Hmmmm
2024-06-12 06:51:48 Now i did a git diff of my latest git pull (with the Clang commits), and see that clang15's pkgrel jumped from 19 to 22
2024-06-12 06:53:31 (i don't scrutinize the git diffs so closely, it's just that this bit immediately jumped out at me)
2024-06-12 06:53:55 and riscv64 builds fortify-headers first, just like the other 5 archs
2024-06-12 07:05:19 Going AFK, bye
2024-06-13 17:54:31 PureTryOut: ^
2024-06-13 17:54:40 seems like an assert failed
2024-06-13 17:57:21 Did not expect that one. I'll have to look at it later
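
On the /proc/modules check mentioned above (2024-06-11 06:20:51), a hypothetical sketch of that sort of test gating; the real aport's wording and skip mechanism may differ:

    # Skip fuse-dependent tests when the kernel module is not loaded.
    if ! grep -qw fuse /proc/modules; then
        echo "fuse not available; skipping fuse tests"
    fi
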
2024-06-13 17:57:39 nod
2024-06-13 18:18:14 algitbot: retry master
2024-06-13 18:30:03 algitbot: retry master
2024-06-14 02:39:58 algitbot: hi
2024-06-14 02:40:05 algitbot: hi
2024-06-15 12:54:51 build-edge-x86_64 has finally finished Chromium and Electron \o/
2024-06-15 12:55:36 Now it's working on element-desktop, and based on the last version, that should complete in about 10 minutes
2024-06-15 12:55:39 Oh, and now it's done
2024-06-16 00:49:30 algitbot: retry master
2024-06-17 06:44:10 algitbot: retry master
2024-06-17 07:24:06 I wonder if all the computations in SymPy tests are just too much for the builders :/
2024-06-17 07:29:32 Probably shouldn't have used `-n auto`
2024-06-17 07:30:08 Anyway, if they start failing, and i'm not around, then feel free to just disable checks again
2024-06-17 07:31:19 Maybe this is one aport where checks just have to be disabled, i've already thrown many techniques at it
2024-06-17 08:05:11 Anyway, sorry about blocking the builders, i've opened !67786 to hopefully solve that
2024-06-17 08:06:39 I think 8 maximum parallel test jobs should be quite safe for the builders (probably can't tell this from the CI, as that is what gave me the impression that `-n auto` would work in the first place)
2024-06-17 09:53:33 ncopa, ikke: sorry for blocking the builders, i've merged ^ that limits the number of test jobs to a maximum of 8, which should be safe enough
2024-06-17 09:54:11 However, the builders will probably need a reboot or something, hopefully the situation is not as bad as that time with 6 copies of Chromium
2024-06-17 09:54:18 s/Chromium/WebkitGTK/
2024-06-17 09:59:12 no worries
2024-06-17 10:23:33 still >128G memory free
2024-06-17 10:38:53 I think it isn't so much about free memory, but about Pytest being unable to recover after starting too many (computationally-intensive) test jobs
2024-06-17 11:03:39 Unless there is still some activity in the Pytest output (i've configured a timeout of 10 minutes, with only 3 reruns, so it shouldn't go on for as long as it has), perhaps the Pytest process can be killed and tried again?
2024-06-17 16:32:25 I think the SymPy Pytest process is not going to terminate by itself, so when it does get killed manually, my solution from !67786 which is already merged should hopefully allow the tests to pass
2024-06-17 16:34:00 I think 8 maximum test jobs should be quite safe, i looked at the log from the only arch that failed (armv7), and somehow Pytest started 80 test jobs (does the builder really have 80 cores?)
2024-06-17 16:34:51 but whatever the number of cores, 80 times 3 (for the 3 ARMs, assuming it starts 80 for all 3 of them) is not a workable number
2024-06-17 16:38:21 Anyway, i'll be going AFK now, so if limiting it to 8 jobs still doesn't do the trick, then feel free to just disable tests, and i'll give it another go at another time
2024-06-17 16:38:30 Bye
2024-06-17 17:17:19 cely: che-bld-1 [~]# nproc
2024-06-17 17:17:21 80
2024-06-17 17:29:34 cely: how does reducing the amount of concurrent jobs fix this issue? It's not that it's exhausting any resources atm
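
For reference, a hedged sketch of the job cap described above (the approach of !67786 as summarized in the log, not its literal diff), assuming the pytest-xdist, pytest-rerunfailures and pytest-timeout plugins; the log mentions `-n auto`, 3 reruns and a 10-minute timeout, though not exactly how the timeout is configured:

    check() {
        # Cap xdist workers instead of `-n auto`, which spawned 80 on the builders.
        local jobs="$(nproc)"
        [ "$jobs" -gt 8 ] && jobs=8
        python3 -m pytest -n "$jobs" --timeout=600 --reruns 3
    }
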
2024-06-17 17:29:54 there is just one process spinning
2024-06-17 17:32:44 Ok @ nproc returning 80
2024-06-17 17:33:22 Perhaps it's not so much exhausting resources, and more Pytest being unable to cope with 80 processes
2024-06-17 17:34:55 (and probably if it really did create 80 test workers on multiple builders that share the same host, then when it was doing that, the jobs would be competing for CPU time)
2024-06-17 17:35:22 but whatever it is, the tests aren't supposed to take this long
2024-06-17 17:35:37 ahuh
2024-06-17 17:36:37 it's more like some kind of deadlock
2024-06-17 17:36:56 Thanks
2024-06-17 17:38:09 s390x with just 8 cores also hangs
2024-06-17 17:38:29 If it happens again, feel free to just disable the tests (they were originally disabled, before i enabled them)
2024-06-17 17:38:52 algitbot: retry master
2024-06-17 17:40:18 Now there are more things to build, so probably the builders won't start testing SymPy all at the same time again
2024-06-17 17:40:53 But it also affected x86 and x86_64, each on a dedicated host
2024-06-17 17:41:12 Ok
2024-06-17 17:42:03 If that's the case then probably it means SymPy tests cause issues on the builders even though they seem alright in the CI
2024-06-17 17:43:03 x86(_64) ci has more cores than the builders
2024-06-17 17:43:11 But let's see what happens on this second attempt
2024-06-17 17:43:14 (for now 😉)
2024-06-17 17:44:01 Meaning there are plans to get more cores for the builders?
2024-06-17 17:44:11 new builders are about to be ready
2024-06-17 17:44:31 That's good news
2024-06-17 17:44:51 How long does chromium take to build on x86_64 right now (give or take)?
2024-06-17 17:45:46 Check elapsed time on build logs?
2024-06-17 17:45:53 My impression is i've seen it take half aday
2024-06-17 17:45:57 a day*
2024-06-17 17:46:39 Build complete at Sat, 15 Jun 2024 05:36:03 +0000 elapsed time 8h 31m 49s
2024-06-17 17:46:52 celie: half a day was my impression as well
2024-06-17 17:47:15 so ~8-9h
2024-06-17 17:47:26 The new builder did it in 4h
2024-06-17 17:48:06 Hehe
2024-06-17 17:49:19 I'll be looking forward to that
2024-06-17 17:52:02 I was going to say, i wonder if it would be possible to try Rust with llvm18 on that, to see if it still encounters the OOM issues like it did on the CI
2024-06-17 17:52:19 but i then remembered, that even if x86_64 doesn't have issues, s390x still does
2024-06-17 17:53:08 So, never mind, probably it should wait for the LLVM upgrade (already released) first
2024-06-17 17:54:17 Going AFK for real now, bye
2024-06-17 17:54:57 o/
2024-06-18 01:20:13 I'm working on SymPy tests again
2024-06-18 01:21:11 Seems limiting the max number of processes to 24 is good enough for CI, and allows the 2 x86*, and 3 ARMs to complete tests in around 15 minutes
2024-06-18 01:21:34 but that's all in the CI, so i will add !check before merging
2024-06-18 01:22:13 Unless there is a way of testing if something is running on the CI in the APKBUILD
2024-06-18 01:23:41 But i'm not going to risk it now, after blocking the builders for a day, so will probably go with the !check
2024-06-18 01:23:47 -probably
2024-06-18 01:30:52 (I think 24 is good enough, as the tests slow down at 99%, where it's probably just waiting for the last few long running tests to finish, but anyway, the builders won't run tests, so it probably doesn't matter anyway)
2024-06-18 02:26:22 Ok, so i've tried running the only deselected test, it is 20 minutes now and it hasn't completed, will wait another 10 just to be sure
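
On the question above (2024-06-18 01:22:13) about detecting CI from inside an APKBUILD: a speculative sketch, assuming the GitLab CI runner's conventional CI=true environment variable is visible in the build environment, which is not guaranteed:

    if [ "${CI:-}" = "true" ]; then
        _testjobs=24    # CI runners coped with 24 workers in ~15 minutes
    else
        _testjobs=8     # be conservative on the builders
    fi
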
2024-06-18 02:45:52 Ok, it's more like 40 minutes now for that one test, and that's the reason i thought tests were safe to enable, because i thought i had found the test that "went into an infinite loop" (according to what the APKBUILD comments used to say)
2024-06-18 02:46:28 but apparently, even after that, tests are still having issues on the builders (even though they pass on the CI)
2024-06-18 02:52:46 Ok, i've merged ^
2024-06-18 02:53:00 Very sorry for tying the builders up for almost a day, please kill the Pytest process at your convenience
2024-06-18 02:58:14 It's still a bit weird though that ppc64le and riscv64 managed to not hang (s390x had Pytest killed)
2024-06-18 02:59:24 Probably there is something that disables whatever problematic tests on those archs, or Pytest works a bit differently there
2024-06-18 03:08:58 Ah no, it was just my own workaround (allowing tests to fail on those archs), and they did fail (thankfully, otherwise all 8 builders would be blocked)
2024-06-18 03:09:30 In CI now, i'm not seeing that issue on ppc64le and s390x anymore
2024-06-18 04:13:06 algitbot: retry master
2024-06-18 08:37:43 When it's convenient, can the SymPy Pytest process be killed please? :)
2024-06-18 08:39:38 I've already started thinking about how to prevent this, during the next upgrade of SymPy, i'll very likely (try to) add a `timeout 2h` (which from what i've seen is enough for CI) to the Pytest call, so even if it's accidentally merged with tests enabled, the longest they can block the builders for is 2 hours
2024-06-18 08:40:38 Of course, i won't be intentionally trying the `timeout 2h` on the builders, i'll try a `timeout 5m` on the CI
2024-06-18 08:46:22 (I've already merged an MR disabling tests)
2024-06-18 12:31:06 ncopa, ikke: i am now convinced (have been for about half a day now) that SymPy tests will not complete on the builders, and have merged an MR that disables them, sorry it took so long for me to come to this conclusion, can the Pytest processes be killed? Thanks.
2024-06-18 12:32:20 let me have a look
2024-06-18 12:32:56 cely: I'm interested to know why it hangs on the builders
2024-06-18 12:35:43 Same here, but i think it's risking a bit too much to have them blocked for more than a day
2024-06-18 12:36:38 Maybe Pytest does something different in CI Docker vs builder lxc, but that may be a bit of a stretch
2024-06-18 12:37:10 i killed it on build-edge-x86_64
2024-06-18 12:37:23 but i've tried it in CI a few times, it doesn't hang there
2024-06-18 12:37:42 Thanks
2024-06-18 12:38:09 The tests are disabled now, so you can kill it on all the builders
2024-06-18 12:38:12 algitbot: retry master
2024-06-18 12:40:35 Conveniently, it hangs at 99%..
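
A sketch of the `timeout` guard proposed above (2024-06-18 08:39:38), so a hung run can block a builder for a bounded time at most; the pytest arguments are placeholders, and 2h is the value floated for the builders while 5m was to be tried in CI first:

    check() {
        # timeout kills the whole test run if it has not finished in time
        timeout 2h python3 -m pytest -n 8
    }
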
2024-06-18 12:42:30 done
2024-06-18 12:43:37 Thanks
2024-06-18 12:43:45 I'll open an MR to upgrade py3-pytest-rerunfailures
2024-06-18 12:44:20 Maybe the new version fixes something, but i don't think i dare try SymPy tests on the builders again so soon
2024-06-18 12:45:07 algitbot: retry master
2024-06-18 12:47:12 Hmm, however py3-pytest-rerunfailures tests are now failing with latest pytest 8 (works with py3-pytest7 though) :/
2024-06-18 12:52:03 and it seems that has just been reported: https://github.com/pytest-dev/pytest-rerunfailures/issues/269
2024-06-18 12:52:22 algitbot: retry master
2024-06-18 12:58:04 algitbot: retry master
2024-06-18 12:58:22 algitbot: retry master
2024-06-18 13:09:29 algitbot: retry master
2024-06-18 13:15:54 algitbot: retry master
2024-06-18 13:19:12 algitbot: retry master
2024-06-18 13:24:45 i can repro the py3-sympy deadlock
2024-06-18 13:26:48 How about with py3-pytest7?
2024-06-18 13:26:59 Because py3-pytest-rerunfailures tests are alright with that
2024-06-18 13:46:01 algitbot: retry master
2024-06-18 13:48:24 algitbot: retry master
2024-06-18 13:49:20 algitbot: retry master
2024-06-18 14:03:56 algitbot: retry master
2024-06-18 14:05:39 algitbot: retry master
2024-06-18 14:07:44 algitbot: retry master
2024-06-18 14:08:23 I think some gdu test is not working on 32-bit
2024-06-18 14:47:08 algitbot: retry 3.19-stable
2024-06-18 16:28:50 algitbot: retry master
2024-06-18 20:18:30 looking at gdu btw
2024-06-18 20:18:46 seems some new tests have been added, which trip an issue on 32-bit arches
2024-06-19 05:39:40 Lots of MRs being closed due to inactivity :O
2024-06-19 11:36:01 algitbot: retry master
2024-06-19 11:38:22 algitbot: retry master
2024-06-19 11:38:55 Hmm, ok, it's a "no space left on device" issue
2024-06-19 11:41:22 ikke: no space on x86 ^
2024-06-19 11:41:54 Solution: migrate to the new build server
2024-06-19 11:42:51 That's an interesting solution :)
2024-06-19 13:01:57 Err
2024-06-19 13:02:11 Probably should give x86 some space first
2024-06-19 13:02:19 llvm18 is biiig
2024-06-19 13:05:04 It was just last week that llvm18 was made our default llvm, and now i see upstream is already talking about branching llvm19 next month
2024-06-19 14:45:50 oof
2024-06-19 14:45:53 ikke:
2024-06-19 14:46:07 a
2024-06-19 14:46:12 was mentioned in #-infra
2024-06-19 19:28:03 algitbot: retry build-edge-x86
2024-06-19 19:28:10 algitbot: retry master
2024-06-20 04:58:04 I noticed that the x86 builder seems to be faster than x86_64 now, i guess that means the move to a new (x86) host has been completed?
2024-06-20 07:44:46 Upstream removes tag, typical
2024-06-20 07:45:48 Hmm, or did Codeberg just go down
2024-06-20 08:07:10 algitbot: retry master
2024-06-20 08:08:23 Codeberg is taking some time to come back, so i'll revert the upgrade for now, since it hasn't built on any arch
2024-06-20 08:49:16 x86 before x86_64 :)
2024-06-20 09:12:51 A whopping 19 seconds
2024-06-20 09:13:27 And yes, x86 has been moved
2024-06-20 09:13:35 See #-infra
2024-06-20 12:08:33 Yes, i saw #-infra :)
2024-06-21 06:40:32 Hmm
2024-06-21 06:40:38 algitbot: retry master
2024-06-21 07:22:56 algitbot: retry master
2024-06-21 08:11:12 I wonder why task3 succeeded in CI
2024-06-21 08:12:59 Oops, task3 is now Rust software
2024-06-21 08:13:44 and we had a Rust upgrade after the last time CI ran
2024-06-21 08:51:56 Unfortunately, i have thrown a few things i could think of at this, but couldn't fix it
2024-06-21 08:52:09 I didn't look upstream though, maybe there's something there
2024-06-21 08:53:44 I think maybe the issue here is that CMake now detects the Rust library as having a _lib suffix, while before that (when it worked) it was -lib
2024-06-21 08:54:47 So, CMake cannot link to the Rust library correctly, and neither does it wait for that library to finish building before attempting to link to it
2024-06-21 08:55:20 I tried -j1 and forcing the Rust library to be built first, but CMake is still unable to link to it
2024-06-21 08:55:55 Oh well, just another day in the world of Rust, where breaking things is the norm
2024-06-21 08:58:31 ACTION goes AFK
2024-06-21 10:38:58 ^ made that patch in a bit of a hurry, it should fix things according to my tests
2024-06-21 10:39:15 but if something else fails, then someone else will have to continue
2024-06-21 13:43:01 algitbot: retry master
2024-06-21 13:50:02 algitbot: retry master
2024-06-21 14:29:38 algitbot: retry master
2024-06-21 14:31:01 algitbot: retry master
2024-06-21 14:31:20 algitbot: retry 3.20-stable
2024-06-21 14:31:36 algitbot: retry master
2024-06-21 14:36:03 algitbot: retry master
2024-06-21 16:27:02 algitbot: retry master
2024-06-21 16:31:34 algitbot: retry master
2024-06-21 16:37:20 algitbot: retry master
2024-06-21 16:42:15 algitbot: retry master
2024-06-21 17:13:24 algitbot: retry master
2024-06-21 18:08:15 algitbot: retry master
2024-06-21 18:10:20 algitbot: retry master
2024-06-21 18:42:00 algitbot: retry master
2024-06-21 18:44:16 algitbot: retry master
2024-06-22 09:16:24 algitbot: retry master
2024-06-22 09:18:13 algitbot: retry master
2024-06-22 09:20:59 algitbot: retry master
2024-06-22 10:57:23 algitbot: retry master
2024-06-22 14:40:06 algitbot: retry master
2024-06-24 15:14:42 algitbot: retry master
2024-06-24 15:16:12 algitbot: retry master
2024-06-24 15:17:16 algitbot: retry master
2024-06-24 15:56:33 oh damn it
2024-06-24 16:04:54 why does it report 0 memory pages ;~;
2024-06-25 18:06:06 algitbot: retry master
2024-06-25 18:11:16 lmao
2024-06-25 18:11:26 i love how their own example from the docs doesn't compile properly
2024-06-25 18:12:15 ptrc: talking about lomiri-trust-store?
2024-06-25 18:12:19 glog
2024-06-25 18:12:22 ah ok
2024-06-25 18:12:47 I assume you are talking about #error was not included correctly. See the documentation for how to consume the library.
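
The #error quoted above is glog's include guard firing when the library is consumed without its exported CMake package; a hypothetical APKBUILD-side sketch of the approach the next messages arrive at (the patch filename is a placeholder):

    # glog-cmake.patch (hypothetical name) would switch the consumer to
    # find_package(glog) and link the exported glog::glog target instead of
    # a bare -lglog, which is what satisfies that include guard.
    source="$source glog-cmake.patch"
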
2024-06-25 18:14:21 yeah
2024-06-25 18:14:37 apparently ( and the main page of their documentation doesn't actually say anything about that ) it requires proper cmake files as well
2024-06-25 18:16:50 i'll try patching lomiri-trust-store so it uses glog "properly"
2024-06-25 18:18:13 ok, I'm opening an issue for it
2024-06-25 18:19:26 #16242
2024-06-25 19:36:48 ugh
2024-06-25 19:37:48 does anyone want to basically adapt the glog.patch from lomiri-trust-store to this
2024-06-25 22:31:51 algitbot: retry master
2024-06-25 23:41:49 algitbot: retry master
2024-06-26 01:32:30 algitbot: retry master
2024-06-26 15:05:23 algitbot: test
2024-06-26 15:27:49 Lots of things are having problems on riscv64
2024-06-26 15:28:20 I first noticed qbs (which didn't get merged), then it was libqofono (where the aport was disabled for that arch)
2024-06-26 15:29:07 Then @sertonix found issues with pixman tests in !66689 and disabled tests
2024-06-26 15:29:32 and finally i think !68276 is also pointing to some riscv64 issue now
2024-06-26 15:30:48 The fakeroot issue seems to be with gettext's msgmerge
2024-06-26 15:31:10 I think something is definitely up with riscv64, and it happened after 3.20
2024-06-26 15:31:18 signal 11 coredump
2024-06-26 15:31:20 fun
2024-06-26 19:46:20 missed the extra commit
2024-06-27 05:05:33 Wow
2024-06-27 05:05:36 The new x86 builder is fast
2024-06-27 05:06:17 Either that, or the librewolf options that cause long build times are disabled for x86
2024-06-27 05:08:05 I compared the APKBUILD with firefox, and didn't see significant changes
2024-06-27 05:08:32 only thing missing is echo "ac_add_options --disable-crashreporter" >> base-mozconfig
2024-06-27 05:08:50 Though maybe the move to community could've been combined with !68284
2024-06-27 05:09:08 cely: yes, you are right, I only noticed it afterwards
2024-06-27 05:10:28 I think probably the crashreporter is disabled by default for the more privacy-conscious librewolf
2024-06-27 05:10:48 Oh wait
2024-06-27 05:10:59 Librewolf has that, and Firefox doesn't
2024-06-27 05:11:13 only for x86 and armv7
2024-06-27 05:12:12 Ah, for Firefox, it's disabled for those 2 archs, while Librewolf disables it everywhere
2024-06-27 05:14:17 yes, indeed
2024-06-27 05:15:37 Hehe, now that i'm looking at that part of the Firefox APKBUILD, i see "keys", "these are for alpine linux use only"
2024-06-27 05:15:55 ahuh
2024-06-27 05:20:01 20m to build librewolf
2024-06-27 05:20:04 on x86
2024-06-27 05:21:16 I think it's 2 hours for Firefox on x86_64
2024-06-27 05:22:11 *checks* yes, Librewolf took 2 hours, and so did Firefox
2024-06-27 05:23:08 that's a huge difference, is that realistic?
2024-06-27 05:23:26 Not sure
2024-06-27 05:23:40 Maybe you can try building Firefox on the new x86_64 builder :)
2024-06-27 05:23:51 let me try
2024-06-27 05:27:29 So, armv7 also took around 20 minutes, while aarch64 took 1 hour (i checked Librewolf in testing)
2024-06-27 05:28:22 hmm, armv7 and aarch64 use the same hw
2024-06-27 05:28:32 Maybe LLVM does some more optimizations for 64-bit and that takes longer
2024-06-27 05:29:00 "TIER: configure pre-export export [compile] misc libs tools"
2024-06-27 05:29:03 (thinking that maybe it's the Rust part that takes up the time)
2024-06-27 05:29:15 it shows the current build phase
2024-06-27 05:29:23 first time I see something like that
2024-06-27 05:29:37 In the compile logs?
2024-06-27 05:29:53 yes, as a footer
2024-06-27 05:29:58 That's interesting
2024-06-27 05:30:21 probably only when a tty is available
2024-06-27 05:35:20 Ha, now i scroll a little lower from Firefox's prepare() to build(), and see `link_threads=1` for armv7 and x86
2024-06-27 05:37:29 (I remembered reading about https://blog.rust-lang.org/2024/05/17/enabling-rust-lld-on-linux.html, and wondered checked to see if Firefox used that)
2024-06-27 05:37:36 -wondered
2024-06-27 05:45:27 hmm, seems like it's going to be close to 20m as well
2024-06-27 05:45:47 That's good news :)
2024-06-27 05:45:57 Not sure how much will still follow though
2024-06-27 05:46:15 it's now profiling
2024-06-27 05:46:57 ah, that may explain the difference
2024-06-27 05:47:06 that's only done on x86_64 / aarch64
2024-06-27 05:47:13 and ppc64le
2024-06-27 05:47:18 Ok
2024-06-27 05:48:14 so it will do another build afterwards
2024-06-27 05:48:21 Hehe
2024-06-27 05:49:57 and that's aarch64 completing Librewolf
2024-06-27 05:55:48 We're MR neutral now for today :D
2024-06-27 05:58:00 Hahaha
2024-06-27 09:23:31 cely: it took 1.5h, so at least some progress
2024-06-27 09:24:21 That's good
2024-06-27 15:22:23 algitbot: retry master
2024-06-27 16:11:49 algitbot: retry master
2024-06-27 18:26:43 algitbot: retry 3.20-stable
2024-06-27 20:24:33 PureTryOut: ^
2024-06-27 20:25:12 probably because we don't have that file in any package
2024-06-27 20:25:13 weird
2024-06-27 20:27:07 strange it only fails on s390x
2024-06-27 20:27:57 indeed
2024-06-27 20:27:59 uh, yeah I don't get why that would only fail there. Afaik that file is provided by the package itself, some generated thing
2024-06-27 20:28:06 But idc for s390x, I'll just disable it
2024-06-27 21:39:07 btw thanks ikke for reviewing and merging all the MRs
2024-06-27 21:40:24 yw, trying to get the amount of open MRs down
2024-06-28 13:07:33 Oh my god
2024-06-28 13:08:48 Can't access the buildlogs
2024-06-28 13:10:51 hehehe
2024-06-28 13:11:11 bzip2 and zlib are the dependencies of main/perl
2024-06-28 13:11:55 I've always wondered how early in the bootstrap process Perl is built (considering it's used by Autotools)
2024-06-28 13:20:54 Now the Loongarch builder seems to be building main/attr, which has Perl in checkdepends, but checks are disabled
2024-06-28 13:21:56 oh, does it have checks disabled globally?
2024-06-28 13:23:13 It has `options="checkroot !check"`
2024-06-28 13:24:11 ah! you mean the main/attr packagew
2024-06-28 13:24:16 s/w$//
2024-06-28 13:24:18 makes sense
2024-06-28 13:25:51 Which package were you thinking about?
2024-06-28 13:26:42 i was thinking of a global `!check` as it was for riscv64
2024-06-28 13:27:01 Ah, ok
2024-06-28 13:27:48 slightly offtopic, but kwin has `$(echo $pkgver | cut -d . -f 1-3)` in the source url..
2024-06-28 13:30:00 seems i can't ever get a full aports tree without a top-level exec, hah
2024-06-28 13:31:27 Ok, so it was just added
2024-06-28 13:31:51 What are you using to detect those top-level exec's?
2024-06-28 13:40:05 First time i'm see "all failed"
2024-06-28 13:40:08 seeeing*
2024-06-28 13:40:11 seeing*
2024-06-28 13:40:12 lol
2024-06-28 13:40:42 Ah, now it's building :)
2024-06-28 13:41:41 Btw, speaking of setuptools, ptrc: have you noticed it now vendors wheel?
2024-06-28 13:54:54 cely: anything that uses the alpine go library, e.g. apkgquery
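
On the kwin source-url line quoted above (2024-06-28 13:27:48): a similar major.minor.patch prefix can be produced without a top-level command substitution, e.g. with POSIX parameter expansion; a sketch assuming a four-component $pkgver, with a placeholder URL:

    _basever=${pkgver%.*}    # strip only the last version component
    source="https://example.org/kwin-$_basever.tar.xz"
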
2024-06-28 13:55:19 Ok
2024-06-28 13:57:46 "#error the arch code needs to know about your machine type"
2024-06-28 13:59:19 algitbot: retry master
2024-06-28 16:15:45 algitbot: retry master
2024-06-28 16:16:34 Hmm, i wonder why alpine-keys wasn't built
2024-06-28 16:17:06 I mean, it was just built
2024-06-28 16:17:14 and now libseccomp is the last blocker for main/
2024-06-28 16:17:19 ACTION double checks
2024-06-28 16:17:21 algitbot: retry master
2024-06-28 16:17:41 Yes
2024-06-29 05:46:05 Always nice to see a fast build :)
2024-06-29 06:09:42 algitbot: retry master
2024-06-29 06:10:12 algitbot: retry master
2024-06-29 06:11:42 build.a.o is full of openvpn's now
2024-06-29 06:13:04 it is
2024-06-29 06:14:50 aarch64 is consistently the fastest
2024-06-29 06:31:42 We're at -14 MR already today, so nice :)
2024-06-29 15:28:50 algitbot: retry master
2024-06-29 15:44:08 algitbot: retry master
2024-06-29 15:51:46 algitbot: retry master
2024-06-29 15:55:46 why does it fail?
2024-06-29 15:56:55 Flaky tests
2024-06-29 15:57:06 It's a known problem
2024-06-29 15:57:17 I mean, one that's been worked on, but never solved
2024-06-29 15:57:34 (talking about netpbm, in case you were referring to something else)
2024-06-29 15:59:15 cely: no, was referring to netpbm
2024-06-29 15:59:27 Ok
2024-06-29 15:59:57 Maybe the tests are fine in CI, but once it hits the builders, a retry is usually needed
2024-06-29 16:00:50 strange
2024-06-29 16:05:11 I can't remember the details, but i think, before i added the "retry 3 times" thing, it needed even more retries to pass on the builders
2024-06-29 16:06:33 manual retries, i mean
2024-06-29 16:07:50 Not sure if they're the same tests, but some tests were disabled in !56815
2024-06-29 16:08:00 Those don't seem to be disabled now
2024-06-29 17:54:58 -21 MRs for today :)
2024-06-29 17:55:05 See if we can keep it so low
2024-06-29 22:26:07 weh
2024-06-29 22:26:19 backporting https://github.com/seccomp/libseccomp/commit/6966ec77b195ac289ae168c7c5646d59a307f33f doesn't quite work
2024-06-29 22:26:31 maybe we could just update libseccomp to a _git version
2024-06-30 01:26:21 ptrc: What do you mean? (if you're still here)
2024-06-30 01:26:44 it doesn't apply cleanly, and when i try to fix the issues manually it still doesn't compie
2024-06-30 01:26:46 compile*
2024-06-30 01:27:20 I actually found something
2024-06-30 01:28:57 I looked at dev.a.o/~loongson/edge, and apparently, the version in repo is not 2.5.5-r0
2024-06-30 01:28:57 it's 2.5.4
2024-06-30 01:29:31 which is probably using this patch: https://github.com/loongarch64-archive/libseccomp/tree/dev-main-2.5.4-rebase
2024-06-30 01:30:44 dev.a.o/~loongarch/edge *
2024-06-30 01:32:09 As i said on #-loongarch, maybe this patch needs to be applied conditionally only to loongarch64 (especially if it's causing issues on other archs, maybe that's why an MR was never submitted)
2024-06-30 01:40:44 Anyway, does this mean you've managed to get Alpine Loongarch working in QEMU, or something?
2024-06-30 12:49:55 GitHub Action appears again
2024-06-30 12:50:03 ?
2024-06-30 12:50:18 chromium incoming
2024-06-30 12:52:04 The author of that jreleaser commit is "GitHub Action"
2024-06-30 12:52:29 oh, didn't notice
2024-06-30 13:41:00 cely: apparently it was a mistake where their local identity was overridden
2024-06-30 13:41:07 They're going to fix it
2024-06-30 13:44:48 That's good
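
On applying the loongarch64 patch conditionally, as suggested above (2024-06-30 01:32:09): a hedged APKBUILD-style sketch; the patch name is a placeholder, and it would need to stay out of default_prepare's automatic *.patch handling (for example by giving it a different suffix):

    prepare() {
        default_prepare
        case "$CARCH" in
        loongarch64)
            # Apply the loongarch64-only rebase on that arch alone
            patch -Np1 -i "$srcdir"/loongarch64-support.patch.noauto
            ;;
        esac
    }
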