2024-01-01 00:04:57 algitbot: retry master
2024-01-01 03:38:59 algitbot: retry master
2024-01-01 04:34:44 !58224
2024-01-01 05:19:28 thanks!
2024-01-01 05:23:24 You're welcome
2024-01-03 16:11:22 oh no
2024-01-03 16:11:37 oh, that's on distfiles, huh
2024-01-04 01:23:19 Thanks
2024-01-04 01:59:47 Hmm, so the ARM CI runners went down?
2024-01-04 19:07:28 algitbot: retry master
2024-01-04 19:44:09 algitbot: retry master
2024-01-04 19:54:50 algitbot: retry master
2024-01-05 03:13:55 awh darn it
2024-01-05 03:19:33 I see you're fixing the maintainer fields
2024-01-05 03:20:45 yeah, those 3 commits should take care of all of them
2024-01-05 03:20:47 unless i missed something
2024-01-05 03:21:55 What do you think about testing/asahi-scripts (missing space after colon), and testing/gammastep (missing space after #)?
2024-01-05 03:22:31 At least the former causes pkgs.a.o to be unable to detect a maintainer
2024-01-05 03:23:08 cely: oh, good catch :3 i'll check all aports if they vaguely match the syntax
2024-01-05 03:23:54 And there's testing/btpd which is missing the colon
2024-01-05 03:24:05 some ocaml stuff in testing as well
2024-01-05 03:24:47 ah no, those just have a trailing space
2024-01-05 03:27:34 guess i'm still gonna have to do something about element-web
2024-01-05 03:27:49 ah, it was just flaky..?
2024-01-05 03:36:17 algitbot: retry master
2024-01-05 03:36:18 boop
2024-01-05 03:38:12 Anyway, just wondering, what did you use to check the syntax of the maintainer field?
2024-01-05 03:40:57 ripgrep mostly
2024-01-05 03:41:15 rg --files-without-match '^# Maintainer:(?: .* <.*>)?$' -g '**/APKBUILD'
2024-01-05 03:45:03 Thanks, i should definitely check ripgrep out, i saw a mention of it while upgrading repgrep 2 days ago
2024-01-05 04:04:49 it's basically grep but *way* more useful :p
2024-01-05 04:14:28 :)
2024-01-05 18:30:12 algitbot: retry master
2024-01-05 18:39:19 not sure about sqlalchemy off the top of my head
2024-01-05 19:09:28 It's quite late for me now, and i usually don't reply this late, but that error looks like what i've seen on the CI, and back then it was solved on the next retry
2024-01-05 19:09:39 algitbot: retry master
2024-01-05 19:14:40 Ok, it didn't error out as quickly as it did the last 2 times, and looking at the build log for 2.0.24, it took 11 minutes to finish building back then
2024-01-05 19:15:09 So, i guess it is on its way now, so i'll be going off for the day, bye
2024-01-05 19:49:54 thanks o/
2024-01-06 11:04:52 algitbot: retry master
2024-01-06 11:15:56 =(
2024-01-06 11:16:11 algitbot: retry master
2024-01-06 11:25:40 celie
2024-01-06 11:48:23 algitbot: retry 3.17-stable
2024-01-06 12:41:42 !58490
2024-01-06 12:49:13 omni, ikke: anyone around to merge that ^
2024-01-06 13:12:09 Anyway, if i'm not around when that is merged, and it doesn't solve the problem, then please just disable tests, and i will look into it again later on (though i am fairly certain that the temp db files hanging around is the cause of the problem, as s390x, which is newly enabled with this upgrade (and so wouldn't have the test files from the previous run), built fine)
2024-01-06 15:30:06 seems to have fixed
2024-01-07 01:53:45 omni: py3-validators in !58467 is also a downgrade, so likely accidentally included
2024-01-07 02:02:32 cely: the things I glance over when I spot other things first
2024-01-07 02:11:42 omni: It has happened before (!56274)
2024-01-07 02:11:58 I wonder what causes that..
2024-01-07 02:13:10 "stray change from another project" ..?
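
[A minimal sketch of the maintainer-field check discussed above (2024-01-05). The rg invocation is the one quoted in the conversation; the plain-grep fallback is an illustrative assumption, since grep's ERE has no (?:...) groups and -L is a GNU/busybox extension rather than POSIX.]

    # list APKBUILDs whose "# Maintainer:" line is malformed or absent,
    # e.g. missing the space after the colon or after the "#"
    rg --files-without-match '^# Maintainer:(?: .* <.*>)?$' -g '**/APKBUILD'

    # roughly equivalent without ripgrep (hedged sketch, not from the log):
    find . -name APKBUILD -exec grep -LE '^# Maintainer:( .* <.*>)?$' {} +
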
2024-01-07 02:19:11 boom
2024-01-07 06:42:48 Okay, i'm done rebasing this monster of an MR, and it should be ready for merging: !56270
2024-01-07 16:18:50 algitbot: retry master
2024-01-07 16:20:02 algitbot: retry 3.19-stable
2024-01-07 16:22:23 algitbot: retry master
2024-01-08 01:02:30 algitbot: retry master
2024-01-08 01:52:28 algitbot: retry master
2024-01-08 02:02:47 I wonder what's up with scaleway-cli and all those 503 service unavailable errors..is it missing ca-certificates or something?
2024-01-08 02:03:07 algitbot: retry master
2024-01-08 08:34:27 Anyone know if ca-certificates is still installed on the builders (maybe that is the cause of scaleway-cli failing)?
2024-01-08 08:34:55 I think libcurl depends on ca-certificates, while apk-tools depends on ca-certificates-bundle
2024-01-08 08:35:12 abuild used to depend on curl, but not anymore
2024-01-08 19:27:43 celie: I'll check in a sec
2024-01-08 19:27:53 cely: ^
2024-01-08 21:57:03 hmm, it suddenly passed?
2024-01-08 21:57:08 I didn't do a thing
2024-01-09 00:19:32 Yeah, i saw that scaleway-cli somehow stopped failing after httm was merged, and thought you fixed it
2024-01-09 00:22:03 Just mentioning that for the timeline, not implying that they're connected :)
2024-01-09 21:30:39 algitbot: retry master
2024-01-09 21:30:43 algitbot: retry 3.19-stable
2024-01-09 23:53:47 algitbot: retry master
2024-01-10 00:02:34 algitbot: retry master
2024-01-10 01:12:25 I guess the ARM CI went down again?
2024-01-10 01:19:06 algitbot: retry master
2024-01-10 01:20:34 Ah, i guess the ARM CI being down has something to do with the builders this time..
2024-01-10 01:23:00 Last time the builders still went on building iirc, now it seems they are not updating build.a.o anymore, testing/electron failed on aarch64 but it doesn't seem to respond to retry, and armhf is "stuck" on pict-rs and armv7 on laze
2024-01-10 01:26:28 I wonder if the builders can keep on building without updating build.a.o
2024-01-10 02:05:16 celie: perhaps when the mqtt connection gets lost?
2024-01-10 02:05:57 The ARM CI is down as well
2024-01-10 02:09:37 algitbot: retry master
2024-01-10 06:51:29 ptrc: celie cely I've restarted usa-bld-1, it was not responding
2024-01-10 07:12:59 thanks!
2024-01-10 07:25:38 ikke: Thanks
2024-01-10 12:50:55 algitbot: retry master
2024-01-10 12:59:18 algitbot: retry master
2024-01-10 13:02:21 Did something happen to the ARM builders again?
2024-01-10 13:07:58 no idea if relates: there seems to be a network outage in a US datacenter (hn and sr.ht are down)
2024-01-10 17:32:30 fluix: no, that's not related
2024-01-10 17:32:43 ACTION nods
2024-01-10 17:56:47 I think it's because the mqtt-exec openrc script has changed and is now broken
2024-01-11 06:59:32 omni: did you build it locally and it succeeded?
2024-01-11 07:02:01 ptrc: if you meant electron, it succeeded for x86_64 but failed on aarch64 due to running out of disk space
2024-01-11 07:05:49 ah, nice, didn't see it was a MR
2024-01-11 07:07:24 !58725
2024-01-11 08:37:23 omni: thanks for fixing recoll
2024-01-11 08:38:12 np
2024-01-11 09:28:02 algitbot: retry 3.19-stable
2024-01-11 14:28:35 algitbot: retry master
2024-01-11 14:30:28 "Cannot save file into a non-existent directory: {parent}"
2024-01-11 14:31:03 Oh wait, that's substituted for "$builddir"/tests/data
2024-01-11 14:32:08 I wonder why it fails on this arch
2024-01-11 14:34:45 there is no tests/data dir
2024-01-11 14:34:52 tests/ exists
2024-01-11 14:36:47 The test doesn't fail on other archs though
2024-01-11 14:38:30 I don't know why
2024-01-11 15:39:56 cely: seems to only fail when run under abuild, when I manually run the tests with the same commands, it succeeds
2024-01-11 15:41:56 That's weird, so it fails on one arch, and only under abuild
2024-01-11 15:45:06 cely: sounds like a race condition
2024-01-11 15:45:27 the os.path.exists returns true when it fails
2024-01-11 15:45:33 probably to do with -n auto
2024-01-11 15:46:02 There is a teardown in the tests that cleans up the dir
2024-01-11 15:46:16 so if one test finished while the other still runs, it fails
2024-01-11 15:47:09 oof, it's _faster_ without -n auto
2024-01-11 15:48:34 Yea, only 6 tests :)
2024-01-11 15:49:16 yeah, and most time is spent on collecting workers
2024-01-11 15:53:53 Thanks for looking into the problem
2024-01-11 15:54:02 no problem
2024-01-11 20:14:11 ugh, codeberg ddos :/
2024-01-11 20:15:52 algitbot: retry 3.16-stable
2024-01-11 20:16:11 it looked like something was stuck
2024-01-11 20:17:52 algitbot: retry master
2024-01-11 20:18:20 ikke: or you mean we are?
2024-01-11 20:18:40 No, just codeberg
2024-01-11 20:19:35 https://social.anoxinon.de/@Codeberg/111738997246741636
2024-01-11 20:21:03 ikke: I was kidding, but I did a retry since I managed to download the file from them here
2024-01-11 20:21:49 :)
2024-01-11 20:30:09 omni: if you have the file, I could upload it to distfiles, then it should work
2024-01-11 20:34:31 ikke: unfortunately I was stupid and thought "hey, this works!", aborted the download and said to algitbot to retry
2024-01-11 20:34:39 oh ok
2024-01-11 20:34:47 Seems to work now anyway
2024-01-11 20:35:24 I'll tell my botnet to download the file from various locations and we'll have it in no time!
2024-01-11 22:46:00 algitbot: retry 3.18-stable
2024-01-12 00:57:13 Wow, so yesterday it was sr.ht, and now it's codeberg that's gone down?
2024-01-12 00:58:18 I have the tarball for snac, if codeberg takes a long time to come back online
2024-01-12 01:09:32 and i also have an MR from yesterday that upgraded waylock to 0.6.4 and switched source URL to its new home at codeberg, in case that gets merged before codeberg comes back online, the last time i checked, the tarball in github release assets had the same checksum
2024-01-12 13:34:38 algitbot: retry master
2024-01-12 13:45:32 algitbot: retry master
2024-01-12 13:50:16 I think things, like pipewire, need to be rebuilt against the new libcamera
2024-01-12 13:57:59 !58808
2024-01-12 14:03:44 of course, pipewire needs a libxml2 2.12 patch...
2024-01-12 14:04:49 I think?
2024-01-12 14:12:33 no...
2024-01-12 14:13:37 https://gitlab.freedesktop.org/pipewire/pipewire/-/commit/268f4856f852d72a749932630223f928acd1a704
2024-01-12 14:56:58 omni why?
2024-01-12 14:57:25 see above ^^^
2024-01-12 14:57:50 rnalrd: oh, hi!
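
[For the race condition diagnosed above (2024-01-11): a hedged check() sketch of how such a suite could be forced to run serially. It assumes the aport's tests are pytest-based and that -n auto comes in via pytest-xdist; whether that aport actually invokes pytest this way is an assumption.]

    check() {
        # run the suite in a single process so one worker's teardown cannot
        # remove tests/data while another worker is still writing into it;
        # -n 0 overrides an "-n auto" coming from addopts
        python3 -m pytest -n 0
        # alternative if -n is not forced via addopts: disable xdist entirely
        # python3 -m pytest -p no:xdist
    }
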
2024-01-12 14:58:07 !58810 !58809
2024-01-12 14:58:33 it also says why in the added comment in the APKBUILD
2024-01-12 14:58:59 gtg, bbl
2024-01-12 15:02:24 You can always limit generating docs to the arches that support it
2024-01-12 15:03:22 it was faster this way, I did not see a specific "make doc". Improvements are welcome
2024-01-12 21:23:43 codeberg is having issues
2024-01-13 00:06:38 Argh...Codeberg is back up but the mlmmj file is erroring out
2024-01-13 00:12:49 Anyway, as usual, i have the file, if Codeberg doesn't fix the problem soon, you can tell me where to upload it
2024-01-13 01:34:02 To unblock the x86_64 builder: !58830
2024-01-13 01:36:38 I'm not sure why ikke's picat-3.6-orig.tar.gz has "Picat version 3.5#5", but it doesn't seem to match the sha512sum in the 3.6 upgrade APKBUILD, instead it matches the sha512sum of 3.5.5 from before the upgrade
2024-01-13 01:58:26 cely: would you mind not opening new MRs so that we could get below 400 open MRs? :B
2024-01-13 02:04:20 :O
2024-01-13 02:06:12 Well, i wouldn't mind closing !57194 and !57197 (since they've gone stale waiting for a reply from the maintainer of those aports), if that would help the <400 open MRs cause
2024-01-13 02:07:29 it's a righteous cause!
2024-01-13 02:09:42 I'm old enough to remember the count being down at 270 something, or maybe even 260 something
2024-01-13 02:11:44 Anyway, another MR i can think of that should be mergeable is !57855 (unrealircd changed maintainer in !44423, and i adopted dhex and librime in !58561), i emailed @ay about inspircd (together with dhex and librime), and have not received a reply till now, also @SadieCat is the upstream for inspircd
2024-01-13 02:15:04 omg
2024-01-13 02:15:17 399!
2024-01-13 02:16:09 this is my only motivation, keeping the numbers low
2024-01-13 02:17:17 oh no!
2024-01-13 02:17:22 algitbot: retry master
2024-01-13 02:17:58 Don't worry it just timed out while downloading the source file
2024-01-13 02:18:06 phew!
2024-01-13 02:19:24 Just a bit curious, isn't it like...in the wee hours of the morning where you're at?
2024-01-13 02:20:10 algitbot: retry master
2024-01-13 02:20:47 Finally
2024-01-13 02:20:51 is it?
2024-01-13 02:20:57 I usually go by UTC
2024-01-13 02:21:30 It's still 2:20am in UTC
2024-01-13 02:24:18 Hehe, ircII had a new release
2024-01-13 02:24:23 *checks if we have that in repo*
2024-01-13 02:24:39 OMG, we do
2024-01-13 02:24:47 haha, yeah
2024-01-13 02:26:06 at least it's not in main anymore
2024-01-13 02:38:10 I was wondering where I could find a web page with a link to google.com
2024-01-13 02:38:35 and this one even has links to maps.google.com and xkcd.com!
2024-01-13 02:39:59 Which web page?
2024-01-13 02:40:26 http://eterna23.net/
2024-01-13 02:41:14 don't accuse me of being off-topic! it's related to !58833
2024-01-13 02:42:32 I never minded off-topic :)
2024-01-13 02:44:53 "don't contact the web master if you think this site needs an update. this is as-designed and there are no plans to change it."
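
[A hedged sketch of how a checksum mismatch like the picat one above (2024-01-13) can be narrowed down. The file name is taken from the discussion; $SRCDEST is wherever abuild caches distfiles, and the abuild subcommands are the usual checksum helpers, not something the log shows being run.]

    # run from the picat aport directory
    sha512sum "$SRCDEST"/picat-3.6-orig.tar.gz   # what is actually on disk
    grep -A1 '^sha512sums=' APKBUILD             # what the APKBUILD expects
    abuild verify                                # let abuild compare the two
    abuild checksum                              # regenerate sums only once the new tarball is confirmed legitimate
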
2024-01-13 02:47:34 it is pretty good
2024-01-13 03:01:51 Ah, another MR that has gotten the LGTM: !58679
2024-01-13 03:22:18 algitbot: retry master
2024-01-13 04:01:17 algitbot: retry master
2024-01-13 04:30:49 I wonder what's causing the nushell tests to sometimes fail
2024-01-13 04:30:51 algitbot: retry master
2024-01-13 04:31:32 Oh now it's building chromium, so won't get to nushell until much later
2024-01-13 05:20:29 algitbot: retry master
2024-01-13 12:54:51 algitbot: retry master
2024-01-13 13:30:38 algitbot: retry master
2024-01-13 13:32:37 As the other archs have already built mitra 2.7.0, shouldn't the source tarball be on distfiles.a.o?
2024-01-13 13:32:53 cely: they're synced once per day
2024-01-13 13:33:05 Ah, that explains it
2024-01-13 13:34:04 First it was the x86_64 builder that hosted distfiles, so once that builder downloaded it, it would be available for all others
2024-01-14 16:37:43 checksum failure
2024-01-14 16:41:37 yes, !58910
2024-01-15 02:13:47 algitbot: retry 3.19-stable
2024-01-15 20:27:11 algitbot: retry master
2024-01-15 20:27:44 no qt5-qtwebengine
2024-01-15 23:33:17 nodejs seems to only support riscv64 on 21.x
2024-01-16 00:18:24 I think nodejs on riscv64 is erroring out due to this change: https://github.com/nodejs/node/commit/223853264b
2024-01-16 00:22:36 I wonder if we have another aport enabled on riscv64 that bundles v8 with that change included, maybe it has already patched this problem
2024-01-16 00:32:34 It seems community/nodejs-current has been disabled on riscv64 starting with version 21.1.0, and that change was only cherry-picked to 21.x in 21.2.0
2024-01-16 00:36:07 and for armv7 community/discover I'm working on !58653 to enable on armv7 & armhf again
2024-01-16 00:38:26 Comparing https://github.com/nodejs/node/commits/v20.x/deps/v8/src/codegen/riscv/assembler-riscv.h and https://github.com/nodejs/node/commits/v21.x/deps/v8/src/codegen/riscv/assembler-riscv.h, i see that for 21.x, that change was only cherry-picked after upgrading v8 to 11.8.172.13
2024-01-16 00:44:45 Looking through the long list of tags including that change (https://github.com/v8/v8/commit/13192d6e10fa), the earliest version seems to be 11.8.173, so maybe that commit shouldn't have been cherry-picked to nodejs 20.x is still on 11.3.244
2024-01-16 00:45:12 which is still on*
2024-01-16 01:32:38 https://github.com/nodejs/node/issues/50267
2024-01-16 01:34:13 So, it seems 13192d6e10fa from v8 was cherry-picked to fix the build on node v21.x. However, cherry-picking it to v20.x causes the build to fail instead.
2024-01-16 03:25:42 cely: /who cely
2024-01-16 03:26:42 that was me trying to figure out if you had a github or if you wanted someone else to comment on that issue that cherry-picking it back to 20.x broke things
2024-01-16 03:29:12 iggy: That's on my todolist (could get to it within an hour or two), but if someone else has some free time to do it now, please feel free to do so
2024-01-16 03:29:44 on it
2024-01-16 03:29:50 Thanks
2024-01-16 03:37:23 commented, I'll try to keep on top of it (my interest is due to having a couple riscv64 boards.... #alpine-riscv64 4lyfe)
2024-01-16 03:38:22 Ok. Btw, i have also re-enabled nodejs-current on riscv64 in !58995
2024-01-16 03:39:02 Tested that in CI, and it passes
2024-01-16 07:59:17 Some activity on the nodejs github regarding the riscv64 build failure: https://github.com/nodejs/unofficial-builds/issues/106
2024-01-16 11:26:27 cely: so is it a matter of reverting that comment?
2024-01-16 11:26:31 commit*
2024-01-16 13:31:59 ikke: Yes, it is
2024-01-16 21:37:39 algitbot: retry 3.19-stable
2024-01-16 23:57:34 I feared that one of these would fail...
2024-01-17 00:23:44 omni: i gave up working on lzdoom during the mpg123 1.32.1 rebuild, and looking through the commit log, besides being disabled on armhf, i see it hasn't seen any other activity between then and this move to community
2024-01-17 00:24:58 If i recall correctly, there was some other error even after including cstdint
2024-01-17 00:30:26 uhm... thanks
2024-01-17 00:30:36 so maybe just disable it for now?
2024-01-17 00:30:53 Yeah, on ARM
2024-01-17 00:31:30 or has it not hit the other archs yet?
2024-01-17 00:32:27 Yeah, it hasn't, so probably disable everywhere
2024-01-17 00:33:18 and there is a new version anyway: https://zdoom.org/files/lzdoom/src/
2024-01-17 00:34:16 (3.88b, though in actuality, not that new as the date says May 2022)
2024-01-17 00:35:15 yes, that was the first thing I did in !59051
2024-01-17 00:38:36 So, lzdoom never ran on the CI because the moves were spread across multiple MRs?
2024-01-17 00:39:22 yes, and I was just hoping that it would build and fixing it if it didn't would be easy
2024-01-17 00:39:58 but then apparently I'm less focused/more tired than I thought I'd be
2024-01-17 00:40:11 Let me find my old mpg123 rebuild MR and see what i did then
2024-01-17 00:40:53 ah, right, that's where you said you did it
2024-01-17 00:41:08 I was searching for lzdoom
2024-01-17 00:41:33 Hmm, i included cstdint in libraries/music_common/fileio.h, and stdexcept in libraries/zmusic/zmusic/zmusic.h, and then gave up after that
2024-01-17 00:42:16 !52244 ?
2024-01-17 00:42:29 Yes
2024-01-17 01:10:39 something somewhere around here? https://github.com/drfrag666/gzdoom/blob/3.88b/libraries/lzma/C/Threads.c#L208
2024-01-17 01:13:30 I wonder if this could help https://github.com/ZDoom/gzdoom/commit/4497d7fdaa9b0b4520416885c75742bf857bed66
2024-01-17 01:15:45 of course it's not as easy as just applying that as a patch...
2024-01-17 01:17:53 After including cstdint and cstdexcept, i think i ran into some errors in some VM (but this is completely from memory, not even sure if lzdoom has a VM)
2024-01-17 01:19:54 I was guessing around here https://gitlab.alpinelinux.org/alpine/aports/-/jobs/1245931#L915
2024-01-17 01:23:22 Yes, i think that's also what main/7zip/7-zip-musl.patch is doing
2024-01-17 01:26:10 oh, maybe there's something there then?
2024-01-17 01:28:53 but why both comment that out and include sched.h?
2024-01-17 01:29:42 oh, I see
2024-01-17 01:43:56 I think that actually was a small step forward
2024-01-17 01:44:52 or maybe not, I looked at aarch64 log after having looked at armv7 logs
2024-01-17 01:45:13 or, hmm...
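
[For the bundled lzma failure being poked at above (Threads.c and the 7-zip-musl.patch comparison), a sketch of the usual musl-side workaround that the 02:53 message further down also lands on. Whether lzdoom's APKBUILD actually drives cmake with Ninja like this, and whether the define alone is enough, are assumptions, not something verified here.]

    build() {
        # hedged sketch, not the aport's actual build(): expose the GNU
        # extensions (cpu_set_t, pthread affinity) that the bundled
        # libraries/lzma/C/Threads.c expects by injecting the define
        # through the cmake C/C++ flags
        cmake -B build -G Ninja \
            -DCMAKE_BUILD_TYPE=None \
            -DCMAKE_C_FLAGS="$CFLAGS -D_GNU_SOURCE" \
            -DCMAKE_CXX_FLAGS="$CXXFLAGS -D_GNU_SOURCE"
        cmake --build build
    }
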
2024-01-17 01:49:24 it is never too late to give up, but I think it is about time
2024-01-17 01:53:48 !59053 upstream finally responded with a pull request, and i have backported that
2024-01-17 02:53:32 For lzdoom, apparently you need -D_GNU_SOURCE in CMAKE_C_FLAGS of libraries/lzma/CMakeLists.txt
2024-01-17 02:53:44 and then the fun begins, with lots of errors in src/scripting
2024-01-17 02:55:01 That's probably the point where i gave up while doing the mpg123 rebuild
2024-01-17 02:56:50 On a better note, nodejs riscv64 CI is halfway through, so only about another hour to go before it passes, assuming the OOM killer doesn't get to it first
2024-01-17 02:58:39 Building lzdoom with Clang did not succeed, and this time, this is where i'm going to give up
2024-01-17 02:59:10 I wonder how it successfully built in the first place and got into aports
2024-01-17 03:37:14 I tried looking for a newer fork of lzdoom on Github, and found https://github.com/AmberELEC/lzdoom
2024-01-17 03:37:32 It has the GCC 13 fixes, but still errors out in src/scripting
2024-01-17 03:37:52 Anyway, not going to spend any more time on a game i do not play
2024-01-17 03:51:41 yeah
2024-01-17 03:55:39 :)
2024-01-17 03:57:35 Hopefully, no other aports fail on riscv64
2024-01-17 03:59:53 but it will probably take a while before the backlog is cleared (there are 2 NodeJSes to build, both have been tested in CI though, so there should be no problems there)
2024-01-17 05:48:36 \o/
2024-01-17 05:59:09 :)
2024-01-17 13:29:50 #15678
2024-01-17 13:31:01 Hehe, i wonder how outdated it was...it seems the new version was just released yesterday
2024-01-17 13:31:45 or rather, is
2024-01-17 14:02:40 cely: the outdated release is almost a week old!
2024-01-17 14:10:54 !59088
2024-01-17 14:11:04 for riscv64
2024-01-17 14:29:25 ^ !59088
2024-01-17 14:30:54 Let's see if it can build something else in the meantime
2024-01-17 14:31:02 algitbot: retry master
2024-01-17 15:11:39 I probably won't be around to see riscv64 upload testing/, so if any more of the Perl stuff fails due to not having checkdepends, i'll have a look at it tomorrow
2024-01-17 15:24:43 ^ ruff may work on riscv64 too, but the CI exceeded the time limit for Hugo
2024-01-17 15:37:18 algitbot: retry master
2024-01-17 16:51:04 \o/
2024-01-17 21:10:46 I think we just need to retry it, but I'll wait a bit since x86_64 is quite busy atm
2024-01-17 21:41:26 algitbot: retry 3.18-stable
2024-01-17 22:21:26 algitbot: retry master
2024-01-18 01:24:34 Hmmm
2024-01-18 01:25:42 Let's wait for armhf to see if it fails in the same fashion before retrying
2024-01-18 01:28:02 Nope, it didn't
2024-01-18 01:28:12 algitbot: retry master
2024-01-18 01:32:07 and Chromium has also finished building for aarch64 on 3.19-stable, so if the Elixir test was failing just now as a result of high load, hopefully it doesn't fail this time
2024-01-18 01:34:36 Ok, it succeeded on armv7 :)
2024-01-18 01:34:54 and now aarch64 too, so all good
2024-01-18 01:37:38 algitbot: retry 3.19-stable
2024-01-18 01:38:25 (was wondering why 3.19 x86 was stuck at "pulling git")
2024-01-18 08:47:26 algitbot: retry master
2024-01-19 00:33:03 Hmm
2024-01-19 00:33:47 https://pkgs.alpinelinux.org/package/edge/testing/x86/perl-strip-nondeterminism
2024-01-19 00:34:14 "Description: (perl module)"
2024-01-19 00:37:23 Ah, that's because it uses the somewhat inverted logic of having perl-strip-nondeterminism be the "parent" package, and strip-nondeterminism be the sub-package
2024-01-19 00:38:57 If it were me, i'd set pkgname to strip-nondeterminism, and put perl-$pkgname as the subpackage, amove'ing usr/share/perl5 instead
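
[A fragment sketch of the inversion just described; only the pkgname/subpackages/amove arrangement is the point, the rest of the APKBUILD (version, sources, build) is omitted and the splitter name is illustrative.]

    pkgname=strip-nondeterminism
    subpackages="$pkgname-doc perl-$pkgname:_perl"

    _perl() {
        pkgdesc="$pkgname (perl module)"
        amove usr/share/perl5
    }
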
2024-01-19 00:40:08 hello =)
2024-01-19 00:40:19 Of course the doc package would then become strip-nondeterminism-doc, instead of perl-strip-nondeterminism-doc, but it does contain a man page for strip-nondeterminism(1), so could be considered appropriate
2024-01-19 00:40:27 Oh hi omni, sometimes i just like talking to myself
2024-01-19 00:40:52 me too
2024-01-19 00:41:19 we're down at under 400 open MRs again \o/
2024-01-19 00:41:39 Congratulations
2024-01-19 00:44:44 Hmm, did riscv64 just finish before x86_64?
2024-01-19 00:46:31 Anyway, it seems !58289 didn't get gpep517'ed
2024-01-19 00:54:14 it would be really cool with a multicore rv64 build server
2024-01-19 01:03:01 Oh no
2024-01-19 01:03:42 It seems build-edge-x86_64 isn't updating build.a.o anymore, i see it building py3-migen, but the build log has already been uploaded
2024-01-19 01:05:53 Same thing for build-3-19-x86_64, it shows gnome-shell being built, but the build log for that is already available
2024-01-19 01:09:34 Oh ok, build-3-19-x86_64 has now moved on, but build-edge-x86_64 is still at py3-migen
2024-01-19 01:12:39 the best thing about gpep packaging is that you don't need to think about the license file
2024-01-19 01:12:53 Hmm, i wonder why some of the builders have just finished building py3-litex-hub-modules, even though it was merged about 2 hours ago, and it's building them together with the newly merged leptosfmt and pilot-link
2024-01-19 01:15:12 were there large community packages in the way?
2024-01-19 01:15:53 Wait
2024-01-19 01:15:58 Am i seeing this right?
2024-01-19 01:16:03 Have they started building it yet again?
2024-01-19 01:17:29 I don't think i'm seeing things, i just refreshed the build log for py3-litex-hub-modules on aarch64 and the build started time just went form 01:10:16 +0000 to 01:15:39+0000
2024-01-19 01:18:22 ikke: ?
2024-01-19 01:18:29 s/form/from/
2024-01-19 01:19:43 Looking at the commit that added it, i see it has a log.txt in the directory
2024-01-19 01:19:58 Maybe that's causing the issue
2024-01-19 01:21:05 (A testing/py3-litex-hub-modules/log.txt was committed into the aports tree, and maybe that file is somehow causing the builders to keep rebuilding py3-litex-hub-modules)
2024-01-19 01:21:48 Ok, i've opened the build log for that on x86, and now i see a build started time of 01:14:17
2024-01-19 01:22:02 ouch, ffs, how did I miss that =(
2024-01-19 01:22:12 now it's changed to 01:20:28
2024-01-19 01:23:05 So, maybe remove the log.txt and see if it stops getting rebuilt
2024-01-19 01:23:26 so I merged a 3.7M log file, great.
2024-01-19 01:23:49 Thankfully it was caught early
2024-01-19 01:24:57 not before it was merged =(
2024-01-19 01:25:43 Well, there's probably no one else awake to notice it
2024-01-19 01:26:12 So, we'll just keep it between ourselves
2024-01-19 01:26:39 and we might just be able to pretend nothing happened
2024-01-19 01:28:30 riscv64 is now rebuilding py3-litex-hub-modules again
2024-01-19 01:31:07 Hopefully that's the last time py3-litex-hub-modules is rebuilt
2024-01-19 01:31:39 odd that it would cause that
2024-01-19 01:32:56 Well by causing that, it actually brought the issue to attention, so i'd say it's a good thing
2024-01-19 01:34:36 :|
2024-01-19 01:34:49 Has it started building py3-litex-hub-modules all over again?
2024-01-19 01:35:17 looks like it
2024-01-19 01:35:23 maybe this is the last time
2024-01-19 01:35:27 *sigh*
2024-01-19 01:35:35 algitbot: retry master
2024-01-19 01:35:40 just because
2024-01-19 01:36:01 thanks for finding that log.txt file either way
2024-01-19 01:36:14 You're welcome
2024-01-19 01:36:40 did that retry master actually make it not rebuild litex again?
2024-01-19 01:37:31 algitbot: retry master
2024-01-19 01:37:59 From what i see, it was rebuilding it every time you merged a new commit
2024-01-19 01:38:28 Not sure if it did that on "retry master" as well
2024-01-19 01:49:37 maybe we should somehow have a limit for how large a commit can be in size
2024-01-19 01:51:12 ffs
2024-01-19 01:53:05 Stepped away for a while doing an apk upgrade
2024-01-19 01:53:16 Is it still rebuilding py3-litex-hub-modules over and over again?
2024-01-19 01:53:25 yes
2024-01-19 01:53:31 :|
2024-01-19 01:53:40 for every new merge, as you said
2024-01-19 01:54:24 Maybe the log.txt needs to be removed from the builders manually...or this is just a hunch, maybe pkgrel needs to be bumped
2024-01-19 01:54:26 its APKBUILD is a bit... special...
2024-01-19 01:55:43 Wait
2024-01-19 02:00:09 Does `apk fetch py3-litex-hub-modules` get you a "BAD archive" error?
2024-01-19 02:02:49 algitbot: retry master
2024-01-19 02:02:55 algitbot: retry master
2024-01-19 02:03:07 Nothing there
2024-01-19 02:04:05 I get:
2024-01-19 02:04:08 py3-litex-hub-modules: unable to select package (or its dependencies)
2024-01-19 02:05:37 Did you `apk update` first?
2024-01-19 02:06:07 of course
2024-01-19 02:06:13 but I can try with some other mirror
2024-01-19 02:06:13 Things have just gotten weirder, `apk fetch py3-litex-hub-modules-pyc` completes, but testing it with tar errors out
2024-01-19 02:07:15 do you have the focus to take a really good look at testing/py3-litex-hub-modules/APKBUILD ?
2024-01-19 02:07:36 I don't
2024-01-19 02:08:35 with another mirror I could apk fetch ist
2024-01-19 02:08:41 it*
2024-01-19 02:09:20 The main package or the -pyc subpackage?
2024-01-19 02:10:01 the main package
2024-01-19 02:10:16 and there it goes again...
2024-01-19 02:10:31 I'm still getting "BAD archive" on the main package
2024-01-19 02:12:10 I got a py3-litex-hub-modules-2023.12-r0.apk
2024-01-19 02:12:21 containing:
2024-01-19 02:12:23 .SIGN.RSA.alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub
2024-01-19 02:12:23 .PKGINFO
2024-01-19 02:12:24 .dummy
2024-01-19 02:13:21 Yes, that's what empty packages (like this one and build-base) contain
2024-01-19 02:14:08 yeah, so nothing odd with that
2024-01-19 02:14:28 but you got "BAD archive" as output from `apk fetch`?
2024-01-19 02:14:40 Yes
2024-01-19 02:14:49 did you try other mirrors?
2024-01-19 02:14:58 like fastly
2024-01-19 02:15:20 Try apk fetch'ing the -pyc and then extracting it with tar
2024-01-19 02:15:25 I get "gzip: corrupted data"
2024-01-19 02:15:33 even though the apk fetch completes
2024-01-19 02:16:25 no, it's fine
2024-01-19 02:16:54 md5: 4acd703e67decabe7672cd30768d1997 py3-litex-hub-modules-pyc-2023.12-r0.apk
2024-01-19 02:17:38 should we bump pkgrel and see if it goes away?
2024-01-19 02:17:43 the rebuilds I mean
2024-01-19 02:21:08 Yes, it very likely needs a pkgrel bump to fix the "BAD archive" thing, but i'm not sure if that will fix the rebuilding
2024-01-19 02:22:11 and i may have found out what happened to -pyc
2024-01-19 02:22:28 Downloading from different mirrors yielded .apk files of different sizes
2024-01-19 02:23:18 and it seems apk fetch was truncating the .apk it downloaded (probably to the size it has on index), and that corrupted it
2024-01-19 02:24:21 I /usr/bin/truncate'd what i downloaded and it matched the sum of what i apk fetch'ed (also corrupted it as expected)
2024-01-19 02:25:11 and yes, one of the mirrors i downloaded from gave me the same md5sum as yours
2024-01-19 02:26:01 So, the .apk's are corrupted as a result of them being re-uploaded without a corresponding pkgrel bump to update the index
2024-01-19 02:26:26 The question now is, will a pkgrel bump also prevent further rebuilds on new merges
2024-01-19 02:30:35 we could try
2024-01-19 02:31:12 Looking through the APKBUILD
2024-01-19 02:31:35 i see it is also using `--wheel-dir dist` instead of `--wheel-dir .dist`
2024-01-19 02:32:29 is that really an issue?
2024-01-19 02:32:38 if it is consistent
2024-01-19 02:32:53 Maybe not
2024-01-19 02:38:31 Maybe i would make 1 small modification, move the depends= from the for loop in package() into the global for loop
2024-01-19 02:38:42 but you've merged it now
2024-01-19 02:39:44 Ok, now for something to test it on..
2024-01-19 02:40:09 AAAAAAAAAAAAAAAA
2024-01-19 02:40:09 haha
2024-01-19 02:40:17 now you try
2024-01-19 02:42:29 there are a few more MRs that could be merged to test whatever theory with py3-litex-hub-modules
2024-01-19 02:46:50 cely: if you have any ideas, open an MR and we'll try it
2024-01-19 02:47:12 Yes, let me git pull first :)
2024-01-19 02:48:20 ah, you need to fetch 3.7M+ of our git repo...
2024-01-19 02:51:30 I'll start with moving depends setting to the global scope, in case that's what the builder is choking on (having more depends in the final .apk than what it can read from depends=)
2024-01-19 03:02:06 !59188 (waiting for CI)
2024-01-19 03:09:11 algitbot: retry master
2024-01-19 03:09:17 (just to be sure)
2024-01-19 03:09:55 Ok, now to test it
2024-01-19 03:11:02 1/1 aports built
2024-01-19 03:11:05 I guess that fixed it
2024-01-19 03:11:20 yay! hi5!
2024-01-19 03:11:38 May want to merge another thing after this just to be sure
2024-01-19 03:12:18 So, my hypothesis is it was either the depends= or the pkgver=
2024-01-19 03:12:33 Most likely the latter
2024-01-19 03:13:32 Ok, seems to be fixed
2024-01-19 03:13:36 yeah, that was an odd one
2024-01-19 03:23:23 I hope to try that at one point
2024-01-19 03:23:42 s,one,some,
2024-01-19 03:23:42 ?
2024-01-19 03:25:40 maybe that's enough for now...
2024-01-19 03:43:50 Try what? The Windows 95 theme? :)
2024-01-19 03:44:58 yes
2024-01-19 03:45:09 some nostalgia right there
2024-01-19 03:46:48 I remember you said you came to Alpine from FreeBSD
2024-01-19 03:47:04 Windows 95 some time before that?
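
[A sketch of the depends-scope change discussed above (the !59188 idea), together with the pkgrel bump that keeps the index in step with a re-uploaded .apk. The loop variable and module list are placeholders, not the real py3-litex-hub-modules contents, and the mismatch rationale is the hypothesis stated in the conversation rather than established fact.]

    # before (simplified): depends only assembled while package() runs, so the
    # value abuild reads when parsing the APKBUILD and the metadata written
    # into the built .apk could disagree
    package() {
        for _m in $_modules; do          # $_modules is a placeholder name
            depends="$depends py3-$_m"
        done
        mkdir -p "$pkgdir"
    }

    # after: assembled at the top level, visible whenever abuild sources the file
    depends=""
    for _m in $_modules; do
        depends="$depends py3-$_m"
    done

    package() {
        mkdir -p "$pkgdir"
    }
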
2024-01-19 03:55:37 yeah, or at least had to occasionally
2024-01-19 03:56:05 think I've used Windows 98 more
2024-01-19 03:56:18 and DOS
2024-01-19 03:56:23 and some AmigaOS
2024-01-19 03:58:22 I started out with MS stuff right away, would be interesting to see how AmigaOS felt like
2024-01-19 04:20:48 I hope riscv64 isn't choking on the Windows 95 theme
2024-01-19 04:24:07 Oh ok, it just took 10 minutes to build for some reason
2024-01-19 07:04:06 algitbot: hi
2024-01-19 07:04:14 google algitbot
2024-01-19 07:04:42 faq upgrade
2024-01-19 07:07:21 algitbot: gitlab.a.o
2024-01-19 07:08:30 algitbot: pr #1
2024-01-19 07:09:20 algitbot: ticket 1
2024-01-19 07:15:50 algitbot: retry 3.18-stable
2024-01-19 16:16:11 algitbot: retry 3.19-stable
2024-01-19 16:16:29 algitbot: retry 3.16-stable
2024-01-20 06:43:20 algitbot: retry 3.16-stable
2024-01-20 07:02:04 Hmm, the build log returns 404
2024-01-20 07:02:29 algitbot: retry 3.16-stable
2024-01-21 12:31:32 algitbot: retry 3.16-stable
2024-01-21 12:34:30 the "logs" from the 3.16 builders aren't helpful
2024-01-21 12:34:58 ikke: any idea?
2024-01-21 14:03:59 I wonder what's going on with gambit
2024-01-21 14:11:18 cely: it needs coreutils
2024-01-21 14:12:50 Ah, thanks
2024-01-21 14:34:55 It passed in CI though, and the "stat: unrecognized option: s" seems to be from `make doc` in build(), whereas the builder is failing later on at rootpkg
2024-01-21 14:35:46 and it passed on the other archs, so not sure what's going on
2024-01-21 14:36:03 Let's see if it passes this time around, and also if it passes on riscv64
2024-01-21 15:17:11 Now it has passed
2024-01-21 15:17:16 algitbot: retry master
2024-01-21 15:17:39 Hopefully it will for riscv64 after a retry
2024-01-21 15:18:09 will pass*
2024-01-21 15:48:32 algitbot: retry master
2024-01-22 01:28:39 ptrc: Just curious if you did !59360 through some automated script, or was just working with an old revision of aports so you didn't notice that the upgrade was already merged?
2024-01-22 01:29:05 ah, you can notice: 'The source branch is 85 commits behind the target branch.'
2024-01-22 01:29:10 so yeah, the latter :p
2024-01-22 01:29:46 i looked for open merge requests, didn't find any, concluded that the update hasn't been done yet 🙃
2024-01-22 01:30:12 but also, thanks!
2024-01-22 01:30:32 as in, for pointing it out
2024-01-22 01:30:34 but my commit was merged 22 hours ago, while yours was 3 hours ago
2024-01-22 01:31:44 yeah, when i made it, i hadn't fetched aports since the previous day
2024-01-22 01:32:04 Alright
2024-01-22 01:36:55 It seems clicking on the "85 commits behind" takes you to the full commit log, and doesn't just show you the 85 commits in question
2024-01-22 01:38:08 Anyway, just counted and my commit from 22 hours ago was number 77, so nearing the end of the list :)
2024-01-22 01:38:46 or rather, is number 77 (at least for now)
2024-01-22 01:48:21 algitbot: ticket 15705
2024-01-22 01:48:44 algitbot: ticket 15704
2024-01-22 01:50:04 Hmm, ok, so #15705 is still private, which is probably why algitbot doesn't return anything
2024-01-22 01:58:23 celie: it was reported as a security vulnerability
2024-01-22 01:58:27 and marked as confidential
2024-01-22 02:02:17 Yes, i figured that was the case, mentioning it as it seems edge and the 4 stable versions have been patched
2024-01-22 02:21:44 whoops
2024-01-22 02:22:12 wait why did it trigger only now
2024-01-22 02:22:16 ah no
2024-01-22 02:22:23 i missed it previously
2024-01-22 02:26:59 algitbot: retry master
2024-01-22 02:27:00 boop
2024-01-22 02:27:18 algitbot: retry master
2024-01-22 02:29:11 algitbot: retry master
2024-01-22 02:31:04 algitbot: retry master
2024-01-22 02:40:34 algitbot: retry master
2024-01-22 03:00:31 Hmm
2024-01-22 03:01:25 Test timed out after 10m
2024-01-22 03:01:42 One last try
2024-01-22 03:01:43 algitbot: retry master
2024-01-22 05:42:12 celie: The specific CVE has not been published yet
2024-01-22 06:40:22 ikke: Oh alright, i guess the procedure is to make the issue public after all details are published, not after all Alpine versions are patched
2024-01-22 06:43:26 cely: this is quite an unusual situation, normally we upgrade after a CVE has been published, but in this case the person who found the vulnerability notified us
2024-01-22 06:51:12 Ok, i understand
2024-01-22 11:15:22 algitbot: retry 3.16-stable
2024-01-22 16:10:44 ugh. now what?
2024-01-22 16:13:06 either fix it or disable it
2024-01-22 16:13:14 or revert
2024-01-22 16:16:59 and it's not possible to re-open a merged MR?
2024-01-22 16:17:22 !58984
2024-01-22 16:17:35 No, it's not
2024-01-22 16:18:23 I wonder if we should open an issue to keep track of all the Rust aports that are blocked by 1.72.1, so we know to upgrade them when newer Rust arrives
2024-01-22 16:19:18 Feel free
2024-01-22 16:21:18 cely: you like doing stuff, how about maintaining rust? :B
2024-01-22 16:24:08 I just admitted in !59087 that i don't know Rust, so almost certainly not doable
2024-01-22 16:27:28 oh, you need to know rust to maintain it?
2024-01-22 16:27:30 dang
2024-01-22 16:28:25 It would greatly increase the chances of successfully fixing any issues that arise
2024-01-22 16:28:32 I'll ask in #alpine-devel if anyone knows rust and the first one to respond I'll make maintainer of the aport
2024-01-22 16:28:51 pj is currently maintaining rust, not?
2024-01-22 16:29:06 ikke: New development, not anymore since this morning
2024-01-22 16:29:10 yes, but stepping down
2024-01-22 16:29:11 oh
2024-01-22 16:29:23 unfortunately
2024-01-22 16:31:00 Take Python for an example, it seems some termios tests were added, and now that's blocking an upgrade, either within the 3.11 branch, or to the new 3.12
2024-01-22 16:32:37 maybe Jirutka wants to maintain rust
2024-01-22 16:34:10 But Jirutka is busy, and already maintains Ruby, Nim, Crystal, Jim Tcl, and so on
2024-01-22 16:35:17 right
2024-01-22 19:20:19 thread 'config::test_gcs_service_account' panicked at 'called `Result::unwrap()` on an `Err` value: SCCACHE_S3_NO_CREDENTIALS must be 'true', '1', 'false', or '0'.', src/config.rs:1227:37
2024-01-22 19:25:31 the heck...
2024-01-22 19:25:35 I'll just
2024-01-22 19:25:40 algitbot: retry master
2024-01-22 19:26:19 it went fine here https://gitlab.alpinelinux.org/alpine/aports/-/jobs/1252239#L1306
2024-01-22 19:31:07 flaky tests gonna flake
2024-01-22 19:54:11 omni: \o/
2024-01-22 20:09:11 \o/
2024-01-22 23:15:07 maybe I should've adopted xplr, I haven't really used it much but it is pretty neat
2024-01-22 23:15:18 I can always resurrect it
2024-01-24 01:17:09 algitbot: hi
2024-01-24 01:58:52 I wonder if it would be better to add Python 3.12 to testing/ so we could test the Python aports, rather than bumping all of them in the same MR
2024-01-24 02:04:10 curious if it will be possible
2024-01-24 02:05:51 Did you find out why unittest seems to be removed?
2024-01-24 02:06:15 not yet, looks better to put it all into #15341
2024-01-24 02:07:09 distutils because of pep-656 or similar
2024-01-24 02:08:00 It looks like distutils is officially gone, so that will probably require adding `py3-setuptools` to the dependencies of aports
2024-01-24 02:08:19 aports that still use it
2024-01-24 04:53:35 ikke: Someone else reported the test failure in colord (https://github.com/hughsie/colord/issues/163), and i have added what i could find. I will re-open my MR when upstream has prepared a patch for the issue.
2024-01-24 07:52:33 cely: oh-no
2024-01-24 07:53:13 algitbot: retry master
2024-01-24 07:55:41 Phew
2024-01-24 07:56:39 now let's see if you are as lucky this time
2024-01-24 08:00:28 well, there are still some builders that can fail
2024-01-24 08:02:05 not least chromium on aarch64 since I merged that even though it hadn't been successfully built in gitlab CI, we'll see if I have any luck there
2024-01-24 08:09:28 I wish you good luck
2024-01-24 08:16:16 thank you
2024-01-24 10:18:14 phew!
2024-01-24 22:10:11 ah, i did a partial backport, whoops
2024-01-24 22:11:17 happens :)
2024-01-24 22:33:17 oh-no
2024-01-24 22:33:38 ah, you're on it?
2024-01-25 10:30:54 algitbot: retry master
2024-01-26 13:18:04 interesting error: thread panicked while processing panic. aborting.
2024-01-26 13:20:13 algitbot: retry master
2024-01-26 13:48:48 algitbot: retru 3.19-stable
2024-01-26 13:48:48 algitbot: retry master
2024-01-26 13:49:02 algitbot: retry 3.19-stable
2024-01-26 13:49:15 thanks!
:)
2024-01-26 13:49:20 You're welcome
2024-01-26 13:49:40 seems like kyua is flaky when it is under load
2024-01-26 14:24:03 cely: I don't say this often enough, but thank you for helping keeping things up to date
2024-01-26 14:25:46 You're welcome :)
2024-01-26 22:16:37 wat
2024-01-26 22:16:42 algitbot: retry master
2024-01-26 22:17:24 \(º«»°)/
2024-01-26 22:17:49 ¯\_(ツ)_/¯
2024-01-26 22:20:53 algitbot: retry master
2024-01-26 22:23:19 fail me thrice, shame on me
2024-01-26 23:31:16 algitbot: retry master
2024-01-27 09:57:31 oups
2024-01-27 10:04:47 that was part of that release
2024-01-28 01:50:23 how many browsers do we have really?
2024-01-28 01:51:42 Do you want to include those that don't run JS? ;)
2024-01-28 01:54:45 yes
2024-01-28 01:54:51 what browsers don't we have
2024-01-28 01:54:52 ?
2024-01-28 01:55:04 gotta collect 'em all
2024-01-28 01:55:25 kristall doesn't seem to be much of a js browser
2024-01-28 01:59:00 Oh, you want to include Gemini browsers as well?
2024-01-28 02:00:29 Hmm, now i see the description also includes "http(s)"
2024-01-28 02:05:50 well, I guess I just meant that I wish we had more resources to build all the browsers
2024-01-28 02:19:15 I packaged kristall mostly for gemini, but for gemini lagrange is a nicer graphical browser
2024-01-28 02:26:41 First aport upgrade with the new position of _pkgreal you suggested just landed: !59761
2024-01-28 02:40:02 yes, nice =)
2024-01-28 09:24:56 algitbot: retry master
2024-01-28 09:28:21 my bad
2024-01-28 09:38:44 one commit too many..
2024-01-28 09:42:50 It's not a bad change :)
2024-01-28 09:43:36 or did you mean to squash it?
2024-01-28 09:43:52 anyway, no big deal
2024-01-28 09:44:50 no i wanted to push the arch commit in the same push
2024-01-28 09:44:55 but my breakfast was ready
2024-01-28 09:45:05 and somebody was getting...
2024-01-28 09:45:33 so i missed the double semi, which is actually allowed by posix :)
2024-01-28 11:01:55 ptrc: ^ firefox-123.0/netwerk/dns/PlatformDNSUnix.cpp:36:15: error: use of undeclared identifier 'res_nquery'
2024-01-28 11:35:51 oh my
2024-01-29 01:09:25 right, reminder to self to run riscv64 pipelines for new aports
2024-01-29 02:52:37 Wow, lots of new aports today
2024-01-31 04:02:08 ^ googling for the error message yields https://github.com/ipfs/kubo/issues/8398 which in turn links to https://github.com/marten-seemann/tcp/pull/1
2024-01-31 04:10:51 From the discussion there, it looks like this affects kubo as well, and as community/kubo and community/k3s have the same maintainer, and kubo is not enabled for riscv64, perhaps the way forward is to disable k3s for riscv64 as well
2024-01-31 09:04:08 algitbot: retry master
2024-01-31 09:17:08 cely: but what is it that wanted marten-seemann/tcp?
2024-01-31 09:18:08 I'm not really sure
2024-01-31 09:22:11 algitbot: retry master
2024-01-31 09:23:22 Are you thinking about removing the code in k3s that wants marten-seemann/tcp?
2024-01-31 09:37:38 no..
2024-01-31 09:37:56 I would like to report it upstream but don't know where and what to report
2024-01-31 09:48:16 ok, to k3s I guess
2024-01-31 09:48:24 the commit that added it https://github.com/k3s-io/k3s/commit/37e9b87f62343962d82c21e6503e9916e8cef7ad
2024-01-31 10:08:12 algitbot: retry 3.19-stable
2024-01-31 22:26:27 algitbot: retry master
2024-01-31 23:15:44 algitbot: retry master
2024-01-31 23:59:04 is there a way to just try one of the arm builders?
2024-01-31 23:59:22 if they're competing for resources somehow, maybe they aren't
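
[Relating to the k3s/riscv64 failure discussed above (2024-01-31): a fragment sketch of the "disable it there" option. It assumes the aport's arch line currently allows riscv64 at all; the "all" value is a placeholder, only the exclusion is the point.]

    # community/k3s/APKBUILD (hedged sketch, not the real file)
    arch="all !riscv64"   # the vendored marten-seemann/tcp code does not build here
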