2023-07-01 01:59:22 ikke: not sure why armv7 fails with no logs 2023-07-01 03:11:14 and it also poofed :D 2023-07-01 03:20:35 seems to not reconnect 2023-07-01 06:24:00 had to reboot it 2023-07-01 06:24:51 weird 2023-07-01 06:30:29 a lot of 0 bytes in /var/log/messages 2023-07-01 06:32:13 memory usage was quite high 2023-07-01 06:32:19 between 192 and 256G 2023-07-01 06:32:42 at the end just 128G though 2023-07-01 06:34:21 wait.. 2023-07-01 06:34:25 again unresponsive :/ 2023-07-01 06:39:09 this sounds like a machine bug 2023-07-01 06:39:13 and/or kernel 2023-07-01 06:39:33 and the fact it goes up/down in a loop makes me think it's either rebooting itself or stuck on something 2023-07-01 06:39:37 the former we had before right 2023-07-01 06:40:29 it doesn't reboot on its own 2023-07-01 06:41:28 https://tpaste.us/D7ZX 2023-07-01 06:42:01 bleh 2023-07-01 06:42:03 which kernel is it 2023-07-01 06:42:27 6.1.30 2023-07-01 06:44:22 Can try edge perhaps 2023-07-01 06:45:16 3.18 is the same kernel 2023-07-01 06:45:28 both .36 2023-07-01 06:45:48 i can't think of anything sadly 2023-07-01 06:58:43 meant linux-edge 2023-07-01 07:00:11 https://tpaste.us/BJnk 2023-07-01 07:13:11 ah 2023-07-01 07:14:13 can't find much relevant 2023-07-01 07:16:34 even with lxc stopped :. 
2023-07-01 07:17:00 I still have access to the serial console, but no network 2023-07-01 07:26:22 yeah this is weird :( 2023-07-01 07:26:45 tried to disable services, see if that helps 2023-07-01 07:28:52 upgraded kernel to .36 2023-07-01 07:31:12 hmm 2023-07-01 07:31:20 during upgrade, it hangs again :/ 2023-07-01 07:33:03 I have to continue later :( 2023-07-01 07:36:49 is ok 2023-07-01 11:59:24 running on .36 now, see how stable it is 2023-07-01 11:59:30 (builders are not started yet) 2023-07-01 12:05:33 we'll see if it dies instantly 2023-07-01 12:05:51 It didn't so far 2023-07-01 12:05:55 started the containers again 2023-07-01 12:07:53 armv7 edge aports is corrupt 2023-07-01 12:08:03 HEAD is empty 2023-07-01 12:08:41 git fsck it i guess 2023-07-01 12:09:01 or the classic of rm -r .git/objects and resync 2023-07-01 12:09:10 or just the zero-size ones 2023-07-01 12:09:37 so far only HEAD and index were corrupt, waiting for fsck to finish 2023-07-01 12:11:05 how's armhf? 2023-07-01 12:11:57 mqtt-exec not running 2023-07-01 12:12:07 repo seems fine 2023-07-01 12:17:12 sweet 2023-07-01 12:17:24 Hope the issue is fixed 2023-07-01 12:17:39 need to check 3.18 aarch64 2023-07-01 12:17:56 one of those rare times a small kernel upgrade 'fixed' it 2023-07-01 12:17:57 corrupt as well 2023-07-01 12:18:02 i mean it was just looping forever before.. 2023-07-01 12:18:12 even straight on reboot 2023-07-01 12:18:18 but we'll see 2023-07-01 12:18:41 yeah 2023-07-01 12:20:44 ok, all builders should run again 2023-07-01 12:43:54 distfiles.a.o needed to be added to known_hosts for armv7 2023-07-01 13:20:12 Load average: 103.53 84.29 60.24 2023-07-01 13:21:15 Load average: 212.48 119.62 74.04 2023-07-01 13:24:57 aye 2023-07-01 14:51:52 gitlab was upgraded to 15.11.10 2023-07-01 14:53:56 ! 2023-07-01 14:53:58 ~ 2023-07-01 15:10:19 ikke: we don't have access to dev lxcs on arm? 2023-07-01 15:10:41 mps: You said you have a backup of your containers? 
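The repair path discussed above (empty HEAD, corrupt index, zero-size loose objects after a crash) can be sketched as a small helper. This is an illustrative sketch, not the exact commands used on the builder; the repo path and branch in the usage comment are assumptions:

```shell
# repair_git_repo: drop zero-size loose objects (the classic crash-mid-write
# symptom that makes fsck abort), restore a truncated HEAD, remove the
# corrupt index, then resync missing objects from the remote.
repair_git_repo() (
    cd "$1"
    branch=${2:-master}
    # Zero-size object files cannot be parsed; delete only those.
    find .git/objects -type f -size 0 -delete
    # An emptied HEAD can be re-pointed at the expected branch.
    [ -s .git/HEAD ] || printf 'ref: refs/heads/%s\n' "$branch" > .git/HEAD
    # The index is a cache; it is safe to remove and rebuild.
    rm -f .git/index
    git fsck --full || true          # report any remaining damage
    git fetch -q origin "$branch"    # re-download the pruned objects
    git reset -q --hard "origin/$branch"
)
# usage (path is hypothetical): repair_git_repo /home/buildozer/aports master
```

`git fsck` is kept non-fatal here so the resync still runs even when it reports the objects that the fetch is about to restore.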
2023-07-01 15:10:59 I don't have access to the containers on che-bld-1, so I cannot sync them 2023-07-01 15:11:03 I have aarch64 only and armv7 but not edge 2023-07-01 15:11:36 is there somewhere I can access them so that I can sync them? 2023-07-01 15:11:40 np, just asked to not create armv7 edge one. now I will create it 2023-07-01 15:12:39 problem is on m1 apple, qemu-user doesn't always work when emulating arm32 2023-07-01 15:13:05 so I pulled out my old arm32 chromebook and am setting it up 2023-07-01 15:15:04 and have more and more cables, connectors, adapters and other things on my desk :) 2023-07-01 15:17:13 The builder at equinix has dmvpn again, so no more fiddling with ipv6 2023-07-01 15:19:15 nice 2023-07-01 15:29:57 3.18 x86 seems stuck in pull 2023-07-01 15:32:31 nope 2023-07-01 15:39:35 now gone 2023-07-01 15:41:41 gone? 2023-07-01 15:41:51 it's just sitting idle 2023-07-01 15:44:45 it 'looked' stuck before 2023-07-01 15:44:49 now it's idle 2023-07-01 16:02:53 build-3-17-aarch64 is a thing 2023-07-01 16:03:11 ACTION hopes nothing gets accidentally mixed up 2023-07-01 16:31:01 something new appeared 2023-07-01 16:31:30 test 2023-07-01 16:32:37 :) 2023-07-01 16:32:39 passed 2023-07-01 16:49:24 algitbot: retry master 2023-07-01 17:09:22 https://build.alpinelinux.org/buildlogs/build-edge-aarch64/community/py3-sybil/py3-sybil-4.0.1-r2.log 2023-07-01 17:11:43 "Assertion error: " 2023-07-01 17:13:42 I'm impatient to see aarch64 edge finish community this evening (or night at least) 2023-07-01 17:17:11 Seems like it fails upstream as well: https://app.circleci.com/pipelines/github/simplistix/sybil/341/workflows/b13c5171-48c0-4b16-9e7d-10e488e18477/jobs/1725 🤦 2023-07-01 17:18:25 huh, disable check for now? 
2023-07-01 17:19:02 Just that specific check 2023-07-01 17:19:28 I have an aversion to touching python pkgs 2023-07-01 17:20:20 i already did that 30 minutes ago 2023-07-01 17:24:16 nice 2023-07-01 22:22:41 algitbot: retry master 2023-07-02 03:45:09 algitbot: retry master 2023-07-02 03:56:59 algitbot: retry master 2023-07-02 04:58:13 algitbot: retry master 2023-07-02 10:39:06 algitbot: retry master 2023-07-03 05:11:18 riscv64 is probably stuck 2023-07-03 05:16:10 again 2023-07-03 05:16:41 go has a tendency of hanging on rv64 2023-07-03 05:18:34 https://tpaste.us/ggjy 2023-07-03 05:19:41 yea 2023-07-03 05:19:49 not sure if it's some go compiler race or a qemu issue 2023-07-03 05:20:19 me neither 2023-07-03 05:20:29 all processes are in FUTEX_WAIT 2023-07-03 05:20:32 and one zombie 2023-07-03 20:56:22 ikke: s390x seems stuck 2023-07-04 08:09:27 They're going to move the arm servers today to a location that should have a more stable network connection 2023-07-04 09:03:13 'more stable' or 'stable' 2023-07-04 11:30:44 spam: https://gitlab.alpinelinux.org/alpine/infra/mirrors/-/issues/587 2023-07-04 11:31:20 deleted it 2023-07-04 11:31:47 That's something we should get rspamd to catch 2023-07-04 11:32:16 Once the bayes classifier is active, I think it should catch more 2023-07-04 11:33:03 my local rspamd instance rejected it 2023-07-04 11:34:06 but the one on smtp.a.o didn't have autolearning enabled and nothing manually classified, so it's not active (requires 200 ham and 200 spam classifications before it's active) 2023-07-04 11:48:28 large user spamwave incoming 2023-07-04 13:39:11 been going for long 2023-07-04 13:43:47 Yeah, but these are mass added from similar domains 2023-07-04 13:43:54 I've been bulk removing them 2023-07-04 13:45:35 ooh there's been like 3k overnight 2023-07-04 13:45:43 was like 9.5 when i went to bed 2023-07-04 13:45:44 now 12k 2023-07-04 13:45:56 Yeah 2023-07-04 13:46:30 I just have a shell + jq snippet that removes latest users from a domain 
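The actual "shell + jq snippet" is not shown in the log. A minimal sketch of the idea, assuming the GitLab admin users API and a hypothetical `GITLAB_TOKEN`, might look like this (the fetch and delete calls are shown as comments since they need admin credentials):

```shell
# Sketch only: the real snippet used on gitlab.a.o is not in the log.
# Listing users as an admin returns their email addresses, so recently
# registered spam accounts can be filtered by domain.
#
#   curl -s -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#     "https://gitlab.alpinelinux.org/api/v4/users?order_by=created_at&sort=desc&per_page=100" > users.json

# ids_for_domain: reads a JSON array of users on stdin and prints the ids
# whose email ends in @<domain>.
ids_for_domain() {
    jq --arg d "$1" '[.[] | select(.email | endswith("@" + $d)) | .id]'
}

# Each matching id could then be removed with:
#   curl -s -X DELETE -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#     "https://gitlab.alpinelinux.org/api/v4/users/$id"
```

Filtering on the full `@domain` suffix (rather than a substring) avoids accidentally matching legitimate addresses that merely contain the spam domain.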
2023-07-04 15:42:25 lots of accounts from a single domain, purging them all 2023-07-04 16:05:21 @gmail.com ? 2023-07-04 16:06:18 queensmails.com, and other domains as well 2023-07-04 16:06:49 lots of @. 2023-07-04 16:07:09 @stolts.browndecorationlights.com 2023-07-04 16:12:38 yea 2023-07-04 16:12:53 almost below 10k 2023-07-04 16:14:24 <10k now 2023-07-04 16:15:55 I noticed it because I saw a lot of confirmation mails going through rspamd 2023-07-04 16:26:24 psykose: back to 9.5k :) 2023-07-04 16:26:27 :D 2023-07-04 16:28:44 https://imgur.com/a/6XTSMz4 2023-07-04 16:29:19 zoink 2023-07-04 16:29:48 They made it a bit too easy this time :D 2023-07-04 16:30:53 there's a bunch you probably missed in between 2023-07-04 16:31:11 since there's hundreds of @gmail just looking at it right now 2023-07-04 16:33:29 yes, I focused on ones with the obvious spam domains 2023-07-04 16:33:47 algitbot: retry 3.18-stable 2023-07-04 16:34:22 algitbot: retry master 2023-07-04 16:34:39 hm 2023-07-04 16:35:07 what do you expect that to do 2023-07-04 16:36:16 looks like some builders are stuck 2023-07-04 16:36:20 nope 2023-07-04 16:36:44 though if they were that doesn't kill them 2023-07-04 16:38:18 well, mozjs102 might be 2023-07-04 16:40:16 not anymore 2023-07-04 16:41:06 afk 2023-07-04 20:28:14 https://docs.gitlab.com/ee/user/admin_area/moderate_users.html#automatically-delete-unconfirmed-users 2023-07-04 20:31:34 oh, premium, lesigh 2023-07-04 20:54:28 psykose: deleting users under my nose :P 2023-07-04 20:54:39 hey i left you a stack of /watch right below 2023-07-04 20:55:35 (dlv) p len(issues) 2023-07-04 20:55:38 2919 2023-07-04 20:55:40 3k issues 2023-07-04 20:56:04 single user 2023-07-04 20:57:10 which one lol 2023-07-04 20:58:29 mila west 2023-07-04 20:58:34 the one that was just deleted 2023-07-04 21:01:39 ah 2023-07-04 21:01:41 well 2023-07-04 21:01:47 i have deleted like 50 "mila west"'s 2023-07-04 21:02:19 lol 2023-07-05 08:26:39 ikke: are we online again? 
2023-07-05 08:26:41 i guess not 2023-07-05 08:40:15 vigir23 is still unreachable 2023-07-05 09:43:31 Hmm 2023-07-05 09:43:59 That means it's connecting to zabbix again 2023-07-05 10:31:40 hmm 2023-07-05 10:32:46 They are available again 2023-07-05 10:33:10 > From our side there should not be any unplanned interruptions and the vigir23 now has IPv4 on the uplink, hope that helps a bit 2023-07-05 10:34:47 nice 2023-07-05 10:35:10 ci still down 2023-07-05 10:36:19 need to check that still 2023-07-05 10:39:09 dmvpn is now working :) 2023-07-05 10:39:36 mps: you should be able to reach your containers now via dmvpn 2023-07-05 10:39:49 172.16.27.110 2023-07-05 10:39:52 172.16.27.111 2023-07-05 10:41:25 nice 2023-07-05 10:44:08 ikke: thank you 2023-07-05 10:49:14 time on the CI servers is behind, so certs are not accepted 2023-07-05 10:51:40 psykose: all runners should be available again 2023-07-05 10:51:47 poggers 2023-07-05 10:52:44 i pressed 'retry' and it was 'queued for zero seconds' and worked 2023-07-05 10:52:52 nah this is a bit too good can you fuck it up a bit please 2023-07-05 10:53:01 let's get that to twenty seconds 2023-07-05 10:53:09 The latency is a lot better 2023-07-05 10:53:19 yea 2023-07-05 10:53:33 they're pretty fast 2023-07-05 10:53:34 neat 2023-07-05 10:53:54 It doesn't have to go via a gre tunnel anymore for ipv4 2023-07-05 10:54:41 :) 2023-07-05 15:15:52 nice, another gitlab security release 2023-07-05 15:16:30 awesome 2023-07-05 19:13:16 So what builders are we now going to use? mqtt-exec on che-bld-1 seems to have failed on all builders 2023-07-05 19:13:25 so it's not active at the moment 2023-07-05 19:17:19 probably keep current with vigir disabled, keep ci there 2023-07-05 19:17:23 in a bit we'll see how stable 2023-07-06 07:17:13 ikke: che is ok now? 2023-07-06 07:17:21 clandmeter: yes 2023-07-06 07:17:30 better than before? 2023-07-06 07:28:11 I think so. 
Latency appears more stable and lower (but still around 30ms from zabbix) 2023-07-06 07:28:51 Bandwidth I have not tested yet 2023-07-06 14:15:05 started the 3.16 and 3.15 builders on che-bld-1 again and disabled the other builders that are on usa-bld-1 2023-07-06 14:16:39 ~ 2023-07-06 14:18:37 supervise-daemon[54300]: /usr/sbin/corerad, pid 13988, exited with return code 1 2023-07-06 14:18:39 supervise-daemon[54300]: respawned "/usr/sbin/corerad" too many times, exiting 2023-07-06 14:19:06 No indication why it crashes 2023-07-06 14:19:56 you can run it with the same args to get what it prints 2023-07-06 14:20:00 doesn't print it itself tho 2023-07-06 14:20:04 unless logged somewhere 2023-07-06 14:20:26 Is it possible to log stdout with supervise-daemon? 2023-07-06 14:21:04 output_log= 2023-07-06 14:21:10 to some file 2023-07-06 14:24:26 the file does not appear 2023-07-06 14:30:56 needed to make the file writeable 2023-07-06 14:32:39 but it still doesn't write there 2023-07-06 14:33:43 ah, it's stderr 2023-07-06 14:34:55 that's better 2023-07-06 14:35:42 psykose: thanks 2023-07-06 14:51:10 yeah stderr is error_log 2023-07-06 14:51:25 can make them the same file, all it's doing is > 2> on the process really 2023-07-06 14:52:06 I just set error_log 2023-07-06 14:52:13 there is nothing being sent to stdout 2023-07-06 14:53:31 yea 2023-07-06 23:09:03 ikke: s390x stuck on go again :D 2023-07-07 05:50:48 hm, seems to hang twice 2023-07-07 05:51:58 yarn is doing something 2023-07-07 05:52:16 400% cpu usage 2023-07-07 05:53:21 not sure if it's anything useful :P 2023-07-07 05:55:03 don't think i've seen it literally hang 2023-07-07 10:21:27 no output for ages, so it's gone 2023-07-07 10:56:19 oh, apparently they are importing issues in gitlab 2023-07-07 10:58:24 That's how they can create so many issues in such a short time 2023-07-07 12:39:22 Oh https://gitlab.com/gitlab-org/gl-security/security-engineering/security-automation/spam/spamcheck 2023-07-07 20:46:59 nice 
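For reference, the `output_log`/`error_log` settings discussed above go in the service's conf.d file, which OpenRC sources as shell. The service name matches the corerad example from the log; the log path is illustrative:

```shell
# /etc/conf.d/corerad
# supervise-daemon redirects the supervised process's stdout and stderr to
# these files (effectively > and 2> on the process). They may point at the
# same file, and must be writable by the service user before start.
output_log="/var/log/corerad.log"
error_log="/var/log/corerad.log"
```

In the corerad case the crash reason only went to stderr, so `error_log` was the setting that actually captured it.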
2023-07-08 01:21:30 ikke: should pass after another kick without the ui build i guess 2023-07-08 05:21:50 hmm, oom 2023-07-08 05:24:28 the exact same thing passes right before 2023-07-08 05:24:38 same for 3.18 2023-07-08 05:24:41 sometimes it just be like that tho 2023-07-08 05:24:56 it's one of the uhh either the 4g address limit or the vm.max_map_count thing 2023-07-08 05:25:08 probably the former in this case 2023-07-08 05:25:48 is vault passing at least 2023-07-08 05:27:13 yee 2023-07-08 09:51:29 I have a dilemma, people are asking to add a linux kernel for apple silicon with GPU enabled and a mesa pkg with support for it 2023-07-08 09:51:55 I have had both for a long time and both are fine, no big issues 2023-07-08 09:52:58 so the question is: push them to aports or not? anyone want to 'encourage' this? 2023-07-08 09:54:11 more and more people buy apple silicon to run linux on it so it would be nice if we have these things in alpine 'ready' for them 2023-07-08 09:54:26 most other distros have already enabled these 2023-07-08 09:57:19 sounds more like something to put in postmarketos 2023-07-08 09:58:31 the people I know in the asahi community want to run alpine, no one mentioned pmOS 2023-07-08 09:58:57 pmos is alpine just with some kernel/support packages if one wants to run it that way 2023-07-08 09:59:03 it's the same alpine repos with some extra stuff 2023-07-08 09:59:28 it is not the same and you know this very well I think 2023-07-08 09:59:31 we explicitly reject any attempt to put xyz-whatever kernels in alpine forever, so i'm not sure why asahi is some magic exception 2023-07-08 09:59:34 yeah it is the same 2023-07-08 10:00:00 because people want alpine 2023-07-08 10:00:51 and said people that want alpine should know that all the device-specific alpine-related stuff is in pmos 2023-07-08 10:02:04 most of these people already run alpine on other machine types and simply want to do the same on apple silicon 2023-07-08 10:02:56 it is pretty much the same and i already explained why 
2023-07-08 10:03:59 I don't see pmOS here https://github.com/AsahiLinux/docs/wiki/SW%3AAlternative-Distros 2023-07-08 10:04:14 and alpine has been there for a long time 2023-07-08 10:04:50 because you added the kernel to alpine and put it in the docs, yes 2023-07-08 10:05:41 yes, people read IRC logs, some sites and find alpine, and ask for help to install alpine 2023-07-08 10:06:10 pmOS people could make their packages from alpine if they want 2023-07-08 10:06:45 in alpine we already have m1n1, asahi u-boot and kernel 2023-07-08 10:07:26 we can even enable/add gpu flavor in current linux-asahi 2023-07-08 12:46:06 fun, dendrite is crashing again 2023-07-08 17:27:39 :/ 2023-07-08 17:38:21 there was an update for it iirc 2023-07-08 17:38:24 maybe they fixed it 2023-07-08 17:39:14 Was just pulling new images 2023-07-08 17:39:57 but still crashing 2023-07-08 17:40:22 Tried to remove the offending event from the DB, but no luck 2023-07-08 17:41:40 I guess I'll have to report it 2023-07-08 18:02:29 Someone is creating accounts with all kinds of dot variations of the same gmail account (but the suffix is always the same) 2023-07-08 19:31:02 Ok, nice, got grpc to my custom spamcheck server working 2023-07-08 19:31:06 ~ 2023-07-08 19:31:08 (on gitlab-test) 2023-07-08 19:31:09 progress 2023-07-08 19:31:16 It just logs for now 2023-07-08 19:36:00 ok, enabled it now on gitlab.a.o 2023-07-08 19:37:04 lol 2023-07-08 20:35:05 no data as of yet :( 2023-07-08 20:37:22 like it just doesn't work? 
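The dot-variation trick mentioned above works because gmail ignores dots in the local part, so every variation delivers to the same mailbox. One way to de-duplicate such signups is to normalize addresses before comparing; the function name here is made up for illustration:

```shell
# normalize_gmail: strip dots from the local part of gmail addresses so
# that j.o.h.n@gmail.com and john@gmail.com compare equal. Other domains
# pass through unchanged, since dots are significant there.
normalize_gmail() {
    local_part=${1%@*}
    domain=${1#*@}
    case $domain in
        gmail.com|googlemail.com)
            local_part=$(printf '%s' "$local_part" | tr -d .) ;;
    esac
    printf '%s@%s\n' "$local_part" "$domain"
}
```

Running registrations through a normalizer like this makes the "all dot variations of one account" pattern collapse into a single repeated address.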
2023-07-08 20:38:03 No idea 2023-07-08 20:38:12 I tested it on gitlab-test, created an issue, and saw it logged 2023-07-08 20:38:20 but maybe no issues were created yet 2023-07-08 20:38:56 ah 2023-07-08 20:38:57 yeah 2023-07-08 20:38:58 no issues 2023-07-09 04:18:27 there was one spam one somewhere 2023-07-09 04:18:36 see if that got picked up 2023-07-09 04:18:40 also riscv stuck 2023-07-09 05:19:13 just got 3 messages, none of them spam 2023-07-09 05:30:11 so importing issues just bypasses the spamchecker, great 2023-07-09 05:30:59 it was one off and on your new sign tool repo 2023-07-09 05:31:06 yeah, that one I saw 2023-07-09 05:31:13 I mean, got a notification for in email 2023-07-09 05:31:19 but nothing in spamcheck 2023-07-09 05:31:21 ahh 2023-07-09 05:31:42 that one couldn't have been imported, could it 2023-07-09 05:31:46 We could block post requests to /import_csv 2023-07-09 05:31:49 i thought that was random new repos only 2023-07-09 05:31:58 yeah i don't see a reason to allow imports at all i guess 2023-07-09 05:31:59 You need to be developer on the project 2023-07-09 05:32:06 yeah, so that issue was not an import 2023-07-09 05:32:13 but still went thru without being logged in spamcheck? 
2023-07-09 05:32:49 Just saw 3 issues, all for aports 2023-07-09 05:33:06 weird 2023-07-09 05:35:16 Just created an issue myself manually, and I do see that one 2023-07-09 06:02:14 https://gitlab.alpinelinux.org/import_csv :) 2023-07-09 06:06:09 I've adjusted it to only match /-/issues/import_csv now 2023-07-09 06:11:33 :D 2023-07-09 06:27:42 still had to kick rv64 2023-07-09 06:29:52 done 2023-07-09 14:37:34 Ah, seeing spam messages now 2023-07-09 14:38:03 and import_csv being blocked :) 2023-07-09 16:59:34 :D 2023-07-09 17:25:15 "POST /lillyrubio/watch/-/issues/import_csv HTTP/1.1" 403 548 "https://gitlab.alpinelinux.org/lillyrubio/watch/-/issues" 2023-07-09 17:26:51 much wow, such empty: https://gitlab.alpinelinux.org/lillyrubio/watch/-/issues 2023-07-09 17:27:05 poggers 2023-07-09 17:27:11 well that's half the puzzle 2023-07-09 17:27:16 getting close 2023-07-09 17:27:21 the other half is the non-imported issue spam 2023-07-09 17:27:29 yes, will work on that 2023-07-09 17:27:36 good work! 2023-07-09 17:28:09 I found it through rspamd, saw a mail about "imported issues" 2023-07-09 17:28:52 otherwise I would not have figured out how they added the issues 2023-07-11 12:39:46 what keys are in alpine-keys 2023-07-11 12:40:18 The keys used by the builders, or that have been used in the past 2023-07-11 12:40:22 how so? 
2023-07-11 12:42:24 I don't see any description except the dir hierarchy 2023-07-11 12:45:47 The APKBUILD shows what keys belong to what arch 2023-07-11 12:46:31 and also, https://alpinelinux.org/releases.json 2023-07-11 12:49:23 ikke: thanks 2023-07-11 14:22:48 ikke: the arm ci should probably not be jobs=24 unless it's shared with the builders 2023-07-11 14:23:47 The vms only have 24 cores 2023-07-11 14:24:03 ah 2023-07-11 14:24:28 I divided the cores over the vms 2023-07-11 14:24:56 Not sure if oversubscribing makes sense since they all build generally at the same time 2023-07-11 14:26:44 most builds aren't super linear with using all the cores at once 2023-07-11 14:26:52 + sometimes armhf disabled etc 2023-07-13 13:33:19 psykose: https://gitlab.alpinelinux.org/alpine/infra/compose/pg-upgrade 2023-07-13 13:34:45 aw neat 2023-07-13 13:34:47 good work :D 2023-07-13 13:35:40 Wanted to test gitlab 16 locally, but need to migrate the DB :D 2023-07-13 13:37:45 ~ 2023-07-13 14:01:39 almost at 12k users again :( 2023-07-13 14:01:48 need to clean up all these unverified accounts 2023-07-13 16:45:01 upgraded my local instance successfully to gitlab 16 :) 2023-07-13 16:45:07 now going to work on gitlab-test 2023-07-13 17:07:07 interesting, this time it's only vigir23 being unreachable 2023-07-14 03:57:24 ~ 2023-07-14 03:57:26 yeah it's a bit flaky 2023-07-14 05:03:00 riscv64 is also stuck 2023-07-14 10:13:05 gitlab-test is now running gitlab 16 2023-07-14 11:29:46 Trying to figure out why Gitlab::HTTP has issues with tls SNI, but Net::HTTP does not, and neither does HTTParty (which Gitlab::HTTP uses) 2023-07-14 11:30:05 HTTParty.get 'https://version.gitlab.com' -> success 2023-07-14 11:30:28 Gitlab::HTTP.get('https://version.gitlab.com') -> self-signed certificate (due to missing SNI) 2023-07-14 11:31:11 Net::HTTP.get(URI('https://version.gitlab.com')) -> success 2023-07-14 12:01:13 psykose: I think I'm closer. 
https://gitlab.com/gitlab-org/gitlab/-/blob/v16.0.7-ee/lib/gitlab/http_connection_adapter.rb#L50 returns a URI object where the hostname is an ip address 2023-07-14 12:01:22 So that would disable SNI 2023-07-14 13:12:23 https://gitlab.com/gitlab-org/gitlab/-/issues/413528 2023-07-14 13:17:18 Ok, there is an option to disable dns rebinding prevention, which fixes the issue 2023-07-14 13:17:23 or works around 2023-07-14 13:22:04 https://github.com/ruby/net-http/issues/141 2023-07-14 15:39:35 One downside of gitlab 16 is that all access tokens have an expiry date now :* 2023-07-14 15:40:11 That's going to be really annoying :/ 2023-07-14 17:17:35 gitlab upgraded to 16.0. MRs still appear empty immediately after opening them 2023-07-14 17:19:03 merge requests are working 2023-07-14 17:46:13 ~ 2023-07-14 17:46:32 good work :) 2023-07-14 17:59:12 If 12 months from now things break, it's because access tokens have expired :. 2023-07-14 20:08:08 So the automaintainer says it doesn't find any maintainers on the merge requests 2023-07-14 21:31:19 seems to work again though 2023-07-14 21:31:22 i guess you fixed it 2023-07-14 21:33:56 Nope, didn't change anything 2023-07-14 21:34:16 :) 2023-07-14 21:34:23 love it when that happens :D 2023-07-14 21:34:31 Was just debugging / checking the logs 2023-07-20 09:57:30 anyone have rust pkgs from version 1.67 and up somewhere 2023-07-20 09:58:14 hope is that someone didn't clean the archive 2023-07-20 09:59:47 ikke: could you kick ppc64le 2023-07-20 10:22:46 i can kick it 2023-07-20 10:23:26 thanks 2023-07-21 09:01:56 cz mirror will have a short downtime tomorrow night. i don't expect anybody to notice it. but just in case.. 
2023-07-21 19:23:18 someone misconfigured their spam email address generator :D 2023-07-21 19:23:27 {lucia|mariana|lorena|jimena|dolores|brenda|silvana|renata}{1|2|3|4|5|6|7|8|9}@mamerto.online 2023-07-21 20:35:58 psykose: why do we need to rebuild linux-edge with every new gcc 2023-07-21 21:59:43 for out of tree modules 2023-07-22 00:52:34 the kernel build system requires `gcc --version | head -n1` to match for its own build and any later built external modules 2023-07-22 00:52:39 been that way forever 2023-07-22 00:53:25 so if you upgrade gcc and gcc --version changes and you try to build an out of tree module, it fails and just says it expected the previous version 2023-07-22 08:20:08 sorry, my question was not clear, and not technical. I know this. I meant to ask whether we as a distro provider need to do this. but 'morning is wiser than evening' and I think now all is ok 2023-07-22 08:20:44 and I forgot that we build a -dev subpkg :) thanks for reminding me 2023-07-22 19:12:43 psykose: only if you use gcc plugins, right? 2023-07-22 19:13:23 you mean like if you disable all of them in kernel config it won't check entirely later? probably 2023-07-22 19:13:27 idk who disables that tho 2023-07-22 19:13:28 didn't test 2023-07-22 19:14:33 I mean CONFIG_GCC_PLUGINS=n 2023-07-22 19:14:45 yeah 2023-07-22 19:14:47 I guess if you use clang? :p 2023-07-22 19:15:00 well you can also just =n it since i'm not aware of any plugins actually in use.. 2023-07-22 19:15:02 they're all dev stuff 2023-07-22 19:15:17 also hardening 2023-07-22 19:15:25 aha right 2023-07-22 19:15:28 the oneee thing 2023-07-22 19:19:27 gcc plugins are disabled in linux-edge 2023-07-22 19:20:09 when I find free time I'll check all this again 2023-07-23 19:28:16 mps: yeah i checked 2023-07-23 19:28:28 the warning is printed, but the actual error is only for gcc plugin 2023-07-23 19:28:33 so edge doesn't have to be rebuilt after all 2023-07-23 19:28:33 :D 2023-07-23 19:54:31 psykose: thanks for the info. 
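The version check described above can be approximated like this: kbuild records the compiler banner when the kernel is built and compares it against the currently installed compiler before building external modules. The function, recorded banner, and version strings below are illustrative, not kbuild's actual code:

```shell
# check_kernel_cc: compare the compiler banner recorded at kernel build
# time against the currently installed gcc, roughly as kbuild does before
# an out-of-tree module build.
check_kernel_cc() {
    recorded=$1     # banner saved when the kernel was built
    current=$2      # in practice: "$(gcc --version | head -n1)"
    if [ "$recorded" = "$current" ]; then
        echo "ok: compiler matches"
    else
        echo "mismatch: kernel built with '$recorded', found '$current'"
        return 1
    fi
}
```

Since the whole first line of `gcc --version` must match, even a pkgrel-level gcc rebuild that changes the banner can trip this check, which is why the kernel package was being rebuilt alongside gcc.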
you saved me some time ;) 2023-07-24 19:09:46 gitlab going down? or is that another host? 2023-07-24 19:09:57 deu1-dev1 2023-07-24 19:10:05 That's another host 2023-07-24 19:10:57 hosting cgit, algitbot, mirrors.a.o, chat.a.o and secdb 2023-07-24 19:11:10 deu7 is wg.a.o 2023-07-24 19:11:19 ok, seems they need to move it to another host 2023-07-24 19:11:43 yes, I see 2023-07-24 20:07:44 clandmeter: fyi, they'll try to live migrate, in which case it should remain online 2023-07-25 08:34:11 my riscv64 lxc doesn't answer to ping 2023-07-25 08:35:19 it is not started? 2023-07-25 08:52:54 It's not set to auto start. Started it now 2023-07-25 08:53:54 ikke: thank you 2023-07-25 12:41:02 ikke: did you ever enable that gitlab cache 2023-07-25 12:48:13 Yes, it's active 2023-07-25 12:51:06 sweet 2023-07-26 03:33:01 ikke: could you start my rv container too 2023-07-26 04:44:32 done 2023-07-26 04:58:55 thanks 2023-07-26 05:01:25 i think it was because lxc was started before qemu-binfmt 2023-07-26 05:01:49 added rc_need=qemu-binfmt to /etc/conf.d/lxc 2023-07-26 05:03:36 makes sense 2023-07-26 09:51:45 do we have an automation script/method to rebuild dependent pkgs when upgrading some of the base pkgs? 2023-07-26 09:52:20 or simply rebuild all 2023-07-26 09:53:04 skimming through glib revdep shows about 450 pkgs 2023-07-26 09:54:06 though in my test upgrading glib doesn't require rebuild of dependent pkgs 2023-07-26 09:57:07 ap revdeps + apkgrel -a 2023-07-26 09:58:38 literally this? 
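The lxc ordering fix mentioned above, as a conf.d fragment (OpenRC sources this file as shell when the service starts):

```shell
# /etc/conf.d/lxc
# Make the lxc service depend on qemu-binfmt, so emulated containers
# (e.g. the riscv64 one) don't start before binfmt handlers are registered.
rc_need="qemu-binfmt"
```

Without the dependency, `lxc` can win the boot-order race and the emulated container's init simply fails to exec, which matched the "doesn't answer ping until started manually" symptom.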
2023-07-26 10:13:39 Not literally 2023-07-26 10:16:27 I would bet that someone has a ready script for this 2023-07-26 10:18:57 though I upgraded glib locally and for now didn't notice any problems with dependent pkgs 2023-07-27 02:51:49 there's no new glib version though 2023-07-27 02:52:00 it also doesn't need any rebuilds 2023-07-27 04:19:31 glib is abi-stable every release, and the odd-versions (2.77) are unstable, to be clear 2023-07-27 12:13:47 I can confirm that also glib 2.77 ABI is stable 2023-07-27 12:18:48 the abi is always stable 2023-07-27 12:18:55 the actual library is not 2023-07-27 12:22:12 well, running edge so ... 2023-07-27 12:28:48 if you mean you built it and use it then yeah go for it 2023-07-27 12:29:01 if you mean upgrade to 'random unstable library prerelease' in aports, then no 2023-07-27 12:29:47 yes, locally ofc 2023-07-27 12:29:53 yeah works fine for that 2023-07-27 12:30:01 it's only the usual stable-abi-forward-new-symbols 2023-07-27 12:30:25 so there's a chance that if you downgrade things built against 2.77 won't work on 2.76 (some define macro used a different symbol, etc) 2023-07-27 12:31:47 right 2023-07-27 12:33:05 but some 'smart' upstream software requires this 2023-07-27 12:34:16 is it telegram-desktop? 
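The `ap revdep` + `apkgrel -a` flow mentioned earlier bumps pkgrel on every reverse dependency so the builders pick them up for rebuild. As a sketch of what `apkgrel -a` does to a single APKBUILD (this sed-based stand-in is not the real tool; the `ap revdep` loop in the comment assumes its output is a list of package names):

```shell
# bump_pkgrel: increment the pkgrel= line of an APKBUILD, roughly what
# `apkgrel -a` does. A full rebuild pass would look something like:
#   cd aports/community
#   for p in $(ap revdep glib); do bump_pkgrel "$p/APKBUILD"; done
bump_pkgrel() {
    f=$1
    rel=$(sed -n 's/^pkgrel=//p' "$f")
    # pkgrel is a plain integer in APKBUILDs, so arithmetic is safe here
    sed -i "s/^pkgrel=.*/pkgrel=$((rel + 1))/" "$f"
}
```

As noted in the log, this is only needed for soname or ABI breaks; a release like glib that keeps its ABI stable does not require bumping its ~450 reverse dependencies.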
:p 2023-07-27 12:34:23 they need 2.77 glib+glibmm 2023-07-27 12:34:24 annoying 2023-07-27 12:35:21 :) 2023-07-27 12:35:32 'smart' ;p 2023-07-27 12:36:14 hehe 2023-07-27 18:23:04 ikke: do you see the recent spam to -devel 2023-07-27 18:23:09 (just to check the rspamd tuning) 2023-07-27 18:24:59 Last thing I've seen is an empty reply to the wiki relicense thread 2023-07-27 18:25:25 My local spam filter is apparently blocking it 2023-07-27 18:33:03 yep 2023-07-27 18:33:11 you mentioned that before and told me to tell you when it shows up 2023-07-27 18:33:12 :) 2023-07-27 18:33:17 there's been a few 2023-07-27 18:33:19 on -infra too 2023-07-27 18:34:40 Yeah, I'm learning / removing them now 2023-07-27 18:39:04 psykose: where in -infra? 2023-07-27 18:39:16 oh oops 2023-07-27 18:39:17 hmm 2023-07-27 18:39:29 i confused it with mirrors 2023-07-27 18:39:30 haha 2023-07-27 18:39:32 heh 2023-07-27 18:39:33 nvm 2023-07-31 18:37:02 strange `ap revdep perl-xs-parse-sublike-dev` gives 'sh: .: line 5: can't open '/../../*/lemmy/APKBUILD': No such file or directory' 2023-07-31 18:37:20 you need to run it in the specific repo dir 2023-07-31 18:37:28 for example main/ or community/ 2023-07-31 18:41:21 I run it in testing 2023-07-31 18:42:19 I see, let me check 2023-07-31 18:42:32 and whatever pkg I use, the msg is the same 2023-07-31 18:43:17 ah, lemmy-ui is the culprit 2023-07-31 18:44:01 '_translations_commit="$( . $startdir/../../*/lemmy/APKBUILD; echo $_translations_commit )"' ;-) 2023-07-31 18:44:21 yes 2023-07-31 18:44:31 let me fix that 2023-07-31 18:49:07 mps: if you pull, it should be fixed 2023-07-31 18:49:49 yes it is fixed 2023-07-31 18:49:52 thanks 2023-07-31 19:19:31 this may need some more time from CI https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/49346 2023-07-31 19:19:44 how can I give more CI time? 
2023-07-31 19:20:18 iirc, the user had to set that themselves in CI settings 2023-07-31 19:20:30 yes, or I do it for them :P 2023-07-31 19:20:58 that's also a possibility 2023-07-31 19:21:25 ncopa: you can do that in the CI/CD settings on their fork 2023-07-31 19:22:03 I've changed it, but the pipeline needs to be restarted before it takes effect 2023-07-31 19:22:05 https://gitlab.alpinelinux.org/USER/aports/-/settings/ci_cd 2023-07-31 19:22:11 indeed 2023-07-31 19:22:32 thanks!