2023-06-01 03:06:16 psykose: Yeah, didn't think you changed anything 2023-06-01 03:06:25 mps just asked who has access 2023-06-01 03:07:26 mystery network issues 2023-06-01 03:07:37 yup 2023-06-01 03:08:07 wireguard wouldn't work 2023-06-01 04:46:19 riscv is of course stuck again 2023-06-01 06:05:03 To the htop-mobile 2023-06-01 07:46:22 think 3.18-x86 is too 2023-06-01 07:46:27 these are weirdly frequent now 2023-06-01 12:56:12 both of the 3.18-x86*'s, how cute 2023-06-01 13:22:41 Talking about solidarity 2023-06-01 13:26:10 jsapi-tests deadlock, all in futex 2023-06-01 13:26:25 think seen that one before 2023-06-01 13:26:31 just random 2023-06-01 13:26:49 that's often the case with deadlock 2023-06-01 13:26:50 s 2023-06-01 13:29:11 hmm 2023-06-01 13:29:15 it's been a while for the arm stuff 2023-06-01 13:29:18 no word yet i guess 2023-06-01 13:29:26 nope 2023-06-01 13:29:35 I did let them know, but not any update yet 2023-06-01 13:29:50 oh 2023-06-01 13:29:51 >We'll check the setup this afternoon or tomorrow at lunchtime to see what's wrong 2023-06-01 13:31:23 sweet 2023-06-01 13:33:41 https://img.ayaya.dev/B1bqd8dP2gah 2023-06-01 13:33:48 how useful 2023-06-01 13:34:05 "causing high vulnerabilities" lol 2023-06-01 13:34:44 hm 2023-06-01 13:35:01 maybe what they mean to say is they have 3.0.8-r4 on 3.17 aarch64 2023-06-01 13:35:07 because it's not upgraded there 2023-06-01 13:35:14 god i love the flagged page 2023-06-01 13:35:15 not 2023-06-01 18:26:14 long waiting takes long 2023-06-01 18:29:33 hell frozen over etc 2023-06-01 18:30:15 working on building gitlab 15.11 2023-06-01 18:30:54 Error: Cannot find module 'commander' 2023-06-01 18:31:38 sounds like just a ruby import at a glance 2023-06-01 18:32:02 it's nodejs 2023-06-01 18:33:22 same thing for that :-) 2023-06-01 18:33:33 weekly downloads: 112M 2023-06-01 18:33:35 well ok then 2023-06-01 18:34:08 sometimes i wish we had anonymised metrics (just hit count with no other data at all) just for fun 2023-06-01 
18:34:22 for cdn only that is 2023-06-01 18:34:29 commander is in devDependencies 2023-06-01 18:34:35 yea 2023-06-01 18:37:58 https://tpaste.us/b1ZW 2023-06-01 18:38:07 bundle exec rake gettext:compile RAILS_ENV=production 2023-06-01 18:41:24 any info about arm machines? 2023-06-01 18:42:48 "We'll check the setup this afternoon or tomorrow at lunchtime to see what's wrong" 2023-06-01 18:43:45 ugh 2023-06-01 19:21:08 needed to move yarn install earlier 2023-06-01 20:04:19 ikke: also the x86_64 ci seems to be out of space 2023-06-02 22:18:42 ikke: no word today i guess? 2023-06-03 05:57:29 psykose: haven't heard anything 2023-06-03 06:00:11 not great 2023-06-03 08:32:09 psykose: honestly, personally I don't care about manually flagging in aports-turbo 2023-06-03 08:32:27 it's annoying because it's one-way 2023-06-03 08:32:47 We could just replace it with instructions to open an issue on alpine/aports 2023-06-03 08:33:08 Keep the anitya integration 2023-06-03 08:33:09 idc if someone incorrectly flags some stuff or the occasional random abcxyz thing, but something that looks like an actual issue, especially one that sounds like something that has to be fixed and not obvious, with no way to then get any more info is just upsetting 2023-06-03 08:33:12 that, and the uh 2023-06-03 08:33:17 copious amounts of massive spam recently 2023-06-03 08:33:36 yeah, anitya is a good thing 2023-06-03 08:33:37 yeah, I think the flagging feature is kinda weird. sometimes people flag things wrongly even. like I remember the GTK 3 package was flagged as not being updated to the GTK 4 release 🙄 2023-06-03 08:33:53 that happens all the time for things with multiple branches lol 2023-06-03 08:34:33 (to emphasise twice, the anitya thing is completely good and something 'required' in some sense or another) 2023-06-03 08:34:49 though i do prefer what void linux does, albeit that's extra files and stuff 2023-06-03 08:34:58 What does void linux do? 
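On the anonymised-metrics idea above (hit counts only, no other data), a CDN access log boils down to exactly this; a toy sketch, with made-up request paths standing in for the real log:

```shell
# Count hits per requested path and nothing else; every other log field would
# be discarded before counting. The paths below are invented sample data.
printf '%s\n' \
    '/alpine/v3.18/main/x86_64/APKINDEX.tar.gz' \
    '/alpine/v3.18/main/x86_64/nodejs-20.2.0-r0.apk' \
    '/alpine/v3.18/main/x86_64/APKINDEX.tar.gz' |
    sort | uniq -c | sort -rn
```

For a real log you would first `cut` the request-path column out of each line; the point is that nothing identifying survives the pipeline.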
2023-06-03 08:36:15 https://github.com/void-linux/void-packages/blob/master/common/xbps-src/shutils/update_check.sh 2023-06-03 08:36:28 then some per-package stuff for filtering and whatnot like https://github.com/void-linux/void-packages/blob/master/srcpkgs/SPIRV-Headers/update 2023-06-03 08:36:59 by concept, it's like anitya but you have it as a tool you run in the repo with your own stuff, not something with a thirdparty thing 2023-06-03 08:37:13 same goal, so nothing special 2023-06-03 08:53:16 I have gitlab 15.11 built locally 2023-06-03 09:01:49 ~ 2023-06-03 09:02:00 now is as good a time as any 2023-06-03 09:32:54 We can remove all the gitaly ruby stuff now 2023-06-03 09:41:46 https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/pipelines/166690 2023-06-03 09:51:28 yay 2023-06-03 10:04:41 for 16.0 we need to upgrade postgres 2023-06-03 10:04:51 ie, export and reimport the db 2023-06-03 10:05:02 making sure all required extensions are enabled 2023-06-03 10:05:19 i don't think you have to export it 2023-06-03 10:05:35 you just have both installed and rn pg_something 2023-06-03 10:05:49 which also does the extension stuff and whatever for you 2023-06-03 10:06:03 We use the postgres container images 2023-06-03 10:06:23 Maybe I did use that the last time 2023-06-03 10:06:36 if you put the db in some volume then all you need is to start some image with old and new version and the volume mounted on something 2023-06-03 10:06:37 conceptually 2023-06-03 10:06:47 the actual command is on some postgres page, you'll find it 2023-06-03 10:07:07 it's a more functional solution than full export+import because of the 50 other things you might miss the tool checks 2023-06-03 10:07:46 could migrate dendrite too to 15 2023-06-03 10:09:16 https://tpaste.us/WxJY 2023-06-03 10:11:08 does gitlab require their own custom migrate 2023-06-03 10:11:11 that's unfortunate 2023-06-03 10:11:54 That's just upgrading to 15.11 2023-06-03 10:12:00 on gitlab-test 2023-06-03 10:12:53 ah 
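The void-style update check linked above is conceptually just "fetch the latest upstream version, compare it with pkgver". A toy sketch with both versions hardcoded (a real check would query or scrape upstream instead of using these stand-in values):

```shell
# Compare a packaged version against the newest known upstream version using
# version sort; both values are hardcoded stand-ins for illustration.
local_ver=3.0.8
upstream_ver=3.1.1
latest=$(printf '%s\n%s\n' "$local_ver" "$upstream_ver" | sort -V | tail -n 1)
if [ "$latest" = "$local_ver" ]; then
    echo "up to date: $local_ver"
else
    echo "outdated: $local_ver -> $latest"
fi
```

Anitya performs the same comparison as a hosted third-party service; the void approach just runs it in-repo with per-package filter scripts.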
2023-06-03 10:12:56 right 2023-06-03 10:14:02 It does work in the locally built image, wonder what the difference is 2023-06-03 10:35:38 hmm, locally was built with ruby 3.0.0 intead of 3.1.0.. 2023-06-03 10:37:39 strange 2023-06-03 10:37:57 hah 2023-06-03 10:38:04 arm builders being gone means https://gitlab.alpinelinux.org/alpine/infra/docker/abuild-ci/-/pipelines/166701 doesn't work either 2023-06-03 10:38:06 my fault 2023-06-03 10:38:26 yup 2023-06-03 10:40:21 I shortcut my build system to speed things up, but that caused a .env file to be stale 2023-06-03 10:41:14 but that does not explain why bundler does not work 2023-06-03 12:27:17 high quality oem rhodium/gold plating 2023-06-03 12:30:26 heh 2023-06-03 12:35:35 lesigh: extconf.rb:52:in `
': uninitialized constant Gem (NameError) 2023-06-03 12:37:41 sure wish i knew what that means 2023-06-03 12:38:18 You and me both 2023-06-03 12:38:28 trying to manually run bundle install for gitlab 2023-06-03 12:38:35 some gems fail to install 2023-06-03 12:40:55 and crickets on google 2023-06-03 12:49:39 do they have a nice song at least 2023-06-03 17:39:34 ikke: kick ppc64le when you get a chance 2023-06-03 17:40:51 done 2023-06-03 17:41:05 thanks 2023-06-03 17:41:25 for some reason I always get denied using strace on ppc64le as root 2023-06-03 18:21:03 maybe a vm thing on ppc specifically 2023-06-03 18:23:53 yeah 2023-06-03 18:39:05 there's a sysctl that forbids ptrace 2023-06-03 18:39:25 apparently 2023-06-03 18:39:33 kernel.yama_ptrace_scope 2023-06-03 18:40:03 .yama. 2023-06-03 18:41:27 kernel.yama.ptrace_scope = 1 2023-06-03 18:43:23 https://www.kernel.org/doc/Documentation/security/Yama.txt 2023-06-03 18:43:30 you can try = 0 and rebooting (can't lower it otherwise) 2023-06-03 18:44:19 but that should not affect root according to that page 2023-06-03 18:49:51 It works on the host, not inside containers 2023-06-03 18:50:53 ok, this does not help lxc.cap.drop = sys_ptrace :P 2023-06-03 18:54:00 LOL 2023-06-03 18:54:02 :D 2023-06-05 02:24:10 think x86_64 is hung 2023-06-05 02:56:38 nope, passed eventually 2023-06-05 02:56:40 yees 2023-06-05 02:56:42 h 2023-06-05 05:24:23 interesting 2023-06-05 05:28:46 hmm 2023-06-05 05:29:37 4 nginx process in disk sleep 2023-06-05 05:30:11 what are they blocked on 2023-06-05 05:30:16 you mean Z right 2023-06-05 05:30:29 no, D 2023-06-05 05:30:33 Z is suspended 2023-06-05 05:30:44 ah 2023-06-05 05:30:47 right 2023-06-05 05:30:50 hmmmm 2023-06-05 05:30:59 The disk where the distfiles are located on 2023-06-05 05:31:04 i've never seen that so it almost sounds like a read on the disk 2023-06-05 05:31:09 yes 2023-06-05 05:31:10 and the disk failed and something got stuck in the kernel 2023-06-05 05:32:09 pool1 on /srv/distfiles 
type zfs (rw,xattr,noacl) 2023-06-05 05:33:57 a network attach scsi disk 2023-06-05 05:34:59 a zfs pool apparently 2023-06-05 05:36:42 rebooting the host 2023-06-05 05:36:52 (after upgrading) 2023-06-05 05:44:50 "import zfs pools..." 2023-06-05 05:44:50 also gonna reboot deu7/deu1/gbr2 for kernel/openssl 2023-06-05 05:46:19 hope the pool is not corrupted 2023-06-05 05:46:25 might be 2023-06-05 05:46:29 do you know how to check 2023-06-05 05:46:41 well, first need the vps to boot 2023-06-05 05:46:44 but not really 2023-06-05 05:46:47 have never used zfs 2023-06-05 05:46:57 (clandmeter set this up) 2023-06-05 05:46:59 zpool scrub pool1 2023-06-05 05:47:04 (takes time) 2023-06-05 05:47:15 zpool status -v pool1 2023-06-05 05:47:36 https://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6r6p/index.html for the docs 2023-06-05 05:48:05 i never used it either, but i did read a bunch about it in highschool for some reason 2023-06-05 06:52:34 ~ 2023-06-05 06:53:05 Apparently just waiting long enough helped 2023-06-05 06:53:37 😅 2023-06-05 06:59:02 something probably timed out 2023-06-05 07:41:57 buildlogs are working, so distfiles is mounted 2023-06-05 07:42:59 yep 2023-06-05 08:05:46 morning! any progress with? https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10798 2023-06-05 08:05:51 what are our options? 2023-06-05 09:41:56 ikke: could you also kick s390x 2023-06-05 09:56:07 we lost arm? any news? 2023-06-05 10:13:16 mps: no news 2023-06-05 10:14:14 psykose: " * Importing ZFS pool(s) ... [ ok ] " 2023-06-05 10:14:20 so it just took long I suppose 2023-06-05 10:14:25 ncopa: I've been thinking about it 2023-06-05 10:14:28 hmm 2023-06-05 10:14:43 ikke: but why would it randomly start importing 2023-06-05 10:14:48 unless you mean you rebooted before that went missing 2023-06-05 10:15:00 psykose: I have no idea 2023-06-05 10:15:08 :) 2023-06-05 10:15:56 https://tpaste.us/O8Jb 2023-06-05 10:16:27 you should do that upgrade 2023-06-05 10:17:18 can that be done online? 
2023-06-05 10:18:13 afaict yes 2023-06-05 10:18:14 https://docs.oracle.com/cd/E19253-01/819-5461/gcikw/index.html 2023-06-05 10:19:26 as long as it's not also / and using grub to boot it's fine 2023-06-05 10:19:50 yeah, it's just additional storage, the main OS is on ext4 2023-06-05 10:20:01 all good then 2023-06-05 10:21:02 ok, done 2023-06-05 10:21:05 it enabled draid 2023-06-05 10:22:25 ARM ETA midweek this week, was postponed due to circumstances 2023-06-05 10:25:39 so we cannot push the openssl security fixes release yet 2023-06-05 10:25:54 sadly not 2023-06-05 10:26:11 ncopa: I'll add my thoughts to that issue 2023-06-05 10:27:20 how long has arm been gone? 2023-06-05 10:27:49 almost a week? 2023-06-05 10:28:24 Host ICMP unreachable 6d 2h 6m 2023-06-05 10:29:15 not much we can really do about it except find another infra host for the machines 2023-06-05 10:29:37 which is generally quite expensive 2023-06-05 10:29:50 I saw azure support ARM, but not sure what kind of hw 2023-06-05 10:30:03 use free oracle vm 2023-06-05 10:30:27 not sure how that is useful considering the host and the hardware is separate 2023-06-05 10:30:34 unless you want to return the servers too 2023-06-05 10:31:02 which is "an option" but idk why we would return like $80k of servers to get some awful azure replacement thing when only the host is an issue 2023-06-05 10:31:58 Azure ARM64 machines are available only as a preview 2023-06-05 10:32:06 pj: yeah, saw that 2023-06-05 10:32:26 psykose: would only be as a backup / additional CI capacity 2023-06-05 10:32:47 well, the capacity presently is fine in theory 2023-06-05 10:32:52 as for the hw, it's ampere altra 2023-06-05 10:33:00 we have more capacity on arm than on any other arch 2023-06-05 10:33:13 for backups sure 2023-06-05 10:33:22 i don't think it's that serious 2023-06-05 10:33:29 the best solution is to find another host if anything 2023-06-05 11:44:16 ikke: pingpong for s390x :) 2023-06-05 12:00:37 thanks 2023-06-05 12:00:52 hm, 
that didn't send to alpine-commits 2023-06-05 12:01:45 It was build repo / aports-build 2023-06-05 12:11:57 ah 2023-06-05 12:22:08 we still have issues with s390x. did anyone investigate exactly what happens? 2023-06-05 12:22:53 which ones? 2023-06-05 12:23:05 the above one is not s390x specific 2023-06-05 12:23:11 buildrepo has randomly hung on any arch 2023-06-05 12:23:42 oh really 2023-06-05 12:24:02 do we have any way to reproduce it reliable? 2023-06-05 12:24:52 nope 2023-06-05 12:25:00 the hanging syscall is some futex wait 2023-06-05 12:25:09 but no idea which line of code does it in buildrepo 2023-06-05 12:27:32 a wild guess would be in the libmosquitto code 2023-06-05 12:28:27 does it affect stable branches or only edge? 2023-06-05 12:30:22 if you're going to pay for servers, surely hetzner is much cheaper than azure especially once you count bandwidth 2023-06-05 12:34:24 i think i saw it on 3.18 too 2023-06-05 12:34:32 but yeah i think that's a likely reason 2023-06-05 12:34:43 hmm 2023-06-05 12:34:52 actually, it can be between-packages 2023-06-05 12:34:55 like uhh 2023-06-05 12:35:16 msg trigger -> buildrepo -> abuild 1/15 -> buildrepo(loop) -> abuild 2/15 -> hang 2023-06-05 12:35:36 does it even use libmosquitto code after the initial trigger? 
2023-06-05 12:35:40 assuming it's not some state 2023-06-05 12:35:45 ncopa: possibly network hiccups 2023-06-05 12:35:56 and yeah it's always been network related 2023-06-05 12:36:02 during the arm issues 2023-06-05 12:36:17 in the middle of a build network went down, when it comes back up it's hanging on something 2023-06-05 12:36:36 that in itself is probably fine but there's a missing timeout 2023-06-05 12:36:47 on the action that fails 2023-06-05 13:08:09 i'd be excited to host an arm builder^^ 2023-06-05 13:11:41 (im running a small colo/dc in ch) 2023-06-05 13:15:31 small bathroom* 2023-06-05 13:15:35 but yes, +1 from me 2023-06-05 13:17:29 ehm, not anymore :D 2023-06-05 13:17:40 its proper now 2023-06-05 13:17:47 a whole bathhouse? 2023-06-05 13:18:50 actually you are onto something, i should open an onsen to repurpose the wasted heat 2023-06-05 13:19:02 see? massive brain 2023-06-05 13:19:26 but more seriously yes if you know how to host some servers and they have working v4+v6 networking and you respond it sounds fine 2023-06-05 13:19:49 that said it's not up to me 2023-06-05 13:34:14 sure, i should have everything 2023-06-05 14:40:04 nu, ch as in switzerland and not china? 2023-06-05 14:40:56 yes 2023-06-05 14:41:02 china is cn :) 2023-06-05 14:41:13 i'll probably visit the place at some point 2023-06-05 14:41:46 nu: wasn't this place also 10gbit networking 2023-06-05 18:12:41 yes, swissland 2023-06-05 18:13:46 yup, its 10 with an optional 25 2023-06-05 22:32:35 why dl-5 forces redirect from http to https 2023-06-06 07:57:13 probably the revproxy 2023-06-06 11:37:43 (alerts after 7 days) 2023-06-06 12:10:26 https://www.reddit.com/r/AlpineLinux/comments/141f7bk/build_servers_for_arm_still_down_for_the_last_5/ 2023-06-06 12:10:46 i wonder if we should make some official statement somewhere and report the status 2023-06-06 12:11:17 you should 2023-06-06 12:13:25 what is the current status? do we know what actually happened? 
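The missing-timeout failure mode discussed earlier (a build step wedging forever once the network drops mid-build) is usually mitigated by wrapping the risky step in a watchdog. A minimal sketch, with `sleep 5` standing in for the hanging operation:

```shell
# Give the wrapped step at most 1 second; `timeout` kills it and returns a
# non-zero status instead of letting the caller hang indefinitely.
if timeout 1 sleep 5; then
    echo "step finished"
else
    echo "step timed out"
fi
```

This prints "step timed out" after one second. In buildrepo's case the same idea would apply to whatever network call lacks a deadline, rather than to the whole build.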
2023-06-06 12:17:42 from what I recall: issues with routers 2023-06-06 12:18:26 or this might be other issue with arm before 2023-06-06 12:29:09 ncopa: ungleich didn't have time to investigate yet (personal circumstances / workload), so no official cause yet. 2023-06-06 12:29:25 ok 2023-06-06 15:30:38 psykose: interesting, dendrite no longer crashes 2023-06-06 15:30:49 Up 34 hours 2023-06-06 15:31:04 Might be related to a room I was in 2023-06-06 15:31:18 That room was upgraded, and since it appeared to be stable 2023-06-06 16:45:45 interesting 2023-06-06 18:01:59 time for the ole update asap eh 2023-06-06 20:24:27 gitlab-test runs gitlab 15.11.6 now 2023-06-06 20:28:57 \o/ 2023-06-06 20:30:14 signin works 2023-06-06 20:31:25 mrs work 2023-06-06 20:31:57 ship it 2023-06-06 20:34:35 tomorrow 2023-06-06 23:35:09 ikke: riscv also seems like it went poof 2023-06-07 04:41:33 buildrepo 2023-06-07 04:42:23 classic 2023-06-07 04:42:42 it's time to lab the git 2023-06-07 04:42:47 or do you have coffee to fetch first 2023-06-07 04:47:33 I do have to work first 2023-06-07 04:47:54 work also good 2023-06-07 08:05:02 good morning 2023-06-07 08:05:19 anyone heard anything about the arm machines? 2023-06-07 08:08:49 nope 2023-06-07 08:22:25 ^ 2023-06-07 08:28:31 :-/ 2023-06-07 08:33:26 rip alpine arm port 2016-2023 2023-06-07 08:34:00 memorial at akershus castle tomorrow 9pm 2023-06-07 14:45:09 aports-turbo now on 3.18 2023-06-07 14:45:13 thanks to ptrc 2023-06-07 14:45:37 Oh wow 2023-06-07 14:45:56 issue was new luajit changed semantics of some return type to be like.. lua5.3 2023-06-07 14:46:02 old 3.11 luajit was still like 5.1 2023-06-07 14:46:06 that broke the import 2023-06-07 14:46:12 (had figured out the ssl issue before) 2023-06-07 14:46:25 so it worked but never updated packages 2023-06-07 14:46:40 also fixed url encode chars 2023-06-07 15:09:50 psykose: is that require("foo") vs foo = require("foo")? 
2023-06-07 15:10:02 no 2023-06-07 15:10:04 oh ok 2023-06-07 15:10:33 os.execute returns bool instead of status 2023-06-07 15:10:36 old code did ==0 2023-06-07 15:10:49 ah, ok, so thinks it's failing 2023-06-07 15:11:42 https://www.lua.org/manual/5.3/manual.html#pdf-os.execute 2023-06-07 15:11:43 yep 2023-06-07 15:12:03 5.1: https://www.lua.org/manual/5.1/manual.html#pdf-os.execute 2023-06-07 15:12:23 luajit makes this much harder because it's.. 5.1 with an extension or two on paper 2023-06-07 15:12:26 but then also random shit like this 2023-06-07 15:12:37 fun™ 2023-06-07 15:12:50 ptrc: maybe there's a full list of differences somewhere? 2023-06-07 15:12:54 you should work on it more :-) 2023-06-07 15:12:57 would be nice to speed it up.. 2023-06-07 16:33:52 Gitlab has been upgraded to 15.11.6 :) 2023-06-07 16:37:17 webhook is broken 2023-06-07 16:37:22 for push 2023-06-07 16:37:33 doesn't send to msg 2023-06-07 16:37:40 manual msg trig works tho 2023-06-07 16:38:36 Hook execution failed: SSL_connect returned=1 errno=0 peeraddr=[2a01:7e00:e000:7eb:1::2]:443 state=error: certificate verify failed (self-signed certificate) 2023-06-07 16:39:05 well well 2023-06-07 16:40:46 it works from the gitlab container 2023-06-07 16:41:03 sounds like wrong resolution then 2023-06-07 16:41:10 i.e. it reaches the wrong endpoint 2023-06-07 16:41:11 for that ip 2023-06-07 16:41:12 resolves to the same address 2023-06-07 16:41:12 or w/e 2023-06-07 16:41:15 hrm 2023-06-07 16:41:28 / # curl -v https://webhook.alpinelinux.org 2023-06-07 16:41:29 * Trying [2a01:7e00:e000:7eb:1::2]:443... 2023-06-07 16:41:35 maybe was sporadic 2023-06-07 16:41:53 Happens when I test the webhook in the backend 2023-06-07 16:42:34 strange 2023-06-07 16:45:36 Maybe something ruby related? 2023-06-07 16:46:05 well 2023-06-07 16:46:09 hm 2023-06-07 16:46:14 can you do a request from ruby? 2023-06-07 16:46:16 in the container 2023-06-07 16:46:20 i.e. 
start a ruby shell 2023-06-07 16:46:24 do http from it with the utils 2023-06-07 16:48:20 works fine 2023-06-07 16:48:45 https://tpaste.us/MRJm 2023-06-07 16:49:55 bleh 2023-06-07 16:50:00 really not sure then :/ 2023-06-07 16:50:14 it is indeed probably the exact thing it uses being a different set of ca certs 2023-06-07 16:50:22 but, well, 'self-signed' gives me a clue that it's not 2023-06-07 16:50:28 cause that would just be failed sig or whatever 2023-06-07 16:50:38 self-signed implies not related to ca bundle but some other check, which is usually routing 2023-06-07 16:50:48 i.e. webserver returns wrong cert for other domain etc 2023-06-07 16:51:30 https://tpaste.us/EJXR 2023-06-07 16:52:40 nothing stands out to me 2023-06-07 16:58:08 This did switch from openssl1 to openssl3 2023-06-07 17:07:14 that should only affect insecure protocol stuff 2023-06-07 17:07:20 idk why it would return a self signed error 2023-06-07 17:07:40 hmm 2023-06-07 17:07:44 wireshark describes it as an unknown CA 2023-06-07 17:07:55 ah 2023-06-07 17:07:58 ok 2023-06-07 17:08:01 that's the issue then 2023-06-07 17:08:05 who is the ca, what is the cert? 2023-06-07 17:08:15 can you paste it? 
2023-06-07 17:08:30 well 2023-06-07 17:08:32 it works for us 2023-06-07 17:08:57 openssl s_client -showcerts webhook.alpinelinux.org:443 looks fine 2023-06-07 17:09:16 Trying to see if I can find it in the packet 2023-06-07 17:11:47 I miss the server_name extension in the client hello 2023-06-07 17:12:21 https://imgur.com/a/AehrndW 2023-06-07 17:15:07 need that 2023-06-07 17:15:20 i think 2023-06-07 17:16:07 client does tls 1.0 2023-06-07 17:16:29 uhh 2023-06-07 17:16:31 well 2023-06-07 17:16:36 that's all disabled 2023-06-07 17:16:37 hmm, or is that legacy, in the handshake it mentions 1.2 2023-06-07 17:16:53 i don't know why that gives something as useless as 'selfsignedcertificate' but it shouldn't do that 2023-06-07 17:16:59 there's workarounds but they shouldn't be used 2023-06-07 17:17:27 if it's cli then it's -legacy 2023-06-07 17:22:04 if I use curl, then I see the server_name extension in the packet 2023-06-07 17:22:56 The certificate itself is encrypted, so I cannot inspect it 2023-06-07 17:25:26 I also see requests to cloudflare failing with the same error 2023-06-07 17:26:29 something sounds broken in the client 2023-06-07 17:26:56 yes 2023-06-07 17:27:14 One option would be to revert to ruby 3.0 / alpine 3.16 2023-06-07 17:29:13 https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/commit/e67e1b53536ba6a7dba5afa2ff937e8c2ddae986 2023-06-07 17:35:38 I have a temporary work-around for the hooks 2023-06-07 17:39:10 my guess is 16 might fix that just because things are a bit newer 2023-06-07 17:39:31 Yeah, might be the case 2023-06-07 17:40:08 But wondering in what manner gitlab is fiddling with openssl that it would somehow prevent SNI from working 2023-06-07 17:42:00 idk tbh but there's no good workarounds except trying an upgrade and testing it in -test unless we get good at debugging 2023-06-07 17:42:01 :D 2023-06-07 17:42:22 I also have gitlab running locally 2023-06-07 17:46:59 Locally I get the error as well 2023-06-07 17:55:38 Wondering if 
it's feasible to upgrade the openssl gem in the image 2023-06-07 17:56:02 i think so 2023-06-07 17:56:12 shouldn't break stuff, but also might not fix it 2023-06-07 17:57:07 nothing interesting in 3.0.2/3.1 changelog 2023-06-07 17:57:09 but try 2023-06-07 17:58:16 Also trying alpine 3.18 2023-06-07 18:23:42 3.18 does not fix it 2023-06-07 18:26:29 rip 2023-06-07 18:26:37 does 3.1 openssl gem 2023-06-07 18:35:13 nope 2023-06-07 18:37:49 confirmed the request comes from the gitlab container 2023-06-07 18:38:01 (as opposed to gitaly for example) 2023-06-07 18:39:53 rip 2023-06-07 18:41:15 Hmm, I can also try 3.1-alpine3.16, to see if it's alpine / openssl or ruby 2023-06-07 18:42:36 that has a bunch more stuff too, like a very different ruby right 2023-06-07 18:42:40 but if it works that's fine 2023-06-07 18:43:02 No, ruby version is the same 2023-06-07 18:43:09 so changing just one part 2023-06-07 18:43:30 now I switched both to alpine 3.17, as to ruby 3.1 2023-06-07 18:45:39 the version yes but the build no 2023-06-07 18:45:58 right 2023-06-07 18:46:03 unless you mean you were using 3.1-alpine3.17 and not the apk add ruby one 2023-06-07 18:46:07 but the same with alpine3.18 2023-06-07 18:46:08 in which case yeah the same 2023-06-07 18:46:09 more or less 2023-06-07 18:46:20 yes, was using 3.1-alpine3.17 2023-06-07 18:46:35 ie, we have always been using upstream ruby images 2023-06-07 18:47:08 aha 2023-06-07 18:47:11 right 2023-06-07 19:22:49 3.1-alpine3.16 failed 2023-06-07 19:23:13 so it's not the alpine version 2023-06-07 19:25:02 that means ossl1.1 fails too 2023-06-07 19:25:54 It's still using the openssl 3.0.1 gem, but that should not matter I suppose 2023-06-07 19:26:30 That comes with the ruby 3.1 image 2023-06-07 19:27:29 ye 2023-06-07 19:41:29 guess that wednesday was a lie 2023-06-07 19:43:27 and here i was staying awake for 36 hours just in case they came back so i could fix anything that failed and get everything uploaded 2023-06-07 19:43:31 classic 
swiss networking 2023-06-07 19:43:46 it was 2023-06-07 19:43:50 around wednesday 2023-06-07 19:44:15 i don't get it 2023-06-07 19:44:23 how does this company manage like 8 datacenters 2023-06-07 19:44:28 but have 0 people to look at 1 networking issue 2023-06-07 19:56:55 where was the x86_64 dev container hosted? 2023-06-07 19:58:53 equinix 2023-06-07 19:59:03 heh 2023-06-07 19:59:04 expected 2023-06-07 19:59:29 oh, you mean dev.a.o? 2023-06-07 19:59:35 that's on nld8 or 9 2023-06-07 19:59:51 no, the big one 2023-06-07 19:59:58 with the epyc thing, also has riscv builder 2023-06-07 20:00:12 where i build chromium/electron a million times with zero issues 2023-06-07 20:00:13 :D 2023-06-07 20:00:15 yeah, thats equinix 2023-06-07 20:00:17 hehe 2023-06-07 20:17:06 ruby:3.0-alpine3.16 works 2023-06-07 20:17:08 so it's ruby 3.1 2023-06-07 20:17:36 nice 2023-06-07 20:17:48 which ruby 3.1 2023-06-07 20:18:05 ruby upstream 2023-06-07 20:18:11 3.1.x 2023-06-07 20:18:27 .3? 2023-06-07 20:18:43 ruby 3.1.4p223 2023-06-07 20:18:59 interesting 2023-06-07 20:19:48 nothing interesting in those 2023-06-07 20:20:10 note that using irb, it does work 2023-06-07 20:22:13 https://bugs.ruby-lang.org/projects/ruby-master/repository/git/revisions/0b303c683007598a31f2cda3d512d981b278f8bd 2023-06-07 20:22:16 https://github.com/ruby/openssl/issues/606 2023-06-07 20:22:28 looks like the issue 2023-06-07 20:23:00 perhaps applying https://github.com/ruby/openssl/pull/640 fixes it 2023-06-07 20:23:06 for the openssl gem 2023-06-07 20:23:21 or just https://github.com/ruby/openssl/commit/fc4629d246 2023-06-07 20:23:22 not sure 2023-06-07 20:23:49 https://github.com/ruby/ruby/commit/0b303c68 itself is not in release either 2023-06-07 20:24:10 is ruby 3.1 image on openssl3 and 3.0 on 1.1? 
that would be the easy explanation 2023-06-07 20:24:47 I tested 3.1 on alpine 3.16, which also failed 2023-06-07 20:25:26 And, I think the self-signed error is a symptom 2023-06-07 20:25:46 The fact that it doesn't send a server_name prevents the correct certificate from being returned 2023-06-07 20:25:59 perhaps 2023-06-07 20:31:34 yes, can replicate it 2023-06-07 20:31:39 echo | openssl s_client -connect webhook.alpinelinux.org:443 -noservername 2023-06-07 20:32:00 Verify return code: 18 (self-signed certificate) 2023-06-07 20:36:41 So, because no server_name is provided, the webserver returns a self-signed certificate 2023-06-07 20:36:45 so that message is correct 2023-06-07 21:05:38 ah yeah 2023-06-07 21:05:46 i remember seeing something similar 2023-06-07 21:05:55 happens when one has multiple hosts on one ip 2023-06-07 21:06:07 it's required to do sni then for anything to work obviously 2023-06-07 21:06:19 so yeah the bug is noservername passed by something 2023-06-07 21:07:02 yeah, sni / virtualhosts are quite common 2023-06-08 19:38:32 ikke: could you add pkgs-test record too? i noticed that there isn't one for the aports-turbo-test to work 2023-06-08 19:40:23 https://gitlab.alpinelinux.org/alpine/infra/linode-tf/-/merge_requests/69 2023-06-08 19:40:44 thumbs up 2023-06-08 19:42:33 done 2023-06-08 19:42:47 thanks 2023-06-08 19:43:00 now to somehow never need to use it again, all that work for nothing 2023-06-08 19:43:01 lol 2023-06-09 12:09:37 I'd like to create a ci job for lua-aports. for each specified LUAVERSION (eg 5.1, 5.2, 5.3 and 5.4) it should install lua-$luaver lua$luaver-filesystem and lua$luaver-busted 2023-06-09 12:09:40 and run make check 2023-06-09 12:11:47 i think you should stick to one version 2023-06-09 12:12:09 writing lua for all versions at once is quite painful 2023-06-09 12:12:33 unless you want to add checks for them and handle the differences, but that would have no real benefit? 
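The per-Lua-version CI job ncopa describes above could be a simple loop over the requested interpreters. The package names are the ones given in the chat; passing the interpreter to `make check` via a `LUA` variable is an assumption about the Makefile, and the commands are only echoed here as a dry run:

```shell
# Dry-run sketch of a matrix-style check across Lua versions; the apk and make
# invocations are printed, not executed.
for luaver in 5.1 5.2 5.3 5.4; do
    echo "apk add lua$luaver lua$luaver-filesystem lua$luaver-busted"
    echo "make check LUA=lua$luaver"
done
```

In actual GitLab CI this would more naturally be expressed as one job per version (for example via `parallel:matrix`), so a failure on one interpreter doesn't hide the others.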
2023-06-09 12:12:39 i'd like to know if it runs on them 2023-06-09 12:12:48 yes, and then what? 2023-06-09 12:12:57 if it doesn't i would say it makes no sense to fix 2023-06-09 12:13:02 ok 2023-06-09 12:13:11 so only a single version it is then 2023-06-09 12:13:12 it makes more sense to add 5.2 (current?), get the checks going 2023-06-09 12:13:24 then, change to 5.3, run checks/test etc, upgrade 2023-06-09 12:13:27 then I'd like to bump it to lua5.4 2023-06-09 12:13:29 yea 2023-06-09 12:13:40 but you start doing that (as you said before iirc) by doing checks on current first 2023-06-09 12:13:53 well, i guess it doesn't matter much 2023-06-09 12:14:01 you could also just make it 5.4 and go from there, up to you 2023-06-09 12:15:53 maybe it's not as bad as i'm thinking of for multiple version compat, but conceptually we don't need to allow more than one version (because we will only ever use it with one for our setup) 2023-06-09 12:15:58 for libraries it makes sense of course 2023-06-09 12:16:13 this is more of an internal tool 2023-06-09 12:16:55 well, it is a library too 2023-06-09 12:17:37 theoretically :-) 2023-06-09 12:24:03 iirc that should stop the spamming 2023-06-09 12:26:39 i have added some tests using busted 2023-06-09 12:26:44 to lua-aports 2023-06-09 12:27:36 good start! 2023-06-09 12:28:19 classic merge commits 2023-06-09 12:46:33 yeah, i need to figure out how to disable that 2023-06-09 12:47:41 You can set the merge strategy under the merge request settings page, but I usually adjust the default commit template for merge requests 2023-06-09 12:57:47 i changed the default to ff for the repo 2023-06-09 12:58:22 now looks like the usual https://img.ayaya.dev/44sCmB3GHk54 2023-06-09 12:59:04 github/fork/Ikke hehe 2023-06-09 13:02:09 ? 
2023-06-09 13:02:17 https://gitlab.alpinelinux.org/alpine/lua-aports/-/merge_requests/3 2023-06-09 16:25:15 ncopa: you could configure something like https://github.com/JohnnyMorganz/StyLua to get a uniform style if you want 2023-06-09 16:25:25 takes like a minute 2023-06-09 16:25:35 i am using stylua, manually 2023-06-09 16:25:42 aha 2023-06-09 16:25:49 should save the config 2023-06-09 16:26:09 i suppose I should make it execute on save 2023-06-09 16:26:33 i've made some good progress with lua-aports, and found a bug (parsing /etc/abuild.conf) while at it 2023-06-09 16:26:58 i also found out that lua-penlight is pretty good for convenience functions 2023-06-09 16:27:03 yes 2023-06-09 16:27:05 penlight is nice 2023-06-09 16:27:15 write file, create directories recursively etc 2023-06-09 16:27:42 i think next i'll create tests for aports/db.lua 2023-06-09 16:28:02 and finally tests for ap and buildrepo. those will likely be using kyua/atf 2023-06-09 16:29:45 psykose: writing tests is a good way to get familiar with a code base ;) 2023-06-09 16:30:05 it's the weekend here now though 2023-06-09 16:30:19 my usual experience is that writing tests is a good way to have to figure out how to write tests 2023-06-09 16:30:25 i don't know why my head refuses to understand it 2023-06-09 16:30:34 it just feels so.. like backwards and weird 2023-06-09 16:30:56 i do try when i find the time however 2023-06-09 16:31:08 was meaning to do some abuild ones 2023-06-09 16:31:35 i got a lot of help from chatgpt today 2023-06-09 16:31:56 chatgpt wrote many of the tests. I just had to fix the bugs :) 2023-06-09 16:32:41 tbh, one of my major regrets in my professional life is that I realized the value of tests way too late 2023-06-09 16:33:48 bdd style, interesting 2023-06-09 16:52:57 +1 to "I realized the value of tests way too late".... it is probably post-mortem late for me. 
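Since ncopa mentions running stylua manually and wanting to save the config, a checked-in `stylua.toml` would make the style reproducible for every contributor. The keys below are StyLua's documented options, but the values are assumptions for illustration, not the settings actually in use:

```toml
# stylua.toml — illustrative defaults, adjust to taste
column_width = 120
indent_type = "Spaces"
indent_width = 2
quote_style = "AutoPreferDouble"
```

With this committed, "execute on save" in the editor and a CI `stylua --check .` step would both agree on the same style.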
2023-06-10 15:04:58 Woohoo 🎉 2023-06-10 15:05:05 ncopa: psykose ^ 2023-06-10 15:25:17 nice 2023-06-10 15:43:57 still flapping it seems 2023-06-10 15:56:30 annoyin 2023-06-10 15:58:03 https://build.alpinelinux.org/buildlogs/build-3-17-armv7/main/zfs-lts/zfs-lts-5.15.116-r0.log is a github.com resolve failure 2023-06-10 15:58:09 so we need all the magic dns fixes again 2023-06-10 15:58:24 yes, same with bb 2023-06-10 15:58:57 wanted to check, but the connection is not stable enough 2023-06-10 15:59:51 can't wait to now have failing random network issues for days 2023-06-10 16:00:14 I think there is still some peer flapping 2023-06-10 16:00:18 they had issues with peering 2023-06-10 16:14:20 Nico confirmed that it's a flapping peer, but will have to continue tomorrow 2023-06-10 16:14:37 yep 2023-06-10 16:15:00 I did invite you to the matrix room, in case you are interested 2023-06-10 16:15:13 i don't even have a matrix account 2023-06-10 16:15:51 There exists one on chat.alpinelinux.org with your avatar 2023-06-10 16:17:37 is our dendrite so broken that even deleting the account doesn't work? 
2023-06-10 16:17:39 impressive 2023-06-10 16:18:22 (i explicitly pressed settings->delete account, or wherever it was) 2023-06-10 16:18:26 lol 2023-06-10 16:20:12 a quick google shows that dendrite doesn't support this 2023-06-10 16:20:32 well, deactivate should be 2023-06-10 16:20:33 It's a feature™ 2023-06-10 16:21:18 "You can check out any time you like, but you can never leave" 2023-06-10 16:22:05 :) 2023-06-10 16:26:33 how do we fix this dns stuff where www.google.com fails to resolve for the busybox test 2023-06-10 16:28:44 Need to know what the issue is first 2023-06-10 16:29:28 I'll try to setup mosh so that at least the connection stays 2023-06-10 16:30:34 maybe one day i'll succeed at doing that myself too 2023-06-10 16:32:24 aaaand its gone 2023-06-10 16:34:22 oh, I did already setup mosh on that server :) 2023-06-10 16:36:55 mosh helps a lot :( 2023-06-10 16:36:56 :) 2023-06-10 16:37:07 :( 2023-06-10 16:37:08 :) 2023-06-10 16:37:35 psykose: it might just be it failing to connect when the connection is down 2023-06-10 16:37:41 yeah 2023-06-10 16:37:47 sounds like it 2023-06-10 16:38:18 When the connection is there, at least on the host I can connect to google.com via ipv4 2023-06-10 16:39:54 looks completely gone now 2023-06-10 16:57:55 just comes back for like 30 second increments lol 2023-06-10 17:03:18 did you ever get to enabling that gitlab cache 2023-06-10 17:04:50 psykose: it should be active now 2023-06-10 17:04:56 ~ 2023-06-10 17:04:58 Haven't checked 2023-06-10 17:51:49 can't believe i wish they would plug the LTE modem back in so we can get a better isp 2023-06-10 17:51:59 (/joke) 2023-06-10 19:31:20 psykose: I'm sorry it takes a bit longer, I am currently sick and did not manage the handover before I got sick last week. 
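The intermittent DNS and connection failures above (github.com failing to resolve, www.google.com failing in the busybox test) are the classic case for retrying with backoff instead of failing the whole build on the first error. A minimal generic sketch of the pattern, not the actual builder code:

```python
import time

def retry(fn, attempts=4, base_delay=0.5):
    """Call `fn` until it succeeds, sleeping with exponential backoff
    between failures; re-raise the last exception once attempts run out.
    Useful for flaky operations like the DNS lookups described above."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)
```

For a truly flapping link this only smooths over short outages; multi-minute drops like the ones in this log still need the network fixed.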
2023-06-10 19:31:52 that's alright 2023-06-10 19:36:58 ikke: think buildrepo might also hang a few times in between this so you should check that 2023-06-10 19:37:11 when it's reachable that is 2023-06-10 21:12:09 did one pass across edge and 3.18 2023-06-10 21:16:36 was anything stuck? 2023-06-10 21:16:46 yes 2023-06-10 21:16:50 bleh 2023-06-10 21:17:17 well, at least main/ is done for 3.18 on armv7/aarch64 2023-06-10 21:17:22 that's most of the importance 2023-06-11 17:01:18 how's the aarch64 runners/builders? I see that tiny-cloud-3.0.0 is now available on edge for everything except aarch64... https://pkgs.alpinelinux.org/packages?name=tiny-cloud&branch=edge 2023-06-11 17:02:11 needs a backport to 3.18 before i can (finally) get the new cloud images out 2023-06-11 17:03:15 can't you not backport it because it has a million changes 2023-06-11 17:10:14 maybe "backport" isn't exactly the right word, but update the APK on stable-3.18 branch to not be an 3.0.0-rc ... of course once i've confirmed with ncopa that doing so won't negatively impact the experimental alpine-auto-installler stuff (most of the changes were for that anyways) 2023-06-11 18:22:07 yeah, the rc that's in stable-3.18 isn't working 100% anyways 2023-06-11 18:47:47 tomalok: the connection is still unstable, so the progress is slow 2023-06-11 18:49:19 they're all frozen presently too 2023-06-11 19:17:44 ikke & psykose - thanks for the update. just curious what the specs/reqs are for runners? i somehow expect it's not as simple as just spinning up a couple instances with connectivity, but ... 
ya never know 2023-06-11 19:18:29 temporarily no, it's not that simple 2023-06-11 19:18:42 in terms of long term theoretically it could be anything but that doesn't really make sense 2023-06-11 19:18:56 It does need at least some performance 2023-06-11 19:19:07 it should be able to build chromium in a decent amount of time 2023-06-11 19:19:35 We're also working out ideas to make the architecture a bit more decentralized 2023-06-11 19:19:53 looking at somewhere between a fraction of CPU and 128 core bare metal? ;) define decent amount? 2023-06-11 19:20:01 tomalok: it's hard to define 2023-06-11 19:20:04 the current one is 80 cores 2023-06-11 19:20:27 and 256G mem 2023-06-11 19:20:32 in aws it's $2k/mo for the 64 core one with the worst pricing 2023-06-11 19:20:51 do those 80 cores handle multiple jobs at a time? 2023-06-11 19:20:55 the servers are already donated, there's nothing to provide 2023-06-11 19:20:59 yeah it's all the arm builders 2023-06-11 19:21:41 each builder will do at most one build at a time 2023-06-11 19:21:55 but each arch and each release (and edge) is its own builder 2023-06-11 19:22:08 s/is/has/ 2023-06-11 19:22:09 thinking about extra capacity during outages and/or heavy build times... 2023-06-11 19:22:37 The current architecture isn't flexible with adding extra capacity 2023-06-11 19:22:41 at least for the builders 2023-06-11 19:22:46 CI would work just fine 2023-06-11 19:22:50 (nod) 2023-06-11 19:29:56 This is one part of what we need to adjust: https://gitlab.alpinelinux.org/alpine/abuild/-/issues/10114 2023-06-11 19:43:34 of many 2023-06-12 07:14:44 morning 2023-06-12 07:15:24 re tiny-cloud for 3.18, i think we can make an exception, and backport it to 3.18-stable.
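The one-builder-per-(release, arch) layout described above can be made concrete with a small sketch; the release and arch lists below are illustrative, not the authoritative set:

```python
# Sketch of the builder matrix: one builder process per (release, arch)
# pair, producing names like build-3-18-armhf and build-edge-aarch64 as
# seen on build.alpinelinux.org. Lists here are examples only.
RELEASES = ["edge", "3-18", "3-17"]
ARCHES = ["x86_64", "x86", "aarch64", "armv7", "armhf", "s390x"]

def builder_names(releases=RELEASES, arches=ARCHES):
    """Enumerate builder names for every (release, arch) pair."""
    return [f"build-{r}-{a}" for r in releases for a in arches]
```

Since each builder handles at most one build at a time, total throughput scales with the number of pairs, which is why a single flaky host stalls several builders at once.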
2023-06-12 10:30:13 algitbot: retry master 2023-06-12 10:31:18 didn't help with aarch64 builder 2023-06-12 10:32:55 mps: the network is still unstable 2023-06-12 10:33:28 https://imgur.com/a/Dzgx5YD 2023-06-12 10:33:29 I see, failed to fetch nsd pkg 2023-06-12 12:11:58 hm... network to arm machines is very unstable 2023-06-12 12:17:35 Yes, i already reported that to telmich 2023-06-12 12:17:51 See the image above 2023-06-12 12:20:25 armv7 buildrepo is deadlocked 2023-06-12 12:20:38 (gdb) bt 2023-06-12 12:20:38 #0 0xf7921012 in __clock_gettime64 () from /lib/ld-musl-armhf.so.1 2023-06-12 12:20:38 #1 0xf5e44c32 in ?? () from /usr/lib/libmosquitto.so.1 2023-06-12 12:20:38 Backtrace stopped: previous frame identical to this frame (corrupt stack?) 2023-06-12 12:20:48 its in libmosquitto 2023-06-12 12:21:57 but i lost connection now. wanted to get a backtrace with symbols 2023-06-12 12:22:47 Mosh is setup on the host 2023-06-12 12:23:02 So if you use mosh, your connection should remain 2023-06-12 12:24:40 seems like connection stays up, but it is only available for a few seconds every second minute or so 2023-06-12 12:25:54 ikke: I cannot use mosh over jumphost and the only way to login to arm is using jumphost 2023-06-12 12:26:43 ncopa: they have a BGP peer flapping 2023-06-12 12:26:57 Nico would look at it he said 2023-06-12 12:32:35 thanks.
I hope he feels better 2023-06-12 14:38:12 im working on lua-mosquitto and lua-mqtt-publish 2023-06-12 14:38:20 i think i may have a fix for the hangs 2023-06-12 15:04:26 what was it 2023-06-12 15:55:44 a design error 2023-06-12 15:56:41 https://github.com/ncopa/lua-mqtt-publish/blob/master/mqtt/publish.lua#L63 2023-06-12 15:56:54 it does not handle server disconnection properly 2023-06-12 15:59:05 aha 2023-06-12 15:59:52 you found it pretty fast :) 2023-06-12 15:59:53 good work 2023-06-12 18:47:13 trickier to write a good test for it 2023-06-12 18:53:07 have to cycle the arm builders again twice to get that fix in for buildrepo 2023-06-12 18:53:10 assuming they're reachable 2023-06-12 18:53:20 i cannot connect currently 2023-06-12 18:53:39 earlier i had connection for 10-15 seconds every 3 min or so 2023-06-12 18:54:06 i have only pushed the lua mqtt stuff to edge so far 2023-06-12 18:57:06 we will have problems uploading the packages also 2023-06-12 19:32:36 build-edge-{aarch64,armhf,armv7} now have the lua-mqtt-publish and lua-mosquitto update 2023-06-12 19:32:59 if any of those hang in buildrepo, please let me know 2023-06-12 19:33:19 i have not updated the build-3-18-* yet 2023-06-12 19:33:27 i wanted to test it in edge first a bit 2023-06-12 19:34:01 I'll monitor it as well 2023-06-12 19:34:14 the build-3-18-* may still hang 2023-06-12 19:34:25 nod 2023-06-12 19:35:08 looks like build-3-18-aarch64 currently is building linux-rpi 2023-06-12 19:35:15 and build-3-18-armhf php82 2023-06-12 19:35:29 sorry php81 2023-06-12 19:37:24 if build-edge-a* survives the night with hang, i'll backport the lua-mosquitto/mqtt-publish fixes to 3.18 branch 2023-06-12 19:37:33 without* hang 2023-06-13 07:03:42 ikke: could you cycle 3.18 2023-06-13 07:03:59 (twice on second hang too for the buildrepo fixes that are backported) 2023-06-13 11:24:08 many of the tests fail due to network issues 2023-06-13 11:29:13 build-3-18-armv7 should now have the updated lua-mqtt*/mosquitto
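The publish.lua bug above boils down to waiting forever for an ack that never arrives once the broker disconnects. The general shape of the fix is a bounded wait; here is a minimal Python sketch of that pattern, not the actual Lua change:

```python
import threading
import time

def wait_with_deadline(event, timeout, poll=0.1):
    """Wait for `event` (e.g. an MQTT publish ack) but give up after
    `timeout` seconds instead of blocking forever. Returns True if the
    event fired, False on timeout (e.g. the broker went away and the
    ack will never arrive)."""
    deadline = time.monotonic() + timeout
    while not event.is_set():
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        event.wait(min(poll, remaining))
    return True
```

Anything built on top (like a publish helper) can then surface the timeout as an error instead of deadlocking the whole buildrepo process.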
2023-06-13 11:48:57 I think we should create a site first in netbox for the new server 2023-06-13 11:49:26 im creating a device, and was wondering what site i should put it in... 2023-06-13 11:49:46 hmm, we could use private2-eindhoven 2023-06-13 11:49:55 I'll rename it later 2023-06-13 11:50:02 or now 2023-06-13 11:50:45 created here: please help adjust the data: https://netbox.alpin.pw/dcim/devices/46/ 2023-06-13 11:51:46 working on it 2023-06-13 14:23:07 im stopping build-3-18-aarch on che-bld-1 2023-06-13 15:10:53 i guess the network is a bit better? 2023-06-13 15:10:56 :D 2023-06-13 15:12:35 build-edge-aarch64 should be up and running now 2023-06-13 15:16:16 im working on build-3-18-armhf now 2023-06-13 15:17:04 not sure if it makes sense, but we might need to be able to limit the amount of jobs to prevent OOM 2023-06-13 15:21:38 argh... something is broken 2023-06-13 15:21:47 zlib gives bad signature 2023-06-13 15:22:01 for edge aarch64 2023-06-13 15:23:33 ok i think we only need to puge cdn cache 2023-06-13 15:23:38 purge* 2023-06-13 15:23:42 Ok, I can do that 2023-06-13 15:24:06 for v3.18/main/aarch64/zlib-* 2023-06-13 15:24:11 possibly bzip2* 2023-06-13 15:25:44 done 2023-06-13 15:25:54 repo-tools fastly purge pkg --release v3.18 --origin zlib --arch aarch64 2023-06-13 15:26:06 (also for bzip2) 2023-06-13 15:26:29 thanks 2023-06-13 15:30:17 should we let equinix know that we have something for the interim? 2023-06-13 15:35:56 how is the network status for che-bld-1 now? 2023-06-13 15:36:02 it seems to be improved? 2023-06-13 15:36:09 maybe not 2023-06-13 15:36:35 1 / 769 (0%) 2023-06-13 15:36:56 just annoying to re-build everything that was built last day 2023-06-13 15:36:57 https://imgur.com/a/WvpREg3 2023-06-13 15:37:44 thanks 2023-06-13 15:38:19 ok in continue with build-edge-armhf 2023-06-13 15:38:25 i* 2023-06-13 15:53:11 this is just super annoying. build-3-18-armhf on che-* is done building.
the question is what takes longest time: upload files from che-* or rebuild 70 packages on new builder 2023-06-13 15:53:24 good question 2023-06-13 15:58:57 taking my chances with uploading from che right now 2023-06-13 16:49:21 build-3-18-armhf and build-edge-armhf are moved to temp new server 2023-06-13 21:24:34 ikke: dont forget to set the barrier thingy 2023-06-13 21:28:46 Oh right 2023-06-13 21:32:14 have to restart armhf after that too for the current build 2023-06-13 21:37:16 (since it's stuck on rust) 2023-06-13 22:06:17 i killed rustc on build-edge-armhf and restarted it 2023-06-13 22:06:23 not sure why it hanged? 2023-06-13 22:09:07 missing kernel sysctl config from above 2023-06-13 22:10:16 mostly all written in https://gitlab.alpinelinux.org/alpine/aports/-/issues/14667 2023-06-13 22:29:04 yeah, all written in there 2023-06-13 22:29:17 every armhf instance needs that setting on the kernel (host for lxc, vm kernel for vms, etc) 2023-06-13 22:29:29 it gets lost every time anyone touches anything for some reason 2023-06-14 03:17:36 ikke: edge-aarch64 and edge-armhf are stuck (latter probably still needs sysctl) 2023-06-14 04:38:49 I've configured the sysctl 2023-06-14 04:43:01 And the network for che-bld-1 is stable again as well 2023-06-14 08:08:25 so the temp arm server is up and running, and the che arm servers are also back. how do we want to proceed? 2023-06-14 08:09:00 currently build-{3-18,edge}-{aarch64,armhf} are running on the temp server 2023-06-14 08:09:08 armv7 is running on che server 2023-06-14 08:10:34 do we want to move over build-{edge,3-18}-armv7 to temp server? 2023-06-14 08:13:37 I guess that depends on how we want to proceed 2023-06-14 08:15:58 that is my question. how do we want to proceed?
2023-06-14 08:16:38 im copying over the built packages from che build-edge-aarch64, so we dont need to rebuild everything again 2023-06-14 08:20:37 We don't have the capacity to run both builders and CI on the backup server 2023-06-14 08:24:52 i copied built packages from che. previously build-edge-armhf had 357 packages to build. now its 97 2023-06-14 08:25:28 i suppose my main question is: do I move over build-edge-armv7 and build-3-18-armv7? 2023-06-14 08:25:29 Maybe we keep using che for now and keep the other server as backup? 2023-06-14 08:25:33 ok 2023-06-14 08:25:43 The che server had more resources 2023-06-14 08:26:19 so i move back build-{3-18,edge}-{aarch64,armhf} to che? 2023-06-14 08:26:33 or we keep edge on the temp server? 2023-06-14 09:36:23 i moved back build-3-18-armhf and build-3-18-aarch64 to che 2023-06-14 09:43:32 👍 2023-06-14 09:52:54 currently only build-edge-aarch64 and build-edge-armhf run on the temp server. once the builds are done, and builders idle, i'll move them back to che 2023-06-14 13:51:35 i moved build-edge-armhf back to che 2023-06-14 15:09:34 Crazy question, but I have a $$ job that requires NHRP. You all use opennhrp, or Quagga, or FRR? 2023-06-14 15:11:45 quagga as part of dmvpn 2023-06-14 15:11:54 +1 thanks. 2023-06-14 15:15:46 I have to build a L2 network across 2 continents, and both endpoints are behind IPv4 NAT. VxLAN, of course, but crossing the NAT boundary is "fun" 2023-06-14 15:16:08 yup, that's not trivial without stun/turn 2023-06-14 15:16:40 unless you can do port forwarding 2023-06-14 15:17:58 nope. Think of both endpoints as "hotel rooms" - they are not hotel rooms, but basically the same 2023-06-14 15:18:31 Then you need something in between 2023-06-14 15:20:00 yeah. I'll look further into TURN/STUN - that is possible; but I think the NHRP solution is more "elegant" - as tteras would say. 2023-06-14 15:20:17 but doesn't that require a direct connection?
2023-06-14 15:21:01 no side can initiate the connection 2023-06-14 15:21:32 No. NHRP + NAT + ipsec is on port 4500. You hit the hub, and the spokes can figure it out 2023-06-14 15:22:07 right, the hub is the intermediary then 2023-06-14 15:22:29 +1 2023-06-14 15:43:51 ikke: can you please announce the release on fosstodon: https://alpinelinux.org/posts/Alpine-3.15.9-3.16.6-3.17.4-3.18.2-released.html 2023-06-14 15:43:59 ncopa: absolutely 2023-06-14 15:44:06 thank you! 2023-06-14 15:44:29 im doing the celebrate part now \o/ 2023-06-14 15:45:38 https://fosstodon.org/@alpinelinux/110543341697622374 2023-06-14 18:03:42 it looks like build-edge-aarch64 is outputting stuff to the wrong place? 2023-06-14 18:03:45 ( https://build.alpinelinux.org/ ) 2023-06-14 18:09:51 yeah 2023-06-14 18:10:56 it is currently running on a temp arm server. The plan is to move it back to che-bld-1 once its idling 2023-06-14 18:11:14 maybe i should just switch it back tomorrow 2023-06-14 18:18:45 it's currently stuck i think 2023-06-14 18:18:46 isn't it 2023-06-14 18:18:48 aarch64 that is 2023-06-14 21:45:02 i think the new-setup arm builders are broken in some way, like they don't pull git after failing and restarting 2023-06-14 21:45:10 as well as that message issue above 2023-06-14 21:47:06 which arch? 2023-06-14 21:47:36 from what I understand from ncopa, only one builder is on the new box 2023-06-14 21:47:46 aarch64 2023-06-14 21:47:47 dunno 2023-06-14 21:48:03 after retry it was still building the old apkbuild a few times 2023-06-14 21:50:51 also the logs don't seem to upload 2023-06-14 21:50:58 i guess that one makes sense 2023-06-14 21:53:37 it also got stuck again so you can jiggle it 2023-06-14 22:02:04 algitbot: ping 2023-06-14 22:05:01 upgrades successful 2023-06-14 22:11:09 who pulled the plug? 
2023-06-14 22:11:11 :) 2023-06-15 02:10:43 was me updating as the success above says 2023-06-15 06:23:21 also the edge aarch64 uploads don't work presently 2023-06-15 06:23:24 should move the builders back 2023-06-15 06:34:38 There was some config missing, I've added it, but maybe have to restart the builder 2023-06-15 06:36:00 perhaps 2023-06-15 06:36:46 i forget how the build looks like from gcc's perspective 2023-06-15 06:36:54 by default it should not do color when stderr is not a terminal 2023-06-15 06:36:57 yet all the logs have color 2023-06-15 06:37:11 i wonder if the way we wrap it somehow makes it have a terminal 2023-06-15 08:36:47 ikke: s390x ran out of space so it's stuck 2023-06-15 08:53:45 i can delete stuff 2023-06-15 09:03:49 should be fixed now 2023-06-15 09:04:08 i wonder if we should just delete the build logfile after uploading it 2023-06-15 09:10:17 don't see why not 2023-06-15 09:11:29 should fix the logs getting dumped into build.a.o mqtt too 2023-06-15 09:12:06 also have to restart after fixing the space 2023-06-15 09:48:07 network seems stable now 2023-06-15 09:48:18 ci hasn't flapped or anything 2023-06-15 10:13:19 finally. this means time to upgrade linux-edge and some other pkgs 2023-06-15 10:19:40 access over WG to arm containers isn't yet solved? 
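The idea floated above of deleting the build logfile once it has been uploaded is easy to get right if the removal is gated on the upload result. A hedged sketch where `upload` stands in for the real transfer step (the actual upload mechanism isn't shown in this log):

```python
from pathlib import Path

def upload_then_delete(logfile: Path, upload) -> bool:
    """Upload a build log, then remove the local copy only if the upload
    succeeded, so a failed upload never loses the log. `upload` is a
    hypothetical callable returning True on success (e.g. wrapping
    scp/rsync); it is not part of the real builder scripts."""
    if not upload(logfile):
        return False
    logfile.unlink()
    return True
```

Gating on success matters precisely in situations like this log's flaky network, where uploads routinely fail mid-transfer.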
2023-06-15 10:19:57 over VPN 2023-06-15 10:21:17 There is no dmvpn 2023-06-15 10:22:05 this makes it harder for me to work 2023-06-15 10:22:18 yes, but not a lot we can do about it 2023-06-15 10:22:39 /o\ 2023-06-15 10:22:56 linux does not support multipath gre over ipv6 2023-06-15 10:23:23 ah yes 2023-06-15 10:23:44 and they have only ipv6 2023-06-15 10:24:28 so there is a direct ipv6 address you can connect to 2023-06-15 10:24:36 but if you don't have ipv6 that still needs something in between 2023-06-15 10:26:03 yes, for now I'm using one of my dev (private) machines as jumphost 2023-06-15 10:26:40 bad thing is mosh doesn't support jumphost 2023-06-15 11:16:07 3.18-s390x also has to be restarted 2023-06-15 11:24:29 how so? 2023-06-15 11:24:35 it seems to be building? 2023-06-15 11:34:15 well 2023-06-15 11:34:24 it was literally in git pull for three hours 2023-06-15 11:34:26 and now it's building 2023-06-15 11:34:34 no exaggeration 2023-06-15 11:37:16 I just opened htop, nothing more 2023-06-15 11:47:00 :) 2023-06-15 12:02:44 ikke: should fix the aarch64 uploads 2023-06-15 12:03:00 for edge 2023-06-15 12:49:34 restart mqtt-exec.aports-build 2023-06-15 12:49:38 See if that helps 2023-06-15 13:34:03 don't think it worked 2023-06-15 13:35:43 I set some options for mqtt-exec afterwards, so need to restart it again 2023-06-15 13:37:14 :/ 2023-06-15 13:45:13 hehe 2023-06-15 13:45:25 It's flapping again 2023-06-15 13:58:53 Oh no 2023-06-15 15:07:44 ncopa: did you add an ssh key for build-edge-aarch64 to dl-master? 2023-06-15 15:08:23 I suppose not 2023-06-15 15:08:27 that's why it cannot upload 2023-06-15 15:21:46 probably not 2023-06-15 15:21:52 let me fix that 2023-06-15 15:24:50 should be fixed now 2023-06-15 15:33:07 ok, thanks 2023-06-15 15:47:54 ncopa: shouldn't the key be added to the buildozer user?
2023-06-15 16:12:26 I've added a key to dl-master and distfiles, so uploading packages and buildlogs should work 2023-06-15 19:31:34 I can ping it, but not ssh into it 2023-06-15 19:32:57 It seems to be stable again for a bit 2023-06-15 19:51:03 nld9-dev1 seems to be oom 2023-06-15 19:51:09 gonna reboot it 2023-06-15 19:55:45 yep 2023-06-15 19:55:49 hit 2 webkits at once, classic 2023-06-15 19:55:55 just don't restart 3.18 and should be ok 2023-06-15 19:56:04 euh 2023-06-15 19:56:08 the host is ookmj 2023-06-15 19:56:13 that build system is very annoying because just using clang would fix it, but there's like no way to do it 2023-06-15 19:56:24 I cannot login to the host 2023-06-15 20:00:00 it's back 2023-06-15 20:02:30 no stale dependencies on edge 2023-06-15 20:03:09 qt5-qtwebengine dependencies on 3.18, removed them 2023-06-15 20:05:33 curious, there was something taking constantly a minimal of ~20G of ram on that host 2023-06-15 20:07:18 I've lost some weight so I'm no longer ~20G ;-) 2023-06-15 20:07:39 minimal: I was waiting for you to reply :P 2023-06-15 20:08:53 so I'm unlike the Spanish Inquisition, not unexpected lol 2023-06-15 20:21:34 correct 2023-06-16 05:50:27 lots of spam issues were created on the council project :/ 2023-06-16 05:50:36 luckily just 2 users, who I've yeeted 2023-06-16 07:22:10 im moving build-edge-aarch64 back to che now 2023-06-16 08:28:07 build-edge-aarch64 moved back to che-bld-1 2023-06-16 16:12:28 nice 2023-06-16 16:40:37 psykose: need to reboot / upgrade che-bld-1, waiting for it finish building 2023-06-16 16:40:52 should be soon 2023-06-16 16:41:59 That should fix the iptables issue 2023-06-16 18:00:16 going to reboot che-bld-1 2023-06-16 18:56:21 that's a long reboot 2023-06-16 18:56:37 it's already back 2023-06-16 18:56:46 is it 2023-06-16 18:56:49 missing on build.a.o 2023-06-16 18:56:50 but the aarch64 edge builder does not receive an ipv6 address 2023-06-16 18:57:02 was just looking at it 2023-06-16 19:01:10 fixed 
2023-06-16 19:01:28 ~ 2023-06-16 19:01:41 is that a flag? 2023-06-16 19:01:52 not sure tbh 2023-06-16 19:01:56 k 2023-06-16 19:02:05 maybe a tuturu 2023-06-16 19:16:05 I need to upgrade che-ci-1 as well, but I don't want to break the network again 2023-06-16 19:16:13 There are some things that do not restore properly on boot 2023-06-16 20:08:11 ikke: is there no captcha on sign up? 2023-06-16 20:09:59 There was, but people were not happy with it 2023-06-16 20:17:19 because? 2023-06-16 20:44:33 Recaptcha 2023-06-16 20:58:33 hcaptcha is less annoying, right? 2023-06-16 21:00:23 does gitlab support it? 2023-06-16 21:11:43 don't think so 2023-06-16 21:11:53 not ootb that is 2023-06-16 21:12:02 you can probably hook anything up with too much work 2023-06-16 21:25:04 i don't really care about 'people complained' too much tho 2023-06-16 21:25:11 after deleting spam for like 10 hours so far 2023-06-16 21:25:27 maybe they should get 100k issues opened on their gitlab instance so they know what it's like first :') 2023-06-17 06:40:53 count so far in the past 9 hours: probably deleted the same spam account made & remade & remade roughly 150 times 2023-06-17 07:25:50 seems blocking a few email addresses has somewhat stopped that 2023-06-17 07:30:55 How do you block e-mail addresses? 2023-06-17 07:31:02 or just block the account instead of deleting?? 
2023-06-17 07:31:28 you can restrict signups from certain email addresses with a regex 2023-06-17 07:31:37 what would blocking the account accomplish 2023-06-17 07:31:53 except keep around thousands of spam accounts 2023-06-17 07:32:06 they make the next one when the sign in fails 2023-06-17 07:32:09 (i tested) 2023-06-17 07:32:17 and roll the username number or w/e 2023-06-17 07:32:41 so you just get allcinema allcinemaa allcinemaaa allcinema4 2023-06-17 07:32:42 etc 2023-06-17 07:32:53 rite 2023-06-17 07:33:15 some of them are stupid and it would probably work but most spam 'waves' that aren't just one account all roll 2023-06-17 07:33:30 ah I see, blocked domains for sign-up 2023-06-17 07:33:37 and e-mail restrictions 2023-06-17 07:33:39 + blocked emails that takes a regex 2023-06-17 07:33:47 (it's an actual regex, one-per-line doesn't work) 2023-06-17 07:35:10 seems dead now after deleting it for hours so i guess those emails worked 2023-06-17 14:43:37 spam continues 2023-06-17 19:52:04 indeed 2023-06-17 19:52:22 what if we just pressed on on the captcha button 2023-06-17 20:14:51 Depends on if these spam waves are fully automated or not 2023-06-17 20:21:09 I was thinking about auto suspending accounts that create a lot of issues in a short time 2023-06-17 20:30:01 can work as long as we log the suspend somewhere to a notify 2023-06-18 00:58:17 I'd rather not receive 700+ spam emails 2023-06-18 01:12:03 psykose: https://lists.alpinelinux.org/~alpine/devel/%3C2S4UGX2K2XUWW.383MIJ9QU1RGG%40unix.is.love.unix.is.life%3E#%3CCAO54GHC8+hd4MWoLXOFV-hU5VXfE_RRfu-eOvTTrV9OuZF=WhA@mail.gmail.com%3E 2023-06-18 01:13:05 what about it 2023-06-18 01:13:23 pj: http://flights.google.com/ 2023-06-18 01:14:39 psykose: https://live.staticflickr.com/4214/34988479533_2306a240bc_o.jpg 2023-06-18 08:58:34 dl-master will be rebooted 2023-06-18 13:40:39 Date and time: 2023-06-19 01:15 CEST, so monday night 2023-06-18 15:22:08 sus: https://gitlab.alpinelinux.org/filmactual8 2023-06-18 
17:06:16 psykose: apparently there is an option to enable an external spamcheck server (which we could provide ourselves), but it's under-documented and the component they provide is proprietary / obfuscated 2023-06-18 19:30:18 psykose: did you yeet some users? 2023-06-18 20:06:01 yeah 2023-06-18 20:06:36 every filmactual* is spam like that one 2023-06-18 20:07:49 did i hit an actual person somehow 2023-06-18 20:08:32 no, was just suddenly missing users 2023-06-18 20:08:43 ie, was watching the latest user list 2023-06-18 20:08:46 ah 2023-06-18 20:08:49 yeah i clean it up daily 2023-06-18 20:08:56 I've been removing as well today 2023-06-18 20:09:03 But mostly when there is spam 2023-06-18 20:09:15 they usually have a pattern and it's like the same similarish usernames every time 2023-06-18 20:09:21 right 2023-06-18 20:12:11 "1allcinema" :-) 2023-06-18 20:15:23 I enabled recaptcha, let's see if we get complaints 2023-06-18 20:15:45 and likewise, if it actually helps against the spam 2023-06-18 20:17:26 yeah 2023-06-18 20:17:52 the most comprehensive thing i can think of is requiring confirmations for signups and nonconfirmed just gets deleted after 30days, with a textbox of 'why are you signing up' 2023-06-18 20:17:59 the downside is..
lag to confirm for people that sign up 2023-06-18 20:18:02 no perfect solutions 2023-06-18 20:18:05 yeah 2023-06-18 20:18:26 i don't think it would be too much work on our side given volume of confirms but i imagine the lag would get some complaints 2023-06-18 20:19:01 Especially if you want to report some issue 2023-06-18 20:19:10 can pull the wind right out of your sails 2023-06-18 20:19:41 yep :/ 2023-06-18 20:25:10 There are also a large number of users who never signed in 2023-06-18 20:26:21 30+ pages 2023-06-18 20:34:41 all of those sound like an easy delete via api 2023-06-18 20:34:55 unless there's some way to use gitlab without signing in once that i don't know of 2023-06-18 20:35:02 for notifications etc 2023-06-18 20:35:11 but i'd imagine even a token requires a sign in 2023-06-18 20:35:29 Yes 2023-06-18 23:14:33 well 2023-06-18 23:14:41 assuming it's actually enabled, they can still sign up fine 2023-06-18 23:15:05 yeah, it is 2023-06-18 23:15:07 welp 2023-06-18 23:15:27 another movie issue spammer an hour ago 2023-06-19 04:38:40 psykose: I have a feeling it's someone(s) creating these accounts 2023-06-19 04:38:50 so things like recaptcha don't really help 2023-06-19 04:52:09 that would explain it but they must be.. really dedicated 2023-06-19 04:53:28 sigh 2023-06-19 04:55:50 i also don't understand the usecase 2023-06-19 04:55:59 what is the point of opening an issue with 'watch movie here' to link to someone 2023-06-19 04:56:07 they could just link the same link where they would link the issue 2023-06-19 04:56:18 at least the crypto scams are crypto scams 2023-06-19 04:56:21 or ci jobs.. ci jobs 2023-06-19 04:57:04 I assume some form of SEO link building? 2023-06-19 04:57:56 hmm 2023-06-19 04:58:04 i'll admit i don't know how that works really 2023-06-19 04:58:17 but since it's publically visible it might affect it indeed 2023-06-19 05:00:03 does gitlab add rel="nofollow" to external urls? 
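The blocked-email setting discussed earlier takes a single regex rather than one pattern per line, so recurring spam names have to be folded into one alternation. A sketch with hypothetical patterns modelled on the account names seen in this log (allcinema, filmactual, and their numbered variants):

```python
import re

# One combined regex, since GitLab's email-restriction field accepts a
# single expression; patterns here are illustrative, not the real list.
BLOCKED = re.compile(
    r"^[0-9]*(allcinema|filmactual)[a-z0-9]*@",
    re.IGNORECASE,
)

def is_blocked(email: str) -> bool:
    """True if the sign-up email matches a known spam-name pattern."""
    return BLOCKED.search(email) is not None
```

As the chat notes, this only stops the lazy waves; spammers that roll entirely new names each time need other measures.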
2023-06-19 05:01:30 nofollow noreferrer noopener 2023-06-19 05:01:46 https://github.com/leitbogioro/Tools 2023-06-19 05:01:49 example link from an issue 2023-06-19 05:02:17 so should not help for that 2023-06-19 05:02:37 though not sure if those spammers are aware 2023-06-19 05:03:51 a third of the accounts don't even confirm the email 2023-06-19 05:03:59 and another third+ never post anything 2023-06-19 05:20:18 They could subscribe to issues though 2023-06-19 05:21:28 yeah 2023-06-19 09:15:20 the flagged page is just endless comedy https://img.ayaya.dev/iJ9yY4sI7WFG 2023-06-19 13:18:35 guess the captcha didn't help whatsoever 2023-06-19 13:18:39 same endless spam like always 2023-06-19 13:21:17 That was my suspicion 2023-06-19 13:23:21 sadness emoji 2023-06-19 13:26:33 it means there's some actual spam house people doing this lol 2023-06-19 13:26:40 which is even more weird 2023-06-19 13:26:48 that or recaptcha is just that easy to bypass 2023-06-19 13:27:56 Yup 2023-06-19 13:30:40 I've heard about some service for solving captchas 2023-06-19 13:32:22 hmm 2023-06-19 13:32:29 yeah it could be that, one layer off 2023-06-19 13:32:39 not people spamming directly but a bot using people to solve captchas 2023-06-19 13:32:41 heh 2023-06-19 13:34:33 yeah realtime captcha solving farmed out to lowly paid humans has been around for quite a while 2023-06-19 13:35:43 in the same way that Google uses "us" via recaptcha to train its image recognition engines etc 2023-06-20 15:50:22 arm is hard-down atm :( 2023-06-20 15:50:40 the net like before or in some other way 2023-06-20 15:52:52 ? 2023-06-20 15:53:12 hard down like always or is something different 2023-06-20 15:53:25 network, yes 2023-06-20 15:57:23 unlucky 2023-06-21 05:40:19 psykose: can you check why the mirror status is not updated?
The cron does run and I don't see any errors 2023-06-21 05:40:33 running it again also does not fix it 2023-06-21 12:12:32 Seems to be a bigger issue at ungleich 2023-06-21 15:38:48 why don't they replace this bad router 2023-06-21 15:44:24 It's not the router that's the issue 2023-06-21 15:47:59 what is then 2023-06-21 15:53:27 Not sure if it's the same time, but last time it was peering issues 2023-06-21 16:18:07 whatever it is it is not good 2023-06-22 03:25:41 ikke: s390x lost networking for a bit so you have to kick buildrepo on 3.15-3.17 2023-06-22 05:25:22 done 2023-06-22 05:26:40 thanks 2023-06-22 05:26:47 now here's hoping arm comes back 2023-06-22 05:48:58 ungleich did recover yesterday, but not our arm servers 2023-06-22 05:49:29 transference of plague 2023-06-22 09:00:15 are there chances/options to move arm builder back to equinix 2023-06-22 09:00:28 builders* 2023-06-22 09:01:08 was already rejected 2023-06-22 09:01:48 by whom? 2023-06-22 09:02:25 equinix 2023-06-22 09:02:31 ah 2023-06-22 09:03:55 then remove them from sponsor list :-> 2023-06-22 09:04:25 we have a bunch of other stuff from equinix 2023-06-22 09:04:26 small revenge ;) 2023-06-22 09:04:51 ah, what ones 2023-06-22 09:05:30 the nld- stuff iirc 2023-06-22 09:07:03 not easy to live beggars life 2023-06-22 09:08:16 ACTION starting to hate open/free source movement, i.e.
where it ending 2023-06-22 09:09:11 mps: the arm servers were provided by arm and they used their hosting space at equinix 2023-06-22 09:09:46 aha, now understand 2023-06-22 09:10:29 so main problem is hosting, not machines 2023-06-22 09:10:46 Correct 2023-06-22 09:12:05 idk how this works, but maybe we should have ARM Ltd on sponsors list 2023-06-22 09:12:36 oh, maybe also starfive now ;) 2023-06-22 09:14:11 kernel 6.3.9 has a fix for nouveau and I can't upgrade it for alpine because arm machines are unreachable 2023-06-22 09:14:29 this makes me somewhat 'bad' 2023-06-22 09:17:37 sure you can 2023-06-22 09:17:43 we already know it most likely builds :p 2023-06-22 09:17:47 if not, no real issue 2023-06-22 09:20:12 I can but don't like to do such things blindly 2023-06-22 10:41:10 ikke: the amount of spam is getting quite unbearable honestly 2023-06-22 10:42:12 psykose: like how 2023-06-22 10:42:23 I don't get a chance to see what's happening :)O 2023-06-22 10:42:37 every day there's like over 100 accounts roughly of which half start spamming issues in some place lol 2023-06-22 10:42:57 they pick a random repo, usually they make their own or fork something 2023-06-22 10:43:07 latest one was https://gitlab.alpinelinux.org/acf/acf-alpine-baselayouthttps://gitlab.alpinelinux.org/acf/acf-alpine-baselayout 2023-06-22 10:43:10 https://gitlab.alpinelinux.org/acf/acf-alpine-baselayout * 2023-06-22 10:43:19 (already deleted all the issues) 2023-06-22 10:43:28 https://gitlab.alpinelinux.org/imhocuno 2023-06-22 10:43:57 yeah, that was from just now 2023-06-22 10:44:08 but then i checked the real repo and it had like 500 movies as always 2023-06-22 10:44:42 they generally do hundreds per account but some of them do like 4-5 issues 2023-06-22 10:45:48 wonder if we can do something with https://imgur.com/a/Ou98WSM 2023-06-22 10:47:43 perhaps yes, though i don't really know anything about it 2023-06-22 10:47:57 There is hardly anything documented 2023-06-22 10:48:07 I feel they
keep it obscure on purpose 2023-06-22 10:50:45 yeah 2023-06-22 10:52:36 I saw an example somewhere that shows they provided a grpc url 2023-06-22 11:07:22 the spam checker that I used for snippets was quite effective 2023-06-22 11:07:53 but snippets are rarely used by non-spam users, so less chance for false positives 2023-06-22 11:46:44 hm 2023-06-22 11:46:46 what's up with that 2023-06-22 11:46:56 it was fixed earlier when you did it 2023-06-22 11:47:14 It appeared something was taking very long 2023-06-22 11:47:19 but couldn't find what yet 2023-06-22 11:47:35 Sounds at least like something is missing a timeout 2023-06-22 11:48:06 yeah 2023-06-22 11:49:11 ./generate-json.lua debug 2023-06-22 12:34:16 looks like they're up to one fork every 5 minutes and 50 accounts/hour 2023-06-22 12:35:00 disabling sign-ups for a bit 2023-06-22 12:37:20 just set it to ack-required mode 2023-06-22 12:37:28 same thing but more controlled 2023-06-22 12:37:47 well, that also doesn't stop accounts 2023-06-22 12:37:49 up to you 2023-06-22 12:44:13 wait, still new accounts coming? 2023-06-22 12:44:19 how 2023-06-22 12:46:10 hmm 2023-06-22 12:46:16 maybe you turned it off wrong 2023-06-22 12:46:43 i don't see a signup though 2023-06-22 12:46:46 on https://gitlab.alpinelinux.org/users/sign_in 2023-06-22 12:47:05 ah 2023-06-22 12:47:08 https://gitlab.alpinelinux.org/users/sign_up exists 2023-06-22 12:47:16 just not linked from sign_in? 2023-06-22 12:47:29 is it normally? 2023-06-22 12:47:34 is that all it disables (???) 2023-06-22 12:47:46 oof 2023-06-22 12:48:34 wait actually 2023-06-22 12:48:35 wtf 2023-06-22 12:48:40 that's true, it doesn't turn it off 2023-06-22 12:48:41 what 2023-06-22 12:48:43 did something break 2023-06-22 12:48:56 ? 2023-06-22 12:50:02 i.e. the page is there 2023-06-22 12:51:29 hm, no, it's there but signing up doesn't work 2023-06-22 12:51:35 how are these accounts making it through 2023-06-22 12:56:54 maybe they were in some queue?
but it was quite long 2023-06-22 12:57:28 I don't see any requests for /users/sign_up atm 2023-06-22 13:02:49 knowhyslili is from 1m ago 2023-06-22 13:03:05 from me posting that that is 2023-06-22 13:03:12 is there a log of how that got through 2023-06-22 13:03:18 4 new accounts 2023-06-22 13:03:18 maybe stale email confirm? 2023-06-22 13:03:31 but it creates before ac 2023-06-22 13:03:32 ack 2023-06-22 13:03:35 yes 2023-06-22 13:03:59 i am real confused 2023-06-22 13:04:56 maybe should check requests by ip 2023-06-22 13:05:57 Through github 2023-06-22 13:06:04 but there is no github identity 2023-06-22 13:06:09 huhh 2023-06-22 13:06:21 POST /users/auth/github 2023-06-22 13:06:55 oh yeah, now I see a github identity 2023-06-22 13:07:03 https://gitlab.alpinelinux.org/admin/users/daeba0aso/identities 2023-06-22 13:09:08 apparently they are using the graphql api 2023-06-22 13:09:22 "POST /api/graphql HTTP/1.1" 200 283 "https://gitlab.alpinelinux.org/alpine/docker-abuild/-/forks/new"" 2023-06-22 13:09:38 or, I suppose that could be the front-end doing that 2023-06-22 13:09:41 given the referrer 2023-06-22 13:09:57 hmm 2023-06-22 13:10:24 So they have spamloads of github accounts to burn 2023-06-22 13:10:38 just block github signup (?) for a bit?
2023-06-22 13:10:45 but also have to find out why sign_up does not block everything 2023-06-22 13:12:12 You can still sign in with github/lab on the sign_in page 2023-06-22 13:12:54 yes, but what i mean is it means sign_up blocks only same-instance, even though that sign-in-with-github conceptually is not actually different 2023-06-22 13:13:08 backend-wise it still requires you to 'make an account' with it and all, it's still a signup 2023-06-22 13:13:26 idk why they would have 'disable signups' disable only one type 2023-06-22 13:13:31 I guess they want to allow users to sign in through oauth but prevent local sign up 2023-06-22 13:13:37 yeeah 2023-06-22 13:14:23 The problem with disabling that is that we disable it for everyone who is using gitlab/github 2023-06-22 13:14:59 because they tied making a new signup identity to signing in at all? 2023-06-22 13:16:19 You only have "sign in via github/lab" 2023-06-22 13:16:26 yeah, so yes 2023-06-22 13:16:27 if your account does not exist yet, it's JIT created 2023-06-22 13:16:58 there is no separate "create account via github/gitlab" flow 2023-06-22 13:17:57 I now see that most of these spam accounts use gitlab/github as identity provider 2023-06-22 13:20:54 Let's see what happens if we require approval 2023-06-22 13:24:02 that does not affect gitlab/github 2023-06-22 13:24:49 :/ 2023-06-22 13:24:53 it just bypasses everything 2023-06-22 13:25:24 do you also get admin email for approval request 2023-06-22 13:25:27 it also bypasses the captcha I suppose 2023-06-22 13:25:31 I have no idea 2023-06-22 13:25:35 i got one 2023-06-22 13:25:39 https://img.ayaya.dev/llkzWhYedj2d 2023-06-22 13:25:42 ah, yes, me too 2023-06-22 13:25:54 is that for identity or native signup 2023-06-22 13:26:13 "This user has no identities" 2023-06-22 13:43:32 im getting spam 2023-06-22 13:43:51 we all are :P 2023-06-22 15:46:41 gitlab 16.1 released 2023-06-22 15:51:31 upgrade time 2023-06-22 15:51:35 no spam to deal with in downtime!
2023-06-22 15:52:17 heh 2023-06-22 15:52:21 First need to go to 16.0 2023-06-22 15:52:27 and need to upgrade postgres 2023-06-22 15:52:35 in reverse order 2023-06-22 15:53:30 can you even do multiple postgres at once 2023-06-22 15:53:33 or is 12->15 doable 2023-06-22 15:53:37 we 2023-06-22 15:53:40 er* 2023-06-22 15:53:44 you get what i mean 2023-06-22 15:54:46 Easiest is to create a docker image that has both installed and directories bind mounted in the correct location 2023-06-22 15:54:50 and then run the upgrade 2023-06-22 15:56:47 psykose: yes, you can skip some versions when upgrading postgresql, I did it a few times. ofc before that read NEWS and look at important changes 2023-06-22 16:00:25 neat 2023-06-22 16:01:56 gitlab says it supports anything 13.6+ 2023-06-22 16:02:06 >= 2023-06-22 16:03:16 I'm planning next week upgrade postgresql from ver 11 to 15 on one of my servers 2023-06-22 16:07:30 there's a redis upgrade too you can do i think 2023-06-22 16:08:44 We use redis:6-alpine 2023-06-22 16:09:11 ok, 7 is released 2023-06-22 16:09:34 for a while yeah 2023-06-22 16:09:35 gitlab page says redis 6.0 is required 2023-06-22 16:09:57 rip 2023-06-22 16:10:14 not sure if that means 6.0 or higher 2023-06-22 16:10:17 there's like almost 0 breaking changes at all from 6->7 2023-06-22 16:10:52 we're running 6.2.12 now 2023-06-22 16:11:12 so clearly not 6.0 only 2023-06-22 16:11:42 https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/7276 2023-06-22 16:12:20 i can't see anything from there linked to 'doesnt work' seems like just omnibus package change 2023-06-22 16:12:43 yeah 2023-06-22 16:12:51 as an armchair moron i would try it but it's probably a bunch of effort to actually test for you 2023-06-22 16:12:58 for me it's just a s/redis:6/redis:7 :P 2023-06-22 16:13:14 actual breaking is https://github.com/redis/redis/releases/tag/7.0.0 2023-06-22 16:13:32 the lua print() and some error messages 2023-06-22 16:13:42 they've done worse on stable 2023-06-22 16:17:01 in
other news 2023-06-22 16:17:05 spam of infinity continues 2023-06-22 16:18:05 What can we do about it? 2023-06-22 16:19:11 Not sure if we can reliably block those users from creating accounts 2023-06-22 16:19:37 nothing afaict 2023-06-22 16:19:38 But I think it is feasible to detect spam issues and block users 2023-06-22 16:20:09 without the magic all-seeing spam plugin of death that can just do if signup = github and not account.alreadyexists() block 2023-06-22 16:26:11 ikke: if the IPs/IP-ranges are "predictable" stick a haproxy/nginx/whatever proxy in front and filter out those IPs? 2023-06-22 16:26:18 minimal: they are random 2023-06-22 16:26:25 all over the place 2023-06-22 16:26:39 ok then randomly filter out traffic ;-) 2023-06-22 16:27:47 is there anything predictable about the requests? certain HTTP headers? GET/POST data (e.g. account name referenced) 2023-06-22 16:29:26 I'd say arbitrary 2023-06-22 16:29:43 Some names are obviously spam, but others are harder to classify 2023-06-22 16:30:10 there's "predictable" spam but there are users that fall into any metrix 2023-06-22 16:30:13 metric* 2023-06-22 16:30:35 name==username is an easy one that gets 90% of the spam for instance 2023-06-22 16:31:06 like https://gitlab.alpinelinux.org/janajakok1 2023-06-22 16:32:11 blocking @gmail.com and @outlook.com is a great way to reduce spam 2023-06-22 16:32:42 blocking all users as well 2023-06-22 16:33:31 now you're cooking 2023-06-22 16:33:52 seems they're hitting lots of gitlab instances: https://git.rwth-aachen.de/janajakok1 2023-06-22 16:34:01 see the banner at the top of that page 2023-06-22 16:34:34 yeah, lots of gitlab projects are affected 2023-06-22 16:36:18 oh wow 2023-06-22 16:36:24 thousands 2023-06-22 16:37:40 thousands of what? 2023-06-22 16:38:09 Zulus ;-) 2023-06-22 16:39:00 well known quote from the film Zulu 2023-06-22 16:39:38 minimal: "Chaka Zulu"?
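(Editor's aside: the PostgreSQL 12 to 15 jump discussed earlier in this log, done via "a docker image that has both installed and directories bind mounted", could look roughly like the sketch below. The package names, data paths, and bindir locations are assumptions about Alpine's versioned postgresql packages, not the setup actually used on the server.)

```dockerfile
# Hedged sketch: one image with both postgres major versions, so
# pg_upgrade can read the old cluster and write the new one.
FROM alpine:3.18
# Alpine ships versioned packages; availability of postgresql12
# in a given release is an assumption here.
RUN apk add --no-cache postgresql12 postgresql15
USER postgres
# At runtime, bind-mount the old (v12) data dir and a freshly
# initdb'ed new (v15) data dir, e.g.:
#   docker run -v /srv/pg/12:/var/lib/postgresql/old \
#              -v /srv/pg/15:/var/lib/postgresql/new <image>
CMD ["/usr/libexec/postgresql15/pg_upgrade", \
     "--old-bindir=/usr/libexec/postgresql12", \
     "--new-bindir=/usr/libexec/postgresql15", \
     "--old-datadir=/var/lib/postgresql/old", \
     "--new-datadir=/var/lib/postgresql/new"]
```

pg_upgrade itself supports jumping several major versions in one step, which matches the "you can skip some versions" remark above; reading the release notes for every skipped version first, as suggested, still applies.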
2023-06-22 16:41:05 Zulu with Michael Caine 2023-06-22 16:41:36 s/Chaka/Shaka/ 2023-06-22 17:10:58 ok, we do get user create events via webhooks 2023-06-22 17:12:37 in the meantime i'll keep clicking delete account 45x/hour 2023-06-22 19:43:06 Anything more automatic requires those users to at least create an issue 2023-06-22 19:43:18 that's a much higher spam signal than anything else 2023-06-23 17:42:28 spam still going eh 2023-06-23 17:42:32 yeaha 2023-06-23 17:42:34 working on it 2023-06-23 17:42:49 But having data helps 2023-06-23 18:49:58 gitlab is quite frozen presently i think 2023-06-23 18:50:11 too many deletes piled up and now nothing moves 2023-06-23 18:50:48 e.g. /activity pages don't update, the webhook doesn't trigger for commits 2023-06-23 18:51:16 and the deletes don't delete 2023-06-23 18:59:22 sounds like sidekiq being busy? 2023-06-23 18:59:56 4k jobs scheduled 2023-06-23 19:03:16 yea 2023-06-23 19:03:48 doesn't seem to move though 2023-06-23 19:05:28 "sidekiq 6.5.7 gitlab [0 of 10 busy] stopping" 2023-06-23 19:06:30 no cpu usage 2023-06-23 19:06:46 nothing in the logs 2023-06-23 19:07:42 yeah, i guess something crashed and it's just stuck on nothing 2023-06-23 19:10:04 ok, kicked the process, now things seem to be happening 2023-06-23 19:10:37 enqueued is now 0 2023-06-23 19:10:51 nice 2023-06-26 08:24:15 ikke: heard anything for arm network? 2023-06-26 08:25:15 Nope :-\ 2023-06-26 08:26:04 Ok, just got an email reply 2023-06-26 08:34:27 :( 2023-06-26 12:14:38 ikke: did you look at that tls1.3 dl-cdn toggle thing?
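(Editor's aside: the heuristics floated above — name identical to username, throwaway freemail domains, generated-looking names like janajakok1 — could be wired to the "user create events via webhooks" mentioned in the log. The field names below follow GitLab's documented system-hook payload (`name`, `username`, `email`); the scoring rules and threshold are invented for illustration, not the filter that was actually built.)

```python
# Hypothetical spam scorer for GitLab "user_create" system-hook events.
# Rules and weights are illustrative guesses, not the real tooling.

def spam_score(event: dict) -> int:
    score = 0
    name = event.get("name", "")
    username = event.get("username", "")
    email = event.get("email", "")
    # name == username reportedly caught ~90% of the spam
    if name and name.lower() == username.lower():
        score += 2
    # freemail domains were noted as a (too aggressive) signal
    if email.rsplit("@", 1)[-1] in {"gmail.com", "outlook.com"}:
        score += 1
    # trailing digits like "janajakok1" are common in generated names
    if username and username[-1].isdigit():
        score += 1
    return score

def looks_like_spam(event: dict, threshold: int = 3) -> bool:
    """Flag the account for review; blocking outright risks false positives."""
    return spam_score(event) >= threshold
```

A spammy payload like `{"name": "janajakok1", "username": "janajakok1", "email": "x@gmail.com"}` trips all three rules, while a normal account with a distinct display name scores zero; as the log notes, the real signal only gets strong once such an account also posts an issue.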
2023-06-26 12:16:18 Not yet, had other priorities ;⁠) 2023-06-26 12:16:45 I mean, i did check where I can enable it 2023-06-26 12:17:05 just yeet it 2023-06-26 12:17:06 :D 2023-06-26 12:17:29 And commence chaos 2023-06-26 12:17:41 i bet it will have 0 bad effects 2023-06-26 12:17:47 and a few good ones 2023-06-26 12:19:03 at least that's my experience with enabling client opt-in stuff, unless their implementation hangs forever on it or w/e 2023-06-26 12:19:20 ACTION had to set crond to rsync his dev lxcs to local mirror 2023-06-26 12:19:33 but didn't :-( 2023-06-26 12:19:38 I'm not afraid of enabling tls 1.3, just want to make sure changing it on fastly doesn't break something 2023-06-26 14:19:42 riscv64 is also stuck i think 2023-06-26 15:47:03 psykose: there is a button called 'add tls activation', where you can add tls 1.3, but wanted to be sure that does not do anything unexpected 2023-06-26 15:47:10 it's not just a checkbox to enable 2023-06-26 15:47:16 yeah 2023-06-26 15:47:19 wanna show? 2023-06-26 15:47:47 https://imgur.com/a/cwv8iVR 2023-06-26 15:47:55 based on docs 'add tls activation' is to like enable tls at all 2023-06-26 15:48:04 ah 2023-06-26 15:48:08 ok, that's just a bonus toggle 2023-06-26 15:49:07 theoretically you could add them all but i don't think apk makes use of 0rtt anyway 2023-06-26 15:51:16 same with http3 2023-06-26 15:51:38 yep 2023-06-26 15:53:59 One thing I'm wondering about is that it shows different CNAMEs for each tls configuration 2023-06-26 15:54:15 We now use d.sni.global.fastly.net, tls1.3 shows j.sni.global.fastly.net 2023-06-26 15:56:06 and 1.2 is still the former? 
2023-06-26 15:56:20 i forget if we used that or not 2023-06-26 15:58:02 i don't think it would harm anything assuming our own domains aren't just a link to d.sni which means it's "not working" or w/e 2023-06-26 15:59:19 They are 2023-06-26 15:59:46 hmm 2023-06-26 15:59:53 how are you ''meant'' to use that in general 2023-06-26 16:00:05 if it's separate cnames 2023-06-26 17:06:29 psykose: that's what I was wondering as well 2023-06-26 17:07:36 "Negotiation of the TLS protocol will only happen if the requesting client also supports TLS 1.3. If a request comes from an older client, Fastly’s default behavior is to downgrade to TLS 1.2." 2023-06-26 17:09:08 that's just default normal tls stuff when everything is on the same host 2023-06-26 17:09:11 doesn't sound very usefl 2023-06-26 17:09:14 useful* 2023-06-26 17:09:19 cname stuff is.. different 2023-06-26 17:09:45 That seems to indicate that if we enable tls1.3 and point the cname to j.sni, and a client only supports tls1.2, it should still work 2023-06-26 17:10:25 easy to test i guess 2023-06-26 17:11:15 just curl --tls-max 1.2 something 2023-06-26 17:14:26 I see here that j.sni supports tls1.3 and tls1.2 2023-06-26 17:14:40 But it does not provide our certificate yet 2023-06-26 17:14:55 curl --resolve dl-cdn.alpinelinux.org:443:151.101.2.132 https://dl-cdn.alpinelinux.org/alpine/ 2023-06-26 17:15:06 fails certificate check (I haven't activated it yet) 2023-06-26 17:16:23 I have activated it now 2023-06-26 17:17:08 ok, now it works (with curl) 2023-06-26 17:18:46 tls1.2 works as well 2023-06-26 17:28:57 nice 2023-06-26 17:29:00 good work 2023-06-26 17:29:19 so next step is to switch dl-cdn cname 2023-06-26 17:30:30 dualstack works as well 2023-06-26 17:49:55 https://gitlab.alpinelinux.org/alpine/infra/linode-tf/-/merge_requests/70 2023-06-26 20:29:01 ok, it got switched 2023-06-26 20:29:04 seems to still work 2023-06-27 06:29:19 That's some good news 2023-06-27 06:46:42 yay 2023-06-27 07:53:59 f 2023-06-27 07:58:19 f 
2023-06-27 08:01:50 heh, had enough time for rsync 2023-06-27 10:58:37 ikke: think the generate-json debug thing you left ran every day and keeps running 2023-06-27 10:58:41 the load stayed at 15 there forever 2023-06-27 10:58:49 That happened before 2023-06-27 10:59:05 A missing timeout probably 2023-06-27 10:59:30 yeah 2023-06-27 11:07:03 i actually forgot if the debug was meant to be there or not 2023-06-27 11:07:08 https://img.ayaya.dev/vw1ZNQzOzAfA 2023-06-27 11:07:10 line 2 2023-06-27 11:07:15 well, 3 2023-06-27 11:07:31 afaik, that has always been there, just gives extra output 2023-06-28 11:24:22 hi! what is the status of the arm servers? 2023-06-28 11:25:14 They were briefly back, but unreachable again. Nico mentioned he was not able to fix the issue yet, but will go to the DC tomorrow 2023-06-29 05:27:12 hmm, think that sidekiq is having issues again 2023-06-29 05:28:03 no, still getting logs 2023-06-29 05:41:15 124 pages of users that never signed in: https://gitlab.alpinelinux.org/admin/users?page=124&sort=last_activity_on_asc 2023-06-29 05:41:20 since 2019 2023-06-29 05:44:57 many were from the migration from redmine I suppose 2023-06-29 05:49:19 yea 2023-06-29 05:49:34 anything in past year though is probably just spam 2023-06-29 11:09:37 arm machines are still missing? 2023-06-29 11:09:46 yes 2023-06-29 11:11:09 i got a private message from jirutka earlier this week. "Hi, why are ARM CI runners still offline?" 2023-06-29 11:11:36 and: "I didn't find any information in the infra group on GitLab or in ML. When I asked Alice about it a few weeks ago, she also didn't know anything. Also, some projects on GL are heavily spammed, repeatedly. Do you think this is all right?"
2023-06-29 11:11:48 maybe we should post something for the public 2023-06-29 11:12:22 I did create https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10800/ yesterday 2023-06-29 11:12:35 what happened, why it has taken so long to resolve, what the current status is, and what the plans are to avoid it happening again 2023-06-29 11:12:39 I'm hoping the issue is fixed sometime today 2023-06-29 11:13:02 Well, we cannot prevent these issues from happening 2023-06-29 11:13:36 Brb 2023-06-29 11:21:55 "Do you think this is all right?" heh 2023-06-29 11:32:56 ncopa: was thinking about defining some internal SLA for builders, where, if for some reason we are not able to update packages for x amount of time, we are going to escalate (ie, post messages, etc) 2023-06-29 11:37:06 And yes, we are aware of the spam and (psykose mostly) is cleaning it up, while I'm working on a more automated solution 2023-06-29 11:37:38 'some projects' being spammed is quite funny because i think i've deleted like 50k issues in hundreds of places 2023-06-29 11:38:07 doesn't really stop however sadly 2023-06-29 11:38:12 nope 2023-06-29 11:39:06 psykose: could you for a while keep some of the spam if it's in a dedicated project? I can use it to test with 2023-06-29 11:39:20 sure 2023-06-29 11:39:32 i'll archive the next movie watch spam link thing 2023-06-29 11:46:06 still didn't figure that one out 2023-06-29 11:47:04 8139 root 0:00 {generate-json.l} /usr/bin/lua5.3 ./generate-json.lua debug is still running 2023-06-29 11:47:06 it doesn't finish 2023-06-29 11:47:37 strace endless: 2023-06-29 11:47:39 read(94, "", 4096) = 0 2023-06-29 11:47:41 read(94, "", 4096) = 0 2023-06-29 11:47:45 yeah 2023-06-29 12:19:54 would be nice to handle the mailing spam on infra and the like 2023-06-29 12:20:01 perhaps some filter..
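(Editor's aside: a subject-keyword filter with a live-reloaded map file, like the "rhodium" rule described just below, is commonly done with rspamd's multimap module. A hedged sketch follows — the symbol name, score, and file paths are invented, not the rule actually added on the server.)

```conf
# local.d/multimap.conf -- hypothetical rule, names/paths are assumptions
SPAMMY_SUBJECT {
  type = "header";
  header = "Subject";
  regexp = true;
  # rspamd watches map files for changes, so edits take effect
  # without a restart (the "live reloaded" behavior noted below)
  map = "/etc/rspamd/maps.d/spam_subjects.map";
  score = 15.0;
  description = "Subject matches a known spam phrase";
}
```

The map file holds one regex per line, e.g. `/rhodium/i`, so new phrases can be appended as they show up in the rspamd history.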
2023-06-29 12:20:14 There is rspamd running there 2023-06-29 12:21:23 hm 2023-06-29 12:21:27 why does it catch nothing then 2023-06-29 12:21:36 well 2023-06-29 12:21:39 maybe it catches 90% 2023-06-29 12:21:39 :D 2023-06-29 12:23:51 https://imgur.com/a/8g5dPX3 2023-06-29 12:24:38 should teach it about the rhodium gold plated 2023-06-29 12:25:26 https://github.com/rspamd/rspamd/discussions/3690 2023-06-29 12:33:09 psykose: ok, set something up, see if it helps 2023-06-29 12:33:18 for email or accounts? 2023-06-29 12:33:26 email 2023-06-29 12:33:28 okie 2023-06-29 12:33:39 whadya add? i think there's like three kinds of spam going around you can see in history 2023-06-29 12:33:44 rhodium 2023-06-29 12:33:54 let me know if I should add more 2023-06-29 12:34:05 alright, will do 2023-06-29 12:34:22 tbh i usually just delete it without checking most of the time, so might not realise for a bit if it's still to @infra 2023-06-29 12:34:23 :D 2023-06-29 12:34:52 Yeah, I delete everything as well, but I can check rspam history 2023-06-29 12:34:56 at least for subjects 2023-06-29 12:35:26 alright 2023-06-29 12:41:02 It's nice that the file is life reloaded, so can easily add more 2023-06-29 12:41:11 live* 2023-06-29 12:43:34 mhm 2023-06-29 12:45:44 luxury 2023-06-29 12:46:04 rhodium 2023-06-29 12:47:13 Hmm, apparently the bayes filter is not active, don't see that symbol mentioned 2023-06-29 12:47:26 Never learned enough for it to become active 2023-06-29 12:47:46 (0, 0) 2023-06-29 12:48:07 I run rspamd myself, so have a tiny bit of experience 2023-06-29 14:25:46 wow, Equinix can sponsor one server at least for now 2023-06-29 14:26:13 that's good 2023-06-29 14:26:55 rebooting deu5 (build.a.o) after upgrade to alpine 3.18 2023-06-29 14:27:22 ack 2023-06-29 15:02:11 what'd they give 2023-06-29 15:03:50 1x Ampere Altra Q80-30 80-Core processor @ 3.00GHz 256GB RAM 2x 960GB NVME 2x 25Gbps 2023-06-29 15:04:47 nice 2023-06-29 15:04:57 same thing :D 2023-06-29 15:05:12 already working 
on setting up the containers there 2023-06-29 15:05:41 ikke: \o/ 2023-06-29 15:08:26 3.18 first :D 2023-06-29 15:08:41 heh 2023-06-29 16:34:36 lol, my rspamd instance blocked the last spam e-mail to alpine-mirrors, but the alpine rspamd filter did not :D 2023-06-29 16:34:47 so I didn't get the spam e-mail, but did get the gitlab notification 2023-06-29 16:35:36 the bayes classifications are slowly increasing, so maybe that will help in the future 2023-06-29 16:46:02 yeah 2023-06-29 16:46:07 hopefully 2023-06-29 16:46:29 It should help to bump the classification just above the rejection score 2023-06-29 19:36:42 algitbot: retry 3.18-stable 2023-06-30 05:59:58 algitbot: retry 3.18-stable 2023-06-30 08:40:05 looks like 3.18 armhf builder hags on mercurial 2023-06-30 08:41:30 hangs* 2023-06-30 08:52:14 It's building some rust thing 2023-06-30 08:58:01 ikke: armv7 and aarch64 finished 3.18. what is with edge? 2023-06-30 08:58:48 btw, latest synapse on armv7 3.18 destroyed my synapse database 2023-06-30 08:59:31 I still need to setup the edge builders 2023-06-30 09:00:34 they are slow now but a lot better than not working 2023-06-30 09:13:04 ikke: you need to add the sysctl for armhf too 2023-06-30 09:13:12 that's why it's hanged :p 2023-06-30 09:13:18 Ah yeah 2023-06-30 09:13:25 Always forget that 2023-06-30 10:31:10 setting up / syncing edge builders now 2023-06-30 10:32:35 ~ 2023-06-30 10:33:16 still need to figure out why the builders are not reporting build errors 2023-06-30 10:34:08 ikke: you should get back to nu 2023-06-30 10:34:15 also there's 2 spam repos for you https://gitlab.alpinelinux.org/lindagonez/watch https://gitlab.alpinelinux.org/angelinajones/watch 2023-06-30 10:34:30 psykose: we have already been in contact with nu 2023-06-30 11:51:01 hm ok, so when running it manually it hangs after roughly 5k indexes fetched 2023-06-30 11:51:08 stuck on 8325 left 2023-06-30 11:54:34 but 8031 on rerun 2023-06-30 11:54:36 random each time 2023-06-30 11:58:23 seems 
to not be a dead url since the last thing it gets stuck on fetching works 2023-06-30 13:50:22 really have no idea :( 2023-06-30 15:14:57 a bunch of chinese ips 2023-06-30 15:15:07 all spammed the .xz .bz2 snapshot urls 2023-06-30 15:15:55 docker compose logs | grep -F .tar.xz | wc -l 2023-06-30 15:15:56 862 2023-06-30 15:15:59 docker compose logs | grep -F .tar.bz2 | wc -l 2023-06-30 15:16:00 871 2023-06-30 15:16:11 hehe 2023-06-30 15:32:55 they're still going it's at over 4k each now 2023-06-30 15:33:06 same IPs or different IPs? 2023-06-30 15:36:32 all some 113.* pool 2023-06-30 15:36:36 less now 2023-06-30 15:36:38 guess they gave up 2023-06-30 15:37:01 i just disabled snapshots above as idk why they were enabled at all 2023-06-30 15:48:19 nah, they still trying 2023-06-30 15:48:23 but it's just regular traffic now so w/e 2023-06-30 15:59:17 yay edge aarch64 is up 2023-06-30 15:59:41 :) 2023-06-30 15:59:44 :3 2023-06-30 16:00:05 wondered how soon you would pick that up 2023-06-30 16:02:07 hmm 2023-06-30 16:02:23 there's ncurses-terminfo-base pre-installed on the builder 2023-06-30 16:02:31 so it fails to add ncurses-dev as it wouldn't upgrade so breaks 2023-06-30 16:02:44 no easy solution for that except running `apk upgrade` on the host once 2023-06-30 16:03:19 done 2023-06-30 16:03:30 thanks 2023-06-30 16:03:53 part of me wants to make abuild just do `apk add -u ..` 2023-06-30 16:04:09 but another part wants to just remove the normal build mode and make it rootbld only which would just fix all of these dumb issues 2023-06-30 16:05:50 can rootbld already install locally built packages from another repo? 
2023-06-30 16:08:02 yeah that was fixed 2023-06-30 16:08:08 ok 2023-06-30 16:08:21 and broke some other usecase, but it works inside aports 2023-06-30 16:08:39 for anything in .rootbld-repositories you get /home added as well as the remote 2023-06-30 16:08:49 i forget if that works or not without any global repos 2023-06-30 16:28:13 no errors from armhf 2023-06-30 16:31:24 why does that fail 2023-06-30 16:31:27 the log just says success 2023-06-30 16:32:03 also gotta apk upgrade 2023-06-30 16:32:11 yeah, wanted to see it fail 2023-06-30 16:45:51 armv7 is now back as well 2023-06-30 16:46:43 Alpine is no longer 'armless it appears ;-) 2023-06-30 16:47:06 ahuh :) 2023-06-30 16:47:27 No CI yet, though 2023-06-30 17:30:13 Why is gitlab announcing a security release for 15.11.1, while we're already on 15.11.6 2023-06-30 17:30:52 oh, 15.11.10, typo I suppose in the e-mail 2023-06-30 17:34:53 :) 2023-06-30 17:35:18 There is a "HIGH SEVERITY" redos vulnerability 2023-06-30 17:41:17 love those 2023-06-30 17:41:19 not 2023-06-30 17:41:21 should update 2023-06-30 17:44:14 https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/pipelines/169382 2023-06-30 17:48:24 spinny 2023-06-30 18:07:02 that's a big big queue 2023-06-30 18:07:11 426 community/