2022-04-01 08:42:21 could anyone with ppc64le access look at why openssh 8.9_p1 fails on CI https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/31642 2022-04-01 10:34:40 anyone has experience with upgrading old alpine lxc hosts to alpine 3.15? I'm getting an error 2022-04-01 10:34:44 lxc-start: mcc: lxccontainer.c: wait_on_daemonized_start: 869 No such file or directory - Failed to receive the container state 2022-04-01 10:35:11 sorry that was wrong error meg 2022-04-01 10:35:13 msg 2022-04-01 10:35:25 lxc-start: mcc: cgroups/cgfsng.c: controllers_available: 343 The @kernel controller found 2022-04-01 10:35:25 lxc-start: mcc: cgroups/cgfsng.c: __initialize_cgroups: 3341 No such file or directory - One or more requested controllers unavailable or not delegated 2022-04-01 10:36:25 ncopa: I haven't had big issues, but maybe check if the container config is up-to-date 2022-04-01 10:39:21 the config looks ok 2022-04-01 10:41:17 the lxc service should pull it in, but can you check if the cgroups service is running? 2022-04-01 10:48:59 mps: https://www.mail-archive.com/openssh-bugs@mindrot.org/msg15490.html 2022-04-01 10:49:52 ikke: thank you 2022-04-01 10:50:18 I just searched for the error message 2022-04-01 10:50:43 ikke: yes, the cgroups 'service' is started 2022-04-01 10:51:04 * Mounting cgroup filesystem ... [ ok ] 2022-04-01 10:52:49 ikke: I thought the problem was something in alpine :\ 2022-04-01 10:53:52 ncopa: someone else with the same error message, but no solution afaics https://forums.gentoo.org/viewtopic-p-8694993.html?sid=9ffca1fae89daf3a443b183eae59f343 2022-04-01 10:54:19 they suggest running lxc-checkconfig 2022-04-01 10:55:45 the error comes from here, it seems: https://github.com/lxc/lxc/blob/master/src/lxc/cgroups/cgfsng.c#L3384 2022-04-01 10:56:52 Cgroup v1 systemd controller: missing 2022-04-01 10:58:16 what is the module that provides /proc/config.gz again? 2022-04-01 10:59:25 modprobe configs 2022-04-01 10:59:29 yea, thanks 2022-04-01 10:59:41 that module is missing on nld5 as well 2022-04-01 10:59:46 do we have any lxc host that runs alpine v3.15? 2022-04-01 10:59:53 yes 2022-04-01 10:59:56 nld5 nld3 2022-04-01 11:00:07 my edge workstation runs lxc as well and works. but I use cgroup2 2022-04-01 11:00:49 LXC version 4.0.11 2022-04-01 11:01:19 maybe lxc 4.0.12 introduced a regression? 2022-04-01 11:02:05 Or the kernel version? 2022-04-01 11:02:27 5.15.32-0-lts 2022-04-01 11:05:13 there seem to be a few cgroup related fixes in the upstream lxc 4.0 branch 2022-04-01 11:06:13 https://discuss.linuxcontainers.org/t/lxd-4-23-unable-to-start-nested-containers/13416 2022-04-01 11:11:08 I'm thinking lately again about deploying discourse for Alpine Linux 2022-04-01 11:20:19 ikke: what are the reasons? 2022-04-01 11:20:44 maybe I have archived it 2022-04-01 11:40:09 something is weird. i cannot reproduce the lxc issue in a vm 2022-04-01 12:11:08 ok. this is just super weird. starting a newly created lxc container on the upgraded/broken server fails 2022-04-01 12:11:21 but it works on a fresh vm with same kernel version and same lxc version 2022-04-01 12:11:29 something is very fishy here 2022-04-01 12:31:42 ha! i found the problem 2022-04-01 12:33:34 # Force lxc to use cgfs instead of new cgfsng. 2022-04-01 12:33:34 # This is a workaround for https://github.com/lxc/lxc/issues/1095.
2022-04-01 12:33:34 lxc.cgroup.use = @kernel 2022-04-01 12:33:49 that was in /etc/lxc/lxc.conf 2022-04-01 12:49:54 aha 2022-04-01 13:38:50 in the upgraded util-linux there is 'lsfd' as a better lsof for linux 2022-04-01 13:39:09 can generate output in json 2022-04-01 13:39:16 very nice thing 2022-04-01 13:40:01 interesting 2022-04-01 13:41:51 and `mount(8) now supports a new option --mkdir as shortcut for X-mount.mkdir` 2022-04-01 13:42:30 ah, so it would create the target dir? 2022-04-01 13:43:16 yes 2022-04-01 13:44:00 but this one is very good `mount(8) (and libmount) now supports new mount options X-mount.subdir= to mounting sub-directory from a filesystem instead of the root directory.` 2022-04-01 13:44:38 so you can cherry pick from the middle of a filesystem? 2022-04-01 13:44:39 :o 2022-04-01 13:44:56 ikke: here are the changes if you want to look at them all https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/tree/Documentation/releases/v2.38-ReleaseNotes 2022-04-01 13:45:28 rails: iiuc yes 2022-04-01 13:54:12 oh nice, dmesg gets a --json option too 2022-04-01 15:05:48 technically it was always possible to mount from the middle of a filesystem using --bind, but i guess now it is easier 2022-04-01 15:06:23 bind is very powerful, not many people know that you can unmount the original filesystem afterwards and it will still work 2022-04-01 15:06:51 similar to --mkdir, you can create a dir beforehand, but it's easier to just do it 2022-04-01 15:07:28 mmhmm 2022-04-01 15:08:27 It's easier for it to do it* 2022-04-01 15:14:59 to use --bind the whole filesystem must first be mounted somewhere and then you use --bind 2022-04-01 15:16:11 yes 2022-04-01 15:16:19 so mount, bind, unmount 2022-04-01 15:17:20 iiuc X-mount.subdir could be used without mounting the whole filesystem 2022-04-01 15:18:14 yes, so more convenient 2022-04-01 15:19:36 X-mount.subdir=directory 2022-04-01 15:19:36 Allow mounting sub-directory from a filesystem instead of the root 2022-04-01 15:19:39 directory. For now, this feature is implemented by temporary 2022-04-01 15:19:42 filesystem root directory mount in unshared namespace and then bind 2022-04-01 15:19:45 the sub-directory to the final mount point and umount the root of 2022-04-01 15:19:48 the filesystem. The sub-directory mount shows up atomically for the 2022-04-01 15:19:51 rest of the system although it is implemented by multiple mount(2) 2022-04-01 15:19:54 syscalls. This feature is EXPERIMENTAL. 2022-04-01 15:19:56 sorry but easier than tpaste 2022-04-01 20:39:40 fyi, gitlab has been upgraded to 14.8.5 2022-04-04 08:31:18 ncopa: morning 2022-04-04 08:31:21 i know you are busy 2022-04-04 08:31:34 but if you find a few minutes, could you look at lua-mosquitto? 2022-04-04 08:31:55 it does not work on 3.15, and it's related to -DLUA_COMPAT_APIINTCASTS 2022-04-04 08:32:19 ncopa: https://github.com/flukso/lua-mosquitto/issues/32 2022-04-04 08:33:33 clandmeter: when / where do you encounter this? 2022-04-04 08:34:00 its blocking the upgrade of one of our infra containers 2022-04-04 08:34:19 ikke: you mean how to reproduce? 2022-04-04 08:34:20 alpine-msg? 2022-04-04 08:34:44 no, webhooks.a.o 2022-04-04 08:34:48 ah ok 2022-04-04 08:34:55 it bridges gitlab to mqtt 2022-04-04 08:35:03 yes 2022-04-04 08:35:27 without it, alpine will vanish into thin air. 2022-04-04 08:35:44 just to make the issue a bit more important ;-) 2022-04-04 08:36:09 oh.. ok :)
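An aside on the util-linux 2.38 features discussed above. A minimal sketch of the old and new way to mount a subdirectory, assuming a hypothetical ext4 filesystem on /dev/sdb1 whose data/ subdirectory should end up on /srv:

    # classic approach: mount the whole filesystem, bind the subdirectory, drop the rest
    mount /dev/sdb1 /mnt           # temporary mount of the full filesystem
    mount --bind /mnt/data /srv    # bind only the subdirectory of interest
    umount /mnt                    # the bind mount keeps working afterwards

    # util-linux 2.38: one step; the temporary root mount happens in an unshared
    # namespace, so the subdirectory shows up atomically (marked EXPERIMENTAL)
    mount -o X-mount.subdir=data /dev/sdb1 /srv

    # and the new --mkdir creates the mount point if it does not exist yet
    mount --mkdir /dev/sdb1 /mnt/disk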
2022-04-04 08:36:22 do you have a simple reproducer? 2022-04-04 08:36:27 sure 2022-04-04 08:36:33 just load the module in 5.3 2022-04-04 08:36:46 i can make the error go away 2022-04-04 08:36:51 but it introduces another one 2022-04-04 08:37:04 make -C "$builddir-$lver" LUAPKGC=lua$lver CFLAGS="$CFLAGS -DLUA_COMPAT_APIINTCASTS" 2022-04-04 08:37:36 but its related, missing a symbol 2022-04-04 08:37:38 do we rebuild lua or lua-mosquitto? 2022-04-04 08:37:43 the latter 2022-04-04 08:37:48 ok 2022-04-04 08:37:51 but maybe we need to rebuild both, dunno 2022-04-04 08:38:00 its lazy loading i guess 2022-04-04 08:38:23 another option is to build it against lua5.4, not sure if that would fix anything 2022-04-04 08:38:50 i think it makes sense to fix it for 5.3 2022-04-04 08:38:56 instead of duct-taping our infra :) 2022-04-04 08:39:05 but i think 5.4 has the same issue 2022-04-04 08:39:09 anything > 5.2 2022-04-04 08:39:10 I guess you found this? https://github.com/flukso/lua-mosquitto/issues/30 2022-04-04 08:39:30 yes 2022-04-04 08:39:41 thats the -DLUA_COMPAT_APIINTCASTS 2022-04-04 08:40:45 the author only fixed it for the rockspec 2022-04-04 08:41:53 What are the other issues that you get after setting this? 2022-04-04 08:52:05 I suggest this: https://tpaste.us/jPnm 2022-04-04 08:57:24 i pushed it to edge. clandmeter: can you please give it a quick test? i'll backport it to 3.15 once you can confirm it does not break everything 2022-04-04 08:59:15 yes that was another option on how to fix it :) 2022-04-04 08:59:34 not sure why the author didnt fix the code himself, so i didnt touch it. 2022-04-04 15:31:19 I got this on riscv64 lxc `bwrap: Creating new namespace failed, likely because the kernel does not support user namespaces. bwrap must be installed setuid on such systems.` trying to build the new u-boot 2022-04-04 15:31:40 does that mean it will not pass on the builders 2022-04-04 15:33:08 We don't support unprivileged user namespaces anyway 2022-04-04 15:33:15 It's a kernel patch 2022-04-04 15:34:01 unprivileged user namespaces is the default. the patch is to *disable* unprivileged user namespaces 2022-04-04 15:35:10 ikke: it passes on aarch64, armv7, and x86_64 lxc containers, only failed in riscv64 lxc 2022-04-04 15:35:34 docker and i think lxc prohibit nested namespaces to minimize attack surface (which is kind of ironic) 2022-04-04 15:36:25 I think it must be qemu-riscv64 which is used on this machine 2022-04-04 15:40:04 ok, lets try on CI 2022-04-04 15:42:25 hmm, we don't have riscv64 CI /o\ 2022-04-04 20:20:09 I think 'we' are the first distro with the linux-asahi kernel (apple silicon M1) 2022-04-05 07:23:28 Nice, mirror request in Japan, 100Gbps, 200Gbps planned 2022-04-05 10:40:25 i want 1Tbps :D 2022-04-05 10:42:01 :) 2022-04-06 15:34:32 i'm refactoring the ssl_client we use with busybox and wonder if I should create gitlab.alpinelinux.org/ncopa/ssl_client or gitlab.alpinelinux.org/alpine/ssl_client 2022-04-06 15:34:40 or just publish it on github for now 2022-04-06 15:35:20 latter 2022-04-06 15:42:11 ncopa: if it's an experiment, maybe start under ncopa, we can always move it later 2022-04-06 15:44:29 oh, you already created it on github 2022-04-06 15:44:38 yup 2022-04-07 13:38:29 im planning to start setting up the 3.16 buildservers tomorrow. are we ok with disk space etc? 2022-04-07 13:44:41 nld5-dev1 is 97% but not sure if that matters :p 2022-04-07 13:45:21 does not matter I think 2022-04-07 14:48:34 no 2022-04-07 16:32:29 ncopa: usa9 is the most tricky, 100G available atm 2022-04-08 06:54:10 ikke: usa9 is arm i guess?
2022-04-08 06:55:13 yes 2022-04-08 07:13:41 send an update to arm 2022-04-08 07:14:02 i will go to the office next week, ill see if i can get our older thunderx online 2022-04-08 07:14:12 we could move aarch64 stuff to it 2022-04-08 07:28:23 clandmeter: just fyi, we still do have 500G unassigned 2022-04-08 07:28:50 ah ok 2022-04-08 07:28:53 so no hurry 2022-04-08 07:30:29 well, its good to send a reminder anyway 2022-04-08 07:31:11 yup 2022-04-08 07:31:28 i guess you received the meeting invite 2022-04-08 07:32:34 yes 2022-04-08 08:01:30 psykose: we discussed a few times on irc and the mailing list that we want to get rid of the -static subpackages (probably before you joined alpine) 2022-04-08 08:02:35 and yet, we keep running into issues where packages accidentally get built against static dependencies 2022-04-08 08:02:43 I mean, -static libs not binaries where it makes sense 2022-04-08 08:03:59 ikke: that is because BDFL is meek :D 2022-04-08 08:04:27 or coucil 2022-04-08 08:04:45 s/coucil/council/ 2022-04-08 08:04:45 mps meant to say: or council 2022-04-08 08:06:31 i have cleaned up my arm containers a bit 2022-04-08 08:26:33 whats wrong with arm? 2022-04-08 08:26:44 oh, space 2022-04-08 08:26:54 We have a single machine for 3 arches 2022-04-08 08:27:18 christ 2022-04-08 08:27:27 whats the resource use on them? 2022-04-08 08:27:37 say, per arch 2022-04-08 08:28:37 aarch64 for all releases + edge is 284G 2022-04-08 08:32:26 if i had my arm box up i'd have enough, bleh 2022-04-08 10:50:45 https://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/ChangeLog it's a major release from 8.x to 9.0 2022-04-08 10:51:41 should we upgrade it for 3.16-stable or keep 8.9? 2022-04-08 11:05:43 i think we want openssh 9.0 for 3.16 2022-04-08 11:13:35 ok 2022-04-08 11:14:20 clandmeter: nmeum: could you try to build the latest u-boot on riscv bare metal, if you have time 2022-04-08 11:15:06 mps: I should have some time tomorrow, feel free to ping me again if I forget about it :D 2022-04-08 11:15:10 I've got info that it builds on HiFive bare metal 2022-04-08 11:15:19 without the patch? 2022-04-08 11:15:51 I think so, but you could test with it and without it 2022-04-08 11:16:20 without the patch it was racy in the past, so I would rather like to check that the problem was properly addressed upstream 2022-04-08 11:16:28 which patch 2022-04-08 11:16:28 ACTION wants to buy big riscv machine 2022-04-08 11:17:03 psykose: https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/u-boot/hifive-unmatched-ramdisk.patch 2022-04-08 11:17:09 a 2022-04-08 11:17:10 ah 2022-04-08 11:17:40 i mean with latest binutils it doesn't build at all regardless 2022-04-08 11:17:55 we should build linux-headers from the 5.17 kernel 2022-04-08 11:17:58 this is the upstream "solution" for the issue but looks like it wasn't merged so far? https://patchwork.ozlabs.org/project/uboot/list/?series=258110&state=* 2022-04-08 11:19:01 what -march are we using to compile u-boot? 2022-04-08 11:19:09 because that build failure should not happen with -march=rv64gc
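Background for the -march question: binutils 2.38 defaults to a newer RISC-V ISA spec in which the CSR and fence.i instructions were moved out of the base I extension into separate Zicsr/Zifencei extensions, which is what breaks u-boot's hardcoded arch string. A quick reproducer with the bare assembler (the cross prefix is an assumption, error wording approximate):

    echo 'csrr a0, mhartid' > csr.s
    riscv64-alpine-linux-musl-as -march=rv64imac csr.s                 # binutils >= 2.38: rejected, Zicsr required
    riscv64-alpine-linux-musl-as -march=rv64imac_zicsr csr.s           # accepted: Zicsr spelled out explicitly
    riscv64-alpine-linux-musl-as -march=rv64imac -misa-spec=2.2 csr.s  # accepted: the old spec implies the CSR ops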
2022-04-08 11:19:52 https://github.com/riscv-collab/riscv-gnu-toolchain/issues/1043#issuecomment-1086117719 2022-04-08 11:20:01 I can look into that later, should be easy to fix 2022-04-08 11:20:18 nmeum: someone on #riscv told me that it builds on HiFive, but I don't know which -march is set 2022-04-08 11:20:55 nmeum: yes, I think this is the issue where the problem happens 2022-04-08 11:21:11 it ignores the -march entirely 2022-04-08 11:21:20 i already fixed it by adding the subarch settings 2022-04-08 11:21:29 but it's hardcoded to imac by default 2022-04-08 11:21:42 in the file i patched in the open mr, if you want to see where it is 2022-04-08 11:22:01 or more specifically, it ignores the -march for that section of code that fails to compile, not every part of the build 2022-04-08 11:22:11 you can see in the V=1 output 2022-04-08 11:22:30 hmm, I thought it was a qemu issue 2022-04-08 11:22:42 no, this is not a qemu issue 2022-04-08 11:22:50 it's a binutils change issue in the threads i linked 2022-04-08 11:23:06 yes, yes, I see now 2022-04-08 11:24:07 technically it's an isa change, but yeah 2022-04-08 11:24:36 indeed, the other fix is -misa to the old one 2022-04-08 13:22:24 nmeum: psykose: I pushed a fix for u-boot on riscv64, hope it will work 2022-04-08 13:23:18 mps: https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/32892 2022-04-08 13:24:25 ikke: not this, I found something which will be merged in u-boot for the next release 2022-04-08 13:25:00 #u-boot and #riscv people helped me a lot 2022-04-08 13:25:57 ncopa: !32928 2022-04-08 13:26:30 this openssh version got quantum computer resistance ;) 2022-04-08 13:31:59 > This release switches scp(1) from using the legacy scp/rcp protocol to using the SFTP protocol by default. 2022-04-08 13:32:00 neat 2022-04-08 13:32:46 copy-data is nice 2022-04-08 13:39:45 yes, this is also a very nice feature 2022-04-08 13:44:31 merged it 2022-04-08 13:44:46 now the big kernel upgrade 2022-04-09 10:04:55 rebooting deu1 for kernel/new ssl/docker/zlib and new 3.15 algitbot 2022-04-09 10:07:28 algitbot: ping 2022-04-09 10:07:42 both back up :) 2022-04-09 10:15:40 thx :) 2022-04-09 10:47:07 psykose: re tls offload to the card in the kernel, I don't trust hardware to do crypto, do you? 2022-04-09 10:50:00 yes 2022-04-09 10:50:25 ..do you disable the aes instructions in the cpu too for hardware offload? 2022-04-09 10:50:45 everything is 'hardware doing crypto'
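On the scp change quoted above: OpenSSH 9.0 keeps an escape hatch for servers that have no SFTP subsystem, so the switch is not one-way:

    scp file.txt host:/tmp/       # 9.0: transfers over the SFTP protocol by default
    scp -O file.txt host:/tmp/    # -O forces the legacy scp/rcp protocol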
2022-04-09 10:56:53 I don't want to nitpick here 2022-04-09 17:28:04 mps: !33027 2022-04-09 17:33:50 psykose: looking 2022-04-09 17:34:03 also https://github.com/linux-nvme/libnvme/pull/337 2022-04-09 17:35:12 aha, nice 2022-04-09 17:35:58 MR looks fine 2022-04-09 17:36:13 lets wait for CI to finish 2022-04-09 17:36:46 and https://github.com/linux-nvme/nvme-cli/pull/1494 2022-04-09 17:36:48 done 2022-04-09 17:37:35 I see, very nice of you :) 2022-04-09 17:37:51 thank you, really 2022-04-09 17:39:30 also i assume 2022-04-09 17:39:32 -etc/nvme/hostid 2022-04-09 17:39:32 -etc/nvme/hostnqn 2022-04-09 17:39:34 is expected 2022-04-09 17:40:24 psykose: I have a non-standard nvme machine so couldn't check and test all options 2022-04-09 17:40:54 machine with non-standard nvme* 2022-04-09 17:41:15 list works for me https://img.ayaya.dev/o7AICBidpSR8 2022-04-09 17:41:20 of course, no idea what i'd really use it for 2022-04-09 17:41:21 haha 2022-04-09 17:43:28 'nvme list' lists all devices 2022-04-09 17:44:08 psykose: https://tpaste.us/WQQY 2022-04-09 17:44:15 yea 2022-04-09 17:44:30 that usage does not seem to be actual usage, maybe some other form of block-usage 2022-04-09 17:44:50 now the whole internet knows your and my nvme serial numbers :) 2022-04-09 17:45:31 fw-log 2022-04-09 17:45:53 heh 2022-04-09 17:45:58 they're not very useful anyway 2022-04-09 17:46:23 smart-log 2022-04-09 17:46:53 https://img.ayaya.dev/GNZWne5nzPzK 2022-04-09 17:47:00 wow, a whole 310 kelvin 2022-04-09 17:47:01 :) 2022-04-09 17:47:43 hah, mine is 384 2022-04-09 17:47:58 hmm, 304 2022-04-09 17:59:04 `nvme version` => nvme version 2.0, on my local machine 2022-04-09 18:01:49 hah, nice `No transport address for 'apple-nvme'` :( 2022-04-09 18:02:28 the previous version worked 2022-04-09 18:04:45 hm 2022-04-09 18:04:56 don't think the build is missing anything, so either a configuration or upstream-changes problem 2022-04-09 18:05:24 I think they don't added apple subsystem 2022-04-09 18:05:32 didn't yet* 2022-04-09 18:14:31 you said it worked in the last version :p 2022-04-09 18:14:34 anyway, merged 2022-04-09 18:18:43 heh, I have archived the old version 2022-04-11 08:41:41 I mean, I saw you posting about this idiot, what is his beef with Alpine? 2022-04-11 08:41:49 and do you know he has a user account on the wiki? 2022-04-11 08:43:26 what is a bob pokose mckaygerhard 2022-04-11 08:43:47 maybe I should have brought this up privately instead 2022-04-11 11:43:04 did you post this in the wrong channel 2022-04-11 11:53:31 I think it was just a misunderstanding, people that need to know are already aware of piccoro 2022-04-11 12:25:29 I asked Ariadne the wrong question on Mastodon, and she inadvertently directed me here 2022-04-11 12:26:07 I stupidly asked "are you hands on with the wiki and the github" 2022-04-11 12:26:43 because I thought that was the best way to bring up the subject of PICCORO still having a userpage on the wiki and apparently an account on the github as well?
2022-04-11 12:26:49 yes but this is the infra channel 2022-04-11 12:26:54 yes, the wrong place 2022-04-11 12:27:07 but the question I phrased, she sent me here 2022-04-11 12:27:14 because I asked the wrong question 2022-04-11 12:27:40 ah i see 2022-04-11 12:28:10 disregard :) 2022-04-11 12:28:33 sorry you have to deal with the fallout that person creates for the project 2022-04-11 12:28:48 (piccoro, that is) 2022-04-11 12:28:54 i mostly find him funny, as much as it removes my braincells repeatedly 2022-04-11 12:29:12 but that's also because i'm not responsible for anything he fucks up :p 2022-04-11 12:29:50 I love how his github io page directs people to irc.freenode.net as the location of the "oficial" alpine chat 2022-04-11 12:30:03 websites are hard to update 2022-04-11 12:30:15 (/s) 2022-04-11 12:31:06 to run this around, MRs are hard to merge (/s) 2022-04-11 12:31:13 s/run/turn 2022-04-11 12:31:13 panekj meant to say: to turn this around, MRs are hard to merge (/s) 2022-04-11 12:31:17 my god what is it this time 2022-04-11 12:31:38 this reminds me 2022-04-11 12:31:54 I had a rampant "professional bug tester" who "found" my project on github 2022-04-11 12:32:01 their avatar even said professional bug tester 2022-04-11 12:32:09 they nagged me incessantly 2022-04-11 12:32:12 psykose: i meant the mirror website as a response to "websites are hard to update" 2022-04-11 12:32:15 I closed lots of issues thanks to them 2022-04-11 12:32:21 psykose: time of the web ;) 2022-04-11 12:32:25 mirrors are up to date 2022-04-11 12:32:32 i filled out the whole backlog 2022-04-11 12:32:35 a month ago 2022-04-11 12:32:37 then I got annoyed and identified issues as "not an issue" 2022-04-11 12:32:41 although I think we are getting pretty offtopic in here 2022-04-11 12:32:45 they got fed up and edited all their issues to oblivion 2022-04-11 12:32:52 offtopic is fine if nobody needs to do anything important 2022-04-11 12:33:01 okay, I'll stop 2022-04-11 16:51:10 bah... how do I create a merge request for ariadne's https://gitlab.alpinelinux.org/ariadne/alpine-gcc-patches ? 2022-04-11 16:51:25 i mean its a public repo 2022-04-11 16:51:26 I somehow always end up creating a merge request for my own fork 2022-04-11 16:51:31 o 2022-04-11 16:51:36 https://gitlab.alpinelinux.org/ncopa/alpine-gcc-patches/-/merge_requests/2 2022-04-11 16:51:44 gotta go to mine and then click new merge request 2022-04-11 16:53:03 third time i managed to do it correctly. need to manually set the target https://gitlab.alpinelinux.org/ariadne/alpine-gcc-patches/-/merge_requests/11 2022-04-11 16:53:22 the default links in the CLI and in the gitlab UI will make the MR for my own fork only 2022-04-11 16:54:36 yeah, they're always wrong for some reason 2022-04-11 16:54:51 wrong branch target btw 2022-04-11 16:55:00 you want to commit that to my gcc 12 tree :p 2022-04-11 16:56:41 it's because it doesn't allow creating a public fork repo and gitlab is nice enough to protect you from leaking code to a public repo 2022-04-11 16:57:04 welp 2022-04-11 16:57:30 it would be nice if alpine developers could just create public repos 2022-04-11 16:59:00 Ariadne: i guess you have the patch and I can just push a workaround directly to aports?
2022-04-11 16:59:10 need to get the 3.16 builders bootstrapped 2022-04-11 16:59:17 ack 2022-04-11 16:59:53 lets hope it does not ICE on me this time 2022-04-11 18:17:10 is it allowed to add a -debug subpkg without an explanation or creating an issue 2022-04-11 18:17:35 yes 2022-04-11 18:18:39 psykose: how do you know 2022-04-11 18:26:01 the reason is pretty much always 'something crashed and i need debug symbols for it', otherwise nobody would be adding it 2022-04-11 18:26:14 i've personally sigsegv'd iwd maybe 8 times 2022-04-11 18:26:51 ikke: perhaps it's time to restart nld5 given the 101 days of uptime :p 2022-04-11 18:27:06 really? I haven't had it up that long 2022-04-11 18:27:37 i haven't used it in a while given being on desktop 2022-04-11 18:27:45 but some months/half a year before, yeah 2022-04-11 18:27:52 just sometimes randomly crashed, shrug 2022-04-11 18:28:06 but I'm asking about policy, i.e. has it changed lately (because I haven't followed alpine policy much lately) 2022-04-11 18:28:12 i just set it to supervisor=supervise-daemon and forgot about it 2022-04-11 18:28:17 the policy is there isn't one 2022-04-11 18:28:43 no, the policy was to add -debug only in exceptional cases 2022-04-11 18:29:09 yes, and that means there is no policy 2022-04-11 18:29:24 because policies like that only serve endless bikesheds and people just end up doing whatever they want 2022-04-11 18:29:26 no, this was policy 2022-04-11 18:29:29 there are no grounds for an 'exceptional case' 2022-04-11 18:30:12 in this world 'common sense' exists though it is fading away nowadays 2022-04-11 18:30:45 and common sense dictates that it's easier to have debug symbols somewhere than to not, to aid in whatever debugging 2022-04-11 18:30:54 the only reason we don't have them near-everywhere is infrastructure issues 2022-04-11 18:31:36 I remember all this 2022-04-11 18:31:46 indeed 2022-04-11 18:32:16 but again, I asked whether the 'unwritten policy' has changed lately 2022-04-11 18:32:28 one day when we get properly split package artifacts, we can set up debuginfod or whatever, and have way more symbols, and then we can forget this discussion :p 2022-04-11 18:32:37 i don't think it has, but random (very small) debug packages are completely fine 2022-04-11 18:32:49 if you add -dbg to some largeish c++ program they will be like 80MB 2022-04-11 18:32:49 I agree 2022-04-11 18:32:51 in this case it's tiny 2022-04-11 18:33:24 but I ask then why there is no issue report or comment in the MR 2022-04-11 18:33:33 i could go add a firefox-dbg and get back to you, i bet it's going to be like 1GB :p 2022-04-11 18:33:44 hehe 2022-04-11 18:33:49 which would be more than.. every c program in main/ combined 2022-04-11 18:34:08 someone already tried with FF-dbg and made a big mess 2022-04-11 18:35:31 psykose: I'm not against merging -dbg for ell and iwd, just want to know the rationale 2022-04-11 20:00:54 The policy was and still is that we do it on a case by case basis. That does not mean people have to create a formal issue to request it. It's up to the maintainer to decide mainly. We did say we will not mass enable it on all aports. 2022-04-11 20:35:00 ikke: thanks for the explanation 2022-04-11 20:35:48 though I would prefer a rationale in the MR or an issue report 2022-04-12 08:53:38 ncopa: do you think we can upgrade usa9 (arm host) before starting the builders? 2022-04-12 08:57:24 sure
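For context on the -debug subpackage question above: opting a package in is a one-line APKBUILD change, since abuild ships a default dbg() splitter that strips the binaries and moves the symbols under /usr/lib/debug. A sketch using iwd from the discussion (paths illustrative):

    # APKBUILD: list the subpackage, the built-in dbg() helper does the rest
    subpackages="$pkgname-dbg $pkgname-doc $pkgname-openrc"

    # on the crashing machine: install the symbols and inspect the core dump
    apk add iwd-dbg gdb
    gdb /usr/libexec/iwd core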
2022-04-12 08:58:01 i have already bootstrapped gcc, but have gettext tests failing now 2022-04-12 09:13:28 ncopa: just to be sure, do you have a backup of all the builders' keys? (the new ones)? 2022-04-12 09:16:15 maybe 2022-04-12 09:16:59 let me make a backup 2022-04-12 09:19:24 i have a backup now 2022-04-12 09:19:27 Ok good 2022-04-12 09:19:33 so I can upgrade + reboot now? 2022-04-12 09:19:41 yup 2022-04-12 09:28:11 ok, going to reboot now 2022-04-12 09:34:57 oh fun 2022-04-12 09:35:13 mount: mounting /dev/mapper/vg0-lv_root on /sysroot failed: No such file or directory 2022-04-12 09:40:07 ncopa: seems to be the same issue we already have with nlplug-findfs 2022-04-12 09:41:09 do you have console? 2022-04-12 09:41:22 yes 2022-04-12 09:41:58 /dev/mapper does not exist in the emergency shell 2022-04-12 09:42:11 is dm-mod loaded? 2022-04-12 09:42:32 dm_mod 2022-04-12 09:42:47 lsmod does not return it 2022-04-12 09:42:52 is the lvm binary there? 2022-04-12 09:42:59 yes 2022-04-12 09:43:13 after manually running nlplug-findfs, the entries are there 2022-04-12 09:43:35 hum. ok 2022-04-12 09:43:59 so you can manually mount /dev/vg0/lv_root /sysroot and exit and it will continue to boot 2022-04-12 09:44:09 but its weird that it didnt find it at boot 2022-04-12 09:44:35 seems like a timing issue 2022-04-12 09:44:46 do you have the dmesg? 2022-04-12 09:44:48 same issue we had on usa7 2022-04-12 09:45:15 ncopa: yes 2022-04-12 09:45:48 i would like to reproduce it in a vm if possible 2022-04-12 09:46:06 so far, we only had this issue on bare metal 2022-04-12 09:49:00 is it alpine 3.15? 2022-04-12 09:49:09 yes 2022-04-12 09:49:17 but this already started with 3.14 iirc 2022-04-12 09:49:36 https://gitlab.alpinelinux.org/alpine/aports/-/issues/12325 2022-04-12 09:50:40 ncopa: dmesg output: https://tpaste.us/qaa9 2022-04-12 09:52:23 the host is at least up again by manually mounting it 2022-04-12 09:57:43 there are no sda/sdb etc in the dmesg? 2022-04-12 09:59:35 https://tpaste.us/x55w 2022-04-12 10:00:28 it uses nvme 2022-04-12 10:00:54 https://tpaste.us/yvvx 2022-04-12 10:01:08 The gitlab runner vms in qemu do not start yet 2022-04-12 10:02:59 [ 0.871029] nvme0n1: p1 p2 2022-04-12 10:02:59 [ 5.891804] Mounting root: failed. 2022-04-12 10:03:55 the kernel sees nvme p1 p2, but nlplug-findfs probably doesn't get the uevent for some reason 2022-04-12 10:03:58 I can manually start the vms with the same command line, but it fails in openrc 2022-04-12 10:04:18 ncopa: yes, that's what we've noticed the last time, but when nlplug-findfs is executed later, it does 2022-04-12 10:05:17 i wonder what happens if we add modules=nvme... to the boot command line 2022-04-12 10:05:55 It's not in there atm 2022-04-12 10:15:50 i know. can we add it there and reboot and see what happens? 2022-04-12 10:22:32 Yes 2022-04-12 10:22:58 i might also want to test a modified nlplug-findfs 2022-04-12 10:23:30 how do I regenerate the grub config again? 2022-04-12 10:23:55 apk fix grub :P 2022-04-12 10:24:30 rebooting 2022-04-12 10:25:24 ok 2022-04-12 10:25:52 im reading the nlplug-findfs code and trying to find where things might go wrong 2022-04-12 10:41:48 Last time someone suggested the event buffer might be too small 2022-04-12 10:43:06 yeah, but I doubt that is it. it is currently 512k, and i think last time the read messages were < 300k
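What the modules=nvme test above amounts to, as a sketch assuming the usual grub setup on the host (the module list is machine specific):

    # /etc/default/grub: make the initramfs load the nvme driver up front
    GRUB_CMDLINE_LINUX_DEFAULT="modules=sd-mod,usb-storage,nvme quiet"

    # regenerate the config; reinstalling grub ('apk fix grub') ends up doing the same
    grub-mkconfig -o /boot/grub/grub.cfg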
2022-04-12 10:43:32 https://gitlab.alpinelinux.org/alpine/aports/-/issues/12325#note_139126 2022-04-12 10:43:36 says 271k 2022-04-12 10:44:09 what I now think is that maybe nvme sends multipart messages? 2022-04-12 10:44:16 nlplug-findfs does not handle that 2022-04-12 10:44:31 or that a single message is bigger than 16k, but that is also unlikely 2022-04-12 10:49:07 did it boot the second time? 2022-04-12 11:03:07 ncopa: It seems so 2022-04-12 11:03:23 it did 2022-04-12 11:18:25 any idea why 'mapper/vg0-gitlab--runner.* qemu:kvm 0660' in /etc/mdev.conf does not work? 2022-04-12 11:20:23 ikke: it always works for me 2022-04-12 11:20:59 though not on gitlab-runner because I don't run it 2022-04-12 11:31:37 They are lvm partitions 2022-04-12 11:31:48 it should change the owner for them, but it does not happen 2022-04-12 11:40:03 ah, that. have no idea then 2022-04-12 12:22:21 ikke: did mdev start? 2022-04-12 12:29:54 * WARNING: mdev has already been started 2022-04-12 12:36:24 ikke: what are the permissions of it? 2022-04-12 12:45:24 ok. i know why 2022-04-12 12:50:47 the kernel uevent does not generate the mapper/vg0-gitlab--runner.* 2022-04-12 12:50:59 it generates /dev/dm-* 2022-04-12 12:52:45 i guess one of them is just symlinks 2022-04-12 12:52:57 ncopa: I have an existing MR that adds /dev/dm-* to mdev.conf, !32330 2022-04-12 12:58:21 mdev-like-a-boss: 2022-04-12 12:58:22 # Stop creating x:x:x:x which looks like /dev/dm-* 2022-04-12 12:58:22 [0-9]+\:[0-9]+\:[0-9]+\:[0-9]+ root:root 660 ! 2022-04-12 13:08:38 minimal: yeah, i will have a look at that at some point. I have a few nitpicks 2022-04-12 13:08:43 i havent had time to test it 2022-04-12 13:09:14 but it does not solve ikke's problem. it will create the disk/by-* symlinks. not the /dev/mapper/* 2022-04-12 13:09:31 ncopa: I've tested it as much as I can with LVM/LUKS etc on QEMU emulating NVME, SATA, etc 2022-04-12 13:10:02 yes, I was just pointing it out when you mentioned /dev/dm-* 2022-04-12 13:15:13 minimal: did you test if it creates the same symlinks as udev? 2022-04-12 13:18:17 ncopa: it creates most of the same symlinks as udev - it cannot do all of them as udev has some helper programs to extract some device info from ata/scsi devices. Also for the partlabel devices I "fixed" the created symlinks to replace any spaces with underscores whereas on a Debian box here I see "\x20" in such symlinks 2022-04-12 13:24:50 minimal: ncopa: mdev-like-a-boss has a storage-device helper script 2022-04-12 13:25:29 minimal: I think it is somewhat important that the symlinks created by mdev can be re-used with udev 2022-04-12 13:25:44 so that you dont get different symlinks when you install udev 2022-04-12 13:25:46 I haven't tested LVM so I don't know if it solves the current problem, but I think it could be adapted 2022-04-12 13:25:54 so mdev -> udev should be compatible 2022-04-12 13:26:13 the other way around is not so important for the reasons you mentioned 2022-04-12 13:26:39 due to missing helper progs 2022-04-12 13:27:27 so mdev does not need to create all the symlinks udev creates. its ok if it only creates a subset of them. But those that are created should have the same names as when created with udev 2022-04-12 13:27:44 otherwise things will break when you switch mdev -> udev 2022-04-12 13:27:50 so you think its better for mdev to generate a symlink like "EFI\x20System\x20Partition" rather than change eudev to fix the problem there?
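The mismatch behind the mdev rule that never fired, in sketch form: mdev matches the kernel device name carried by the uevent (dm-0, dm-1, ...), not the /dev/mapper name, so a hypothetical rule has to target dm-N and leave the friendly names to a helper:

    # /etc/mdev.conf (hypothetical): kernel uevents arrive as dm-N,
    # so a pattern like 'mapper/vg0-...' never matches anything
    dm-[0-9]+   qemu:kvm 0660
    # note: this hits every dm device; the mapper/* and disk/by-* names are only
    # symlinks that a helper script (as in MR !32330) has to create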
2022-04-12 13:28:16 uh 2022-04-12 13:28:24 that is a good question. what does our eudev implementation do currently? 2022-04-12 13:28:58 I'd need to double check - that example is from a Debian system running (a somewhat old) udev from systemd 2022-04-12 13:29:14 (don't think it is a good idea to follow every udev nonsense) 2022-04-12 13:29:15 we do /dev/disk/by-partlabel/EFI\x20system\x20partition 2022-04-12 13:29:34 sorry for the annoyance 2022-04-12 13:29:52 so we should keep it 2022-04-12 13:30:25 i agree that EFI\x20system\x20partition is annoying, but its even more annoying if your machine stops booting on upgrades 2022-04-12 13:30:57 if we fix it in eudev we need to make sure that we don't break any current installs 2022-04-12 13:33:49 ncopa: just checking my notes now as I have noticed that eudev is behind some upstream systemd udev changes and wondering if maybe they've fixed this 2022-04-12 13:34:13 I was intending to raise a eudev MR at some time to add missing/changed stuff from systemd 2022-04-12 13:37:12 i dont see by-partlabel in my manjaro or fedora vms 2022-04-12 13:38:24 ok. im pretty sure systemd uses \x20 2022-04-12 13:38:43 hmm, by-partlabel is in both eudev and systemd's udev 2022-04-12 13:39:32 ncopa: yeah I had a look at systemd's udev and it doesn't appear to use OPTIONS="string_escape=replace" for the partlabel to safely handle such names 2022-04-12 13:39:53 https://github.com/systemd/systemd/blob/main/src/test/test-fstab-util.c#L168 2022-04-12 13:40:35 it does use it for some other stuff (like by-id/nvme-*) which eudev does not - that was one of the things I'd intended to MR eudev to align it with systemd-udev 2022-04-12 13:42:39 ncopa: ok, I'll take out the space->underscore code, retest here locally, and then update the MR 2022-04-12 13:51:32 minimal: to be honest, my biggest worry with it is how do we test it? 2022-04-12 13:52:06 if I change that code 6 months in the future, how do I make sure I don't break anything? 2022-04-12 13:52:33 I'm using VMs here so I create a partition with 1 or more spaces in its label and check the symlink is named as expected 2022-04-12 13:52:43 because once we ship this, people will start to rely on it 2022-04-12 13:52:55 and once users rely on it, we cannot break it 2022-04-12 13:53:34 so my thinking is, maybe we shouldn't create more of those symlinks than absolutely necessary 2022-04-12 13:53:46 only implement whats needed. not what is nice-to-have 2022-04-12 13:53:52 agreed. I thought we'd agreed to leave the "\x20" thing as it is for compatibility with systemd/eudev 2022-04-12 13:54:15 yeah. sure. I'm just thinking out loud here. raising my worries 2022-04-12 13:54:27 well one reason for me doing this is preparation for getting cloud-init to run without eudev - it uses some /dev/disk/by-* entries 2022-04-12 13:54:41 thats a valid usecase 2022-04-12 13:55:06 should probably mention cloud-init in the commit message 2022-04-12 13:55:22 and there's likely other software out there that expects these types of symlinks to exist so having them will make porting to Alpine easier 2022-04-12 13:55:46 zfs needs the disk/by-* symlinks 2022-04-12 13:56:03 but I think disk by-uuid or similar works for zfs, which is why i implemented some of them 2022-04-12 14:00:47 ncopa: so what do you want to do?
I would have thought a drip-by-drip approach of adding such entries would be more likely to be problematic long-term than adding a reasonably close (to eudev/systemd) set of entries in one go 2022-04-12 14:01:21 best-case would be to have some testsuite to test it 2022-04-12 14:01:33 that it does what is expected 2022-04-12 14:01:56 which would be qemu basically as I'm not sure if it's easy in other hypervisors to emulate NVME/SATA/SCSI/etc 2022-04-12 14:05:02 we don't currently have a testsuite for eudev for such entries (and it has diverged/lagged behind systemd's udev) 2022-04-12 14:07:10 imo, libudev-zero should replace eudev for alpine, but I don't want to impose it 2022-04-12 14:07:39 libudev-zero would be nice if we could. but there are many things that won't work 2022-04-12 14:09:03 minimal: i think I have an idea 2022-04-12 14:09:09 for me only usb-modeswitch is an (uhm) issue, but even with it I managed to get my device to work 2022-04-12 14:09:38 looking at the persistent-storage script 2022-04-12 14:09:56 we can feed it with an MDEV=something 2022-04-12 14:10:07 it will try to look up /sys/something/... 2022-04-12 14:10:29 we could make it possible to override /sys 2022-04-12 14:10:39 or maybe use ../sys/something 2022-04-12 14:10:51 ncopa: from a quick scan of cloud-init it uses by-label, by-uuid, and by-partuuid 2022-04-12 14:11:10 and create a dummy sys/class/block/something/entry 2022-04-12 14:12:00 then we execute MDEV=something $pathto/persistent-storage from some temp dir 2022-04-12 14:12:49 we may also need to override the blkid binary with a fake 2022-04-12 14:14:29 ncopa: persistent-storage perhaps could be changed to add a "debug mode" where it prefixes things like /sys references with a specified directory? likewise for references to blkid 2022-04-12 14:14:40 exactly 2022-04-12 14:15:07 ok, I can have a look at that 2022-04-12 14:16:07 for example: https://tpaste.us/YrrP 2022-04-12 14:17:16 then we can create a dummy file /tmp/tests/sys/block/.... and call SYSFS=/tmp/tests/sys ./persistent-storage 2022-04-12 14:18:27 yeah, I'll have a go at that locally and see where I get (plus revert the "\x20" fix) 2022-04-12 14:18:55 have you used bats? 2022-04-12 14:19:18 not so far, its been on my list of things to look at :-) 2022-04-12 14:20:29 separate issue: any chance you could look at mkinitfs (repo) MRs 102, 103, and 104 for consideration before the 3.16 release? 2022-04-12 14:26:44 i will look over them before the release 2022-04-12 15:30:46 minimal: i have a simple test for ptpdev 2022-04-12 15:30:59 i wonder if i should continue with the current persistent-storage 2022-04-12 15:31:17 it will conflict with your work, but it may also help you to add your own tests if you can rebase it 2022-04-12 15:39:50 ncopa: ok 2022-04-12 19:11:59 clandmeter: psykose fyi, I keep notes about our hosts in netbox 2022-04-12 19:12:14 what kind of notes 2022-04-12 19:12:46 https://netbox.alpin.pw/extras/journal-entries/ 2022-04-12 19:14:07 i can't read those 2022-04-12 19:14:15 fileutil.h:30:41: error: ISO C++17 does not allow dynamic exception specifications 2022-04-12 19:14:23 what package 2022-04-12 19:14:45 if it's source-highlight that is already fixed 2022-04-12 19:14:55 aarch64 just didn't pick up the patch 2022-04-12 19:15:18 yes, source-highlight 2022-04-12 19:15:42 it'll pass now 2022-04-12 19:25:01 psykose: can you see this? https://netbox.alpin.pw/dcim/devices/31/journal/
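Back on the persistent-storage test idea from earlier: a minimal sketch of the override hook and a test driving it (the SYSFS/MDEV names come from the discussion; the real script layout differs):

    # inside the script: default to the real /sys, let tests point it elsewhere
    : "${SYSFS:=/sys}"
    partition="$(cat "$SYSFS/class/block/$MDEV/partition" 2>/dev/null)"

    # a test then fakes the tree and runs the script from a temp dir
    mkdir -p /tmp/tests/sys/class/block/sda1
    echo 1 > /tmp/tests/sys/class/block/sda1/partition
    MDEV=sda1 SYSFS=/tmp/tests/sys ./persistent-storage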
2022-04-12 19:25:09 https://img.ayaya.dev/6RRaFN3qsIvT.png 2022-04-12 19:25:18 first one https://img.ayaya.dev/Rg79UzXhfmRR.png 2022-04-12 19:28:55 psykose: can you try again? 2022-04-12 19:29:05 can see both 2022-04-12 19:29:22 good 2022-04-12 19:30:00 very neat :) 2022-04-12 19:30:05 if i run into anything i suppose i can add an entry 2022-04-12 19:30:11 maybe i will add the weird cgit thing 2022-04-12 19:31:54 unless that's not very host related 2022-04-12 19:32:18 It's more application related 2022-04-12 19:32:31 was thinking about tracking that better in netbox, but haven't so far 2022-04-12 19:32:36 right 2022-04-12 19:54:07 ikke: 2022-04-12 19:54:08 Building with 48 jobs 2022-04-12 19:54:08 To add an exception for this directory, call: 2022-04-12 19:54:08 fatal: unsafe repository ('/builds/alpine/aports' is owned by someone else) 2022-04-12 19:54:08 git config --global --add safe.directory /builds/alpine/aports 2022-04-12 19:54:35 https://gitlab.alpinelinux.org/alpine/aports/-/jobs/688763 2022-04-12 19:54:49 hmm 2022-04-12 19:54:54 never seen that one before 2022-04-12 19:55:14 fails on retry as well 2022-04-12 19:55:29 only x86* 2022-04-12 20:28:01 aarch does the same now 2022-04-12 20:28:32 git has just been upgraded, related? 2022-04-12 20:30:14 it actually could be 2022-04-12 20:30:21 sounds related 2022-04-12 20:30:28 do they upgrade to latest git themselves? 2022-04-12 20:30:35 it addresses a cve 2022-04-12 20:30:41 perhaps it has some perm check 2022-04-12 20:30:43 and it fails on us 2022-04-12 20:30:46 psykose: yes, it does a system upgrade 2022-04-12 20:30:49 before building 2022-04-12 20:30:52 right 2022-04-12 20:30:56 i think it's the new cve fix then 2022-04-12 20:31:37 yes 2022-04-12 20:31:41 https://lore.kernel.org/git/20220412180510.GA2173@szeder.dev/T/#t 2022-04-12 20:32:32 https://github.com/git/git/commit/bdc77d1d685be9c10b88abb281a42bc620548595 https://github.com/git/git/commit/8959555cee7ec045958f9b6dd62e541affb7e7d9 2022-04-12 20:32:34 one of these 2022-04-12 20:32:44 uhh 2022-04-12 20:32:48 not sure how to proceed.. 2022-04-12 20:33:51 Sounds like something that could affect all gitlab ci users 2022-04-12 20:34:09 yeah 2022-04-12 20:35:45 wondering what user that repository is owned by 2022-04-12 20:35:49 if you want a hack fix 2022-04-12 20:35:52 you can use the git config command 2022-04-12 20:35:56 and add it to the ci .yml 2022-04-12 20:36:04 --add safe.directory ..
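Spelled out, the hack fix would sit in the job's before_script in .gitlab-ci.yml (CI_PROJECT_DIR is the checkout directory gitlab provides), next to the chown approach that comes up later:

    # option 1: mark the root-owned checkout as trusted for the build user
    git config --global --add safe.directory "$CI_PROJECT_DIR"
    # option 2: sidestep the new ownership check by taking ownership of the tree
    chown -R "$(id -u):$(id -g)" "$CI_PROJECT_DIR"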
2022-04-12 20:36:16 just do it at the start i guess, shrug 2022-04-12 20:36:30 First want to know why it is owned by a different user 2022-04-12 20:36:36 right 2022-04-12 22:16:56 https://gitlab.alpinelinux.org/alpine/aports/-/jobs/689070#L55 `fatal: unsafe repository ('/builds/alpine/aports' is owned by someone else)` 2022-04-12 22:17:02 what's this 2022-04-12 22:19:18 the messages right above will tell you 2022-04-12 22:19:41 ikke: the user will still be different even with the override, so you could check it out later and fix it for now 2022-04-12 22:20:05 not clear to me 2022-04-12 22:21:01 (too tired to read and decipher) 2022-04-12 22:21:24 tl;dr git changed some stuff for security permissions 2022-04-12 22:21:32 the checks fail on the git repo passed into the ci docker containers 2022-04-12 22:21:37 then you see that message 2022-04-12 22:21:40 oh, general error on all ME 2022-04-12 22:21:48 s/ME/MR/ 2022-04-12 22:21:48 mps meant to say: oh, general error on all MR 2022-04-12 22:21:56 yeah 2022-04-12 22:22:05 because the containers self-upgrade and use the latest deps 2022-04-12 22:22:07 ok, I see 2022-04-12 22:22:13 and the dep is git, which is always there 2022-04-12 22:22:49 going to bed, good night 2022-04-12 22:26:55 sleep well 2022-04-12 22:28:21 will try, thanks 2022-04-12 23:37:13 ikke: also i don't remember if it was intended, but the ppc builders don't have any logs 2022-04-13 00:40:08 the same goes for s390x 2022-04-13 00:40:28 the other builders do for 3.16.. so weird 2022-04-13 00:41:18 ah, https://build.alpinelinux.org/buildlogs/ every release branch is missing them 2022-04-13 08:30:10 psykose: re logs: that's not intended 2022-04-13 09:50:09 The repository is cloned as root 2022-04-13 09:50:31 I suppose we can chown the repo as the current user first to fix this 2022-04-13 11:27:58 Pushed a fix to alpine-gitlab-ci, CI should work again now 2022-04-13 11:56:29 what to do with mutt in 3.15-stable? it is 2.1.5 and the new 2.2.3 has a cve fix. make a secfix upgrade to 2.2.3 because nothing depends on it? 2022-04-13 12:07:09 Yeah, think that should not be an issue 2022-04-13 12:09:54 ok 2022-04-13 12:09:59 thanks 2022-04-13 12:24:58 what to do with !31675 2022-04-13 12:25:44 `this is strange, both (sqlite and sqlite-tcl) have this check in prepare() function` 2022-04-13 12:26:22 like some kind of strange dependency 2022-04-13 12:28:06 f91951d498cc0c4e98a3ca7bbfcebd6f654b24b2 2022-04-13 12:28:26 Apparently to fix a circular dependency 2022-04-13 12:29:44 maybe build them in a fixed order, i.e. first build sqlite and then sqlite-tcl 2022-04-13 12:30:07 and remove this warning check? 2022-04-13 12:30:19 This warning check is to make sure they each build the same version 2022-04-13 12:30:31 and not that people bump one and forget to bump the other 2022-04-13 12:30:37 like you try to do now 2022-04-13 12:30:53 ok, but which one first? 2022-04-13 12:30:57 both at the same time 2022-04-13 12:31:01 in the same MR 2022-04-13 12:31:31 this is what I think, but again the first one will issue this warning 2022-04-13 12:31:45 No, not if you upgrade both in the same MR 2022-04-13 12:31:54 it looks at the version that is present in the APKBUILD file 2022-04-13 12:32:05 hm, lets try with rootbld 2022-04-13 12:32:31 it should be one commit? 2022-04-13 12:32:37 No, can be 2 commits 2022-04-13 12:32:52 ok, makes sense 2022-04-13 14:25:59 ikke: is there any chance you can throw me the 3.13-ppc logs by hand 2022-04-13 14:27:08 for ruby?
2022-04-13 14:28:28 yeah 2022-04-13 14:30:10 psykose: https://tpaste.us/x557 2022-04-13 14:30:15 index issue.. 2022-04-13 14:30:20 haha 2022-04-13 14:30:42 why doesn't it update to 2.30.3 2022-04-13 14:30:43 weird 2022-04-13 14:31:11 maybe the index was not updated? 2022-04-13 14:31:21 it is on the thing 2022-04-13 14:31:24 https://dl-cdn.alpinelinux.org/alpine/v3.13/main/ppc64le/ 2022-04-13 14:31:27 everything is 2.30.3 here 2022-04-13 14:31:38 an apk update should sync this, unless something is wrong.. 2022-04-13 14:31:39 and it's on the builder, everything should be local 2022-04-13 14:31:43 hmm 2022-04-13 14:31:55 weird that it would upload the correct index but be broken locally 2022-04-13 14:46:43 seems there are no x86_64 runners 2022-04-13 14:46:46 or someone stole them :p 2022-04-13 14:47:17 They're busy 2022-04-13 14:47:31 One job triggered by you :) 2022-04-13 14:48:43 last i checked i couldn't find any running 2022-04-13 14:48:44 heh 2022-04-13 14:49:13 https://gitlab.alpinelinux.org/omni/aports/-/jobs/689574 https://gitlab.alpinelinux.org/nibon7/aports/-/jobs/689776 2022-04-13 14:49:38 oh 2022-04-13 14:49:40 duplicate ci runs 2022-04-13 14:49:41 groan 2022-04-13 14:51:19 hmm, 3.13 has perl-git 2.30.3-r0 in the index on the builder 2022-04-13 14:51:45 https://tpaste.us/yvve 2022-04-13 14:53:08 locally, sometimes when i build packages, for some reason it doesn't find things in the index either despite them being there 2022-04-13 14:53:09 huh, it uses the upstream repos 2022-04-13 14:53:14 i don't think i ever figured it out 2022-04-13 14:54:30 that's better 2022-04-13 14:54:37 it used dl-cdn, not the local repo 2022-04-13 14:55:06 heh 2022-04-13 15:04:24 I think buildlogs should work now 2022-04-13 15:04:28 for ppc 2022-04-13 15:05:20 s390x has the same issue iirc 2022-04-13 15:09:22 should be fixed as well 2022-04-13 15:10:37 yes, ppc64le uploaded logs now 2022-04-13 15:12:46 neat 2022-04-13 15:12:47 thanks! 2022-04-14 03:06:48 3.16 x86* has been stuck on grpc forever, probably hung 2022-04-14 07:28:11 lol, was it actually waiting forever on a connection with no timeout 2022-04-14 07:28:12 hahah 2022-04-14 09:12:49 let me guess, grpc is still stuck like before 2022-04-14 09:23:47 yes 2022-04-14 22:20:51 fatal error: cannot write PCH file: No space left on device on armv7/aarch64 sometimes 2022-04-14 22:23:50 now all arm for edge/3.16 fails to pull git 2022-04-14 22:23:56 i think they all ran out of space 2022-04-14 22:52:47 same for 3.14, hehe 2022-04-15 00:14:52 and now it's back 2022-04-15 00:26:22 and it's out of space again 2022-04-15 00:29:32 yep, 3.14-3.15-3.16-edge all out of space 2022-04-15 06:12:32 i deleted .cache on build-edge-a* 2022-04-15 06:13:23 .cache/go-build was significantly big 2022-04-15 06:14:08 there is also a $HOME/go directory which has a few gigs of binaries and stuff 2022-04-15 06:27:11 ah, right 2022-04-15 06:27:29 a bunch of aports forget to set the go dir to srcdir 2022-04-15 06:28:41 i wonder if there can be a feature_option added to abuild or something patched on the builders to clean those? 2022-04-15 06:29:06 rm -r of ~/go ~/.cargo/.. is safe, since it's the same as when one sets it to srcdir
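A sketch of the srcdir pinning under discussion, using the standard go/cargo environment knobs as they could be set in an APKBUILD (or centrally by abuild), with the ${VAR:-default} form that comes up just below:

    # keep per-build caches inside $srcdir so nothing accumulates in $HOME
    export GOPATH="${GOPATH:-$srcdir/go}"
    export GOCACHE="${GOCACHE:-$srcdir/go-build}"    # default is ~/.cache/go-build
    export CARGO_HOME="${CARGO_HOME:-$srcdir/cargo}" # default is ~/.cargo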
2022-04-15 06:29:13 then it doesn't matter if an aport does it or not 2022-04-15 06:29:43 and i hope nobody is building regular go stuff on the builders that might get their things deleted like that :p 2022-04-15 06:30:08 (i.e., it should be safe since nobody is using things in that home folder) 2022-04-15 06:30:15 or it could set GOPATH for the whole of abuild 2022-04-15 06:32:15 also works, but i would prefer it to be more opt in 2022-04-15 06:32:45 i was thinking of adding `-g`, similar to `-r` for rust in abuild 2022-04-15 06:32:55 wait, no, not abuild, newapkbuild 2022-04-15 06:34:26 psykose: export GOPATH=${GOPATH:-} or whatever was the syntax 2022-04-15 06:35:44 `export GOPATH="${GOPATH:-default}"` or `: ${GOPATH:=default}` (this doesn't export) 2022-04-15 06:36:39 tbf, this shouldn't be a problem if it was done in rootbld 2022-04-15 06:36:54 indeed not 2022-04-16 18:58:31 added 100G to usa9-dev1 rootfs 2022-04-17 07:20:04 what is with some CIs, no disk space? 2022-04-17 07:20:13 git perms 2022-04-17 07:20:22 aha 2022-04-17 07:20:58 'Gitlab delenda est' 2022-04-17 07:24:03 ugh, !33329 2022-04-17 07:24:53 I think I should remove this, as an idea even 2022-04-17 08:11:27 :) ? 2022-04-17 08:13:30 if you put it in main/ instead i will merge it instantly 2022-04-17 08:13:32 pinky swear 2022-04-17 08:13:58 sure do, just need to fix it first 2022-04-17 08:14:04 does it actually need polkit and friends or is that optional 2022-04-17 08:14:46 quite a lot of deps are optional, for now I've built it with everything 2022-04-17 08:14:54 including polkit 2022-04-17 08:19:52 I'm really starting to hate alpine (though I don't hate anything) 2022-04-17 08:20:22 and 'linux' in general ;) 2022-04-17 08:20:51 same, I prefer windows 2022-04-17 08:21:34 never used windows, so my life was/is a lot nicer 2022-04-17 11:05:48 The CI image was regenerated last night, but it built the same commit, so not sure why it started to complain again 2022-04-17 11:06:35 : ) 2022-04-17 11:06:40 and it's only 32bit 2022-04-17 11:07:02 also 3.15 is stuck 2022-04-17 11:07:09 and 3.16-aarch is probably too 2022-04-17 13:31:50 ooh, because it's now in the image, it already fails in the before_script, where it already tried to execute git fetch 2022-04-17 13:31:58 but not sure why this happens on the 32-bit images only 2022-04-17 13:32:25 maybe because for the other arches, it has an older version of the image on the host 2022-04-17 14:35:33 🤦 2022-04-17 14:36:36 psykose: mps CI issue has been fixed 2022-04-17 14:36:59 ikke: thanks 2022-04-17 14:38:08 ikke: was git fixed or space fixed?
2022-04-17 14:38:19 git 2022-04-17 14:38:27 thanks 2022-04-17 14:38:27 It was not a space issue 2022-04-17 14:38:38 http://dup.pw/alpine/aports/53b28cf9050e 2022-04-17 14:40:21 mps: and no, we are not going to destroy gitlab 2022-04-17 14:40:35 ikke: ;p 2022-04-17 14:42:09 my 'sentence' was not about gitlab usage and work, but about how easy it is to add crap there 2022-04-17 14:45:29 ikke: i think lint is broken after that 2022-04-17 14:45:48 ==> Linting Fatal: not inside a git repository 2022-04-17 14:45:54 /usr/local/bin/lint: cd: line 40: can't cd to Fatal: not inside a git repository: No such file or directory 2022-04-17 14:48:20 hmm, peculiar 2022-04-17 15:48:03 panekj: thanks for reporting, should be fixed now 2022-04-17 15:55:00 yep, works 2022-04-17 15:55:04 thanks again 2022-04-17 15:56:53 although it seems to still fail https://gitlab.alpinelinux.org/pj/aports/-/jobs/694744 2022-04-17 15:57:10 you need to rebase the MR 2022-04-17 15:58:18 right, I'll check later 2022-04-18 23:57:15 lmao nice 2022-04-19 07:05:30 do we have the needed diskspace to do v3.16? 2022-04-19 08:46:12 ncopa: we soon will have, but that is not your question :) 2022-04-19 08:49:11 ncopa: when will you need the space? 2022-04-19 08:50:30 i guess the x86* builders have enough, and ikke mentioned the arm one still has some left. 2022-04-19 08:50:38 i think the limitation will be in distribution 2022-04-19 08:50:57 and this is what we are trying to solve right now. 2022-04-19 08:52:41 ok. good thanks. I just want to avoid that we hit a roof when doing release candidates and are not able to do the release due to lack of diskspace 2022-04-19 08:53:21 i guess the release problems are builder related 2022-04-19 08:56:13 for distribution, my guess is that v3.16 will be ~ 230GB 2022-04-19 09:29:19 the remove-before_script fix in master needs to be backported 2022-04-19 09:29:34 ci in 3.15 fails on the before script as well 2022-04-19 09:30:23 HRio: will do 2022-04-19 09:30:31 ikke: thanks 2022-04-19 09:34:15 backported back to 3.12 2022-04-19 09:39:10 HRio: fyi, you need to rebase any MRs to take these changes into account 2022-04-19 10:04:36 ikke: thanks again 2022-04-19 13:07:09 Somehow I don't believe that 2022-04-19 13:08:29 mm, looks like there's a problem with cloning from aports in the CI https://gitlab.alpinelinux.org/sdomi/aports/-/jobs/696307 2022-04-19 13:08:44 on some runners, at least 2022-04-19 13:09:35 seems a temporary issue, now it continues 2022-04-19 13:10:22 hmm, it failed 2 times for me earlier.. welp 2022-04-19 13:11:06 oh, and that pipeline you just started failed at the end with "couldn't execute POST" :P 2022-04-19 13:11:29 Yes, I see 2022-04-19 13:11:40 Seems like some network issue 2022-04-19 13:11:47 yep 2022-04-19 14:24:55 Nothing reported for equinix 2022-04-20 21:12:59 ikke: it has been fixed 2022-04-20 21:13:06 it was sd_mod that was missing in modules 2022-04-20 21:13:29 Aha, nice. So it boots now?
2022-04-20 21:13:30 its weird as one system loads it automagically, and the other doesnt 2022-04-20 21:13:39 yup 2022-04-20 21:13:52 nld.t1.alpinelinux.org is now online :D 2022-04-20 21:14:06 w00t 2022-04-20 21:14:32 storage monster :) 2022-04-20 21:15:54 ikke: try 147.75.32.71 2022-04-20 21:16:34 I'm on it 2022-04-20 21:17:30 aha, so it has lots of block devices 2022-04-20 21:18:06 yes 2022-04-20 21:18:19 2 nvme 2022-04-20 21:18:25 2 sata 2022-04-20 21:18:31 12 sas 2022-04-20 21:19:31 12x 8TB 2022-04-20 21:19:40 i think that will be enough for now 2022-04-20 21:19:54 😁 2022-04-20 21:20:49 For the time being :P 2022-04-20 21:21:05 seems fdisk is limited 2022-04-20 21:21:13 it doesnt want to show the whole disk 2022-04-20 21:21:38 Disk /dev/sde: 2048 GB, 2199023255040 bytes, 4294967295 sectors 2022-04-20 21:21:46 lsblk 2022-04-20 21:22:07 7tb per device, wat 2022-04-20 21:22:43 we can make a raid1 with 10 spares 2022-04-20 21:24:33 we could do zfs 2022-04-20 21:24:42 but it will use a lot of memory 2022-04-20 21:24:57 oh wait, we have 200GB left :) 2022-04-20 21:25:43 oof 2022-04-20 21:28:07 ok i will try to bring up the other 2 servers soon 2022-04-20 21:28:34 if mason can release an updated image without making it public like he did before :) 2022-04-20 21:28:52 its not that stable yet 2022-04-20 21:29:00 With this amount of storage, we can think again about storing every version of packages 2022-04-20 21:29:31 yup i think we could 2022-04-20 21:30:51 but im still reluctant about that idea, we need to do the calculations to see if it makes sense. 2022-04-20 21:31:02 yes 2022-04-20 21:31:15 issues for later :) 2022-04-20 21:31:52 ikke: please think about a better naming scheme 2022-04-20 21:31:59 archlinux has https://archive.archlinux.org/ 2022-04-20 21:32:06 It's very nice to have 2022-04-20 21:32:38 not sure if what i have used now is correct 2022-04-20 21:32:39 If I updated the kernel but need modules for the currently running kernel, I can just grab them from there 2022-04-20 21:32:52 You mean the machine size? 2022-04-20 21:33:02 size? 2022-04-20 21:33:06 dns names i mean 2022-04-20 21:33:09 oh ok 2022-04-20 21:33:19 we wanted to update it 2022-04-20 21:33:24 remove the devX i guess 2022-04-20 21:33:58 I think it would be nice to have at least some indication of usage in the name 2022-04-20 21:34:07 nod 2022-04-20 21:34:13 and region would be nice 2022-04-20 21:34:31 thats why i made nld.t1.a.o 2022-04-20 21:34:48 so we could have nld.dev.a.o 2022-04-20 21:35:29 and for a builder it would be nice to have that arch 2022-04-20 21:35:33 the* 2022-04-20 21:35:35 yes 2022-04-20 21:35:44 maybe something like app for application hosts 2022-04-20 21:35:55 either using docker or lxc 2022-04-20 21:36:03 or whatever 2022-04-20 21:36:51 maybe do dev-x86 2022-04-20 21:37:00 bld-x86? 2022-04-20 21:37:03 build-x86 2022-04-20 21:37:09 yeah thats also fine 2022-04-20 21:37:38 Maybe number them, to plan for more builders per arch in the future 2022-04-20 21:37:46 if we have some way of orchestrating that 2022-04-20 21:38:40 maybe write the definition somewhere, maybe the wiki on gitlab under infra 2022-04-20 21:38:58 sounds good 2022-04-21 10:14:47 oh neat, 96tb of raw storage 2022-04-21 10:14:48 good news :) 2022-04-21 10:25:35 We still need to get a final confirmation whether we can use it 2022-04-21 10:30:06 makes sense 2022-04-21 10:30:14 though, why would you be given access to that and then told no? :p 2022-04-21 10:32:29 1.
2022-04-21 10:32:29 1. We can just instantiate any machine we want in that account, atm there are no limits
2022-04-21 10:33:21 2. They don't have a lot of these storage nodes, so they need to make sure that the capacity is long-term available
2022-04-21 10:36:12 ah
2022-04-21 10:36:19 right, makes sense
2022-04-21 11:43:21 CONTENT PARTNERSHIP
2022-04-21 11:49:32 because why not
2022-04-21 12:22:17 stunnel/busybox are stuck again on 3.16
2022-04-21 15:14:49 SigIgn: 0000000000001004
2022-04-21 15:14:53 Where does that come from......
2022-04-21 15:16:57 mystery ignore
2022-04-22 09:43:39 when is the 3.16 release planned
2022-04-22 09:43:57 https://alpinelinux.org/releases/
2022-04-22 09:44:55 Each May and November we make a release branch from edge
2022-04-22 09:45:04 i looked there yesterday
2022-04-22 09:45:09 no 3.16
2022-04-22 09:45:32 did you read the txt?
2022-04-22 09:46:03 yes
2022-04-22 09:46:14 "each may and november"
2022-04-22 09:46:23 'use crystal ball Luke' ;)
2022-04-22 09:47:07 I mean, predicted date
2022-04-22 09:47:53 when its finished ;-)
2022-04-22 09:48:06 that sounds a lot better
2022-04-22 09:48:24 ncopa said it's going smooth
2022-04-22 09:48:39 so i expect only one month delay ;-)
2022-04-22 09:48:49 lol
2022-04-22 09:48:53 right
2022-04-22 09:49:42 so if all goes well, it should happen next month
2022-04-22 09:51:14 good, so I have some more time to look at what needs to be fixed and upgraded
2022-04-22 09:53:40 ikke: any preference for raid level on t1 boxes?
2022-04-22 09:53:50 i am thinking of using zfs
2022-04-22 09:54:04 I have no experience with zfs at all
2022-04-22 09:54:07 with the 2 nvmes as log drives
2022-04-22 09:54:23 no but raid is the same
2022-04-22 09:54:28 except for draid
2022-04-22 09:54:30 But if you think it's reliable / workable, then go for it :)
2022-04-22 09:54:45 why would it not be reliable?
2022-04-22 09:55:01 "I have no experience with zfs at all"
2022-04-22 09:55:01 most enterprise storage systems use zfs
2022-04-22 09:55:14 to my knowledge at least
2022-04-22 09:55:20 zfs on linux?
2022-04-22 09:55:20 I use zfs
2022-04-22 09:55:36 on linux or freebsd
2022-04-22 09:55:44 i dont have numbers so i dont know
2022-04-22 09:55:48 don't use zfs
2022-04-22 09:55:51 But it's fine by me
2022-04-22 09:55:59 but the raid level is the same for everything
2022-04-22 09:55:59 Just need to learn it
2022-04-22 09:56:14 raid5/6/7
2022-04-22 09:56:18 yea
2022-04-22 09:56:27 For RAID i'd go for 5 or 6
2022-04-22 09:56:35 as this has so many disks im not sure what to choose
2022-04-22 09:56:38 you are opening a 'can of worms'
2022-04-22 09:56:49 a higher raid level would make sense when you have more disks
2022-04-22 09:57:04 but then again, the data is not missing critical
2022-04-22 09:57:05 12 disks, right?
2022-04-22 09:57:13 mission*
2022-04-22 09:57:18 yup
2022-04-22 09:57:29 mps: thanks for your advice
2022-04-22 09:57:38 i will choose zfs ;-)
2022-04-22 09:57:44 :p
2022-04-22 09:58:32 clandmeter: you will have to tear your hair out, not me ;)
2022-04-22 09:59:42 lets wait and see, i will cry in your arms if needed.
2022-04-22 10:00:57 :)
2022-04-22 10:01:54 clandmeter: are these boxes running with ECC RAM?
2022-04-22 10:02:03 yes
2022-04-22 10:02:10 at least i would believe so :)
2022-04-22 10:02:15 then I don't see any reason not to use zfs
2022-04-22 10:04:47 HMA82GR7CJR8N-WM seems ecc
2022-04-22 10:04:58 but i would not expect otherwise for this kind of server spec
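
An aside, since the thread settles on zfs with the NVMe drives as log devices: a minimal sketch of that layout, raidz2 across the 12 SAS disks with the two NVMe drives as a mirrored SLOG. The pool name and device names are assumptions, not taken from the actual servers:

    # two-disk fault tolerance (roughly RAID6), plus a mirrored log device:
    zpool create tank raidz2 \
        /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh \
        /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn \
        log mirror /dev/nvme0n1 /dev/nvme1n1
    zpool status tank
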
2022-04-22 10:21:26 clandmeter: zfs is good! you will learn to love it
2022-04-22 10:24:25 lol
2022-04-22 10:37:26 clandmeter: our current t1 mirrors stopped syncing btw
2022-04-22 10:37:55 dl-4 and dl-5
2022-04-22 10:53:43 should we disable openjdk7 temporarily to unblock builders
2022-04-22 10:54:14 mps: No, it would not help a lot
2022-04-22 10:54:28 Everything else has mostly already been built
2022-04-22 10:54:49 And I had set the builders to continue on error
2022-04-22 10:54:57 ok
2022-04-22 11:32:18 clandmeter: with RAID5, we have 88TB of space and can lose 1 disk
2022-04-22 11:32:38 RAID6 means 80 TB + 2 disk fault tolerance
2022-04-22 11:38:42 http://www.raid-calculator.com/default.aspx
2022-04-22 12:59:37 ah so, that's why mirrors aren't in sync :)
2022-04-22 13:00:09 rnalrd: yes
2022-04-22 13:00:15 clandmeter and ikke are playing with disks :P
2022-04-22 13:00:20 well
2022-04-22 13:00:40 We are playing with disks because we don't have disk space ;-)
2022-04-22 13:01:47 thanks for taking care of that :)
2022-04-22 13:09:47 but, we do need to figure out how to get the mirrors going short-term again
2022-04-22 16:28:03 clandmeter: apparently gbr1 is also full :(
2022-04-22 17:44:13 I'm not home
2022-04-22 17:44:19 i can check later
2022-04-22 18:27:37 np, it's the mirror there
2022-04-22 18:27:39 1.7TB
2022-04-23 10:04:13 We really need to figure out how to get space on the mirrors (before we switch over to the new planned setup)
2022-04-23 10:04:17 dl-cdn is behind now as well
2022-04-23 10:51:23 it is quite an interesting experience to help a nearly total unix/linux newbie install alpine with xfce on an experimental machine like the M1 :)
2022-04-23 10:51:50 and all over IRC
2022-04-24 06:42:31 78G available atm on usa9
2022-04-24 11:23:53 clandmeter: nld5 has distfiles_raid0 115G, can we use that for mirror space instead?
2022-04-24 11:24:01 6G in use
2022-04-24 11:24:17 nld3 has 214G free in the volume group
2022-04-24 11:50:35 I think so
2022-04-24 11:51:06 we solved the distfiles differently?
2022-04-24 11:51:21 These are local distfiles
2022-04-24 12:00:47 hmm, nld5 uses raid 10
2022-04-24 12:39:01 clandmeter: both dl-t1-1 and dl-t1-2 point to nld3
2022-04-24 12:58:51 https://imgur.com/y7pjT8R
2022-04-24 13:39:22 https://i.imgur.com/lxNXdkW.png
2022-04-24 14:00:51 \o/
2022-04-24 14:12:50 storage issue has been fixed?
2022-04-24 14:13:00 Newbyte: not long-term yet
2022-04-24 14:13:13 nld3 had some extra space in the volume group, which I assigned
2022-04-24 14:13:37 but nld5 is not syncing yet
2022-04-24 14:13:48 which is the source for dl-4 / dl-5
2022-04-24 15:25:02 clandmeter: ping
2022-04-24 15:33:35 :o
2022-04-24 18:18:04 ftr, these mirrors are still not fully synced
2022-04-24 19:00:47 ikke: pong
2022-04-24 19:00:50 ah
2022-04-24 19:00:52 hi
2022-04-24 19:01:15 whatsup
2022-04-24 19:01:21 trying to get the mirrors going again
2022-04-24 19:01:28 ok
2022-04-24 19:01:32 any luck?
2022-04-24 19:01:34 yea
2022-04-24 19:01:37 and yes, we only have 1 t1
2022-04-24 19:01:38 atm
2022-04-24 19:01:38 gave nld3 200G
2022-04-24 19:01:45 and that helped for nld3
2022-04-24 19:01:51 so it's syncing again
2022-04-24 19:01:56 and with that all other mirrors
2022-04-24 19:01:57 thats the most important one
2022-04-24 19:02:09 I just switched dl-4 and dl-5 to point to nld3
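
An aside: the capacity figures at 11:32 check out for 12 x 8 TB drives, since usable space is (number of disks minus parity disks) times disk size:

    echo $(( (12 - 1) * 8 ))   # RAID5: 88 TB usable, tolerates 1 failed disk
    echo $(( (12 - 2) * 8 ))   # RAID6: 80 TB usable, tolerates 2 failed disks
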
2022-04-24 19:02:20 sorry, i had a house full of ppl whole day. no time to take a look at it
2022-04-24 19:02:33 no problem
2022-04-24 19:03:04 on nld3, there is ~1.7T used
2022-04-24 19:03:09 nld6 is 1.6T
2022-04-24 19:03:18 nld4*
2022-04-24 19:04:34 So I wonder if we should try, and whether it is even possible, to switch to no raid on nld5
2022-04-24 19:04:42 Should give us some more disk space
2022-04-24 19:04:54 It will be decommissioned at some point anyway
2022-04-24 19:05:25 we can just kill that mirror
2022-04-24 19:05:38 right
2022-04-24 19:06:01 i guess we can get the new t1's online soonish
2022-04-24 19:06:19 and nld3 is/was the most important anyways
2022-04-24 19:06:28 the few extra mbs wont kill it
2022-04-24 19:06:43 no
2022-04-24 19:06:53 but it does not have that much left either
2022-04-24 19:07:14 /dev/mapper/vg0-mirror 1.9T 1.7T 97G 95% /srv/mirror
2022-04-24 19:07:30 yeah
2022-04-24 19:07:40 its limited
2022-04-24 19:07:56 we could move some stuff off from master
2022-04-24 19:08:11 and move it back when the new t1's are ready
2022-04-24 19:08:24 i think master is also getting fuller
2022-04-24 19:08:49 tank/ct/8185 1.9T 1.6T 281.4G 86% /
2022-04-24 19:08:49 yes
2022-04-24 19:10:38 3.16 is now 166GB
2022-04-24 19:11:00 estimate will be 210GB
2022-04-24 19:11:11 so expect another 50 GB to be added
2022-04-24 19:11:16 right
2022-04-24 19:11:18 + releases
2022-04-24 19:11:31 thats minus releases
2022-04-24 19:11:38 nod
2022-04-24 19:11:39 i mean 3.15 already has multiple releases
2022-04-24 19:11:56 3.16 will only have one + rc's
2022-04-24 19:12:07 for now :)
2022-04-24 19:12:34 so with a bit of luck nld3 will make it until we switch it
2022-04-24 19:12:37 I suppose we cannot setup rsync so that others syncing from us will not delete older releases while we do not have them locally?
2022-04-24 19:13:08 i dont think so
2022-04-24 19:13:15 but i guess its not needed
2022-04-24 19:13:35 i wonder if we should have a different layout
2022-04-24 19:13:46 move non-supported releases to a different folder
2022-04-24 19:14:57 or just let the mirror master figure out what to do with it.
2022-04-24 19:15:57 i need to run again, the painter is coming tomorrow, need to clean up the mess before they mess up my stuff :)
2022-04-24 19:17:06 ill be back a little later
2022-04-25 07:44:21 Hi ikke, I heard you wanted to upgrade the s390x servers. How may I help you with that. Thanks.
2022-04-25 07:45:07 ACTION
2022-04-25 07:47:24 Hello Guest2757
2022-04-25 07:48:46 Hello
2022-04-25 07:50:32 I think we can do the actual OS upgrade, but would need someone to be able to take a look when we reboot
2022-04-25 07:51:04 and be able to fix things if it somehow doesn't boot
2022-04-25 07:53:33 ikke: thanks. We can do it this week if you have time. I'll brb (It's me tmhoang trying to log in...)
2022-04-25 07:53:54 Hi clandmeter (it's me tmhoang)
2022-04-25 07:54:55 hi
2022-04-25 07:55:06 you dont use your thelounge anymore?
2022-04-25 07:57:06 I had an issue with OFTC last time but didnt have time to check - let me try again
2022-04-25 07:57:17 with OFTC via thelounge
2022-04-25 07:57:23 right
2022-04-25 07:57:37 you can do sasl or whatever its called
2022-04-25 07:58:09 certfp
2022-04-25 08:03:41 ACTION tmhoang
2022-04-25 08:11:37 tmhoang: I think this week should work. Any moment that works best for you?
2022-04-25 08:11:56 ikke: Today is fine for me.
2022-04-25 08:12:26 assuming you are in Berlin time, maybe after lunch ? 14:00 ?
2022-04-25 08:13:22 Ok, I'll probably do the upgrade before that, and then reboot it at that time
2022-04-25 08:14:03 or later is all good, thanks ikke
2022-04-25 08:24:27 hm, I had the impression that s390x already died :)
2022-04-25 08:32:59 ive never seen one irl, so it could be just a virtual machine ;-)
2022-04-25 08:37:03 I did and even got a job offer to work on it, but i thought it is a fridge so I rejected :D
2022-04-25 09:08:22 5 mirrors left which are behind
2022-04-25 09:09:03 ikke: i will ask ed if we can already setup one mirror and keep that specific machine
2022-04-25 09:09:10 preferable the one ima ams
2022-04-25 09:09:17 s/ima/in
2022-04-25 09:09:17 clandmeter meant to say: preferable the one in ams
2022-04-25 09:14:45 https://i.imgur.com/NVpIqXf.png
2022-04-25 09:25:42 ?
2022-04-25 09:26:53 Made a map in zabbix about the current mirror infra
2022-04-25 09:28:59 so the only pain-point is nld5?
2022-04-25 09:32:51 nld3 has <200G free atm, but yes, mostly nld5
2022-04-25 09:41:49 clandmeter: https://zabbix.alpinelinux.org/zabbix.php?action=dashboard.view&dashboardid=1
2022-04-25 09:42:26 is that my new house?
2022-04-25 09:43:10 haha, yes
2022-04-25 10:23:09 tmhoang: zipl gives an arithmetic exception: https://tpaste.us/QNK4
2022-04-25 10:35:42 tmhoang: I've upgraded both boxes, but not rebooted them yet
2022-04-25 10:50:34 ikke: please dont reboot just yet, let me take a look at that error.
2022-04-25 10:50:45 tmhoang: yes, I won't :)
2022-04-25 13:00:09 mps: https://analyticsindiamag.com/ibm-unveils-industrys-first-quantum-safe-system-ibm-z16/
2022-04-25 13:03:48 so maybe the time the waiter/waitress at the restaurant waits for the POS device to confirm your credit card and print out the bill might be reduced by a fraction of a second, or a few.
2022-04-25 13:04:41 assuming your bank has fraud checks
2022-04-25 13:05:23 hehe, I use only cash
2022-04-25 13:05:39 that's smart
2022-04-26 04:25:43 we know algitbot, we know
2022-04-26 09:57:07 ikke: Hi, which alpine version did you upgrade the s390x-ci from ? 3.12 ? Same for s390x.a.o ?
2022-04-26 09:57:27 s390x-ci from 3.12, s390x from 3.10
2022-04-26 17:34:44 clandmeter: psykose https://zabbix.alpinelinux.org/zabbix.php?action=dashboard.view&dashboardid=1
2022-04-26 17:35:02 fancy new dashboard :)
2022-04-26 17:39:16 ahuh
2022-04-26 17:39:23 there is also a map navigation tree
2022-04-26 17:39:37 Allows you to select what map you want to see from a tree list
2022-04-28 06:10:53 I
2022-04-28 06:10:57 morning
2022-04-28 06:11:11 im removing the *_rc* from /srv/mirror/
2022-04-28 06:31:42 👍
2022-04-28 18:04:04 psykose: I think I fixed the aports hook now
2022-04-28 18:50:59 do we have something like 'run-on-first-boot', a script which sets basic things after installation
2022-04-28 18:51:53 hm, firstboot is there
2022-04-28 18:52:34 but this is not what I need
2022-04-28 18:53:00 local rc script that deletes itself? :)
2022-04-28 18:53:37 yes, something like this
2022-04-28 18:53:58 will try to create something
2022-04-28 18:54:32 I need it to extract wifi firmware for apple m1
2022-04-28 18:54:42 I was about to mention that or a cronjob on @boot but me think BB does not have that extension
2022-04-28 18:56:07 I think so
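
An aside on the 'local rc script that deletes itself' idea above: OpenRC's local service (enabled with rc-update add local default) runs /etc/local.d/*.start at boot, so a one-shot script can simply remove itself once it succeeds. A minimal sketch; extract-m1-wifi-firmware is a placeholder, not a real tool:

    #!/bin/sh
    # /etc/local.d/firstboot.start: does its work once, then deletes itself
    # so it never runs on later boots.
    extract-m1-wifi-firmware || exit 1   # placeholder for the real extraction step
    rm -- "$0"
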
2022-04-28 19:53:04 clandmeter: fyi, I had to move the update hook for aports to the gitaly config, as gitaly is executing the hook
2022-04-28 23:53:27 ikke: you sure did
2022-04-29 06:38:27 morning
2022-04-29 06:38:46 ikke: ok, the git update hook i guess?
2022-04-29 06:38:55 i dont even remember what was inside of it
2022-04-29 06:38:59 some check i think?
2022-04-29 06:39:12 if a user is dev or not
2022-04-29 06:46:40 correct
2022-04-29 06:46:45 and preventing pushing new branches
2022-04-29 07:01:49 ikke: i want to switch to the new webhook container
2022-04-29 07:02:00 but i dont have the time to monitor it
2022-04-29 07:02:04 if something goes wrong :)
2022-04-29 07:03:20 what im actually saying is, i dont expect it to go wrong, but if it does, can you check the container for strange things?
2022-04-29 07:03:39 in case im offline
2022-04-29 07:15:36 I suppose :)
2022-04-29 07:23:28 i guess if everything stops working you will try to debug it ;-)
2022-04-29 07:25:05 I will certainly try
2022-04-29 07:33:13 ok i will try to switch when i'm finished installing ubuntu via ikvm remotely over vpn. this will probably take a few hours....
2022-04-29 08:16:10 ikke: I have built and installed s390-tools-2.14 (which is supposed to have a working bootloader program) but it still failed on the s390x-ci server (did not try the main server yet). But it works on other Alpine servers. So this is still WIP. Just an update for you.
2022-04-29 08:16:32 tmhoang: thanks!
2022-04-29 08:21:30 ikke: I see a pinned 3.11 community repo on s390x-ci, is it ok ?
2022-04-29 08:22:45 tmhoang: You can remove it, there are no packages using that repo pin
2022-04-29 08:23:19 ikke: may I take the s390x-ci down for some time today ?
2022-04-29 08:24:42 You can
2022-04-29 08:24:52 thanks
2022-04-29 08:40:49 That looks promising
2022-04-29 08:46:31 not so much - still the same error on the rescue system. I suspect it is due to the new kernel installed.
2022-04-29 08:47:38 tmhoang: ah ok
2022-04-29 09:19:38 added ^ to maintenance so it won't constantly alert
2022-04-29 09:19:50 [problem] [solved]
2022-04-29 09:22:31 ikke: so it seems s390x-ci is running the latest kernel - but another restart would fail so please let me know if you want to do so - fixing that. I'm heading to do the same for the s390x builder - is that OK atm ?
2022-04-29 09:23:51 Please go ahead. I won't be rebooting these machines myself, but it would be nice if we know they come back if they go down for some reason :)
2022-04-29 09:25:42 agreed - I'm fixing that
2022-04-29 09:26:13 there was something wrong with the new kernel and both new+old s390-tools
2022-04-29 09:26:17 👍
2022-04-29 09:30:48 The builders are idle, so it should be fine to reboot
2022-04-29 09:32:53 OK reboot should be fine on s390x-ci. But let's not install s390-tools and a new kernel for now until I fully fix this thing. Heading to the s390x builder.
2022-04-29 09:33:24 tmhoang: hi! how are you doin? thank you for helping us!
2022-04-29 09:33:49 ncopa: Hey howdy ! Thanks for everything. I'm trying to keep things up.
2022-04-29 09:34:50 i am working on a testsuite for the alpine installer. It should be (relatively) simple to add s390x with emulated qemu
2022-04-29 09:35:01 so we can test that the alpine installer continues to work on s390x
2022-04-29 12:48:27 ncopa: cool ! please let me know how I can help
2022-04-29 12:50:21 https://github.com/ncopa/alpine-installer-testsuite
2022-04-29 12:52:00 thanks
</gr_29>
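
An aside on the emulated-qemu idea for the installer testsuite: an s390x boot under emulation could look roughly like the untested sketch below, with the kernel and initramfs file names assumed to come from the Alpine s390x netboot tarball:

    qemu-system-s390x -M s390-ccw-virtio -m 1024 -nographic \
        -kernel vmlinuz-lts -initrd initramfs-lts
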
2022-04-29 14:00:26 ikke: do you know why s390x.a.o is at 3.15.4 but still on the older vanilla kernel ? Should we upgrade to lts ?
2022-04-29 14:02:00 tmhoang: I didn't replace vanilla with lts, please do
2022-04-29 14:02:19 I did check for s390x-ci, but forgot for s390x
2022-04-29 14:02:21 would it be as simple as # apk add linux-lts ?
2022-04-29 14:02:39 and apk del linux-vanilla
2022-04-29 14:11:35 ncopa, ikke: both s390x.a.o and s390x-ci.a.o are running the latest kernel now. Upgrading the kernel won't work until I fix the s390-tools/zipl bootloader issue. Restart is OK. Please help me log in and verify all services are running. Sorry for the long delay.
2022-04-29 14:11:37 thanks
2022-04-29 14:12:00 tmhoang: no worries, thanks for your help so far
2022-04-29 14:14:31 tmhoang: everything seems fine
2022-04-29 14:14:46 great !
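
An aside wrapping up the kernel switch above: it amounts to the two apk commands from the log, with the caveat that on s390x the boot loader then has to be refreshed with zipl, which is exactly the tool still being fixed here, hence the care around rebooting:

    apk add linux-lts
    apk del linux-vanilla
    # then update the boot loader (zipl on s390x) before the next reboot
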