2024-08-01 01:43:50 It seems --disable-libquadmath (which is used for other archs apart from x86*) is not enough for ppc64le, and additionally --disable-libquadmath-support is needed (the GCC bug report also mentions using both configure options)
2024-08-01 18:50:11 nu is aware of the network issues and is figuring out what the cause is
2024-08-01 20:39:56 hi
2024-08-01 20:43:36 hi
2024-08-02 13:42:27 does someone have time to tell me why this fails https://tpaste.us/QRp4
2024-08-02 13:42:45 trying to upgrade tg_owt
2024-08-03 08:55:43 build-3-20-riscv64 is idle. is it working or stuck?
2024-08-03 09:56:15 mps: the service was not running, thanks
2024-08-03 10:19:30 aha ok, thanks for fixing it
2024-08-04 23:06:09 is there any reason why the new efs-utils-2.0.3 APK wouldn't be in the edge repo? For that matter the old version isn't there either.
2024-08-05 05:32:19 tomalok: the APKBUILD is missing the `arch` field
2024-08-05 05:32:48 so it will not be built for any arch
2024-08-05 10:57:33 looks like arch= was unintentionally removed in 65fd41456d79af771b8dc4a482cd84e3c2eb684c
2024-08-05 11:15:47 clandmeter: do you know what the status is for the new loongarch64 machines in EU?
2024-08-05 11:22:17 they arrived
2024-08-05 11:22:20 router arrived
2024-08-05 11:22:43 midasi needs to commission them. not sure what exactly the status is.
2024-08-05 11:22:53 he was on holiday so there could be some delay
2024-08-05 11:50:02 ok
2024-08-05 14:06:21 ikke, ncopa: 🤦 thanks...
2024-08-06 07:25:07 ikke: the pioneer seems more stable now?
2024-08-06 10:15:45 clandmeter: I suppose so, you know best :D
2024-08-06 10:52:45 the sfp for the arm builder is replaced, hope the link won't drop anymore. they run very hot though :/
2024-08-06 10:53:12 nu_: Thanks!
2024-08-06 10:53:19 welcome^^
2024-08-06 11:03:11 nu_: thanks!
2024-08-06 11:04:44 nu_: thank you very much!
2024-08-07 07:34:11 gladly;>
2024-08-07 09:18:40 i think che-bld-1 needs a filesystem fix
2024-08-07 09:19:01 i think we need to run e2fsck
2024-08-07 09:29:04 ncopa: what happened?
2024-08-07 09:30:02 im trying to figure out why restic fails on all the arm machines
2024-08-07 09:30:10 i was not able to reproduce it in my lxc container
2024-08-07 09:30:22 it seems to be related to xattrs
2024-08-07 09:30:56 since it passes in my dev container on the same filesystem, I dont think it is related to the mount options
2024-08-07 09:31:09 so i ran e2fsck -n -f
2024-08-07 09:31:13 and it shows lots of errors
2024-08-07 09:32:34 so i thought maybe the filesystem is corrupted
2024-08-07 09:33:02 nu_ was still setting up some oob access for us
2024-08-07 11:41:44 if it's needed quickly i think i can hack something together in a few hours
2024-08-07 11:41:54 the network in theory is already there
2024-08-07 11:57:04 I don't think there is a hurry. Ncopa found out it's most likely related to tmpfs. Though I suppose it would be good to do an fs check soonish
2024-08-07 11:59:32 yeah, dont stress about it
2024-08-07 19:22:16 nu_: not experiencing any issues atm, but our monitoring still sees pings dropped every so often
2024-08-08 08:18:40 clandmeter: in case you have some time to check, the mirror page still lists qontinuum.space, while I removed that mirror from the list a couple of days ago
2024-08-08 19:57:04 I would like to take che-ci-1 offline for around 30 minutes. Any objections?
2024-08-08 19:59:01 No, go ahead
2024-08-08 20:00:13 OK, thanks. We'll take it offline now.
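For reference, the ppc64le libquadmath fix discussed at the top of this log amounts to passing both flags when configuring GCC; a minimal sketch (every other option is elided and would come from the actual APKBUILD):

    # sketch only: the two flags below are the fix from the log above;
    # the real invocation carries many more target/feature options
    ./configure \
        --disable-libquadmath \
        --disable-libquadmath-support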
2024-08-08 20:01:47 After this change, I assume we can no longer directly connect to che-ci-1, right?
2024-08-08 20:05:38 yes, exactly
2024-08-08 20:13:21 ikke: router is online
2024-08-08 20:13:25 dmvpn seems to not be working
2024-08-08 20:19:18 looks like we need some config on awall
2024-08-08 20:23:57 forgot the adp router
2024-08-08 21:00:02 looks like it needed a reboot after the ip change
2024-08-09 14:06:56 the arm*/aarch64 CI is (still) down?
2024-08-09 14:32:06 It’s up but something is wrong
2024-08-09 15:51:47 @ikke ack, ill set up some monitoring for it. so far the sfp change seems to have solved the link reset issue though (its uninterrupted for the last 4 days)
2024-08-10 09:17:40 ^ this is what happens when you down the right interface on the wrong host :(
2024-08-10 13:43:20 ikke: why was it that the arm ci runners are running in qemu instead of just host docker?
2024-08-10 15:10:12 clandmeter: mostly because it's tricky to combine different arches on the same host
2024-08-11 17:24:15 algitbot: retry master
2024-08-11 17:43:35 algitbot: retry master
2024-08-12 07:44:11 kunkku: with awall, if you have a zone that declares an interface and address, is it possible to use only the interface part for a filter? Use case is dhcp, where you cannot limit traffic by source address
2024-08-12 07:44:49 The adp-lan zone includes both, so if you use adp-lan for dhcp zones, it won't work
2024-08-12 17:52:31 pkgs is pretty slow
2024-08-12 18:45:16 it is from time to time
2024-08-13 06:44:14 algitbot: retry master
2024-08-13 17:20:57 ikke: you are right, the adp-dhcp policy is broken
2024-08-13 17:21:19 kunkku: solved it for now by adding an extra zone that only includes the interface
2024-08-14 07:26:13 ikke: ping
2024-08-14 07:26:17 https://tpaste.us/RBW5
2024-08-14 07:27:01 kunkku: ^
2024-08-14 07:41:10 ikke: i think we may want to add some zabbix notification on cert expiration
2024-08-14 07:52:15 clandmeter: we already have it for many things. What is missing?
2024-08-14 07:52:45 dmvpn
2024-08-14 07:52:55 hub2 is expired
2024-08-14 07:53:28 Ok
2024-08-14 07:53:58 I have been thinking about adding monitoring for that
2024-08-14 08:03:29 how do i register a script's output to do something in zabbix?
2024-08-14 09:46:16 i think nld has some network issues
2024-08-14 09:46:37 ls
2024-08-14 09:52:39 ikke: i put a script in the root called dmvpn-check-expiration.sh
2024-08-14 09:53:30 we could run this daily and report here if a cert would expire within x days
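The contents of dmvpn-check-expiration.sh are not shown in the log; a daily check along those lines could be built on openssl's -checkend. A minimal sketch, assuming PEM certificates under /etc/swanctl/x509 (both the path and the 30-day threshold are assumptions):

    #!/bin/sh
    # `openssl x509 -checkend N` exits non-zero when the cert expires within N seconds
    for cert in /etc/swanctl/x509/*.pem; do
        if ! openssl x509 -in "$cert" -noout -checkend $((30*24*3600)); then
            echo "WARNING: $cert expires within 30 days"
        fi
    done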
2024-08-14 11:43:31 do we have the loongarch CI machine in EU up and running?
2024-08-14 11:51:47 3 pieces
2024-08-15 17:11:14 clandmeter: replaced the cert for dmvpn2
2024-08-15 17:44:13 Time
2024-08-15 17:44:16 thx
2024-08-15 19:49:26 the aarch64 builder has been stuck for four days on elisa with one test failing
2024-08-15 20:55:26 mps: it's also stuck on ktexttemplate
2024-08-15 20:56:00 yes, forgot to mention it
2024-08-15 20:57:31 maybe 'we' can disable check on aarch64 for elisa
2024-08-16 06:44:50 equinix support is not having its finest days :)
2024-08-16 10:41:33 ncopa, ikke maybe move the conversation here
2024-08-16 10:42:04 ikke: ncopa thinks its an mtu issue
2024-08-16 10:42:13 Checking wireshark, I do receive a packet of 1588 bytes from the server
2024-08-16 10:42:46 i just asked davide about the internet connection
2024-08-16 10:42:49 aka midasi
2024-08-16 10:43:06 ftr, it does work when I first jump through 24.1
2024-08-16 10:43:07 i tried to reduce the MTU of eth0 on .190 to 1472
2024-08-16 10:43:18 ssh -J root@172.16.24.1 root@172.16.24.2
2024-08-16 10:43:18 same here
2024-08-16 10:43:51 i think reducing the MTU made the connectivity to .1 work
2024-08-16 10:43:57 i was not able to ssh to .1 before
2024-08-16 10:44:18 my experience is .1 acts the same as .2
2024-08-16 10:44:34 I am able to ssh to .1
2024-08-16 10:44:52 debug1: Local version string SSH-2.0-OpenSSH_9.8
2024-08-16 10:44:53 debug1: kex_exchange_identification: banner line 0: Not allowed at this time
2024-08-16 10:44:53 kex_exchange_identification: Connection closed by remote host
2024-08-16 10:44:53 Connection closed by 172.16.24.2 port 22
2024-08-16 10:45:19 I'm also able to ssh to .1
2024-08-16 10:45:29 ikke were you able to ssh to .1 yesterday?
2024-08-16 10:45:31 yes
2024-08-16 10:46:00 let me adjust the MTU back to 1500
2024-08-16 10:46:50 with mtu set to 1500, the connection hangs when connecting to .1
2024-08-16 10:46:59 debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: compression: none
2024-08-16 10:46:59 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
2024-08-16 10:47:03 and there it hangs
2024-08-16 10:47:13 typical mtu blackhole
2024-08-16 10:47:19 Now it hangs for me as well, strange, I could ssh yesterday
2024-08-16 10:47:55 i noticed that the path mtu is 1472
2024-08-16 10:48:38 ping packets with size bigger than 1472 don't go through
2024-08-16 10:48:55 ping: local error: message too long, mtu=1446
2024-08-16 10:49:19 i did: mtu -d
2024-08-16 10:49:54 davide is checking the specs of the connection
2024-08-16 10:50:24 it could also be something else in between that reduces the MTU
2024-08-16 10:50:41 i asked before
2024-08-16 10:50:49 he said its directly connected
2024-08-16 10:51:15 as i understand, it should work anyway, but it depends on PMTU ICMP packets being allowed
2024-08-16 10:51:24 and I have seen those being blocked before
2024-08-16 10:51:47 i think fastly blocks those, or used to block them
2024-08-16 11:21:30 Does our firewall allow them?
2024-08-16 13:30:10 i mean, does awall accept PMTU ICMP?
2024-08-16 13:42:28 dont know
2024-08-16 15:22:09 iirc awall allows all icmp packets
2024-08-16 15:22:21 it does not filter out specific types of icmp packets
2024-08-16 16:12:00 What are our options here?
2024-08-16 16:16:13 And why does it work when connecting directly?
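One way to confirm an MTU blackhole like the one debugged above is to send don't-fragment pings of increasing size; a sketch assuming iputils ping (busybox ping has no -M option) and the .1 hub address from the log:

    # 28 bytes of overhead = 20 (IPv4 header) + 8 (ICMP header)
    ping -c 3 -M do -s 1452 172.16.24.1   # 1452+28 = 1480 on the wire: should pass
    ping -c 3 -M do -s 1464 172.16.24.1   # 1464+28 = 1492: fails if the path MTU is 1480
    tracepath 172.16.24.1                 # reports the discovered path MTU hop by hop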
2024-08-19 00:36:02 For the record, the "celeste" currently in #alpine-linux and #alpine-devel is a different person, not me
2024-08-19 04:44:17 celie: oh, good to know
2024-08-19 04:45:04 Thanks, i've messaged ncopa and clandmeter, i was thinking about messaging you next
2024-08-19 04:45:48 Not that we can do anything about it (nor would i want to...i think), just to make everyone aware that that person is not me
2024-08-19 04:55:51 I find it most peculiar that they are in alpine related channels as well
2024-08-19 04:57:21 I have been told that "celeste" was known as "triallax" before this, and is on the team of Chimera Linux
2024-08-19 09:30:32 clandmeter: do you mind if we reboot che-bld-2 now?
2024-08-19 09:31:42 reason is a kernel upgrade?
2024-08-19 09:31:46 yes
2024-08-19 09:31:53 sure
2024-08-19 09:31:56 i upgraded it to 6.6.47 now
2024-08-19 09:31:59 ok. rebooting.
2024-08-19 09:32:03 we have bmc
2024-08-19 09:32:27 ncopa: they confirm they use pppoe
2024-08-19 09:32:41 but this normally only uses 8 bytes
2024-08-19 09:32:55 so not sure why so many are lost
2024-08-19 17:19:07 we are currently having issues on the che network
2024-08-19 17:19:39 what kind of issues?
2024-08-19 17:22:40 router offline
2024-08-19 17:27:48 That does not help
2024-08-19 18:25:47 che-ci-1 should be back online. Please excuse the inconvenience caused
2024-08-19 18:28:53 Regarding the MTU size: The max MTU size on our border router is currently set to 1480. Technically, it could be set to 1492 (1500 - 8 bytes for PPPoE), but we need to test this change in advance.
2024-08-19 18:30:52 Accordingly, the max icmp payload is 1452 (1480 - 20 IPv4 - 8 ICMP)
2024-08-19 19:34:08 midasi: thanks, no harm done :)
2024-08-19 19:34:53 It seems the issue is solved, at least, I can ssh into hosts in the lan over dmvpn
2024-08-19 20:23:48 ah nice
2024-08-19 20:24:19 thx midasi
2024-08-19 22:02:25 ikke: we need to migrate the t1 nl to another box
2024-08-19 22:02:33 not sure if you have time to look into that
2024-08-20 05:13:43 clandmeter: tomorrow evening / thursday I have time
2024-08-20 06:35:48 ok, we can just grab a new one and do an rsync
2024-08-20 07:24:22 Ok, and what about bgp?
2024-08-20 07:24:46 i think we just assign the address to it?
2024-08-20 07:24:56 we dont manage bgp iiuc
2024-08-20 07:25:20 can just assign the elastic ip to it, or transfer it. not sure yet
2024-08-20 07:27:03 Yes, we don't manage it, but we need to make sure the anycast address is available for the new instance
2024-08-20 07:47:25 clandmeter: do we already have the new instance?
2024-08-20 08:23:17 no, we need to bring one up
2024-08-20 09:36:30 clandmeter: I have no experience setting up a zfs pool
2024-08-20 10:29:13 i think its pretty simple
2024-08-20 10:29:21 and you dont boot over it
2024-08-20 10:29:38 i dont remember the commands either
2024-08-20 10:29:44 but you could check the history
2024-08-20 10:32:59 i am expecting a power outage in my office today, if they can find the power cable which needs replacement (which seems difficult).
2024-08-20 10:33:13 which means the rv64 boxes will be temporarily offline
2024-08-20 11:51:52 OK nl infra is going down, cya later.
2024-08-20 14:56:07 clandmeter/ikke: this afternoon we adapted the che infra and updated the max mtu size to 1492. I hope all dmvpn issues are gone now.
2024-08-20 15:05:54 midasi: Thanks!
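Since the new t1 box does not boot from the zfs pool discussed above, creating it really is only a couple of commands; a minimal sketch (pool name, device, and mountpoint are assumptions, not what was actually run):

    apk add zfs zfs-lts                   # userspace tools + module for the lts kernel
    modprobe zfs
    zpool create -o ashift=12 -m /srv/mirror tank /dev/nvme0n1
    zfs set compression=lz4 tank          # cheap win for a package mirror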
The ssh login issues seem to have been fixed
2024-08-21 12:52:23 ikke, ncopa im still having issues with ssh over dmvpn
2024-08-21 12:52:38 ok
2024-08-21 12:53:01 me too
2024-08-21 12:53:39 have you configured clamp-mss on the apu2 machine?
2024-08-21 12:53:49 MTU is 1492
2024-08-21 12:54:05 i have no clue about clamp-mss
2024-08-21 12:54:12 is it done via awall?
2024-08-21 12:54:23 awall enable clamp-mss
2024-08-21 12:54:31 yeah, i think we have an awall feature for that
2024-08-21 12:54:38 adp-*
2024-08-21 12:54:39 but hang on, seems like mtu is 1500
2024-08-21 12:55:36 adp-clamp-mss disabled Clamp MSS on WAN
2024-08-21 12:55:50 yes i was just looking at it
2024-08-21 12:55:53 lets try to set mtu to 1492 and enable adp-clamp-mss
2024-08-21 12:55:58 I did try it before
2024-08-21 12:56:05 did it make any difference?
2024-08-21 12:56:09 No
2024-08-21 12:56:12 im testing now
2024-08-21 12:56:17 i just enabled it
2024-08-21 12:57:00 ok i get something now
2024-08-21 12:57:49 i can ssh to apu2's .1 address now
2024-08-21 12:59:17 and its gone
2024-08-21 13:00:22 something is still off
2024-08-21 13:00:56 how high must the mtu be from site to site via dmvpn?
2024-08-21 13:08:21 let me try restarting dmvpn on apu2
2024-08-21 13:10:51 something works. i get an error immediately now
2024-08-21 13:11:00 kex_exchange_identification: read: Connection reset by peer
2024-08-21 13:11:01 Connection reset by 172.16.24.2 port 22
2024-08-21 13:12:20 still weird
2024-08-21 13:12:40 dont know what is wrong
2024-08-21 13:38:26 yes i got that after the fix
2024-08-21 13:38:31 but now it hangs at the same part
2024-08-21 13:40:40 fyi: we also use DMVPN and we set the MTU on the tunnel interface (gre) to 1400 and the ipsec tcp-mss to 1360 in nftables (we don't use awall)
2024-08-21 15:32:43 ikke, ncopa im building java on the builder so the service is stopped for now. when ready we need to start it again and remove the manual deps
2024-08-21 15:41:12 👍
2024-08-21 15:47:34 Ack
2024-08-21 17:37:52 I used to have access to an x86_64 lxc container on nld-dev-1.alpinelinux.org at port 22018. does that still exist? can I access it through the vpn somehow?
2024-08-21 17:40:26 It should still exist, yes
2024-08-21 17:41:07 172.16.26.18
2024-08-21 17:42:17 > ssh -p 22018 root@172.16.26.18
2024-08-21 17:42:19 > ssh: connect to host 172.16.26.18 port 22018: Connection refused
2024-08-21 17:42:22 different port/user?
2024-08-21 17:42:40 sshd probably needs to be restarted
2024-08-21 17:43:04 oh, right. that's entirely possible
2024-08-21 17:43:08 could you restart it?
2024-08-21 17:43:21 yes
2024-08-21 17:43:30 done, can you try?
2024-08-21 17:44:45 oh, via vpn it's just port 22
2024-08-21 17:44:56 the 22018 is a NAT port
2024-08-21 17:46:09 yep, works
2024-08-21 17:46:12 thanks a lot! :)
2024-08-21 17:47:21 my lxc on Pioneer is not accessible, ping doesn't respond
2024-08-21 17:47:55 does it need to be started manually?
2024-08-21 17:48:20 power is still out as far as I know
2024-08-21 17:48:40 clandmeter mentioned that they need to replace a power cable at the facility where this is running
2024-08-21 17:49:24 aha, ok
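For context on the adp-clamp-mss experiment above: MSS clamping rewrites the MSS option on forwarded TCP SYNs so endpoints never try to send segments larger than the path can carry. A rough iptables equivalent of what such a policy sets up (the interface name is an assumption; awall's generated rules may differ):

    iptables -t mangle -A FORWARD -o eth0 -p tcp \
        --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu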
2024-08-21 19:19:24 ikke: no its back
2024-08-21 19:19:35 oh
2024-08-21 19:19:35 its not back online?
2024-08-21 19:19:37 no
2024-08-21 19:19:41 hmm
2024-08-21 19:19:52 i guess the router didn't get back online
2024-08-21 19:19:55 i saw the builders
2024-08-21 19:19:58 blinking
2024-08-21 19:20:03 I cannot ping the router
2024-08-21 19:20:12 172.16.30.1
2024-08-21 19:20:51 yup
2024-08-21 19:20:56 vm is not set to autostart
2024-08-21 19:21:26 should be back up
2024-08-21 19:21:30 uhm
2024-08-21 19:21:31 ^
2024-08-21 19:22:26 mps: your container should be reachable again
2024-08-21 19:22:55 ikke: it is, thanks
2024-08-21 19:29:37 bit weird, these msgs
2024-08-21 19:29:44 boxes seem okish
2024-08-21 19:32:07 yeah
2024-08-21 19:42:41 ikke: how far did you get with the new t1?
2024-08-21 19:43:07 I still need to start
2024-08-21 19:45:14 ok
2024-08-21 19:45:51 the 3 of them have a pull strategy from master?
2024-08-21 19:46:10 I've changed sgp to pull from nld, because there were connection issues from sgp to master
2024-08-21 19:46:30 Maybe that's solved now
2024-08-21 19:46:30 maybe better to change to usa
2024-08-21 19:46:35 right
2024-08-21 19:46:57 but if they are pulling, then at least we do not need to worry too much about nl
2024-08-21 19:49:20 Ok, so deploy a new s3.xlarge.x86 instance?
2024-08-21 20:01:42 yup, same type
2024-08-21 20:02:17 im getting an http 500 on a uri on gitlab
2024-08-21 20:03:10 To create an MR?
2024-08-21 20:03:22 no, history of a file
2024-08-21 20:03:57 https://gitlab.alpinelinux.org/Celeste/aports/-/commits/try-openjdk-bootstraps/community/openjdk17/APKBUILD
2024-08-21 20:04:36 Happens when gitaly times out iirc
2024-08-21 20:05:04 should we do some tuning?
2024-08-21 20:05:50 If there is something to tune
2024-08-21 20:06:07 new server is being deployed
2024-08-21 20:06:36 nice
2024-08-21 20:06:48 ed mentioned he will arrange some extra credits
2024-08-21 20:07:42 "exception.class": "Gitlab::Git::CommandTimedOut"
2024-08-21 20:08:01 yeah, reading up on it
2024-08-21 20:08:31 It also happens when you open the 'create new MR' link that gitlab gives and your repo is too far out of date
2024-08-21 20:08:35 first suggestion, fast disk :)
2024-08-21 20:08:53 second, repo optimisation
2024-08-21 20:43:22 ok, it has been deployed
2024-08-21 20:44:09 alpine 3.18
2024-08-21 20:44:13 should I try to upgrade to 3.20?
2024-08-21 20:47:25 now is as good a moment as any
2024-08-21 20:51:10 Ok, it booted successfully
2024-08-21 20:53:28 still on .18?
2024-08-21 20:53:36 no, 3.20 now
2024-08-21 20:53:43 i mean what is installed
2024-08-21 20:53:50 they are using an old image
2024-08-21 20:53:52 oh yeah, 3.18 was installed
2024-08-21 20:54:32 Now figuring out how to set up the zfs pool
2024-08-21 20:55:07 :)
2024-08-21 20:55:22 i guess just checking my history
2024-08-21 20:55:23 yes
2024-08-21 20:55:31 Was looking in .ash_history
2024-08-21 20:55:33 with all of my typos
2024-08-21 20:56:06 heh
2024-08-21 20:56:16 should have a magic key to remove previous stupidity :)
2024-08-21 20:58:25 You removed cloud-init?
2024-08-21 20:59:39 i did?
2024-08-21 20:59:50 do we need it?
2024-08-21 21:00:27 i mean, do we still need it after it has booted?
2024-08-21 21:03:25 clandmeter: iirc it can still update settings after first boot
2024-08-21 21:03:36 but I don't think it's required
2024-08-21 21:20:11 trigger the initial sync
2024-08-21 21:20:14 triggered*
2024-08-21 21:21:06 in a tmux session in case you like to follow :P
2024-08-22 07:51:22 still syncing
2024-08-22 07:59:59 heh
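The initial sync discussed above is plain rsync from an existing tier-1 mirror; a sketch of the kind of invocation, where the public rsync module name and the local target path are assumptions:

    rsync -aH --delete --delay-updates \
        rsync://rsync.alpinelinux.org/alpine/ /srv/mirror/alpine/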
2024-08-22 08:00:06 how fast is it going?
2024-08-22 08:00:13 I don't get feedback
2024-08-22 08:02:30 du -hs is a pretty good analysis :)
2024-08-22 08:07:20 about 70mbit/s
2024-08-22 08:07:32 70 only?
2024-08-22 08:07:35 yes
2024-08-22 08:07:41 thats not much
2024-08-22 08:07:44 for 20Gbit
2024-08-22 08:07:54 I thought I set it to usa, but it still seems to be syncing with cz
2024-08-22 08:08:02 aha :)
2024-08-22 08:08:14 why not sync from the nl box?
2024-08-22 08:08:53 that would be like 10x faster
2024-08-22 08:09:28 Yeah, but I thought maybe the bad disk could cause issues
2024-08-22 08:09:34 but it will probably be okay
2024-08-22 08:10:08 set it to nld now
2024-08-22 08:10:18 I'm using iftop btw
2024-08-22 08:11:57 Still not very fast, but probably due to lots of small files
2024-08-22 08:13:20 ok, now ~600 mbit/s
2024-08-22 08:13:36 peak 975mbit/s
2024-08-22 08:18:31 :D
2024-08-22 09:27:48 current peak 1.62Gbps :)
2024-08-22 09:39:40 i guess the cdn is also connected to this one?
2024-08-22 09:39:49 i dont remember
2024-08-22 09:41:29 yes, it is
2024-08-22 11:21:14 looks like build-edge-riscv64 is stuck
2024-08-22 11:27:23 started mqtt-exec
2024-08-22 11:29:48 how do you invoke mqtt to follow builders?
2024-08-22 11:30:16 Do you want to see the messages?
2024-08-22 11:30:55 only status is interesting to me
2024-08-22 11:33:38 mosquitto_sub -h msg.alpinelinux.org -t 'build/#' -F '%t: %p'
2024-08-22 11:35:54 thanks
2024-08-22 13:51:03 algitbot: retry master
2024-08-22 16:49:52 algitbot: retry master
2024-08-22 18:09:12 clandmeter: mirror sync is done
2024-08-22 18:49:37 Nice
2024-08-23 08:21:56 mqtt-exec on algitbot is not working: https://tpaste.us/BJv9
2024-08-23 08:33:10 restarted the containers
2024-08-23 08:33:14 that seems to have fixed it
2024-08-23 10:34:37 clandmeter: The box seems to be offline ^
2024-08-23 10:34:56 or very busy. getting some pings with high latency back
2024-08-24 12:03:28 clandmeter: ncopa can I delete the rv64 build container on nld1?
2024-08-24 12:03:34 nld-dev-1
2024-08-24 13:05:59 ikke: it is on the x86_64 host?
2024-08-24 13:06:06 yes
2024-08-24 13:06:15 the one we used before we got the pioneers
2024-08-24 13:06:38 give me some time to make an archive of my container and move it
2024-08-24 13:06:54 an hour or two
2024-08-24 13:08:22 sure
2024-08-24 13:09:00 thanks
2024-08-24 13:45:13 ikke: finished.
2024-08-24 13:45:24 thanks
2024-08-24 13:46:55 ikke: i guess so
2024-08-24 14:17:19 clandmeter: can you check nld-bld-2? Seems to be unreachable since yesterday
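Archiving and moving an LXC container, as mps did above, can be as simple as stopping it and tarring its directory; a sketch with hypothetical container name and paths:

    lxc-stop -n mydev
    tar -C /var/lib/lxc -cJf /tmp/mydev.tar.xz mydev
    scp /tmp/mydev.tar.xz otherhost:/var/lib/lxc/
    # on the target host:
    #   tar -C /var/lib/lxc -xJf mydev.tar.xz && lxc-start -n mydev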
2024-08-24 14:20:23 ikke: its one of the pioneers?
2024-08-24 14:20:29 yes
2024-08-24 14:20:42 i can remotely power toggle them
2024-08-24 14:20:46 its plural
2024-08-24 14:20:53 yes, understood
2024-08-24 14:21:16 off now
2024-08-24 14:22:13 power on again
2024-08-24 14:25:54 Now it seems to be the opposite, lol
2024-08-24 15:01:48 clandmeter: looks like you have to start my lxc on pioneer again
2024-08-24 15:02:37 pioneer1 did not come back
2024-08-24 15:03:21 i can try once more
2024-08-24 15:03:28 else we need to wait until monday
2024-08-24 15:06:54 nice ;-)
2024-08-24 15:07:52 the one with IP 172.16.30.1 responds to ping
2024-08-24 15:08:12 oh, it is not the pioneer
2024-08-24 15:27:01 .1 is the router
2024-08-24 15:33:42 I see
2024-08-24 15:36:06 ok, now both are back
2024-08-24 15:36:45 mps: your container should be back
2024-08-24 15:37:30 ikke: it is, thanks
2024-08-24 16:06:34 clandmeter: the lxc service was not enabled on bld-1, that's why we have to manually start all the containers each time
2024-08-24 16:06:38 I've enabled it now
2024-08-24 16:47:49 it was on purpose so mps had to ask me every time ;-)
2024-08-24 16:49:08 :D
2024-08-24 16:52:30 :p
2024-08-24 17:04:54 Well, he still has to, since the firewall will still block the traffic by default :P
2024-08-24 17:07:11 hehehe
2024-08-24 17:36:07 Ikke: no objection to deleting the qemu based builder
2024-08-25 18:04:23 ikke: ^
2024-08-25 18:04:33 yea
2024-08-25 18:04:51 the last-updated file is not being synchronized
2024-08-25 18:05:05 What is responsible for that?
2024-08-25 18:06:01 i guess rsync?
2024-08-25 18:06:05 it seems the mqtt-exec hook only updates each repo
2024-08-25 18:06:43 are you sure that was active?
2024-08-25 18:07:11 Ah, another cron
2024-08-25 18:07:23 that's why I prefer crons in containers rather than on the host
2024-08-25 18:07:38 yeah thats what i do now
2024-08-25 18:07:51 makes no sense to use the host's cron
2024-08-25 18:07:59 when cron runs fine in a container
2024-08-25 18:10:00 Ok, should be fixed now
2024-08-26 06:11:25 omni: you give me hope for alpine :-)
2024-08-26 06:13:58 Are you referring to #16396?
2024-08-26 06:15:09 cely: ofc :)
2024-08-26 06:16:12 I'm not alone now
2024-08-26 06:22:11 Ah, i was wondering why inxi's check() was changed to `perl inxi` and then back to `./inxi`, now i see it's because $builddir changed
2024-08-26 08:58:26 clandmeter: when you have time, could you check nld-bld-2 again?
2024-08-26 09:09:55 down again?
2024-08-26 09:11:49 yes
2024-08-26 09:11:58 already since yesterday
2024-08-26 09:26:14 this is weird
2024-08-26 09:26:24 the network was not working, and nothing in dmesg
2024-08-26 09:26:39 ifup/down enabled it again
2024-08-26 09:26:55 dhcpc crashed?
2024-08-26 09:27:08 dhcpcd
2024-08-26 09:29:20 dunno
2024-08-26 09:29:21 didnt check
2024-08-26 09:29:27 could be
2024-08-26 09:30:42 but its static
2024-08-26 09:30:45 algitbot: retry master
2024-08-26 09:30:46 so i guess not
2024-08-26 09:30:47 oh ok
2024-08-26 09:32:15 the builder has no network either
2024-08-26 09:33:11 FORWARD chain is still set to accept
2024-08-26 09:35:01 restarting the builder fixed it..
2024-08-26 09:35:16 (container)
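The autostart fix mentioned above (the lxc service not being enabled on bld-1) has two halves on Alpine; a sketch, where the container name is hypothetical and the runlevel is an assumption:

    rc-update add lxc default                                # start the lxc service at boot
    echo 'lxc.start.auto = 1' >> /var/lib/lxc/mydev/config   # opt the container in to autostart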
2024-08-26 09:41:51 mps: I too want to avoid unnecessary bloat
2024-08-26 09:42:47 it would be a lot of dependencies, with their own dependencies etc, if we were to add everything needed to satisfy --recommends
2024-08-26 09:53:30 omni: yes, and I wrote a comment in the issue after I tried to upgrade an old debian installation but didn't manage to do it, just because of the famous debian 'dependency hell'
2024-08-26 09:54:32 one of the main reasons I switched to alpine is just this: minimum dependencies and only those essentially needed
2024-08-26 09:56:06 I often write that we (alpine) know our users are not lazy and stupid, and can do such tasks or are smart enough to learn
2024-08-26 10:24:55 *assume
2024-08-26 10:25:38 well, I'm lazy and stupid ;)
2024-08-26 10:26:50 ikke: is it that bld2 is more unstable?
2024-08-26 10:26:53 I'm too lazy for sure
2024-08-26 10:26:54 the rv one
2024-08-26 10:29:08 compared to bld-1?
2024-08-26 10:29:42 yes
2024-08-26 10:32:54 I don't think it's too different, only this weekend nld-2 seemed to have more issues
2024-08-26 10:33:00 https://zabbix.alpinelinux.org/zabbix.php?show=2&name=&acknowledgement_status=0&inventory%5B0%5D%5Bfield%5D=type&inventory%5B0%5D%5Bvalue%5D=&evaltype=0&tags%5B0%5D%5Btag%5D=&tags%5B0%5D%5Boperator%5D=0&tags%5B0%5D%5Bvalue%5D=&show_tags=3&tag_name_format=0&tag_priority=&show_opdata=0&show_timeline=1&filter_name=&filter_show_counter=0&filter_custom_time=0&sort=clock&sortorder=DESC&age_state=0&show_symptoms=0&show_suppressed=0&acknowledged_by_me=0&compact_view=0&details=0&highlight_row=0&action=problem.view&hostids%5B%5D=10480&hostids%5B%5D=10481
2024-08-26 10:41:23 ok, understood
2024-08-26 10:43:34 I want to upgrade the kernel on #2
2024-08-26 10:45:56 it's currently building webkit2gtk, going to be busy for a while
2024-08-26 10:46:44 clandmeter: Should I change the loongarch builder to -sk so that it continues after failure, but skips packages that failed?
2024-08-26 10:48:08 I'm not sure how -s behaves if a new commit is pushed for a specific package
2024-08-26 11:02:41 yes, that was my question also
2024-08-26 11:02:49 how would we force the rebuild
2024-08-26 11:12:51 ikke: did you ever change the fw on bld1?
2024-08-26 11:13:06 Executing awall-1.12.3-r1.trigger
2024-08-26 11:13:06 awall: adp-ssh-server: filter.1: Invalid zone: adp-wan
2024-08-26 11:13:06 ERROR: awall-1.12.3-r1.trigger: script exited with error 2
2024-08-26 11:15:45 i think this profile only works if you set up the router?
2024-08-26 11:17:34 removing the ssh profile returns even more issues
2024-08-26 11:18:58 I don't recall adding it
2024-08-26 11:24:48 i rebooted #1 but the containers dont get an ip
2024-08-26 11:25:48 iptables -P FORWARD ACCEPT
2024-08-26 11:39:43 sorry about the build, i need to apply the kernel update.
2024-08-26 11:40:46 ikke: both builders are now interconnected
2024-08-26 11:41:04 you should be able to connect to each serial port via ttyUSB0 or similar
2024-08-26 11:41:47 i forgot about the vector stuff
2024-08-26 11:41:56 do we have additional information about this issue?
2024-08-26 11:42:13 ping mps
2024-08-26 11:42:31 clandmeter: pong
2024-08-26 11:49:12 clandmeter: nice, works: `tio /dev/ttyUSB0`
2024-08-26 11:50:00 if the system has really crashed i am not sure it's useful
2024-08-26 11:50:07 maybe you can do a force reboot
2024-08-26 11:52:31 sending it over init will probably hang it
2024-08-26 17:34:55 clandmeter: seems like nld-bld-1 still has network issues. Can hardly install packages
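When awall policies misfire like the adp-ssh-server trigger error above, awall's own CLI is the usual way to inspect and toggle them before falling back to raw iptables; a short sketch:

    awall list                      # show all policies and whether each is enabled
    awall disable adp-ssh-server    # turn off the offending policy
    awall translate -o /tmp/rules   # compile the ruleset to files without applying it
    awall activate                  # apply, rolling back unless confirmed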
2024-08-26 17:38:37 still iptables
2024-08-26 18:27:33 what is the problem?
2024-08-26 18:27:55 iptables -P INPUT ACCEPT; iptables -P OUTPUT ACCEPT
2024-08-26 18:28:11 is that because of awall or docker?
2024-08-26 18:28:20 i dont expect docker to block stuff?
2024-08-26 18:28:27 awall
2024-08-26 18:28:30 I suppose
2024-08-26 18:29:13 Should I remove the awall rules?
2024-08-26 18:34:42 why do we need awall anyway?
2024-08-26 18:34:46 its behind a router already
2024-08-26 18:35:10 its probably a leftover
2024-08-26 18:35:15 Yeah, I think we came to the conclusion it's not needed
2024-08-27 05:53:06 clandmeter: without awall, it seems the policy for FORWARD is still set to DROP on nld-bld-2, so at least that's something we would need to address
2024-08-27 08:26:41 ikke: the rv builders are ok now?
2024-08-27 08:26:44 it was just the fw?
2024-08-27 10:19:13 Yes, but we need to make sure it remains okay after a reboot
2024-08-27 12:41:45 Just wondering, are the Go 1.23 issues #16401/#16402 being found through some new builder infrastructure, or was it done manually?
2024-08-27 12:51:23 cely: manually
2024-08-27 12:51:52 Ok
2024-08-27 12:52:00 I built go 1.23 from the MR and then built all depending aports
2024-08-27 12:52:21 Ok
2024-08-27 12:52:42 There are more packages with failures, and testing was still running
2024-08-27 15:23:40 seems like the ppc64le machine is down?
2024-08-27 15:24:04 i cannot ping or route to anything in .15.0/24
2024-08-27 15:26:24 It still responds to ping
2024-08-27 15:26:38 so maybe memory contention?
2024-08-27 15:28:04 ah, I can login externally
2024-08-27 15:29:14 for some reason, dmvpn is not working
2024-08-27 15:30:22 cert expired, but the script from clandmeter does not show it
2024-08-27 15:32:10 ncopa: it's back
2024-08-27 15:44:37 thank you!
2024-08-27 16:54:37 strange, what was wrong?
2024-08-27 16:54:42 bad shell?
2024-08-27 18:29:25 the DNS on the ppc64le CI seems broken
2024-08-27 18:29:38 fatal: unable to access 'https://gitlab.alpinelinux.org/alpine/aports.git/': Could not resolve host: gitlab.alpinelinux.org
2024-08-27 19:58:05 ptrc: thanks, should be fixed now, had to restart docker
2024-08-27 19:58:29 ikke: thank you!
2024-08-27 19:58:41 clandmeter: it helps to enable crond on the new mirror host
2024-08-27 20:17:51 ikke: what was the issue with the shell script on dmvpn?
2024-08-27 20:19:36 clandmeter: maybe nothing, since the cert was already expired
2024-08-27 20:19:48 ah ok
2024-08-27 20:19:59 yes, 2 certs were going to expire
2024-08-29 14:46:12 blaboon: I'm trying to create a nodebalancer in the nl-ams region, where I have several linodes with both a vlan and vpc interface with a private address, but on the node balancer configuration page it says it cannot find any node with a private address. Am I missing something?
2024-08-29 14:46:56 I did manage to create one before, not sure why it's not working now
2024-08-29 14:50:40 Ok, I finally managed to find it, just after asking, sorry. I was looking at the configurations page, where you manage other network settings, but you assign a dedicated private IP on the network tab
2024-08-30 08:57:44 ikke: apologies for the delay. i'm actually on vacation at the moment, but it looks like you got it sorted out :)
2024-08-30 09:31:07 blaboon: don't worry about it, enjoy your vacation!
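Once awall is removed, the leftover DROP policy for FORWARD noted above has to be fixed and persisted by hand; on Alpine that could look like the following sketch:

    iptables -P FORWARD ACCEPT
    /etc/init.d/iptables save       # writes the ruleset to /etc/iptables/rules-save
    rc-update add iptables default  # restore the saved rules at boot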
2024-08-30 10:46:00 very smart decision IMO https://www.phoronix.com/news/Debian-Orphans-Bcachefs-Tools
2024-08-30 10:46:39 I now regret, after some time, that I added it to aports
2024-08-30 12:28:39 eh, it's the standard "half the rust ecosystem is running on unstable versions" stuff
2024-08-30 12:29:40 it's not an issue for us because we fetch them from crates.io and link statically
2024-08-30 12:37:17 hm, IME rust is an 'issue'
2024-08-30 20:30:26 Apparently someone has been scraping our wiki