2023-05-01 07:33:59 psykose: i deliberately left it there to test your sanity
2023-05-01 07:36:20 :-D
2023-05-01 07:51:25 i think i forgot about it after ncopa fixed those rv64 lockups in qemu
2023-05-01 16:18:18 clandmeter: yeah, i remember we talked about it, then you said that you removed it so i thought it couldn't be that :D
2023-05-01 18:38:48 does the arm machine(s) have serial console?
2023-05-01 18:40:14 seems like they do. Is anything attached to the serial console on the arm machine?
2023-05-01 18:42:02 equinix has an sos console
2023-05-01 18:42:07 over ip
2023-05-01 18:42:16 oh
2023-05-01 18:42:18 wait
2023-05-01 18:42:29 do we have any arm machines which are connected to serial console?
2023-05-01 18:42:32 this is ungleich now
2023-05-01 18:42:40 yeah, thats why I ask
2023-05-01 18:43:07 im trying to figure out how to auto-enable serial console login if something is attached
2023-05-01 18:43:36 I'm not sure if anything is attached to serial
2023-05-01 18:43:49 ok
2023-05-01 18:43:55 i think I can test in qemu
2023-05-01 18:44:11 Can you even know if something is attached?
2023-05-01 18:44:34 not really
2023-05-01 18:44:46 but I can cheat
2023-05-01 18:45:17 at least on x86
2023-05-01 19:06:57 you can't know if something is attached to serial port if it is not active, except if you have serial device with extra signals, DSR for example
2023-05-01 20:11:13 mps: that is exactly what I'm looking for
2023-05-01 20:12:36 this should make it possible to boot the alpine iso with -nographic and get login via qemu's serial stdio
2023-05-01 20:13:57 for long time I haven't seen serial console adapter with these signals. usually only RX, TX and ground
2023-05-01 20:26:10 qemu-system-x86_64 -serial stdio ... set them
2023-05-01 20:29:23 yes. I thought you are speaking about hardware devices
2023-05-01 20:48:59 i actually tested with a hardware device
2023-05-01 20:49:15 i connected an usb serial console to laptop, ran minicom on it
2023-05-01 20:50:02 and on the alpine machine on the other end, it showed 0: uart:16550A port:000003F8 irq:4 tx:0 rx:0 CTS|DSR
2023-05-01 20:50:44 this means that with alpine 3.18, if you connect minicom or other serial terminal to your alpine device, and boot the iso
2023-05-01 20:50:48 you will get serial login
2023-05-01 20:51:09 even if you don't add console=ttyS0 to boot cmdline
2023-05-01 20:51:52 i dont know how it is with ttyAMA0 on aarch though
2023-05-01 20:52:11 with '-serial stdio' or '-serial /dev/ttySx'
2023-05-01 20:53:42 with physical device with physical serial cable
2023-05-01 20:53:46 without qemu
2023-05-01 20:54:29 meaning it works both with qemu -serial stdio / -nographic *and* with physical hardware
2023-05-01 20:56:04 previous week I tested it with qemu-system-aarch64 and -serial stdio when I installed OpenBSD in a qemu VM and it works
2023-05-01 20:59:50 earlier I tested alpine in qemu with u-boot-qemu and option ttyAMA0
2023-05-01 21:02:14 and I forgot what I used on real arm64 hardware
2023-05-01 21:06:04 aha, ttyS0 on olimex olinuxino
2023-05-04 05:33:15 ikke: could you kick ppc64le
2023-05-04 05:34:09 done
2023-05-04 05:40:17 thanks
2023-05-04 07:05:14 and both x86_64's i guess
2023-05-04 08:16:30 ikke: ^
2023-05-04 09:41:27 whats up with the x86_64 builders?
2023-05-04 09:42:55 stuck
2023-05-04 09:43:21 not anymore i guess
2023-05-04 09:43:26 i rebooted them
2023-05-04 09:43:37 why did they get stuck in aws-cli?
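(For reference: the serial-console trick discussed above on 2023-05-01 boils down to checking the modem-control bits the kernel reports. A minimal sketch, assuming a 16550-style UART on ttyS0; the inittab hook is illustrative, not the change that actually shipped:)

    # /proc/tty/driver/serial needs root; the pasted output above showed:
    #   0: uart:16550A port:000003F8 irq:4 tx:0 rx:0 CTS|DSR
    # CTS/DSR being asserted suggests a terminal is attached on the other end
    if grep -q 'DSR' /proc/tty/driver/serial; then
        # hypothetical hook: spawn a login getty on the serial port
        echo 'ttyS0::respawn:/sbin/getty -L 115200 ttyS0 vt100' >> /etc/inittab
    fi

    # the qemu test from the log: boot the iso with serial on stdio, no graphics
    qemu-system-x86_64 -nographic -cdrom alpine.iso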
2023-05-04 09:57:14 guess you have to kill it twice
2023-05-04 09:57:23 they get stuck because a test crashes and it hangs after that
2023-05-04 09:57:28 unfortunate pytest behaviour
2023-05-04 09:58:25 the actual crash is kinda weird, since it's not related to the test itself, rather all the tests, since it crashes in python itself from the aws-c-* native extensions
2023-05-04 09:58:30 probably something overflows somewhere
2023-05-04 11:14:57 (emphasis on killing it again)
2023-05-04 11:15:38 #nocontext
2023-05-04 11:16:44 a universe where things come back to life
2023-05-04 11:17:03 thanks
2023-05-05 09:40:14 do we want to specify an email address for our signing key we use for signing kernel modules?
2023-05-05 09:40:34 i was thinking something like kernel@alpinelinux.org or security@alpinelinux.org
2023-05-05 09:43:55 can't hurt
2023-05-05 09:43:58 or do we use ~alpine/devel@lists.alpinelinux.org
2023-05-05 09:44:18 kernel@alpinelinux.org does not exist currently
2023-05-05 09:44:55 do we want kernel@alpinelinux.org to exist?
2023-05-05 09:45:57 you would most likely be the one monitoring it :)
2023-05-05 09:46:12 it's more about who you want to get emails or not
2023-05-05 09:46:24 devel seems more interesting to me but i dunno
2023-05-05 09:46:37 something makes me think nobody will ever read it and send emails from the signing key
2023-05-05 11:03:54 ikke: racing me to delete mirror spam eh :P
2023-05-05 11:10:49 Ack
2023-05-05 15:18:21 GitLab Critical Security Release
2023-05-06 12:23:56 psykose: are you able to smoketest gitlab-test? Upgraded it to 15.10
2023-05-06 12:24:10 whadya want me to test
2023-05-06 12:24:38 login works
2023-05-06 12:24:59 I have a test suite that tests most basic functionality
2023-05-06 12:25:07 cloning via ssh / https
2023-05-06 12:25:15 forking a project
2023-05-06 12:25:30 pushing stuff worked
2023-05-06 12:25:32 ssh worked
2023-05-06 12:25:53 fork prints fail
2023-05-06 12:25:54 https://img.ayaya.dev/mWuV3I0RFa7v
2023-05-06 12:26:08 prolly cause i have it
2023-05-06 12:26:11 yeah
2023-05-06 12:26:15 useless errors
2023-05-06 12:26:33 real fork works
2023-05-06 12:27:39 committing via webif works
2023-05-06 12:27:48 cloning aports seems to hang forever
2023-05-06 12:27:49 (tests gitaly)
2023-05-06 12:27:50 ah
2023-05-06 12:27:52 takes a minute
2023-05-06 12:27:58 now to test an mr
2023-05-06 12:28:25 I don't have any realistic expectations that anything regarding that improved
2023-05-06 12:28:36 me either
2023-05-06 12:28:41 does ci work on -test
2023-05-06 12:28:46 no
2023-05-06 12:28:47 no runners
2023-05-06 12:28:53 okie
2023-05-06 12:29:36 will upgrade gitlab.a.o in a bit then
2023-05-06 12:29:44 sure thing
2023-05-06 12:30:14 https://img.ayaya.dev/mqZUmE2GS5ln
2023-05-06 12:30:16 still broke
2023-05-06 12:30:36 still unmergable since opened up2date
2023-05-06 12:30:44 yeah, the latter is more annoying
2023-05-06 12:30:58 the former just takes an extra refresh
2023-05-06 12:31:01 i wonder what they're even fixing in these updates
2023-05-06 12:31:13 they change the ui every single .X release
2023-05-06 12:31:20 and break the api some
2023-05-06 12:31:28 https://gitlab.com/groups/gitlab-org/-/issues/?sort=updated_desc&state=closed&label_name%5B%5D=type%3A%3Abug&or%5Blabel_name%5D%5B%5D=workflow%3A%3Acomplete&or%5Blabel_name%5D%5B%5D=workflow%3A%3Averification&or%5Blabel_name%5D%5B%5D=workflow%3A%3Aproduction&milestone_title=15.10
2023-05-06 12:31:33 now that is a url
2023-05-06 12:31:38 damn
2023-05-06 12:31:40 a lot of closed bugs
2023-05-06 12:32:01 looks like a ton of enterprise integration stuff
2023-05-06 12:32:13 or just 3rdparty i guess
2023-05-06 12:33:04 should test docker registry as well
2023-05-06 12:38:31 idk how to do that so you should
2023-05-06 12:38:45 yeah, I was working on that
2023-05-06 12:38:54 there is registry-test.a.o
2023-05-06 12:39:04 but I think I need to fix config to make gitlab know that endpoint
2023-05-06 13:13:27 so many spam accounts
2023-05-06 13:17:17 yeah
2023-05-06 13:19:57 alex23 is real
2023-05-06 13:21:53 sweep stakes new
2023-05-06 13:21:59 wal mart customer satisfaction
2023-05-06 13:22:00 survey
2023-05-06 13:22:28 mumbai hot models
2023-05-06 13:27:21 genericmeds dot jp
2023-05-06 13:27:32 lmao the first thing on that site is viagra
2023-05-06 13:41:41 feels faster but that's probably just it being freshly restarted
2023-05-06 13:45:33 I have this planned as well: https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/merge_requests/11
2023-05-06 13:45:54 (already moving to ruby 3.1 ahead of gitlab)
2023-05-06 13:46:59 not even 3.2 yjit smh
2023-05-06 13:48:42 ikke: also did you see my ping about qemu8.0 for riscv
2023-05-06 13:48:59 yes
2023-05-06 13:56:30 So we expect speedups / improvements with qemu 8?
2023-05-06 14:00:36 psykose: rebooting the nld-dev-1
2023-05-06 14:00:47 sure thing
2023-05-06 14:01:25 rebooting for upgrade? or
2023-05-06 14:01:37 yes
2023-05-06 14:03:45 not sure about faster, but they implemented a lot of emulations
2023-05-06 14:03:50 i.e. what hare uses in this instance
2023-05-06 14:04:30 ah ok
2023-05-06 14:04:43 I did see general virtualization improvements mentioned
2023-05-06 14:04:46 though qemu was also -Os this whole time so it's probably actually faster after a rebuild
2023-05-06 14:04:51 RISC-V: wide ranges of fixes covering PMP propagation for TLB, mret exceptions, uncompressed instructions, and other emulation/virtualization improvements
2023-05-06 14:05:10 if you want to wait like 15 min and upgrade again that is
2023-05-06 14:05:28 always funny: "Booting from Hard drive C:"
2023-05-06 14:05:36 :)
2023-05-06 14:07:18 It's back
2023-05-06 14:07:55 hmm
2023-05-06 14:08:01 all rv64 containers are stopped
2023-05-06 14:09:43 probably not set to autorestart
2023-05-06 14:10:15 autostart = 1
2023-05-06 14:10:29 gg i guess
2023-05-06 14:10:33 builders went to bahamas
2023-05-06 14:11:15 works if I start it manually
2023-05-06 14:11:29 maybe binfmt wasn't loaded yet?
2023-05-06 14:12:53 start again
2023-05-06 14:12:58 started*
2023-05-06 14:13:09 yeah, probably
2023-05-06 14:13:11 lack of depends
2023-05-06 14:15:01 should I upgrade qemu after it's built?
2023-05-06 14:15:29 yea
2023-05-06 14:15:44 that's probably 15 from now instead actually
2023-05-06 14:15:46 spinning rust builder
2023-05-06 14:15:47 and then restart the containers I suppose
2023-05-06 14:15:50 yep
2023-05-06 14:16:07 15 what?
2023-05-06 14:16:10 minutes?
2023-05-06 14:16:21 yep
2023-05-06 14:16:49 wondering if dhcp is working again on rv64
2023-05-06 14:16:55 one can dream
2023-05-06 14:17:03 It once worked
2023-05-06 14:17:08 and then suddenly stopped again
2023-05-06 14:17:28 hm, unrelatedly, should probably tag a new edge for a new edge container image
2023-05-06 14:17:32 and also make a new edge image for ci
2023-05-06 14:17:35 been a ton of updates in it
2023-05-06 14:18:05 it would be nice if that was automatic haha
2023-05-06 14:18:07 just weekly
2023-05-06 14:20:07 Could start using https://gitlab.alpinelinux.org/alpine/infra/docker/alpine
2023-05-06 14:20:38 weren't we already (for the image in ci)
2023-05-06 14:21:40 Not for alpine-gitlab-ci yet: https://gitlab.alpinelinux.org/alpine/infra/docker/alpine-gitlab-ci/-/blob/master/Dockerfile
2023-05-06 14:22:04 well
2023-05-06 14:22:09 isn't alpinelinux/build-base basically the same thing just another repo
2023-05-06 14:22:14 idk how any of these work
2023-05-06 14:22:37 build-base pulls from hub: https://gitlab.alpinelinux.org/alpine/infra/docker/build-base/-/blob/master/Dockerfile
2023-05-06 14:22:53 sure, but what i mean is
2023-05-06 14:22:59 it's functionally identical to just add an upgrade to it
2023-05-06 14:23:18 You mean: https://gitlab.alpinelinux.org/alpine/infra/docker/build-base/-/blob/master/overlay/usr/local/bin/setup.sh#L13
2023-05-06 14:23:35 one layer higher, before image is committed, not on startup
2023-05-06 14:23:45 https://gitlab.alpinelinux.org/alpine/infra/docker/alpine-gitlab-ci/-/blob/master/Dockerfile#L7 like here
2023-05-06 14:23:57 setup.sh is run in the Dockerfile
2023-05-06 14:24:20 ah
2023-05-06 14:24:24 And both images are rebuilt weekly
2023-05-06 14:24:40 something is weird then? i feel like the ci hasn't been updated in weeks
2023-05-06 14:24:42 So yes, they are already updated weekly
2023-05-06 14:25:32 strange!
2023-05-06 14:25:47 Do you have an example?
2023-05-06 14:26:03 rv64 might be an exception, because we have pull-policy never there
2023-05-06 14:26:10 because it otherwise will pull x86_64 images
2023-05-06 14:31:38 ah
2023-05-06 14:31:44 yeah, it was rv64 i was looking at
2023-05-06 14:31:46 that's weird
2023-05-06 14:32:48 The gitlab-runner runs x86_64
2023-05-06 14:33:14 and there is no way in gitlab-ci to ask it to pull images for a specific platform
2023-05-06 14:40:18 time to update traefik on deu1-dev1
2023-05-06 14:40:43 algitbot: ping
2023-05-06 14:40:48 guess it works
2023-05-06 14:41:05 That doesn't use traefik?
2023-05-06 14:41:16 it has labels
2023-05-06 14:41:23 ah
2023-05-06 14:41:25 irclogs
2023-05-06 14:41:28 well
2023-05-06 14:41:30 irclogs work
2023-05-06 14:41:40 cgit does
2023-05-06 14:42:37 seems qemu should be upgradable
2023-05-06 14:43:00 yup
2023-05-06 16:54:39 restarted build-edge-riscv64
2023-05-06 19:24:37 https://ptrc.gay/WDVVPcyO
2023-05-06 19:24:43 i imagine this was already reported?
2023-05-06 19:27:32 nope
2023-05-06 19:27:33 no
2023-05-06 19:27:37 looks new :D
2023-05-06 19:27:40 oh well 🙃
2023-05-06 19:28:00 i'll try to catch it in devtools, because i think it's some weird request failing
2023-05-06 19:28:01 Does that happen consistently?
2023-05-06 19:28:13 ptrc: if you have any request ID, that could be helpful
2023-05-06 19:28:18 happened 2 out of 2 times i tried to open a merge request, i don't know if that's very consistent :p
2023-05-06 19:28:32 it doesn't actually prevent anything
2023-05-06 19:28:37 it's just a nasty error message
2023-05-06 19:28:41 ok
2023-05-06 19:29:50 https://gitlab.com/gitlab-org/gitlab/-/issues/396387
2023-05-06 19:31:07 okay, i was able to reproduce it consistently, happens every time i load the "New merge request" page and switch to "Changes" tab
2023-05-06 19:31:41 but there's no erroring request, only a 304 not modified, req id 01GZS8PZNWV1NGK6J9DE42DQA8
2023-05-06 19:31:56 fixed in https://gitlab.com/gitlab-org/gitlab/-/merge_requests/119042 then
2023-05-06 19:31:58 i guess just 16.0
2023-05-06 19:34:33 Looks like something I can hotfix
2023-05-06 19:34:41 /backport
2023-05-06 19:45:23 https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/commit/90aadc2a71e9516f1a7eb1d54f5f66eb11cb526a
2023-05-06 19:53:31 pog
2023-05-08 05:17:16 s390x looks stuck
2023-05-08 05:17:34 3-18?
2023-05-08 05:18:23 buildrepo spinning again
2023-05-08 05:19:23 https://tpaste.us/D7OR
2023-05-08 05:19:56 weird
2023-05-08 05:20:23 would be nice to know which line of code
2023-05-08 05:20:24 sigh
2023-05-08 05:20:49 not sure how to find out
2023-05-08 05:23:33 seems to be the last (attempt) to build path=/home/buildozer/packages//community/s390x/perl-chi-0.61-r0.apk
2023-05-08 05:24:08 but cwd is HELM
2023-05-08 05:24:44 so it was probably after it, but on building helm
2023-05-08 15:25:05 :)
2023-05-08 17:15:57 I see that s390x was disabled on dotnet* packages due to diskspace. Is this still an issue? If not, I'll be able to enable it on tomorrow's monthly update.
2023-05-08 17:59:18 ayakael: probably still tight
2023-05-08 18:00:52 ayakael: and currently both 3.18 and edge track master, so it would take double the space
2023-05-08 19:34:51 Copy that, I'll enable once 3.18 is released.
2023-05-08 19:37:26 Will the space issue continue being a problem for future releases on s390x? I've already implemented approaches to reduce space usage on dotnet build, but I can investigate further.
2023-05-08 19:38:52 The s390x builders do not have a lot of diskspace
2023-05-08 19:39:16 We could ask if there is any possibility to expand it
2023-05-08 19:40:48 We do need to rearchitect the builder setup though
2023-05-08 19:40:56 the releases are only increasing in size
2023-05-09 02:26:55 should probably keep it disabled unless ibm wants to pay me to give a shit personally
2023-05-09 02:26:59 literally nobody uses that garbage
2023-05-09 03:55:16 Copy that
2023-05-09 05:53:36 ikke: can you kick 3.18 x86?
2023-05-09 06:05:50 ikke: and all the dotnet builds so they don't rebuild
2023-05-09 06:06:04 oof
2023-05-09 06:25:20 nevermind, one finished, so have to build them all anyway
2023-05-09 06:25:24 just 3.18-x86 then
2023-05-09 06:25:39 wish i could just kick those myself
2023-05-09 06:33:25 The builders would never get a chance to build anything :-P
2023-05-09 06:40:30 maybe in.. opportune moments :p
2023-05-09 07:39:34 ugh, or you have to kick them anyway since they get stuck on network memes at random
2023-05-09 07:43:56 psykose: late reply, but i think we had 2 kill hacks implemented. one was time based and one was irc based. probably only the latter got removed previously. things in cron tend to get forgotten :)
2023-05-09 07:44:16 the former was made only after the latter got removed iirc
2023-05-09 07:44:22 was about the same time
2023-05-09 07:44:29 btw, you mentioned it was using the wrong signal?
2023-05-09 07:45:00 yea
2023-05-09 09:25:03 ikke: actually, you mind carefully checking if x86_64 is stuck without killing it
2023-05-09 09:28:23 nah, isn't
2023-05-09 13:51:30 ikke: do you have perms in gitlab to move issues? https://gitlab.alpinelinux.org/alpine/aports/-/issues/14894 seems I lack the permission
2023-05-09 13:58:01 Yes, but when I'm at home
2023-05-09 14:08:36 ikke: thanks
2023-05-09 15:48:42 im trying to configure deu7-dev1 as ProxyJump for the build-*-a* machines
2023-05-09 15:48:52 but im not succeeding
2023-05-09 15:49:57 maybe fw is not open?
2023-05-09 15:50:11 deu7-dev1 [~]# ssh buildozer@2a0a:e5c1:517:1:216:3eff:fe64:b5c6
2023-05-09 15:50:13 hangs
2023-05-09 15:50:15 ncopa: is tcp tunneling allowed?
2023-05-09 15:50:24 i think i enabled it
2023-05-09 15:50:45 but it seems like ssh to containers via ipv6 is not allowed
2023-05-09 15:51:13 deu7-dev1 [~]# ping -c1 2a0a:e5c1:517:1:216:3eff:fe64:b5c6
2023-05-09 15:51:13 PING 2a0a:e5c1:517:1:216:3eff:fe64:b5c6 (2a0a:e5c1:517:1:216:3eff:fe64:b5c6): 56 data bytes
2023-05-09 15:51:13 64 bytes from 2a0a:e5c1:517:1:216:3eff:fe64:b5c6: seq=0 ttl=41 time=32.717 ms
2023-05-09 15:51:15 routing works
2023-05-09 15:51:21 Yeah, might be the case
2023-05-09 15:51:32 I proxy directly through the host as I can reach it via IPv6
2023-05-09 15:52:19 As we rarely allow direct connections to the builder containers
2023-05-09 15:53:01 yeah, thats probably a good idea
2023-05-09 15:54:20 You can proxy jump multiple times?
2023-05-09 15:54:25 first to deu7 and then to che-bld-1?
2023-05-09 15:54:33 im testing now
2023-05-09 15:54:35 (I mean, set it up so that it does that automatically)
2023-05-09 15:55:13 ok, i can proxyjump to che-bld-1
2023-05-09 15:55:36 now lets see if I configure che-bld-1 as proxyjump to the build-*-a*
2023-05-09 15:56:00 https://superuser.com/questions/1697450/ssh-config-for-multiple-proxy-jumps
2023-05-09 15:56:09 So add both to your local .ssh config
2023-05-09 15:57:01 heh i got it working
2023-05-09 15:57:03 nice
2023-05-09 15:57:11 i set up a proxyjump to che-bld-1
2023-05-09 15:57:25 then i used che-bld-1 as proxyjump for build-*
2023-05-09 15:57:31 yeah
2023-05-09 15:57:33 ssh is pretty awesome
2023-05-09 19:02:21 hmm, when looking at a commit that has multiple tags/branches, clicking on them does nothing
2023-05-09 19:03:29 it works when opening as a new tab (scroll click)
2023-05-09 19:03:43 (talking about gitlab ofc)
2023-05-10 09:44:46 hmm, knot-utils disappeared from armv7 with the 3.18 upgrade
2023-05-10 09:45:51 'options="!check"' is the fix
2023-05-10 10:19:48 s/fix/workaround :-P
2023-05-10 10:35:33 it works, is important
2023-05-10 10:40:59 knot has been disabled on 32-bit arches for a long time
2023-05-10 10:42:10 17f8c79406253fdc96765de38055bbde6e1b4cac
2023-05-10 10:42:22 3 years ago
2023-05-10 10:42:25 hm, I had it on 3.17, maybe I didn't upgrade properly
2023-05-10 10:43:11 yes, I looked at git log before building now with 'options="!check"'
2023-05-10 10:45:48 telmich: seems like our servers are unreachable?
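(A sketch of the two-hop ProxyJump setup ncopa lands on above at 2023-05-09 15:57; only the host names appear in the log, the aliases and layout are assumed:)

    # ~/.ssh/config
    Host che-bld-1
        ProxyJump deu7-dev1          # first hop

    Host build-*-a*
        User buildozer
        ProxyJump che-bld-1          # second hop; ssh chains the jumps itself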
2023-05-10 11:12:04 nice, alpine is on top https://linuxsimply.com/fastest-linux-distro/
2023-05-10 11:12:06 mps: i think we should report the error upstream
2023-05-10 11:12:21 ncopa: sure
2023-05-10 11:12:57 but I think the maintainer should do this, and if Jakub doesn't want to then someone else
2023-05-10 11:13:13 he obviously doesn't care
2023-05-10 11:13:36 i didnt care since i just disabled it
2023-05-10 11:13:49 hm, will look when I find time
2023-05-10 11:14:00 i guess we just wait for someone who cares enough :)
2023-05-10 11:14:18 mps: you could file an issue and ping jirutka
2023-05-10 11:15:01 but i think he has the opinion that we should kill all 32 bit arches
2023-05-10 11:15:23 no please
2023-05-10 11:15:42 i dont think killing all 32bit arches is a good idea either
2023-05-10 11:15:50 and we wont do that
2023-05-10 11:15:58 a lot of 32bit SBCs are around and used and alpine is very good for them
2023-05-10 11:16:23 yeah
2023-05-10 11:28:57 https://gitlab.alpinelinux.org/alpine/aports/-/issues/11773
2023-05-10 11:29:02 I'm reporting it upstream
2023-05-10 11:29:56 Are there tests still failing
2023-05-10 11:30:01 ah, I see
2023-05-10 11:30:15 ikke: yes, tested two hours ago
2023-05-10 11:31:12 i filed it here: https://gitlab.nic.cz/knot/knot-dns/-/issues/843
2023-05-10 11:32:07 just saw your report seconds before you posted the url here
2023-05-10 11:34:01 I think I will be busy for a month with writing one closed source kernel driver but hope later will more time for alpine
2023-05-10 11:34:16 have*
2023-05-10 14:20:03 ikke: seems anitya doesn't work since yesterday
2023-05-10 14:21:37 I've added v3.18 yesterday
2023-05-10 14:30:21 i mean it doesn't report outdated daily
2023-05-10 14:30:25 unless it skipped for some reason
2023-05-10 14:34:03 Oh
2023-05-10 14:34:20 I had the cron job stopped for the import
2023-05-10 14:34:31 That ran overnight
2023-05-10 14:34:55 It runs at 2am, right?
2023-05-10 14:35:27 ye
2023-05-10 14:35:30 2am utc
2023-05-10 14:45:47 So that's why, the cron container was not running
2023-05-10 15:11:28 arm is still not back
2023-05-10 15:18:29 indeed
2023-05-10 15:18:32 it'll come back eventually
2023-05-10 15:51:27 arm machine is missing?
2023-05-10 15:51:47 yes
2023-05-10 15:51:49 network
2023-05-10 15:51:53 ok
2023-05-10 15:52:01 Pinged telmich, no reaction yet
2023-05-10 16:51:02 gonna upgrade deu1/deu7 in a bit
2023-05-10 16:51:08 some containers first
2023-05-10 17:02:52 testing some stuff on deu1 too
2023-05-10 17:03:08 what kind of stuff
2023-05-10 17:03:15 crun for dockerd
2023-05-10 17:03:26 seems to work, now for all the containers
2023-05-10 17:03:40 lower memory use overhead
2023-05-10 17:03:42 crun instead of runc?
2023-05-10 17:03:43 no real downside i ever found
2023-05-10 17:03:44 yeah
2023-05-10 17:03:47 ok
2023-05-10 17:03:54 here goes nothin
2023-05-10 17:04:42 seems default-runtime didn't change the running ones
2023-05-10 17:05:01 need to recreate the containers?
2023-05-10 17:05:13 either that or i am wrong in the config
2023-05-10 17:05:25 recreate is still runc
2023-05-10 17:06:03 Going to restart gitlab in a bit to apply the hotfix (didn't get to that yet)
2023-05-10 17:06:14 i'd assume a service arg of --default-runtime= would definitely work instead but idk why the json isn't
2023-05-10 17:08:50 hmm
2023-05-10 17:08:55 i think the issue is docker compose overrides stuff
2023-05-10 17:09:00 just docker run will use the default
2023-05-10 17:09:02 interesting
2023-05-10 17:09:22 There is a runtime you can specify
2023-05-10 17:09:33 but it would be annoying if you have to do that always
2023-05-10 17:09:44 https://docs.docker.com/compose/compose-file/05-services/#runtime
2023-05-10 17:12:02 yeah i can't find anything
2023-05-10 17:19:29 even that doesn't seem to work
2023-05-10 17:20:01 bizarre
2023-05-10 17:21:23 well, gonna reboot
2023-05-10 17:28:40 based on https://github.com/docker/compose/issues/6239 it's just a compose v2 thing
2023-05-10 17:28:41 but not v3
2023-05-10 17:28:51 ...
2023-05-10 17:28:58 and they still have not fixed this
2023-05-10 17:29:35 Total reclaimed space: 12.08GB
2023-05-10 17:29:35 lesgo
2023-05-10 17:34:47 ptrc: do you know of a way to change the runtime with compose v3
2023-05-10 17:35:11 with golang cli-compose v2 running the template
2023-05-10 17:36:20 isn't v3 compose really a swarm-only thing?
2023-05-10 17:36:30 not anymore
2023-05-10 17:36:44 hm, the spec claims it's the same: https://github.com/compose-spec/compose-spec/commit/d9feb70c3799c219b9d7d9d4a5d180af947711c3
2023-05-10 17:36:48 with compose v2 and the compose specification they merged v2 and v3
2023-05-10 17:40:45 when I said "v3 compose" I meant a v3 compose file ;-) In the new merged spec "version" is deprecated
2023-05-10 17:41:33 do u have a solution or nah
2023-05-10 17:47:18 ptrc: fyi, the patch has been backported, can you check you still get the error messages?
2023-05-10 17:47:24 (gitlab patch for MRs)
2023-05-10 17:49:27 sure, i'll check in a second
2023-05-10 17:49:38 psykose: seems to work just fine as always: https://ptrc.gay/YSgoFahp
2023-05-10 17:49:53 does it actually run with crun
2023-05-10 17:49:56 ps aux
2023-05-10 17:50:24 ...right
2023-05-10 17:50:29 :)
2023-05-10 17:50:30 >containerd-shim-runc-v23
2023-05-10 17:50:31 v2*
2023-05-10 17:50:45 if you do version: 2 does it work
2023-05-10 17:51:19 nope
2023-05-10 17:51:22 Hasn't been the first time there is some mismatch between the spec and the implementation
2023-05-10 17:52:09 funnily enough, `docker info` fully recognizes that the default runtime is 'crun'
2023-05-10 17:52:30 but it still runs via containerd-shim-runc-v2
2023-05-10 17:52:31 yeah, it's just containers created with compose that don't use it
2023-05-10 17:52:40 according to psykose
2023-05-10 17:52:57 ah, wait, no
2023-05-10 17:53:09 https://ptrc.gay/slyIETqL
2023-05-10 17:53:14 it does, in fact, use crun :p
2023-05-10 17:53:33 it's just that crun probably execve(2)s into the entrypoint/command
2023-05-10 17:53:44 so it's not visible in the process tree
2023-05-10 17:54:43 yeah but there's still runc processes
2023-05-10 17:54:49 so what's up with that
2023-05-10 17:54:55 i removed all of actual runc from the system
2023-05-10 17:54:58 and it still works
2023-05-10 17:55:05 the shim is not runc itself
2023-05-10 17:55:26 yeah but there's still runc processes
2023-05-10 17:55:26 so what's up with that
2023-05-10 17:56:06 where are those processes
2023-05-10 17:57:35 deu1 server still after rebooting with everything having runtime: crun
2023-05-10 17:58:43 well, can't say what's going on with that server, but in my testing there aren't any runc processes, only the shim ( which launches containers via crun )
2023-05-10 17:59:13 psykose: did you change the update frequency of htop on deu1?
2023-05-10 17:59:17 ah i see
2023-05-10 17:59:30 ikke: i always start it with -d2 and sometimes it updates and sometimes not
2023-05-10 17:59:36 htoprc is a volatile thing
2023-05-10 17:59:38 ah ok
2023-05-10 17:59:57 no biggy, just surprised about the hyperactivity :P
2023-05-10 18:00:19 hehe
2023-05-10 18:01:40 ptrc: containerd-shim-runc-v2 is that shim you mentioned, right?
2023-05-10 18:01:44 yes
2023-05-10 18:02:01 indeed everything works with no runc
2023-05-10 18:02:13 idle memory seems a bit lower
2023-05-10 18:02:14 it's named 'runc', but launches any runtime apparently, standard container people business
2023-05-10 18:05:50 clandmeter: I wonder if we can clean up logs for thelounge, about 15G worth atm
2023-05-10 18:17:52 curious why the #1 cpu user all the time is traefik
2023-05-10 18:17:57 what it really be doin
2023-05-10 18:19:53 wiki upgrade time
2023-05-10 18:19:59 already broke the db, lesgetit
2023-05-10 18:20:32 and fixd
2023-05-10 18:23:51 for some reason i can't ssh into it
2023-05-10 18:23:52 curious
2023-05-10 18:24:23 and ssh -4 works
2023-05-10 18:24:25 very very interesting
2023-05-10 18:24:36 praise be wiki.gbr-app-1.alpin.pw
2023-05-10 18:36:28 https://img.ayaya.dev/G80hVskc8xnF
2023-05-10 18:36:30 glorious
2023-05-10 18:36:59 always 10 moving parts to this shit i keep forgetting
2023-05-10 18:37:04 would actually be easier as a container
2023-05-10 18:40:37 Feel free to containerize it :P
2023-05-10 18:41:12 oof but that's work
2023-05-10 18:41:38 Exactly :D
2023-05-10 18:43:22 :D
2023-05-10 18:46:35 The difficult part is finding out all the customizations
2023-05-10 18:47:13 it's all just localsettings and the rest is php fluff
2023-05-10 18:47:30 if i went that route i would probably just overeye the whole localsettings file
2023-05-10 18:47:37 a ton of stuff in there prolly doesn't apply anymore
2023-05-10 18:47:48 also would be nice to get a dark theme
2023-05-10 18:48:04 citizen looks nice but the colors are wonky on the color highlights
2023-05-10 18:50:32 What about plugins?
2023-05-10 18:51:10 they're all in the localsettings file and it's what's in gitmodules^pluginsdir
2023-05-11 01:15:42 arm builders having issues?
2023-05-11 03:28:18 tomalok: they're awol
2023-05-11 03:47:58 :'(
2023-05-11 07:04:44 ikke: im not sure, i dont like losing my history
2023-05-11 07:49:03 # dig che-bld-1.alpinelinux.org
2023-05-11 07:49:12 ANSWER: 0,
2023-05-11 07:50:13 arm machine is still down?
2023-05-11 07:56:43 try vigir23.place6.ungleich.ch
2023-05-11 08:12:21 ANSWER: 0
2023-05-11 08:14:19 I pinged Nico on mastodon as well
2023-05-11 09:01:38 I need to work on armv7 kernel today. I suppose I can use aws for now.
2023-05-11 09:11:24 ncopa: it might be that ungleich has a wider issue. The site shows a 502
2023-05-11 09:17:09 whoops! not good
2023-05-11 09:19:51 in the back of my head for some time is an idea/dream to have a mesh network for infra
2023-05-11 11:08:20 we have dmvpn which is a mesh vpn network
2023-05-11 11:21:08 ah, didn't know that dmvpn is mesh
2023-05-11 11:21:38 It dynamically sets up gre tunnels when necessary
2023-05-11 11:21:56 dm stands for dynamic multi-point
2023-05-11 13:42:10 Something we may take a look at: https://docs.gitlab.com/ee/administration/gitaly/configure_gitaly.html#pack-objects-cache
2023-05-11 14:11:54 no word from ungleich and the arm machine?
2023-05-11 14:12:05 not yet
2023-05-11 14:12:15 I can try to e-mail him as clandmeter suggested
2023-05-11 14:12:37 yeah, please do
2023-05-11 14:13:18 i'm happy this didn't happen on monday. would have been a catastrophe for the 3.18 release
2023-05-11 14:14:01 Yes, was thinking the same
2023-05-11 14:14:05 just the day after the release
2023-05-11 14:18:38 maybe the Arm machine needed a lie down/rest after working so hard on Monday? ;-)
2023-05-11 14:19:34 minimal: well, it apparently took ungleich with it then 🤔
2023-05-11 14:25:51 ncopa: should we post a short announcement on www.a.o?
2023-05-11 14:29:36 can we add announcements to gitlab?
2023-05-11 14:29:43 ncopa: yes, we can as well
2023-05-11 14:29:45 maybe makes more sense on gitlab
2023-05-11 14:30:16 My thinking was that it would also affect users not getting updates
2023-05-11 14:30:39 its still only less than a day?
2023-05-11 14:30:43 So our servers are unreachable due to a router failure, which they are working on
2023-05-11 14:31:53 ungleich.ch being unavailable is unrelated
2023-05-11 14:35:00 ncopa: I've added a broadcast message
2023-05-11 14:35:33 awesome. thanks!
2023-05-12 05:36:00 psykose: I wonder if this would help us: https://docs.gitlab.com/ee/administration/gitaly/configure_gitaly.html#pack-objects-cache
2023-05-12 05:36:13 They say it does increase disk i/o
2023-05-12 05:36:14 it should, assuming each job is doing the fetch
2023-05-12 05:36:32 i.e. when any ci thing runs, iirc we do 9 fetches at once
2023-05-12 05:36:34 And also increases storage
2023-05-12 05:36:40 that looks configurable
2023-05-12 05:36:57 Yes, there is an amount of hits before it's cached
2023-05-12 05:36:59 if you do max_age: 10 minutes it should be fine
2023-05-12 05:37:04 they recommend n=1
2023-05-12 05:37:13 yeah
2023-05-12 05:37:17 for 10 minutes it's probably ok
2023-05-12 05:37:28 even the default of 5 should be fine
2023-05-12 05:37:34 i would toggle it and see if it breaks anything or not
2023-05-12 05:37:43 yeah, I'll first try it on gitlab-test
2023-05-12 05:38:08 ack
2023-05-12 06:29:30 good morning! no news on arm builder?
2023-05-12 06:32:34 none yet
2023-05-12 06:45:55 ncopa: yesterday Nico replied that they have an issue with the routers for the location where the arm servers are, no ETA yet though
2023-05-12 07:14:02 how long do we wait before we start work on alternate solutions? over the weekend?
2023-05-12 07:18:06 ncopa: are there urgent issues?
2023-05-12 07:19:07 just the cloud images for 3.18 and normal security fixes
2023-05-12 07:19:58 and the kernel fix for armv7 is currently blocking me
2023-05-12 07:20:03 but not really that urgent
2023-05-12 07:20:49 also linux-edge should be upgraded
2023-05-12 07:23:47 ncopa: what alternatives do we have?
2023-05-12 07:31:22 pretty sure just waiting is fine
2023-05-12 07:59:35 yeah
2023-05-12 10:27:14 ooh
2023-05-12 10:27:16 ncopa: ^%
2023-05-12 10:27:37 arm is back
2023-05-12 10:28:22 is it on builds.a.o
2023-05-12 10:32:12 I don't see any new messages yet
2023-05-12 10:35:23 probably would have built something by now
2023-05-12 10:35:43 The dns servers (which do dns64) are not reachable yet
2023-05-12 10:36:31 so msg.a.o cannot be resolved
2023-05-12 11:07:02 hm
2023-05-12 11:07:08 i'd imagine the builds fail too then? :D
2023-05-12 11:07:12 machines are reachable tho
2023-05-12 11:09:06 indeed no dns works
2023-05-12 11:12:48 I let Nico know, they'll look at it.
2023-05-12 11:29:03 yay! progress!
2023-05-12 12:05:33 Looks like DNS is fixed?
2023-05-12 12:06:01 Yup, see build status messages again as well
2023-05-12 12:26:55 Hmm
2023-05-12 12:32:49 not good :D
2023-05-12 12:32:54 hopefully this time it's faster
2023-05-12 13:02:46 ikke: sorry but could you again add iptables rule so I could access mps-edge-ax86 from mps-edge-riscv64 container
2023-05-12 13:03:20 sorry, from mps-edge-x86_64
2023-05-12 17:59:18 don't suppose you have any news for that one
2023-05-12 17:59:37 nope
2023-05-12 17:59:43 I sent an e-mail, no response yet
2023-05-12 18:06:39 another 2 days? :D
2023-05-12 18:07:51 well, to be fair, after I sent the mail he responded quickly
2023-05-12 18:10:44 they have a bounty to package something
2023-05-12 18:10:56 https://ungleich.ch/u/projects/jobs-hacks-bounties/
2023-05-12 18:11:00 30 chf for an easy package
2023-05-12 18:11:04 but.. it's gtk2 :D
2023-05-12 18:12:53 eboarD?
2023-05-12 18:12:59 yea
2023-05-12 18:22:07 back
2023-05-12 19:14:47 ikke: riscv is stuck
2023-05-12 19:14:58 ACTION puts on boots
2023-05-12 19:15:45 :)
2023-05-12 19:16:05 also arm ci gets eperm
2023-05-12 19:16:05 https://gitlab.alpinelinux.org/craftyguy/aports/-/jobs/1031467
2023-05-12 19:16:12 er
2023-05-12 19:16:12 well
2023-05-12 19:16:14 no
2023-05-12 19:16:16 dns is broken
2023-05-12 19:16:23 no upgrades
2023-05-12 19:36:24 network is weird on ci host
2023-05-12 19:37:34 the aarch64 runner host does have network / dns
2023-05-12 19:54:09 dns doesn't work on arm containers
2023-05-12 20:39:04 ikke: the builders don't seem to be back
2023-05-12 20:39:56 mqtt-exec failed
2023-05-12 20:44:44 hmm...
2023-05-12 20:45:29 for some reason, the aarch64 edge builder has the network interfaces file from the host? 🤔
2023-05-12 20:48:02 better yet, they are linked? :/
2023-05-12 20:49:46 :)
2023-05-12 20:50:00 euh
2023-05-12 20:50:07 I'm puzzled
2023-05-12 20:50:14 lxc-attach but I remain on the host :/
2023-05-12 20:50:42 hostname -> build-3-17-aarch64
2023-05-12 20:51:59 I've never seen this
2023-05-12 20:59:04 ugh
2023-05-12 20:59:09 container called build-3-18-aarch
2023-05-12 20:59:14 (note the missing 64)
2023-05-12 20:59:24 haha
2023-05-12 20:59:33 what happens if you pass it
2023-05-12 20:59:37 no error message?
2023-05-12 21:00:06 pass it?
2023-05-12 21:00:11 with 64
2023-05-12 21:00:45 there are 2 containers
2023-05-12 21:01:03 for some reason, build-3-18-aarch gets all the same ips as the host
2023-05-12 21:09:37 I just stopped the container
2023-05-12 21:15:04 need to figure out the docker network issue on the runners
2023-05-12 21:15:10 another time
2023-05-12 21:15:31 that's ok
2023-05-12 21:15:34 just fixing the builders is fine
2023-05-12 21:15:42 I think they should be fine now
2023-05-12 21:15:53 staring
2023-05-12 21:16:06 lots of things building atm
2023-05-12 21:16:13 when do i get builder access
2023-05-12 21:16:18 21:16:13 up 75 days, 9:12, 0 users, load average: 131.22, 115.80, 93.01
2023-05-12 21:16:49 need to discuss that with ncopa / clandmeter
2023-05-12 21:16:56 aye, big load
2023-05-12 21:26:20 apparently one transit changed bgp peering with ungleich
2023-05-12 21:26:46 effectively blackholing them
2023-05-12 21:26:53 they switched over to a secondary ISP
2023-05-12 21:28:41 aw :(
2023-05-12 21:37:21 3.15 seems to still have broken dns
2023-05-12 21:48:51 seems like some domains
2023-05-12 21:52:32 yeah, just dns in general
2023-05-12 21:52:41 think it's still 64
2023-05-12 21:55:50 hm, no, even AAAA records don't work
2023-05-12 21:56:00 i think it just worked because of our internal cache for most
2023-05-12 21:56:05 external dns is broken
2023-05-12 21:56:26 could force a cache sync for now if you don't want to fix it
2023-05-12 22:07:00 it's not dns in general
2023-05-12 22:07:08 I can resolve all kinds of hosts
2023-05-12 22:07:13 but not cdn.kernel.org
2023-05-12 22:08:02 weird
2023-05-12 22:08:12 or the mariadb ones
2023-05-12 22:11:49 get PTY allocation request failed on channel 0
2023-05-12 22:11:53 and it's late
2023-05-12 22:18:56 could you run the cache sync command
2023-05-13 10:56:49 Switched dns temporarily to google dns as the ungleich dns servers return SERVFAIL
2023-05-13 13:07:15 ikke: still no fix for ssh access between lxc containers? mps-edge-x86_64 and mps-edge-riscv64
2023-05-13 14:54:11 mps: I found a workaround to specify it in awall, it should work now
2023-05-13 14:54:39 For some reason traffic from one container to another container on the same bridge is considered FORWARDed traffic
2023-05-13 14:56:27 ikke: thank you. works now
2023-05-13 19:04:38 psykose: killing my vim sessions huh :P
2023-05-13 19:04:49 i thought that was some thing i had forgotten ages ago lol
2023-05-13 19:04:54 hehe :)
2023-05-13 19:04:59 apologies :)
2023-05-13 19:05:02 no worry
2023-05-13 19:05:04 also oof that's some 2gb log
2023-05-13 19:05:05 wasn't looking anymore
2023-05-13 19:05:08 yeah
2023-05-13 19:05:16 that error was usually permissions
2023-05-13 19:05:17 but like
2023-05-13 19:05:23 i fixed the permissions
2023-05-13 19:05:25 i fixed everything
2023-05-13 19:05:28 it was still broken
2023-05-13 19:05:28 then
2023-05-13 19:05:31 i disabled the debug log
2023-05-13 19:05:33 and it was fixed
2023-05-13 19:05:33 ??
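(Back to the crun experiments of 2023-05-10: the daemon-level switch being tested there would look roughly like this; the crun path is an assumption:)

    # /etc/docker/daemon.json
    {
      "default-runtime": "crun",
      "runtimes": {
        "crun": { "path": "/usr/bin/crun" }
      }
    }

Per the thread, a per-service runtime: crun key in a compose file only worked with the legacy v2 schema (docker/compose#6239), containers created before the change have to be recreated, and the containerd-shim-runc-v2 process keeps its name even when it launches crun, so ps output alone is misleading.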
2023-05-13 22:27:48 builds do not seem to be happy at present :/
2023-05-13 22:27:56 ERROR: Job failed: failed to pull image "alpinelinux/gitlab-runner-helper:latest" with specified policies [always]: Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (manager.go:237:15s)
2023-05-13 22:55:55 that's just the arm ci still having no dns etc
2023-05-13 22:56:01 ikke: think riscv64 is stuck again
2023-05-14 05:13:32 psykose: arm ci does have DNS, but for some reason cannot connect to the docker registry
2023-05-14 05:21:51 psykose: was stuck on the same package again
2023-05-14 10:07:08 yea
2023-05-14 10:07:09 weird
2023-05-15 07:46:10 ikke: the s390x builders went on vacation
2023-05-15 10:51:31 buildrepo spinning again
2023-05-15 11:03:01 the networking cut out for a bit and it was spinning after it came back
2023-05-15 11:03:03 hm
2023-05-15 16:36:10 ikke: could you kick 3.16-x86
2023-05-15 16:36:15 let me guess, buildrepo :p
2023-05-15 18:07:59 ikke: also can you check for stale makedeps on aarch64
2023-05-15 18:54:49 psykose: no, make test
2023-05-15 18:55:03 cute
2023-05-15 18:55:23 no stale dependencies on aarch64
2023-05-15 18:55:27 hrm
2023-05-15 18:55:31 weird
2023-05-15 18:55:33 (I cleaned them up right after the network issues
2023-05-15 19:13:32 is there a way we can patch buildrepo on the builders to just make sure stale .makedeps are uninstalled at the start of a build cycle
2023-05-15 19:13:37 i think that would just always fix it
2023-05-15 19:13:47 not good default behaviour but good on the builders specifically
2023-05-15 19:14:10 literally just apk del .makedepends\* before doing the build
2023-05-15 19:14:30 maybe an optional clean step?
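(A minimal sketch of that clean step, assuming it runs in aports-build before the build loop; the change that actually landed is behind an image link further down, so this is illustrative:)

    # drop any .makedepends-* virtual packages left behind by a killed or
    # restarted abuild; within one uninterrupted build abuild cleans these itself
    apk del --quiet '.makedepends*' || true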
2023-05-15 19:17:00 there's an infinite amount of ways to do it, but it's very builder specific so we can just do literally that before calling buildrepo
2023-05-15 19:17:05 it doesn't sound like it would ever matter
2023-05-15 19:17:30 unless people manually leave stuff around on builders for makedeps in .makedepends\*
2023-05-15 19:17:31 I don't like the idea of hotpatching the builders
2023-05-15 19:17:55 i don't mean hotpatching
2023-05-15 19:17:58 optional clean step yea
2023-05-15 19:18:14 well
2023-05-15 19:18:27 which repo had the mqtt-exec thing that calls it
2023-05-15 19:18:40 lua-aports has buildrepo
2023-05-15 19:18:51 aports-build in aports has aports-build
2023-05-15 19:19:30 But imho this could be added to abuild
2023-05-15 19:19:46 There is already support for manually specifying stops to clean up
2023-05-15 19:19:52 (ie, overriding)
2023-05-15 19:20:30 it's not in scope for abuild
2023-05-15 19:20:35 abuild builds one thing not the repo
2023-05-15 19:21:00 abuild is what creates those deps
2023-05-15 19:21:35 when the repo gets built an N amount of packages get built
2023-05-15 19:21:41 and abuild already cleans the deps it creates
2023-05-15 19:21:45 this is explicitly ones that leak
2023-05-15 19:22:02 so unless you want it to call apk del .makede\* after _every single built package_, that does not make sense to put it there
2023-05-15 19:22:07 how else would it call it once at the start
2023-05-15 19:22:33 I was just thinking cleaning every time before build
2023-05-15 19:22:40 yeah that sounds terrible
2023-05-15 19:23:41 It could be an option to buildrepo like keep building or skip aports with src dirs
2023-05-15 19:24:07 idk i think this is already way too overcomplicated so i don't really want to think about it
2023-05-15 19:24:16 issue: once every 3 weeks some dep leaks
2023-05-15 19:24:26 solution: once on git trigger you just apk del *
2023-05-15 19:24:41 apparently better: lets call apk del 50,000 times per build just in case
2023-05-15 19:24:49 apk del .make* you mean?
2023-05-15 19:25:22 ye
2023-05-15 19:25:42 the deps never leak inside the same build, it's only between restart/kill or whatnot, so once on start catches every issue so far
2023-05-15 19:25:43 apk del * would brick the builder :P
2023-05-15 19:26:06 add it to lua-aports then
2023-05-15 19:26:15 as an option
2023-05-15 19:29:00 well, sure, i could try
2023-05-15 19:29:12 but conceptually this is only useful on our builders and is literally one line
2023-05-15 19:29:48 and putting it in buildrepo is going to suddenly be 'why is this one option that immediately maps to abuild-apk del ..' or whatnot (if that was even allowed), etc
2023-05-15 19:29:56 and then the same bikeshed all over again, to run one line of shell
2023-05-15 19:30:06 It needs to live somewhere
2023-05-15 19:30:12 and not some hotpatch
2023-05-15 19:30:42 if the mqtt-exec script that runs on git trigger was in a git repo it wouldn't be a hotpatch
2023-05-15 19:31:24 mqtt-exec execs buildrepo
2023-05-15 19:31:54 https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/aports-build/mqtt-exec.aports-build.confd
2023-05-15 19:32:02 There is no script in between
2023-05-15 19:32:13 Oh, sorry, it execs aports-build
2023-05-15 19:32:23 https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/aports-build/aports-build
2023-05-15 19:32:42 So it is in a repo
2023-05-15 19:32:59 that's the one
2023-05-15 19:33:53 https://img.ayaya.dev/zm8g8mq6Wica
2023-05-15 19:33:53 xd
2023-05-15 19:34:34 And I have another random file in my downloads folder :P
2023-05-15 19:35:29 Sorry, I assumed you knew where aports-build lived
2023-05-15 19:35:40 i forgot about it
2023-05-15 19:35:40 somehow
2023-05-15 19:36:04 with a huge comment https://img.ayaya.dev/T1qJO7ujmPc2
2023-05-15 19:36:05 but i dunno
2023-05-15 19:36:09 maybe i should sleep first instead
2023-05-16 14:02:15 looks like /usr/local/bin/build.sh: line 125: JOBS: parameter not set is printed in the ci runs
2023-05-16 14:21:38 Maybe because it moved to /usr/share
2023-05-16 14:22:10 The setting in abuild.conf, i mean
2023-05-16 14:33:02 yeah
2023-05-16 14:33:13 doesn't affect the actual abuild invocations
2023-05-16 14:34:23 ah, it was just to print it
2023-05-16 14:34:35 you could just delete the 2 lines
2023-05-16 14:34:52 it already defaults to nproc in default.conf
2023-05-16 20:46:48 ikke: you wrote place5 in netbox but it's place6 here, which is correct?
2023-05-16 20:47:17 It's places, but the DNS records are still mentioning place6
2023-05-16 20:47:26 Place6*
2023-05-16 20:47:26 hah
2023-05-16 20:47:33 er
2023-05-16 20:47:33 ugh, mobile
2023-05-16 20:47:34 which
2023-05-16 20:47:36 it's place5
2023-05-16 20:47:39 okie
2023-05-16 20:48:06 Nico did mention the packetloss was expected in the fail-over situation
2023-05-16 20:48:19 yea
2023-05-16 20:49:27 someone here said they would love to fix the 11mbit networking
2023-05-16 20:51:46 mhm, i'd be happy to^^ i know a bit about their nat/dns64
2023-05-16 20:52:11 Right, I think you helped me last time
2023-05-16 20:53:10 yeah about using their dns for the fake dns entries
2023-05-16 20:53:10 I suppose we should do that after their network setup is restored
2023-05-16 20:53:36 for some reason, dl-cdn returns SERVFAIL on their dns servers
2023-05-16 20:55:41 thats strange, only that? did u explicitly test with e.g. dig @ , and made sure that it originates from their ip ranges?
2023-05-16 20:56:08 https://tpaste.us/5Xo9
2023-05-16 20:56:13 ^
2023-05-16 20:56:21 only I've seen so far
2023-05-16 20:58:43 can it only originate from their ip-s?
2023-05-16 20:59:31 ikke: could you also look at https://gitlab.alpinelinux.org/alpine/infra/docker/alpine-gitlab-ci/-/merge_requests/14 so we can speed up the ci a bit especially for arm :p
2023-05-16 21:00:35 nu: sorry, what do you mean?
2023-05-16 21:03:35 i saw some mentions about vpn, and if the query goes and comes back from outside, then it may not be allowed
2023-05-16 21:04:13 there is no vpn involved for IPv6
2023-05-16 21:05:26 hmm, it should actually say it anyway
2023-05-16 21:05:28 Hmm, but the route is strange
2023-05-16 21:06:03 https://tpaste.us/vNKO
2023-05-16 21:06:30 So seems like something is routed externally
2023-05-16 21:06:34 maybe you can give it a shot to the other one too: 2a0a:e5c0:0:a::b
2023-05-16 21:06:45 I already tested that
2023-05-16 21:06:55 Result was the same
2023-05-16 21:07:29 loves ip
2023-05-16 21:07:46 wow, that routing might be a clue for the perf too
2023-05-16 21:08:13 well, not sure if that was always there already
2023-05-16 21:08:26 they had a router issue, and then an ISP that black-holed them
2023-05-16 21:12:01 i mean switzerland is pretty nice to travel around, but not in this case :D
2023-05-16 21:12:42 prove it
2023-05-16 21:13:07 Today in life as a packet in switzerland
2023-05-16 21:14:00 nu: hm, I remember switzerland by a serious car accident when I was there last time long ago
2023-05-16 21:19:02 trains wont let you down here^^
2023-05-17 10:30:07 ikke: the edge arm builders took a nap after this one
2023-05-17 10:31:32 Not really napping are they
2023-05-17 10:31:58 Seems the buildrepo spinning is indeed related to network issues
2023-05-17 10:32:35 i think there's a network request somewhere that spins
2023-05-17 10:32:44 probably like right after build when it tries to do something
2023-05-17 10:33:00 but i imagine that's the kind of things that should timeout
2023-05-17 10:33:05 maybe not in whatever lua
2023-05-17 10:34:46 check that the deps were cleaned too :p
2023-05-17 10:35:15 wow
2023-05-17 11:10:06 psykose: I already did that 😉
2023-05-17 12:08:23 :)
2023-05-17 12:08:30 so, what do you think of that apk del proposal
2023-05-17 12:27:00 It's fine for me
2023-05-17 12:27:22 s/for/with
2023-05-17 12:41:28 ncopa: anything against aports-build removing any leftover .makedepends packages on start?
2023-05-17 12:42:36 https://img.ayaya.dev/zm8g8mq6Wica
2023-05-17 12:44:25 no, i think thats fine
2023-05-17 13:04:48 let's give it a try then
2023-05-17 13:08:40 wrote a nice big comment
2023-05-17 13:11:49 ikke: did the container cache thing build (i forget how that deploys)
2023-05-17 13:12:49 It should, but depends on pull policy of the runners
2023-05-17 13:13:14 Images get purged once a day, so that should force the runners to download the latest
2023-05-17 13:13:26 Exception is rv64
2023-05-17 13:13:41 aha, ok
2023-05-17 13:13:45 i also fixed the JOBS thing
2023-05-17 13:13:50 Ok
2023-05-17 13:13:56 which you can just merge
2023-05-17 13:14:03 don't think we have had JOBS=1 for a real long time now
2023-05-17 13:14:08 think like 3.11?
2023-05-17 13:15:31 hm, no
2023-05-17 13:15:32 3.14
2023-05-17 13:15:50 ccc25a283598e11da29765c764356cdf0f965857 in aports 69aea2246288c37b7f2961d7a31983465eb925a5 in abuild
2023-05-17 13:15:51 :)
2023-05-17 13:16:06 which we don't really run ci for anymore so it's fine to not do anything fancy
2023-05-17 13:16:14 but if it matters, we can just re-export JOBS ourselves, no patches
2023-05-17 13:16:16 it's respected
2023-05-17 13:16:37 well, back before that commit it wasn't, so nvm
2023-05-17 13:16:52 computers hard
2023-05-17 19:31:49 ikke: did you ever look at that gitaly cache again
2023-05-17 19:32:02 No, haven't had the time yet to explore it
2023-05-17 19:34:00 could just yeet it in if i have a way to disable it if it explodes
2023-05-17 19:37:01 tomorrow afternoon I have some time
2023-05-18 13:33:55 psykose: enabled the pack objects cache on gitlab-test
2023-05-18 13:34:00 lesgo
2023-05-18 13:35:14 cached files with --depth=500 is ~15M
2023-05-18 13:38:22 doesn't sound too bad
2023-05-18 13:41:49 I had to search a bit how the gitaly config looks, as the documentation only provides the omnibus config
2023-05-18 14:42:29 Does anybody know if gitlab can find and understand .editorconfig and set the tab width for that for reviews? https://gitlab.alpinelinux.org/alpine/cloud/tiny-cloud/-/merge_requests/78#note_307958
2023-05-18 14:43:15 found the answer: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/49426
2023-05-18 17:56:38 Time for my favorite game again, guess where the packet is dropped
2023-05-18 17:59:03 i bet: third hop
2023-05-18 17:59:24 By what means?
2023-05-18 17:59:39 traceroute hops
2023-05-18 18:02:05 In the style of cluedo
2023-05-18 18:02:47 never played :D
2023-05-18 18:03:18 Typical solution would be: "The buttler, in the dining room, with a candlestick"
2023-05-18 18:05:28 ikke: courier says it is "currently out for delivery" ;-)
2023-05-18 18:06:23 is it sent via DHL? We used to call them Drop it, Hide it, or Lose it :-)
2023-05-18 18:06:38 minimal: network packet
2023-05-18 18:06:40 :)
2023-05-18 18:07:47 avian networking? ;-)
2023-05-18 18:08:28 would probably be more reliable
2023-05-18 18:10:05 Cluedo? ah, China did it, in the red room, with the Great Firewall lol
2023-05-18 18:10:19 heh
2023-05-18 18:18:51 the buttler? the butler of butts :D
2023-05-18 18:20:28 :)
2023-05-18 18:22:10 hows the cache looking
2023-05-18 18:22:35 hm, are you capable of seeing memory use for a pushed commit trigger
2023-05-18 18:22:49 i.e. i push a thing, you see how much it spikes
2023-05-18 18:23:59 psykose: need to restart gitlab for that
2023-05-18 18:24:06 before, i mean rn
2023-05-18 18:24:12 i can push something and you can see how much it generally is
2023-05-18 18:24:18 then later, after you turn it on, see again
2023-05-18 18:24:19 i guess
2023-05-18 18:24:33 not very scientific :p
2023-05-18 18:26:40 isn't the caching mostly saving CPU
2023-05-18 18:27:54 probably
2023-05-18 18:27:58 just all the info
2023-05-18 19:39:20 ikke: do you have a moment to do the rust move
2023-05-18 19:52:20 can you also tell me if you checked that arm sysctl thing for rust
2023-05-18 21:54:09 bye bye
2023-05-18 23:26:11 ikke: also seems comment search is broken, i.e. searching anything and hitting comments just 500's https://gitlab.alpinelinux.org/search?group_id=2&project_id=1&scope=notes&search=rust
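(For reference, the non-omnibus form of the pack-objects cache ikke had to hunt for on 2023-05-18: a sketch of the relevant section of gitaly's own config.toml, with the directory assumed and the TTL taken from the 2023-05-12 discussion:)

    [pack_objects_cache]
    enabled = true
    dir = "/home/git/repositories/+gitaly/PackObjectsCache"
    max_age = "5m"   # a short TTL keeps the extra disk usage bounded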
2023-05-19 01:12:26 also https://security.alpinelinux.org/ still has no 3.18 entry
2023-05-19 10:30:16 psykose: I enabled the sysctl setting on ci for armhf, but due to network issues, I was not able to confirm
2023-05-19 10:34:58 psykose: I've updated the repos for secfixes
2023-05-19 14:30:14 thanks!
2023-05-19 16:45:19 also the arm ci seems still broken
2023-05-19 17:02:57 yes
2023-05-19 17:03:21 But to fix it I need to know what the problem is first
2023-05-20 02:35:30 ikke: did we still need the 11GB of stale apkbrowser dbs on gbr2? we never put that one up
2023-05-20 04:15:39 (updated thelounge since there was a new release)
2023-05-20 04:16:32 seems to work :)
2023-05-20 04:17:48 gonna upgrade gbr2 too
2023-05-20 04:32:43 seems everyone loaded back except for ncopa
2023-05-20 04:32:44 strange
2023-05-20 04:54:13 Yeah, that's always the case with ncopa
2023-05-20 04:54:57 :)
2023-05-20 05:00:49 scared me for a moment there
2023-05-20 05:00:51 hrm
2023-05-20 05:00:57 we have to fix turbo at some point
2023-05-20 05:01:07 that image is still on 3.11 or something since it doesn't work otherwise
2023-05-20 05:01:15 and the turbo stuff is just abandoned
2023-05-20 05:01:22 don't think it works with openssl3
2023-05-20 05:01:33 well, it works for tpaste, so maybe
2023-05-20 07:40:34 incidentally the matrix server is also broken after reboot
2023-05-20 07:50:11 not exactly clear why
2023-05-20 07:56:39 ..and after upgrading to the master branch and downgrading again it works
2023-05-20 07:56:56 the coturn thing is broken and just crashloops tho
2023-05-20 07:58:07 what a miserable platform
2023-05-20 11:29:08 ikke: which host is the alpine-dev one? i notice it's still on 3.13 :)
2023-05-20 14:34:04 psykose: upgraded
2023-05-20 14:34:08 pog
2023-05-20 14:34:15 didn't reboot tho
2023-05-20 14:34:16 :D
2023-05-20 14:34:26 It's a container
2023-05-20 14:34:39 So you see the uptime of the host
2023-05-20 14:34:39 ah
2023-05-23 20:29:49 kunkku: I have some challenge with awall that I have encountered twice now: There is traffic going through the FORWARD chain where the incoming interface is equal to the outgoing interface (br0 / lxcbr0), but I have difficulty modeling this in awall. How should I specify this?
2023-05-23 20:33:26 One option is to leave out the "in" part of the filter rule, so that it's added to all chains
2023-05-23 20:33:48 psykose: in other news, arm CI is working again
2023-05-23 20:33:53 is it
2023-05-23 20:34:04 https://gitlab.alpinelinux.org/alpine/aports/-/pipelines/165484
2023-05-23 20:34:16 nice
2023-05-24 00:55:22 also just noticed the runners have different versions of gitlab-runner hehe
2023-05-24 01:15:02 ikke: why does 3.18 only have https://security.alpinelinux.org/vuln/CVE-2021-36217 marked
2023-05-24 01:15:14 invalid for all but funny how only on one anyway
2023-05-24 03:34:44 ikke: also kick all the builders when you see this, they all hung on the same testsuite that goes forever on failure
2023-05-24 04:46:31 psykose: done
2023-05-24 04:46:43 missed x86_64 which will reach it at some point :D
2023-05-24 04:46:59 thanks
2023-05-24 04:47:05 yeah, it was not building rathole
2023-05-24 04:47:43 I'm not fond of killing innocent processes :P
2023-05-24 04:47:55 how pious
2023-05-24 04:55:08 fyi, I sometimes have no clue why the secfixes tracker does or doesn't do certain things
2023-05-24 04:55:29 yeah it's weird
2023-05-24 04:55:37 i also noticed some remapped things that aren't in effect on 3.18
2023-05-24 04:55:40 forgot which
2023-05-24 04:56:04 It's difficult to debug
2023-05-24 05:05:43 ¯\_(ツ)_/¯
2023-05-24 05:09:47 gigashrug
2023-05-24 07:22:52 more ratholes
2023-05-24 07:24:57 Nothing a sigterm cannot handle
2023-05-24 10:57:33 ikke: could you kick edge-aarch64 again too
2023-05-24 10:58:55 *thunk*
2023-05-24 10:58:59 thanks
2023-05-24 10:59:03 cute sound
2023-05-24 20:58:13 will you guys help me if I get into trouble with alpine? :) I'm planning to install it tomorrow
2023-05-24 21:00:19 aldcor: as you know you should ask on #alpine-linux, but you can ask me in private
2023-05-24 21:00:37 this channel is for the alpine infra team, mostly
2023-05-24 21:00:55 oh, right! thanks
2023-05-24 23:55:28 even more flagged spam eh
2023-05-24 23:55:33 ikke: could we just turn it off
2023-05-25 12:49:25 InterContinental Malicious Packets
2023-05-25 13:20:55 packets got stuck in the hadron collider
2023-05-25 13:24:20 they're just going round and round and round? ;-)
2023-05-25 22:00:36 ikke: riscv64 is stuck :-)
2023-05-26 04:01:22 the new gitlab layout is quite ok
2023-05-26 04:01:31 16?
2023-05-26 04:01:33 wonder how many other bugs it comes with
2023-05-26 04:01:34 yeah
2023-05-26 04:01:37 well, not entirely sure
2023-05-26 04:01:41 some opt-in thing on gitlab.com
2023-05-26 04:01:42 it's opt-in
2023-05-26 04:01:45 yeah
2023-05-26 04:01:45 ah
2023-05-26 04:01:46 yeah
2023-05-26 04:02:00 Haven't seen it yet
2023-05-26 04:02:15 you can go to any gitlab.com project like pmaports and just opt in and click around
2023-05-26 04:02:17 if you have an account that
2023-05-26 04:02:20 that is*
2023-05-26 04:02:25 I have
2023-05-26 04:02:31 for all the bugs I need to report :P
2023-05-26 04:02:43 the only thing i don't like is in 16 with either layout 'subscribe notifications' is not on the right anymore
2023-05-26 04:02:49 it's under top 3 dots
2023-05-26 04:02:50 why
2023-05-26 04:03:12 but yeah it's alright
2023-05-26 04:03:20 also you'll have to upgrade twice for migrations iirc
2023-05-26 04:03:25 but i think you knew that already
2023-05-26 04:04:18 yes
2023-05-26 04:04:58 but that's for 16.1.0, which is not released yet?
2023-05-26 04:05:13 the first one is in 16.0 2023-05-26 04:05:15 dunno 2023-05-26 04:05:18 https://docs.gitlab.com/ee/update/#1600 2023-05-26 04:05:56 i think ptrc went 15.10->16 and it failed 2023-05-26 04:34:41 Yeah, you need 15.11 first 2023-05-26 04:35:01 https://docs.gitlab.com/ee/update/#upgrade-paths 2023-05-26 04:37:13 :) 2023-05-27 04:33:59 who knew that riscv would get stuck yet again :p 2023-05-27 10:44:46 and now it's aarch64 2023-05-27 10:44:47 sad day 2023-05-27 10:45:01 stuckity-stuck 2023-05-27 14:29:26 hm 2023-05-27 14:29:28 and now it's armhf 2023-05-27 14:29:32 can you see which test it is tho 2023-05-27 14:34:08 did you already find out? 2023-05-27 14:37:07 no 2023-05-27 14:38:09 This is the tail of the buildlog: https://tpaste.us/lgnX 2023-05-27 14:38:46 https://tpaste.us/d16y 2023-05-27 14:38:58 The two guile processes are zombies 2023-05-27 14:39:51 There is one thread reading on fd 30 2023-05-27 14:40:00 Most of the others are in FUTEX_WAIT 2023-05-27 14:40:11 (FUTEX_WAIT_PRIVATE) 2023-05-27 14:40:58 psykose: need anything more? 2023-05-27 14:41:18 classic random hang 2023-05-27 14:41:20 nope 2023-05-27 14:41:37 ie, deadlock 2023-05-27 14:42:41 yeah something like that 2023-05-27 14:50:27 annoying how all the deadlocks happen suddenly :p 2023-05-27 14:50:38 yup 2023-05-27 14:50:48 checking rv64 which is also building guile 2023-05-27 14:51:13 nope, very much still busy 2023-05-27 20:40:22 can we get a more specific timeframe instead of '5 minutes' next time? :p 2023-05-27 20:41:00 ( gitlab doesn't show when the warning was created, it just appears randomly on refresh/navigation ) 2023-05-27 20:41:08 what 5 minutes 2023-05-27 20:41:28 check gitlab 2023-05-27 20:41:47 I just quickly want to restart :P 2023-05-27 20:42:31 git lab 2023-05-27 20:44:15 ptrc: gitlab is back :P 2023-05-27 20:44:25 thanks for the ping :3 2023-05-27 20:45:08 can u take it back down and upgrade it a few more times? 2023-05-27 20:45:30 Is that serious or a joke? 2023-05-27 20:46:06 a joke but also would be cool i think 2023-05-27 20:46:09 upgrades = always good 2023-05-27 20:46:32 upgrade == random things breaking 2023-05-27 20:47:23 yeah but we love those 2023-05-27 20:48:09 I want to first upgrade to 15.11, with the switch to alpine 3.17 / ruby 3.1 2023-05-27 20:52:53 ooh yea 2023-05-27 20:52:54 pog 2023-05-28 04:46:40 psykose: Haven't seen the pgsql error anymore since updating the image 2023-05-28 13:05:20 ikke: re: your question on awall 2023-05-28 13:05:53 perhaps you want to use "route-back": true in the zone 2023-05-28 13:06:40 hmm, let me check 2023-05-28 13:16:07 kunkku: That seems to indeed do what I need 2023-05-28 13:16:30 Thanks! 2023-05-28 13:17:41 +1 2023-05-28 13:19:12 The surprising thing was finding out that traffic staying on the same bridge is still considered forwarded 2023-05-28 13:19:24 (not related to awall) 2023-05-28 21:31:12 ikke: /proc/sys/net/bridge/bridge-nf-call-iptables 2023-05-28 21:32:26 that would bypass iptables completely right? 2023-05-28 21:34:02 I don't mind it going via iptables, was just surprised it was in the FORWARD chain instead of INPUT/OUTPUT 2023-05-29 11:26:12 ikke: yes, setting that tunable to 0 bypasses iptables for packets just crossing the bridge 2023-05-29 11:26:52 INPUT/OUTPUT chains still apply to packets destined to or originating from the host 2023-05-29 20:29:36 ikke: you guessed it, x86_64 is stuck 2023-05-30 04:28:29 looks like 3/8 archs got updated -- any known backlog?
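Putting the two awall answers above together: a sketch of the zone option kunkku suggested, plus the bridge tunable just discussed (the zone and service names are hypothetical; the sysctl path is the real kernel knob, present once the br_netfilter module is loaded):

```json
{
    "zone": {
        "LXC": { "iface": "lxcbr0", "route-back": true }
    },
    "filter": [
        { "in": "LXC", "out": "LXC", "service": "ssh", "action": "accept" }
    ]
}
```

```sh
# 1 = bridged traffic traverses iptables (hence it shows up in FORWARD),
# 0 = packets that only cross the bridge bypass iptables entirely
cat /proc/sys/net/bridge/bridge-nf-call-iptables
echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
```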
https://pkgs.alpinelinux.org/packages?name=tiny-cloud-aws&branch=edge&repo=&arch=&maintainer= 2023-05-30 04:28:48 https://build.alpinelinux.org/ 2023-05-30 04:29:01 it fails to build on some arches 2023-05-30 04:29:14 checksum mismatch 2023-05-30 04:32:09 do the runners keep /var/cache/distfiles/* (or equivalent) around? 2023-05-30 04:32:41 the builders, yes 2023-05-30 04:32:44 gitlab-runners don't 2023-05-30 04:33:49 yeah, builders. that'd do it. re-tagged rc7 for a last minute fix for -r1, and should have just done rc8. will set that in motion in the morning, thanks! 2023-05-30 04:33:52 but if there is a checksum mismatch, it will rename the file 2023-05-30 04:34:18 but, there is an extra layer, DISTFILES_MIRROR 2023-05-30 04:36:11 tomalok: retrying already fixed it 2023-05-30 15:14:12 ah, builders are offline 2023-05-30 15:14:32 Unreachable 2023-05-30 15:16:07 they've been unable to fix their network for so long 2023-05-30 18:28:16 ikke: upgrade ;-) 2023-05-30 18:28:29 gitlab 16 you mean? 2023-05-30 18:28:42 Yeah 2023-05-30 18:28:47 Yeah, saw it 2023-05-30 18:28:52 we first have to go to 15.11 2023-05-30 18:29:04 nod 2023-05-30 18:30:05 clandmeter: not sure if it's our instance, but I noticed I frequently get disconnected warnings on matrix 2023-05-30 18:31:03 sounds like our instance 2023-05-30 18:31:24 the coturn thing also just dies in a loop every minute 2023-05-30 18:32:12 Coturn should be for audio or similar 2023-05-30 18:32:33 yeah, for direct calls 2023-05-30 18:32:43 I’ve been away for many weeks so no time to check anything 2023-05-30 18:33:01 nice to see you again :P 2023-05-30 18:33:18 dendrite also crashloops every minute 2023-05-30 18:33:25 matrixdotorg/dendrite-monolith "/usr/bin/dendrite" 22 hours ago Up 17 seconds 2023-05-30 18:33:26 that would explain it 2023-05-30 18:34:40 some goroutine panic 2023-05-30 18:34:59 level=panic msg="roomserver output log: write room event failure" 2023-05-30 18:35:16 good software 2023-05-30 18:35:27 It's still beta™ 2023-05-30 18:35:40 lol 2023-05-30 18:36:06 https://github.com/matrix-org/dendrite/issues/1701 2023-05-30 18:41:57 federationsender_rooms doesn't even exist 2023-05-30 18:50:37 roomserver_events 2023-05-30 18:51:07 should we evict that record? 2023-05-30 18:51:12 select * from roomserver_events where event_id = '$458hzYN2f5x9HGEj_mUmVqIg_clmg5w-WoRz3dIWnWo'; 2023-05-30 18:57:44 ye 2023-05-30 18:58:15 sounds like it 2023-05-30 18:59:22 It says we need to replace things, not remove things 2023-05-30 19:00:51 The error is "missing state events", which is different from that issue 2023-05-30 19:02:12 well yes, that issue was fixed 2 years ago 2023-05-30 19:02:21 ahuh 2023-05-31 05:49:45 also riscv is stuck yet again 2023-05-31 05:50:52 I bet it's just tired 2023-05-31 05:51:07 definitely 2023-05-31 05:52:06 that looks like a routine go build that got stuck 2023-05-31 05:52:07 how nice 2023-05-31 05:53:19 yeah, I had to kill -9 it 2023-05-31 05:54:17 :/ 2023-05-31 05:54:31 this stuff is so frustrating 2023-05-31 07:03:28 ikke: now is a good time to upgrade gitlab :D 2023-05-31 07:03:52 How so? :) 2023-05-31 07:05:32 break even more things 2023-05-31 07:05:34 :3 2023-05-31 07:10:49 https://imgflip.com/i/7npxej 2023-05-31 07:13:26 no features = no bugs 2023-05-31 07:39:32 ACTION executes docker compose down 2023-05-31 07:39:56 still reachable, down it harder 2023-05-31 18:07:30 anyone else happen to have issues connecting to dmvpn via wg? 2023-05-31 18:29:51 I can't connect to the containers 2023-05-31 18:30:17 and cannot ping 172.16.252.1?
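For reference, a few stock wireguard-tools and tcpdump invocations of the kind this sort of debugging relies on (the interface names and UDP port are the ones that appear in the exchange that follows):

```sh
# does the tunnel ever complete a handshake?
wg show wg0 latest-handshakes   # unix timestamp of the last handshake per peer
wg show wg0 transfer            # rx/tx byte counters per peer

# watch the encrypted traffic on the underlying interface
tcpdump -ni eth0 udp port 41414
```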
2023-05-31 18:30:48 right 2023-05-31 18:30:54 hmm 2023-05-31 18:30:59 but I can connect to dmvpn 2023-05-31 18:31:30 What do you mean by that? 2023-05-31 18:32:18 ssh to dmvpn 2023-05-31 18:32:33 deu7-dev1 2023-05-31 18:32:44 I call it dmvpn locally 2023-05-31 18:32:59 ah ok, dmvpn itself is dmvpn1 and dmvpn2 2023-05-31 18:33:09 aha 2023-05-31 18:33:09 deu7-dev1 is indeed up 2023-05-31 18:33:28 Maybe linode is filtering things 2023-05-31 18:33:40 I can ping it, but mtr (traceroute) is not getting responses 2023-05-31 18:33:47 it should not if I can ssh to it 2023-05-31 18:34:01 filtering udp 2023-05-31 18:34:04 or something like that 2023-05-31 18:35:13 I see traffic on wg0 2023-05-31 18:35:42 From my machine, I only see handshake requests being sent 2023-05-31 18:36:02 "Handshake initiation" 2023-05-31 18:37:53 hm, I don't see udp port 41414 from my local IP 2023-05-31 18:40:56 hm, could be that linode filters udp port 41414 2023-05-31 18:41:16 I do see traffic coming in on deu7 on port 41414 2023-05-31 18:45:12 ah yes 2023-05-31 18:45:24 but not from my address 2023-05-31 18:51:37 I see traffic from my router port 41414 to deu7-dev1 but no answer from it 2023-05-31 18:51:49 same 2023-05-31 18:51:58 would it help to restart the wg0 interface? 2023-05-31 18:52:06 not sure if that would make any sense 2023-05-31 18:52:14 I doubt it, but it will not hurt 2023-05-31 18:53:55 ikke: what is this '[189001.661437] IN= OUT=eth0 SRC=172.104.203.112 DST=109.72.52.77 LEN=176 TOS=0x08 PREC=0x80 TTL=64 ID=34624 PROTO=UDP SPT=41414 DPT=41414 LEN=156' in dmesg 2023-05-31 18:54:31 our filter? 2023-05-31 18:54:40 that's iptables logging to dmesg 2023-05-31 18:54:47 yes 2023-05-31 18:54:49 would indicate the traffic is blocked 2023-05-31 18:55:05 yes I see my address there 2023-05-31 18:55:49 did someone manipulate the ip filter in the last few hours 2023-05-31 18:55:52 but -A INPUT -i eth0 -p udp -m udp --dport 41414 -j ACCEPT is there 2023-05-31 18:57:40 This indicates outgoing traffic is blocked 2023-05-31 18:57:52 would mean connection tracking is broken? 2023-05-31 18:58:30 I think no 2023-05-31 18:59:00 it should work without connection tracking iirc 2023-05-31 18:59:42 changed the output policy to ACCEPT 2023-05-31 19:00:10 if I ping my wg interface from the wg gateway, it should work without CT 2023-05-31 19:01:08 it's about the outside traffic 2023-05-31 19:04:18 output policy is drop 2023-05-31 19:04:56 yes, but awall adds -j ACCEPT at the end of each chain 2023-05-31 19:05:14 and there is no rule for 41414 in output 2023-05-31 19:05:32 that's why I mentioned connection tracking 2023-05-31 19:06:31 did you change it in the last few hours 2023-05-31 19:06:43 no 2023-05-31 19:06:52 hm 2023-05-31 19:06:57 except just now where I set the output policy to accept 2023-05-31 19:06:59 who else has access 2023-05-31 19:07:22 psykose 2023-05-31 19:07:31 current is 'Chain OUTPUT (policy DROP)' 2023-05-31 19:08:08 https://tpaste.us/D7Yd 2023-05-31 19:08:13 maybe you need to restart the firewall 2023-05-31 19:08:30 that's the output of awall diff and I did an activate 2023-05-31 19:08:56 try `iptables -L -n -x -t filter | less` 2023-05-31 19:09:48 manually changed the policy of OUTPUT to accept, no change 2023-05-31 19:11:14 cat /proc/sys/net/ipv4/ip_forward = 0 2023-05-31 19:12:38 I don't understand alpine firewall rules, never used them 2023-05-31 19:13:24 reaching the wg0 interface should not, I think, involve forwarding, would it?
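The reasoning above, sketched as rules: with a DROP policy on OUTPUT, either connection tracking or an explicit source-port rule has to let the wireguard replies back out (the interface and port are taken from the quoted INPUT rule; this is an illustration, not the actual awall-generated ruleset):

```sh
# generic conntrack rule: replies to already-accepted traffic may leave
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# or an explicit rule mirroring the existing INPUT rule
iptables -A OUTPUT -o eth0 -p udp -m udp --sport 41414 -j ACCEPT
```

Conntrack does track UDP flows even though UDP is connectionless, so the first rule normally covers the replies; that is why broken connection tracking was a plausible suspect here.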
2023-05-31 19:14:58 hm, I forgot what we set when we installed it 2023-05-31 19:15:16 I enabled it on all interfaces 2023-05-31 19:15:23 and set policy for FORWARD to accept 2023-05-31 19:15:24 no change 2023-05-31 19:16:15 well, I have 'udp dport 41414 accept' in my nftables forward chain 2023-05-31 19:20:18 we could switch to nftables, it is more understandable and easier to read 2023-05-31 19:20:27 and easier to set up 2023-05-31 19:20:52 (going out now) 2023-05-31 19:21:25 I'll first just reboot the machine, see if that helps anything (despite disliking it) 2023-05-31 19:46:22 I hate it, but the reboot helped 2023-05-31 19:53:51 heh 2023-05-31 19:54:37 good thing is that the problem is solved, bad thing is we don't know what caused it 2023-05-31 19:54:44 exactly 2023-05-31 19:56:56 the idea to switch to nftables stays 2023-05-31 19:57:25 I need to get familiar with nftables first 2023-05-31 19:57:34 I'm at least comfortable with awall now 2023-05-31 19:58:01 oh yes, helpers which hide real things 2023-05-31 19:58:20 I always used bare iptables before 2023-05-31 19:59:06 awall is just a nice abstraction on top of it to apply rules more easily and consistently 2023-05-31 19:59:07 I understand, raw iptables tables are cumbersome and because of that there are a lot of 'helpers' for it 2023-05-31 19:59:11 yes 2023-05-31 19:59:26 for a long time I used ferm 2023-05-31 19:59:27 And I understand nftables improved this a lot 2023-05-31 20:00:15 yes, the nftables source is nearly the same as the 'nft list ruleset' output, even comments are preserved 2023-05-31 20:01:32 variables are supported by default, and complicated rules can be written if needed 2023-05-31 20:02:18 though I'm far from an experienced nftables user 2023-05-31 23:39:36 ikke: i never touched anything, was asleep :)
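To illustrate that last point: a minimal nftables ruleset covering the wireguard case from this incident, written roughly the way 'nft list ruleset' would print it back (table and chain names are hypothetical; the port and DROP policies follow the log):

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        udp dport 41414 accept comment "wireguard"
    }
    chain output {
        type filter hook output priority 0; policy drop;
        ct state established,related accept
        udp sport 41414 accept comment "wireguard replies"
    }
}
```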