2026-01-01 02:14:36 ikke: nice :)
2026-01-01 02:14:53 and happy new year
2026-01-01 17:18:04 (trying to clean up space)
2026-01-01 18:16:30 clandmeter: do you have a moment to check gbr2-dev1? apk-browser takes a lot of space and I haven't been able to reduce it yet
2026-01-01 18:16:44 sure
2026-01-01 18:18:00 edge db is a bit big
2026-01-01 18:46:40 ikke: not sure what's going on
2026-01-01 18:46:53 i tried to vacuum, but i think the tmp dir is not big enough
2026-01-01 18:55:22 ah, has nothing to do with tmp, disk is just full
2026-01-01 18:59:39 ikke: can i add a tmp disk to the instance?
2026-01-01 19:01:25 i think i can
2026-01-01 19:02:41 why is it not showing...
2026-01-01 19:22:17 ikke: the instance has double the space
2026-01-01 19:22:27 just didnt init on the boot disk
2026-01-01 19:22:48 so we could increase the disk by 160 to 320
2026-01-01 20:57:54 ikke: it looks like sqlite is not automatically deleting referenced files and such, growing the table to 230M rows
2026-01-01 21:15:24 foreign_keys is not set anymore
2026-01-01 21:15:45 so the on delete cascade is broken
2026-01-02 07:30:08 clandmeter: thanks, so we have some cleanup to do
2026-01-02 07:30:30 im writing a script
2026-01-02 07:30:42 and will create a pr to fix it
2026-01-02 07:31:14 pmos gitlab is defunct?
2026-01-02 07:31:40 ah, today it works
2026-01-02 08:18:37 How can we resize the disk?
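The broken-cascade problem described above comes from sqlite's `foreign_keys` pragma being off by default and per-connection: unless every connection turns it on, `ON DELETE CASCADE` silently does nothing and child rows pile up. A minimal sketch of the failure mode and the cleanup — the `packages`/`files` schema here is hypothetical, not apk-browser's real one:

```python
import sqlite3

# Autocommit connection so VACUUM can run outside a transaction.
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
    CREATE TABLE packages (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE files (
        id INTEGER PRIMARY KEY,
        pkg_id INTEGER REFERENCES packages(id) ON DELETE CASCADE,
        path TEXT
    );
    INSERT INTO packages VALUES (1, 'busybox');
    INSERT INTO files VALUES (1, 1, '/bin/busybox');
""")

# foreign_keys is OFF by default: deleting the parent leaves the
# child row behind as an orphan, so the table only ever grows.
con.execute("DELETE FROM packages WHERE id = 1")
orphans = con.execute("SELECT COUNT(*) FROM files").fetchone()[0]
print(orphans)  # 1 -- cascade did not fire

# With the pragma enabled (per connection!) the cascade works.
con.execute("PRAGMA foreign_keys = ON")
con.execute("INSERT INTO packages VALUES (2, 'openssl')")
con.execute("INSERT INTO files VALUES (2, 2, '/usr/bin/openssl')")
con.execute("DELETE FROM packages WHERE id = 2")
remaining = con.execute(
    "SELECT COUNT(*) FROM files WHERE pkg_id = 2").fetchone()[0]
print(remaining)  # 0 -- child row removed by the cascade

# What a one-off cleanup script then has to do: drop the orphans
# accumulated while the pragma was off, and VACUUM to reclaim the
# space on disk (VACUUM rewrites the db and needs scratch space,
# which is why it failed when the disk was full).
con.execute("DELETE FROM files WHERE pkg_id NOT IN (SELECT id FROM packages)")
con.execute("VACUUM")
total_after = con.execute("SELECT COUNT(*) FROM files").fetchone()[0]
print(total_after)  # 0
```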
Fdisk does not report that it's 320g
2026-01-02 08:28:43 i didnt resize it yet
2026-01-02 08:28:46 i didnt know before
2026-01-02 08:28:49 i just added a disk
2026-01-02 08:29:12 i think when i am finished we can remove this disk and resize the boot disk
2026-01-02 08:31:23 disk is mounted on /mnt
2026-01-02 08:31:32 and im running a cleanup script in tmux
2026-01-02 08:31:37 will take some time
2026-01-02 08:32:18 the result size will be 4.9G :)
2026-01-02 09:56:37 clandmeter: nice, that's 10% of the old size
2026-01-02 10:21:51 and all other db's have the same issue
2026-01-02 12:07:03 ikke: going to replace the edge db
2026-01-02 12:09:22 clandmeter: alright
2026-01-02 12:09:28 done
2026-01-02 12:10:07 will cleanup the others now
2026-01-02 13:31:40 clandmeter: thanks!!
2026-01-02 14:06:56 ikke: done
2026-01-02 14:09:06 i added a local change to fix the issue
2026-01-02 14:22:14 achill: ping
2026-01-02 14:22:39 pong
2026-01-02 14:27:47 achill: not sure who manages pmos pkgs, but i guess you are also missing a setting.
2026-01-02 14:28:38 i guess postmarketos' infra team, i can tell them about it
2026-01-02 14:28:43 foreign_keys
2026-01-02 14:28:55 i think martijn left it out but did copy the logic
2026-01-02 14:29:15 this will keep increasing the db over time
2026-01-02 14:29:53 ah alr
2026-01-02 15:55:22 clandmeter: I've been thinking about migrating gitlab artifacts to linode object storage to alleviate disk usage
2026-01-02 15:56:28 https://docs.gitlab.com/administration/cicd/job_artifacts/#using-object-storage
2026-01-02 18:09:59 I must say I'm happy with our gitlab-test environment. Caught a change in configuration we needed to apply that would've prevented gitlab from starting :-)
2026-01-02 22:41:14 do the CI runners need some poking again now that gitlab was upgraded again?
2026-01-02 23:54:17 why is the aarch64 CI builder having issues fetching from busybox.net?
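For the artifacts-to-object-storage idea above, the GitLab docs linked in the log describe a `gitlab.rb` change along these lines. This is only a sketch of the storage-specific form: the bucket name, region, endpoint, and credentials are placeholders, and the docs now recommend the newer consolidated object-storage syntax, so verify the key names against the current documentation before applying.

```ruby
# /etc/gitlab/gitlab.rb -- sketch: move CI job artifacts to an
# S3-compatible Linode Object Storage bucket (placeholder values).
gitlab_rails['artifacts_object_store_enabled'] = true
gitlab_rails['artifacts_object_store_remote_directory'] = 'gitlab-artifacts'
gitlab_rails['artifacts_object_store_connection'] = {
  'provider' => 'AWS',                # S3-compatible API
  'region' => 'eu-central-1',         # placeholder region
  'endpoint' => 'https://eu-central-1.linodeobjects.com',
  'aws_access_key_id' => 'REDACTED',
  'aws_secret_access_key' => 'REDACTED',
  'path_style' => true                # bucket in path, not in hostname
}
```

After editing, `gitlab-ctl reconfigure` applies the change; existing artifacts then need a separate migration to the bucket (the linked docs cover a rake task for that).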
2026-01-03 05:30:22 achill: no, bot there are some long running jobs
2026-01-03 05:35:04 s/bot/but/
2026-01-03 07:22:13 omni: I don't think it's specific to aarch64
2026-01-03 07:52:06 ah
2026-01-03 14:03:41 hmm https://gitlab.alpinelinux.org/samitha.mdml/aports/-/commits/eb87367831330c8196e702948ebac924b7054062
2026-01-03 18:33:29 404
2026-01-03 18:34:04 It's priate
2026-01-03 18:34:07 private
2026-01-03 18:40:54 ok, just between you and them
2026-01-03 18:48:04 They're building some ISO in CI and want to store it in gitlab
2026-01-03 18:49:44 wtf
2026-01-03 18:51:43 https://tpaste.us/ZKBg
2026-01-03 18:52:49 hyprland related
2026-01-03 18:52:55 not sure why you need a dedicated iso for that
2026-01-03 19:00:02 i dunno either. but the way they look at things in hyprland world i assume they'll try to be their own os at some point. it's like they never dep'd on wlroots before.
2026-01-03 20:00:37 I think mangowc may be a better hyprland, so I packaged it
2026-01-04 17:26:26 hey all, I am not sure if this is the right place to ask or if it is a known issue, but I noticed today that dl-cdn.alpinelinux.org resolves to 146.75.2.132 (AS54113 - Fastly, Inc.) from my IPs in Hungary and that IP cannot be reached (connection timeout). I noticed it about 2 hours ago and the problem still persists
2026-01-04 17:28:08 well, that was not a question after all :) anyway, I just wanted to let you know, I am using mirrors for now, they work fine
2026-01-04 17:54:57 gheja11: it's not a known issue
2026-01-04 17:56:00 gheja11: can you try a traceroute?
2026-01-04 18:02:50 sure, there are responses until the third hop - https://pastebin.com/raw/srmKEdyE (expires in 1 hour)
2026-01-04 18:06:15 and I can ping the neighbouring IPs, .131 and .133
2026-01-04 18:06:35 and connect to them over http
2026-01-04 18:09:30 I just checked from a different IP range and it worked from there, both the ping and the download
2026-01-04 18:10:33 so I guess my primary provider's IP range is blocked for some reason
2026-01-04 18:22:40 Or perh a routing issue
2026-01-04 18:22:44 perhaps
2026-01-05 06:19:55 clandmeter: nld-bld-1 seems unresponsive again :(
2026-01-05 06:30:06 ncopa: can you check the date on `shared-runner nor-ci-1`?
2026-01-05 09:53:20 morning! will do
2026-01-05 10:00:38 it looks like it is building: https://gitlab.alpinelinux.org/jvvv/aports/-/jobs/2161542
2026-01-05 10:55:34 I contacted my provider (regarding the connection timeout for dl-cdn.alpinelinux.org 146.75.2.132), they said they contacted Fastly and solved the issue together - now it works from their IP range too
2026-01-05 10:55:51 thanks for the help, @ikke!
2026-01-05 10:56:09 Great, thanks for the update
2026-01-05 10:56:43 ncopa: I saw some jobs failing due to a tls verification error
2026-01-05 11:46:09 maybe time was off?
2026-01-05 11:50:13 yes, that's why I was asking to check the date
2026-01-05 11:54:19 its correct now
2026-01-05 11:54:29 Thanks
2026-01-05 22:01:57 the arm/aarch64 builders seem to default to ipv6 and not try ipv4 when failing
2026-01-05 22:10:52 when and how are archives added to distfiles?
2026-01-05 22:13:46 I've mentioned in #meli that their AAAA address doesn't seem to be reachable
2026-01-06 05:32:07 omni: that's because we use bb wget
2026-01-06 05:32:20 Which does not fall back
2026-01-06 05:54:13 omni: Once per day the builders sync their distfiles with distfiles.a.o
2026-01-06 10:14:15 ikke: ok, thanks!
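The "bb wget does not fall back" problem above: busybox wget gives up after its first connection attempt (the IPv6 address here), whereas tools like curl iterate over every resolved address until one succeeds. A minimal sketch of that fallback behavior — `resolved_candidates`, `connect_with_fallback`, and `fake_connect` are hypothetical names for illustration, not existing tools:

```python
import socket

def resolved_candidates(host, port):
    """All resolved addresses for host, IPv6 first with IPv4 kept as
    fallback (hypothetical helper; needs network access to resolve)."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    v6 = [i for i in infos if i[0] == socket.AF_INET6]
    v4 = [i for i in infos if i[0] == socket.AF_INET]
    return v6 + v4

def connect_with_fallback(candidates, try_connect):
    """Attempt each candidate in order; give up only when all fail.
    busybox wget effectively stops after the first address instead."""
    last_err = None
    for cand in candidates:
        try:
            return try_connect(cand)
        except OSError as err:
            last_err = err
    raise last_err if last_err is not None else OSError("nothing resolved")

# Simulated run: the IPv6 attempt fails, the IPv4 attempt succeeds.
def fake_connect(cand):
    if cand == "v6":
        raise OSError("host unreachable")
    return f"connected via {cand}"

result = connect_with_fallback(["v6", "v4"], fake_connect)
print(result)  # connected via v4
```

In a real client, `try_connect` would wrap `socket.create_connection` on each `(family, sockaddr)` pair from `resolved_candidates`; the point is just that the loop keeps going past the unreachable AAAA address.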
2026-01-06 10:17:45 they have reloaded their firewall and the ipv6 address is reachable now
2026-01-06 10:29:33 Great
2026-01-06 10:52:33 i think we have a corrupt ext4 filesystem on gitlab-runner-ppc64le
2026-01-06 10:52:45 https://build.alpinelinux.org/buildlogs/build-edge-ppc64le/main/linux-lts/linux-lts-6.18.3-r0.log
2026-01-06 10:53:11 we should probably reboot the ppc64le host and run a filesystem check
2026-01-06 10:53:24 or I can remount it read-only and do fsck.ext4
2026-01-06 10:54:07 /dev/sda4: ********** WARNING: Filesystem still has errors **********
2026-01-06 10:56:11 ikke: do you think we should reboot it, or just try fsck without reboot?
2026-01-06 10:57:38 I suppose it is risky to try apk upgrade with a corrupt fs
2026-01-06 10:58:11 I will try to fsck it online
2026-01-06 11:00:45 didnt work. I need to reboot it
2026-01-06 11:00:56 gitlab-runner-ppc64le [~]# uptime
2026-01-06 11:00:56 10:59:55 up 3 days, 15:08, 0 users, load average: 0.44, 22.51, 38.08
2026-01-06 11:01:14 apk audit --system also did not detect anything changed
2026-01-06 11:01:19 im rebooting it
2026-01-06 11:16:14 build-edge-ppc64le should be back up now
2026-01-06 11:21:38 Thanks!
2026-01-06 11:22:20 ncopa: how did you schedule the fsck?
2026-01-06 11:27:29 1) ensure fsck.ext4 is installed. 2) rc-update add fsck boot 3) reboot
2026-01-06 11:27:57 I suppose we should upgrade it to v3.22 or something
2026-01-06 11:28:39 i noticed that gitlab runners do not work well with alpine v3.23.
docker version mismatch
2026-01-06 11:29:06 Hmm, ok
2026-01-06 14:36:48 clandmeter: thanks
2026-01-06 14:44:53 rebooting bld1
2026-01-06 14:50:00 mds matrix bridge is having issues
2026-01-07 07:02:51 clandmeter: I've increased the mirror volume on deu1-t1-1, we have about 600G left
2026-01-07 07:03:04 In the vg I mean
2026-01-07 07:03:09 ok
2026-01-07 07:03:23 we should also fix the volume on pkgs
2026-01-07 07:03:31 Right
2026-01-07 07:03:48 Perhaps we can reduce the storage used for dev containers on that server
2026-01-07 07:04:22 we can kill the second disk
2026-01-07 07:04:31 and extend the first by 160G
2026-01-07 07:04:41 i only used it for backup
2026-01-07 07:04:56 I meant on deu-t1-1
2026-01-07 07:05:00 oh you mean the mirror
2026-01-07 07:06:14 Yes
2026-01-07 07:06:29 That server also has x86_64 dev containers
2026-01-07 07:06:35 The volume for that is 1T now
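The three-step fsck-at-boot answer given earlier ("ensure fsck.ext4 is installed, rc-update add fsck boot, reboot") can be sketched as the following runbook for an Alpine/OpenRC host. This is a sketch, not tested here: run it as root, and drain CI jobs first since the reboot is disruptive.

```shell
# Schedule a filesystem check at next boot on Alpine/OpenRC.

# 1) make sure fsck.ext4 is available (shipped in e2fsprogs on Alpine)
apk add e2fsprogs

# 2) enable the fsck service in the boot runlevel, so filesystems in
#    fstab are checked before they are mounted read-write
rc-update add fsck boot

# 3) reboot; fsck then runs against the devices listed in /etc/fstab
#    (here /dev/sda4) before the system comes up fully
reboot
```

An online `fsck.ext4` on a mounted filesystem generally cannot repair anything (as seen in the log), which is why the check has to be scheduled at boot.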