2025-04-01 07:49:56 Hej, I've started working on apk-polkit-rs again (basically the adapter to manage apk packages with GNOME Software), would it be possible to add the repository to renovate bot? :)
2025-04-01 10:19:32 Cogitri: sure, you only have to add the [renovate] topic
2025-04-01 10:19:59 Although it depends on which group it's hosted in; renovate needs to be able to make (and possibly merge) MRs
2025-04-01 10:20:33 Cogitri: nice to see you btw :)
2025-04-01 10:22:16 Ah, currently it's hosted on my user, so I guess it'd have to be moved for that to work?
2025-04-01 10:23:03 ikke: good to see you too, now that my master's thesis and move to my new job are done, I have some breathing room again :D
2025-04-01 10:24:37 Out of the box it will not work indeed
2025-04-01 10:26:34 Glad to hear you finished your thesis
2025-04-01 11:34:35 Yeah, it was fun to work on a topic for a few months, but it's also very nice to just work a 9-to-5 job without having to write a thesis :D
2025-04-01 11:35:03 OK, currently the project lives under https://gitlab.alpinelinux.org/Cogitri/apk-polkit-rs, I wouldn't mind a move (or setting that repo up for renovate)
2025-04-01 13:26:56 is edge aarch64 stuck on deno or temporarily paused? last build notification was ~16:07 the day before
2025-04-01 13:31:16 (or ~16:53, after building b3sum)
2025-04-01 15:07:06 mio: seems like it's stuck, a lot of Z processes
2025-04-01 15:12:23 ah okay, thanks for checking
2025-04-02 10:28:14 https://gitlab.alpinelinux.org/alpine/aports/-/jobs/1792923
2025-04-02 10:28:20 > OSError: [Errno 28] No space left on device
2025-04-02 10:28:23 ikke: ^
2025-04-02 10:30:59 ptrc: will check
2025-04-02 10:31:11 thanks!
2025-04-02 10:41:57 ptrc: should be plenty of space now
2025-04-02 10:42:07 thanks again ^^
2025-04-02 11:47:46 hm
2025-04-02 11:47:51 i wonder why the s390x runner ooms
2025-04-02 11:48:28 looking at the graphs, it only takes about 20-24GB of ram
2025-04-02 11:48:30 https://zabbix.alpinelinux.org/zabbix.php?action=charts.view&filter_set=1&filter_hostids%5B%5D=10339
2025-04-02 11:49:24 https://ptrc.gay/rnxNvgKG
2025-04-02 11:49:37 max 25.78
2025-04-02 17:26:02 would Alpine welcome a new bridge once the matrix.org one vanishes?
2025-04-02 17:38:44 If they don't, it means I won't be in Alpine channels anymore...
2025-04-02 17:39:14 postmarketOS has been hosting a bridge since this week, we could probably let it cover the Alpine channels as well
2025-04-02 18:11:19 PureTryOut: You can already join random channels from the postmarketOS irc bridge
2025-04-02 18:11:31 you're in the whitelist
2025-04-02 18:11:36 just FYI
2025-04-02 18:12:08 But yeah, it would be cool to have the new bridge cover alpine too
2025-04-02 18:13:44 It's been working pretty well overall for the last 2 days (not counting the weeks/days before, when several small mainlining channels and a test channel got bridged for test purposes)
2025-04-02 18:14:02 OFTC also has limit exceptions in place for it
2025-04-02 18:15:26 Finally, IMO it works better than the m.org bridge in a few ways
2025-04-02 18:16:22 you *can* send multiline messages and reply-to from Matrix and they'll be formatted just fine on IRC, for example; no "full message at..." thing (unless the message is >12 lines)
2025-04-02 18:18:42 and other stuff
2025-04-02 18:19:01 wdyt?
2025-04-02 18:22:02 I think it's fine if someone (pmos) wants to maintain one for Alpine
2025-04-02 18:22:16 pmos potentially
2025-04-02 18:22:40 ikke: ok ^^ so what do you think of pmOS creating #alpine-linux:postmarketos.org etc channels and bridging them to IRC
2025-04-02 18:23:11 and who should be mod there?
2025-04-02 18:23:26 (could bring the postmarketOS cross-banning bot)
2025-04-02 18:23:43 (the matrix incarnation only though, maybe not the IRC one)
2025-04-03 05:59:48 ikke: algitbot is rather talkative...
2025-04-03 08:53:32 clandmeter: Free speech!
2025-04-03 11:39:05 clandmeter: the server was hanging last night for some reason
2025-04-03 11:39:13 (zabbix server)
2025-04-03 15:40:55 hello from the postmarketos.org bridge's private/restricted bouncer/portalling feature
2025-04-03 16:14:53 ikke: So are you OK with multiple #alpine-...:postmarketos.org (e.g. #alpine-linux:pmos.org, #alpine-devel:pmos.org, etc) channels being created and then bridged to IRC, then giving mod to a few people on the [m] side? (not that I expect much spam from those, but you never know)
2025-04-03 16:32:02 (and irc banning with /mode +b works just as it did on the matrix.org bridge)
2025-04-03 18:14:47 f_: I think that's fine
2025-04-03 18:15:24 ikke: good ^^
2025-04-03 18:16:20 for reference, the #_oftc_#alpine-...:matrix.org channels won't really work out because they're "portals", so only the matrix.org bridge has Admin there
2025-04-03 18:16:33 and I've no idea what will happen to them
2025-04-04 09:58:52 ikke: just to be clear, is #alpine-...:postmarketos.org ok or do you want #alpine-...:*matrix*.org instead?
2025-04-04 10:03:50 or maybe it doesn't matter
2025-04-04 10:19:47 I don't have a strong opinion
2025-04-04 10:20:32 👍
2025-04-04 10:20:36 As long as it's clear for everyone
2025-04-04 10:23:10 got it!
2025-04-04 10:23:34 I guess expect some new |m users in the upcoming days then ;)
2025-04-04 10:24:08 Also it'd be nice if someone modified the old matrix channel to state that it is deprecated
2025-04-07 15:09:20 hello! can I get access to a riscv64 container to bisect community/go riscv64 check() failures in !82333 ? I previously had access to a container on nld-dev-1, but that container doesn't seem to exist anymore (I have wireguard access)
2025-04-07 15:17:41 clandmeter: now nld-bld-1 is unreachable
2025-04-07 16:54:38 nld-bld-1 itself is reachable for me, but the container I had there was set up via port forwarding (port: 22019) and that doesn't seem to exist anymore, I think?
2025-04-07 16:57:33 You mean nld-dev-1?
2025-04-07 16:58:14 oh, yeah, I thought you were responding to my message, sorry!
2025-04-07 16:58:28 well, it was related, but they are different servers :)
2025-04-07 16:58:36 *nod*
2025-04-07 16:58:46 The container is still there, but it does not get an IP address
2025-04-07 16:59:27 I'll just set one statically
2025-04-07 17:02:28 nmeum: can you connect to 172.16.26.19?
2025-04-07 17:02:58 yep, that works!
2025-04-07 17:03:25 ftr, this one is emulated
2025-04-07 17:05:02 ok, let's see if I can reproduce it there. thanks ikke
2025-04-07 21:34:45 The generate-build-jobs step seems to fail quite often lately. Is there any way to change that?
2025-04-07 21:43:38 I did make some changes this evening to deal with that
2025-04-07 21:43:54 planning to do some more
2025-04-07 22:23:31 thanks
2025-04-10 10:27:26 ikke: i think I'd like to upgrade and reboot nld9-dev1. hopefully it will help with the util-linux build
2025-04-10 10:32:24 ncopa: alright
2025-04-10 10:35:00 i need to reboot it. there are a couple of processes in "D" state, uninterruptible
2025-04-10 10:35:14 im rebooting it
2025-04-10 10:35:15 oh yeah, that's annoying
2025-04-10 10:35:28 i updated the kernel and all
2025-04-10 10:35:32 and rebooting it now
2025-04-10 10:35:34 ack
2025-04-10 10:35:49 lets see if it comes back
2025-04-10 10:36:05 looks like it was something in the kernel, the ext4 driver, that caused it
2025-04-10 10:51:19 util-linux seems to have passed now
2025-04-10 11:04:23 yeah, the reboot helped. some bug in ext4 probably
2025-04-10 11:04:38 i hope it's fixed in the latest kernel version
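For reference, processes stuck like this can be listed with something like the following (a sketch, assuming a procps-style ps; busybox ps exposes fewer columns):

    # Show processes in uninterruptible sleep (D) or zombie (Z) state.
    # The wchan column hints at the kernel code a D-state process is
    # blocked in; D-state processes cannot be killed, hence the reboot.
    ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /^[DZ]/'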
2025-04-10 11:05:22 I will start with the 3.22 builders soonish, but I think we will run out of diskspace
2025-04-10 11:40:38 yes
2025-04-10 16:22:15 maybe only do every other release on all arches? (assuming it's an arch-specific disk space issue)
2025-04-10 16:49:21 che-bld-1 will go offline for a bit tomorrow for an nvme expansion
2025-04-10 17:28:49 Thx for the notice :)
2025-04-10 17:28:56 And the service
2025-04-11 12:53:39 same to you too :)
2025-04-11 17:14:44 \o/
2025-04-11 17:14:51 nvme1n1 259:0 0 3.6T 0 disk
2025-04-11 17:19:50 ayy
2025-04-11 17:20:11 all went well it seems :3
2025-04-12 13:20:27 build-edge-aarch64 may be stuck, kismet has been building for >=10h
2025-04-12 14:32:28 it's unstuck now, thanks
2025-04-12 14:45:02 mio: for some reason the service was not running nor set to run on start
2025-04-12 14:46:32 ah ... it's on its way now with the rebuilds :)
2025-04-12 17:07:36 Hi all, I've been unable to create new requests in GitLab for the past few days (since april 8). Last request ID 01JRNFECDJS79XA9KRW5KK0Y14
2025-04-12 17:52:32 ayakael: make sure the master branch on your fork is up-to-date
2025-04-12 19:16:01 ikke: that did it, thanks!
2025-04-13 07:43:06 arm builders are out of disk space
2025-04-13 09:20:08 \:D/
2025-04-13 09:20:28 now it's just the s390x builder that is stuck
2025-04-13 19:32:30 edge x86_64: no space left on device when running minikube tests
2025-04-17 11:01:25 seems like disk io on the loongarch64 builders is slow?
2025-04-17 11:01:49 it takes significantly longer to clone build-3-21-loongarch64 than the other arches
2025-04-17 11:39:51 looks like the risc-v builder went missing?
2025-04-17 11:51:30 ncopa: the edge builder is still there, but the other one is gone
2025-04-17 11:51:54 the host is gone?
2025-04-17 11:52:01 ssh: connect to host 172.16.30.2 port 22: Host is unreachable
2025-04-17 11:52:19 yup
2025-04-17 11:52:37 sort of unfortunate. im setting up the builders now, and riscv is the slowest
2025-04-17 11:54:49 clandmeter: ^
2025-04-18 13:25:29 When I try to open https://git.alpinelinux.org/mkinitfs/ (with the trailing slash) I get a bad gateway error. Without it, it works, so not sure what happened there.
2025-04-18 13:28:47 i get bad gateway with or without. we are probably just overloaded by AI scrapers
2025-04-18 13:36:12 I'm looking at something like https://git.gammaspectra.live/git/go-away
2025-04-18 16:10:13 ikke, ncopa: Currently I just kick people out based on User-Agent
2025-04-18 16:10:37 f_: That's no longer feasible (unless you target old chromium versions)
2025-04-18 16:10:45 These scrapers do not identify themselves
2025-04-18 16:10:52 Yeah, I am aware
2025-04-18 16:10:54 and they use residential proxies
2025-04-18 16:11:21 f_: what user agents do you block?
2025-04-18 16:12:46 https://paste.debian.net/hidden/58c73ce2/
2025-04-18 16:13:05 I need to check if the AI bots have started crawling my infra though
2025-04-18 16:13:17 Yeah, that list would not make the slightest dent for us
2025-04-18 16:13:25 *the bad AI bots
2025-04-18 16:14:04 I suggest anubis for the gitlab at least
2025-04-18 16:14:42 go-away looks interesting though, might have a go at hosting it
2025-04-18 16:15:09 I'm just concerned that setting up those proxies might also make archiving my stuff more difficult
2025-04-18 16:15:13 I'm not a fan of the interstitial
2025-04-18 16:15:40 (in general, not the anubis one)
2025-04-18 16:15:56 So if possible, I'd like to avoid it
2025-04-18 16:23:49 ikke: What is an "interstitial"?
2025-04-18 16:41:48 f_: The page in between that says "verifying you're a human" or something like that
2025-04-18 17:16:32 ikke: yeah, neither am I
2025-04-18 22:02:54 ikke: I just set something up now: https://git.vitali64.duckdns.org/
2025-04-18 22:03:15 and whitelisted some user-agents
2025-04-18 22:18:51 An interesting alternative someone is using: for some URLs, they do pure nginx stuff to check for a specific cookie. If it doesn't exist, the user is told to visit the main page to get it, and then they can access whatever URL they wanted to access
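That cookie-gate idea could look roughly like this in nginx; the cookie name, gated paths, and redirect target below are made up for illustration, not anyone's actual config:

    # Hypothetical nginx snippet: the landing page hands out a cookie,
    # expensive URLs bounce clients that don't present it.
    cat > /etc/nginx/snippets/cookie-gate.conf <<'EOF'
    location = / {
        add_header Set-Cookie "gate=1; Path=/; Max-Age=86400";
    }

    location ~ ^/(log|diff|snapshot)/ {
        # no cookie yet: send the client to the main page to pick one up
        if ($cookie_gate = "") {
            return 307 /;
        }
    }
    EOF

A static cookie like this only filters the laziest crawlers; a real deployment would presumably use a signed or rotating token.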
2025-04-19 13:14:36 build-3-22-armv7 might be stuck on guile if it isn't temporarily offline ... build-3-22-x86 is likely stuck on py3-pexpect. both have been building >=9 h
2025-04-19 13:20:21 also wondering if the nasm-2.16.03 tarball could be added to the cache for the other 3.22 builders to use (a cached tarball was found on loongarch64 and already rebuilt there)? the project website is still down; there's a github mirror, but the tarball contents are a bit different
2025-04-19 13:20:26 thanks!
2025-04-19 13:25:59 (e.g. no configure script, so the build steps would probably need to be modified a bit)
2025-04-19 13:34:25 the edge one seems to be gone now, but there's a copy at v3.21 distfiles
2025-04-19 14:24:25 error occurred: Failed to find tool. Is `llvm-ar` installed?
2025-04-19 14:24:40 happened when bootstrapping rust on ppc64le and x86_64
2025-04-19 14:26:28 but for some reason the rust bootstrap succeeded on s390x
2025-04-20 04:27:29 3.22 s390x and x86 are probably stuck, also left a small list of ftbfs at #17096 in case it may help
2025-04-20 08:41:48 what's with git.alpinelinux.org being unavailable?
2025-04-20 09:07:16 no idea
2025-04-20 09:07:40 seems like the disk on the arm builder has issues
2025-04-20 09:07:43 ERROR: Unable to lock database: Read-only file system
2025-04-20 09:07:49 will have to investigate later today
2025-04-20 10:31:16 huh, I thought it was LLM bots scraping all of it
2025-04-20 10:33:26 omni: huge amounts of requests
2025-04-20 10:33:59 oh, "AI"?
2025-04-20 10:35:20 Presumably
2025-04-20 10:35:56 But not marked as such, hugely abusive, using residential proxies
2025-04-20 10:42:37 clandmeter: it seems che-bld-1 has issues with the new nvme disk
2025-04-20 10:43:08 https://tpaste.us/PQ81
2025-04-20 10:43:14 (and many more messages)
2025-04-20 10:44:30 nu_: ^
2025-04-20 10:52:39 go-away is looking pretty promising though
2025-04-20 11:00:37 clandmeter: the filesystem is mounted RO now
2025-04-20 15:19:58 strange
2025-04-20 15:20:15 i can remove it and retry it in a separate machine
2025-04-20 15:20:23 it's brand new :o
2025-04-20 15:20:32 nu_: Not entirely sure it is the disk
2025-04-20 15:20:53 me neither, but this is the only thing i can verify
2025-04-20 15:21:07 It's part of an lvm volume group
2025-04-20 15:21:53 also dunno how experimental the mobo is
2025-04-20 15:22:13 WARNING: Couldn't find device with uuid a56qMe-OK5D-aEQi-xvLq-a1bL-3XEJ-0njM9X.
2025-04-20 15:22:15 WARNING: VG vg0 is missing PV a56qMe-OK5D-aEQi-xvLq-a1bL-3XEJ-0njM9X (last written to /dev/nvme1n1p2).
2025-04-20 15:22:43 it can be from the fact that the nvme controller crashes and the block device goes missing for a while
2025-04-20 15:22:51 yup
2025-04-20 15:23:23 is it ok for me to remove it?
2025-04-20 15:23:41 or would you like to test something with the drivers or settings on the nvme?
2025-04-20 15:25:20 Not sure if we can get more information about what's happening from the bmc
2025-04-20 15:26:06 i would think no
2025-04-20 15:27:02 I guess you can remove it, it's already not recognized at the moment
2025-04-20 15:27:25 ok, it will involve a restart. ill do it within 3 days
2025-04-20 15:28:36 hows it going with the release? is it now urgent? also, is the data on it deletable?
2025-04-20 15:28:55 ncopa just bootstrapped the new builders
2025-04-20 15:29:07 I would not consider the data deletable
2025-04-20 15:55:44 nu_: I can try to reboot the server now and see if that helps?
2025-04-20 15:56:16 I can see if I can isolate the disk again
2025-04-20 15:56:21 oki
2025-04-20 15:56:33 it would be interesting to see if it happens instantly or after some time
2025-04-20 15:56:59 At least this time it happened after some time
2025-04-20 15:57:20 But not sure if it's because it only now started to use the disk
2025-04-20 15:57:51 Trying to make a backup, but many files are missing (io errors, expected due to the missing disk)
2025-04-20 16:08:52 ouch
2025-04-20 16:08:58 this went terribly wrong
2025-04-20 16:09:31 I'll see what the status is after a reboot
2025-04-20 16:10:21 nu_: would it be feasible to provide us access to the bmc interface again?
2025-04-20 16:12:09 i planned to do that. ill take a look at it too
2025-04-20 17:53:43 nu_: Are you by chance able to check the bmc console yourself? I rebooted it, ping came back for a bit, but now it no longer responds
2025-04-21 10:39:54 https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10847
2025-04-21 15:38:27 ikke: ill check in 15 mins
2025-04-21 15:40:07 ok, I'm away now, but I'll be back in an hour
2025-04-21 17:24:21 ikke: do we have a recovery plan?
2025-04-21 18:01:42 i tried to remove the new nvme, but then it gets stuck at grub because it cannot find the nvme
2025-04-21 18:02:27 now it booted with (none) in the login screen, so i suspect some parts of the OS migrated to the new nvme and this caused chaos?
2025-04-21 18:25:07 bmc is routed now
2025-04-21 20:51:46 forgot to mention, but i reseated the nvme and cleaned the port. there is a chance that this will fix the io errors
2025-04-21 20:52:58 Both disks are shown atm
2025-04-21 20:53:39 you could give it some work too ;)
2025-04-21 20:53:58 I need to run fsck
2025-04-22 07:17:49 ikke: how did the fsck go?
2025-04-22 07:18:38 clandmeter: wasn't able to yet. Tried to boot an iso over the network, but it hung. Asked nu to mount a usb drive
2025-04-22 07:19:08 ikke: just boot with an iso from netboot.xyz
2025-04-22 07:19:13 its only 1mb
2025-04-22 07:19:43 Ok, can try that
2025-04-22 07:20:34 bmc over the internet is not working with full ISOs
2025-04-22 07:20:45 so better to netboot, much faster and easier
2025-04-22 07:34:34 clandmeter: fyi, the server still boots, but degraded. Root fs is mounted RO and many services are not running
2025-04-22 10:27:54 what is the status of the arm server? anything I can do to help?
2025-04-22 10:36:28 ncopa: Trying to boot a live iso via ipxe to do a fsck
2025-04-22 10:39:00 ncopa: after the reboot, both disks are available again, but there are some filesystem issues which I'm trying to fix
2025-04-22 10:42:50 Not much seems to be happening (booting alpine netboot)
2025-04-22 11:11:24 I've put some anti-bot measures in front of git.a.o. Let me know if you run into issues
2025-04-22 11:14:04 ikke: Is it go-away?
2025-04-22 11:14:10 yup
2025-04-22 11:17:00 wow, the nginx access log is quiet now..
2025-04-22 11:41:02 awesome!
2025-04-22 11:42:21 ikke: awesome, that one is pretty unnoticeable
2025-04-22 11:42:21 Lots of clients are talking to a teapot now :P
2025-04-22 11:42:46 (418 response code)
2025-04-22 11:43:45 "guess what, you got challenged but didn't notice it"
2025-04-22 11:44:22 The challenge is pretty basic
2025-04-22 11:45:33 I've been thinking of deploying go-away sometime
2025-04-22 11:45:45 but for now powxy should be good enough
2025-04-22 11:45:56 (and I did my best to exempt archiving bots too)
2025-04-22 11:46:32 How do you target them?
2025-04-22 11:48:17 I just exempt *Archive* for now
2025-04-22 11:48:38 I only thought of archiveteam's ArchiveBot for now
2025-04-22 11:49:04 but go-away has some pretty good features.
2025-04-22 13:30:28 ikke: Hej, can we move https://gitlab.alpinelinux.org/Cogitri/apk-polkit-rs-docker to alpine/infra/docker so that I can automatically build CI images for multiple arches?
2025-04-22 13:31:56 The images probably shouldn't end up on docker.io/alpinelinux, but if I'm reading the CI template right it shouldn't be a problem to use cogitri as DOCKER_NAMESPACE, right?
2025-04-22 13:43:09 Cogitri: we have our internal registry as well
2025-04-22 13:44:17 Oh, that would be great as well
2025-04-22 15:06:44 ncopa: we seem to have issues booting isos
2025-04-22 15:07:37 ncopa: it gets stuck after "booting 'linux lts'"
2025-04-22 15:12:40 could it be that it fails when loading storage drivers?
2025-04-22 15:12:45 is this the serial console?
2025-04-22 15:12:49 v3.20 booted now
2025-04-22 15:13:26 i was looking at the failed 3.21 boot via a vga screen
2025-04-22 15:48:43 Running fsck now
2025-04-22 15:55:57 fsck finished, says everything is clean now
2025-04-22 16:09:00 We're back in business :-)
2025-04-22 16:09:13 nu_: clandmeter ncopa ^
2025-04-22 16:09:34 \o/
2025-04-22 16:11:21 ..
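The recovery described above roughly amounts to the following from a netbooted rescue image (a sketch; vg0 comes from the LVM warnings above, the LV name and ext4 are assumptions):

    # Check the NVMe for reported errors and controller resets.
    nvme smart-log /dev/nvme1 | grep -Ei 'critical_warning|media_errors'
    dmesg | grep -i nvme

    # Activate the volume group, then fsck the unmounted root LV.
    vgchange -ay vg0
    fsck.ext4 -f /dev/vg0/lv_root    # lv_root is a placeholder name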
2025-04-22 16:14:24 hmm, needed to restart nginx
2025-04-22 16:16:23 deu1-dev1 load is extremely low now :D
2025-04-22 16:16:52 Last week we had peaks of 80 requests/s
2025-04-22 16:19:47 Does anyone know a good status page (generator) that supports (or can easily support) being updated with something like webhooks as well as manual updates?
2025-04-22 16:24:37 webhooks?
2025-04-22 16:25:35 receiving webhooks in order to update the status
2025-04-22 16:25:43 Or something similar
2025-04-22 16:28:59 maybe check statping, I don't recall if it can receive updates via webhooks
2025-04-22 16:30:16 seems like it's statping-ng now
2025-04-22 17:07:38 nice one ikke :))
2025-04-22 17:08:42 no io errors so far?
2025-04-22 17:10:50 Nope
2025-04-22 19:47:35 Cogitri: ping
2025-04-22 20:15:27 ikke: yes? :)
2025-04-22 20:26:48 Cogitri: So I'll be moving that repo, could you try to switch over to https://gitlab.alpinelinux.org/alpine/infra/gitlab-ci-templates/-/blob/master/exec/docker-image-all-arches.yml to build the image?
2025-04-22 20:26:57 That would push it to our own registry
2025-04-22 20:28:43 nu_: next time clean your hands before installing ;-)
2025-04-22 20:29:11 It took a couple of days before these errors happened
2025-04-22 20:30:36 clandmeter: did you catch that I installed go-away in front of cgit?
2025-04-22 20:30:42 things get warm, so it could still be a bad contact
2025-04-22 20:30:53 yeah, sounds great
2025-04-22 20:31:00 what will happen to the ones that get hit?
2025-04-22 20:31:46 So the first time you hit a page other than /, you'll receive a redirect challenge which you need to follow. If you fail, you get a 403 or 418 response
2025-04-22 20:32:12 403 when you are explicitly rejected by the policy
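One way to watch that from the command line (a sketch; the exact status codes depend on how go-away is configured):

    # First hit on a deep link: expect a redirect challenge, not content.
    curl -s -o /dev/null -w '%{http_code}\n' https://git.alpinelinux.org/aports/log/

    # Keep cookies and follow redirects; clients that complete the
    # challenge get the page, clients that fail get 403 or 418.
    curl -sL -c /tmp/jar -b /tmp/jar -o /dev/null -w '%{http_code}\n' \
        https://git.alpinelinux.org/aports/log/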
2025-04-22 20:32:15 ikke and nu_: thank you for getting the arm machine back!
2025-04-22 20:32:27 ikke: looks like you are bootstrapping rust on armv7?
2025-04-22 20:32:32 ncopa: correct
2025-04-22 20:32:47 aarch64 was also running, but it OOMed
2025-04-22 20:32:54 aha
2025-04-22 20:33:41 and armhf?
2025-04-22 20:34:04 was already built
2025-04-22 20:34:14 oh, nice
2025-04-22 20:35:45 ok, i go to bed then. lets continue tomorrow. thank you!
2025-04-22 20:35:48 o/
2025-04-22 20:35:53 sleep well
2025-04-22 20:35:58 good night
2025-04-22 20:36:38 Cogitri: I've moved the project and added renovate to the topics so that the renovate bot should discover it
2025-04-22 20:38:17 ikke: https://github.com/ivbeg/awesome-status-pages
2025-04-22 20:38:24 maybe there is something here
2025-04-22 20:38:29 clandmeter: Yeah, I've found that list already
2025-04-22 20:38:38 right
2025-04-22 20:38:50 it was at the top of my google results :) so i figured
2025-04-22 20:52:42 ikke: Alright, thanks! I'll update the include for the CI file tomorrow :)
2025-04-22 20:52:56 ack
2025-04-22 22:32:23 ncopa: \o/
2025-04-23 06:06:40 im bootstrapping rust on build-3-22-aarch64 now
2025-04-23 06:52:21 ncopa: 👍
2025-04-23 06:52:47 ncopa: can you check if it built on armv7?
2025-04-23 06:53:39 yes, i think it did
2025-04-23 06:53:44 and I restarted it
2025-04-23 06:54:15 Ok, cool
2025-04-23 06:55:14 Did you remove the .bootstrap* package?
2025-04-23 06:58:00 now i did
2025-04-23 06:59:26 it ran out of memory on aarch64
2025-04-23 06:59:35 probably due to the mono build
2025-04-23 15:24:25 still no IO errors on che-bld-1
2025-04-23 15:54:49 bootstrapping go on loongarch64 and x86
2025-04-23 15:56:40 thanks!
2025-04-23 15:57:52 loongarch64 finished
2025-04-23 16:04:14 x86 finished
2025-04-23 16:07:43 that was fast, thanks
2025-04-23 16:07:48 yeah
2025-04-23 16:08:25 are you doing something with s390x at the moment as well? wondering if there's any log for perl-server-starter
2025-04-23 16:09:50 no
2025-04-23 16:09:59 but I'm AFK for a bit
2025-04-23 16:10:28 okay, no worries then
2025-04-23 17:52:37 i fixed the libks deadlock, with some help from dalias
2025-04-23 17:55:23 nice, thanks!
2025-04-23 17:55:38 thanks ... it didn't happen consistently, good that you traced the issue
2025-04-23 17:58:46 the code does weird stuff. i don't know why they try to "wakeup" stdout. should probably just let the OS handle it.
2025-04-23 17:59:08 should probably also fix the dovecot segfault
2025-04-23 17:59:13 but not today
2025-04-23 17:59:16 have a nice evening
2025-04-23 17:59:23 you too!
2025-04-23 17:59:30 thanks
2025-04-23 18:10:57 I guess mono is hanging on aarch64
2025-04-23 18:11:02 no new log output
2025-04-24 06:57:36 ikke: Moved to docker-images-all-arches, works great, thanks :)
2025-04-24 06:57:59 Cool
2025-04-24 06:59:57 And I saw that renovate created an onboarding MR already
2025-04-24 07:04:33 Yup :)
2025-04-24 08:53:27 Grand, everything seems to work and apk-polkit-rs is tested on more platforms now :)
2025-04-24 10:14:14 Cogitri: cool
2025-04-24 10:14:19 Nice that it worked out
2025-04-24 10:31:03 Yeah :) Now I'm wondering if it makes sense to move apk-polkit-rs itself too, for renovate?
2025-04-24 10:46:48 boo
2025-04-24 10:47:47 Wonder why that happens, not due to load
2025-04-24 12:15:29 ikke: One issue I've noticed: the CI runners apparently use PullPolicy: IfNotPresent, but since the CI images are tagged as latest-$ARCH now, it doesn't update the images
2025-04-24 12:15:45 Or do I have to use the image digest instead of the tag?
2025-04-24 12:37:26 Cogitri: it does not update the images where?
2025-04-24 12:37:55 For example here: https://gitlab.alpinelinux.org/Cogitri/apk-polkit-rs/-/jobs/1822454
2025-04-24 12:39:04 latest-x86_64 didn't contain cargo valgrind at first; I added it some 2 hours ago, but the old image is still present on the CI runner, so it still fails with the same error
2025-04-24 12:41:37 That runner does not even have a pull policy
2025-04-24 12:48:02 Huh, that's odd, I thought I remembered something about our CI runners setting that
2025-04-24 12:48:09 Hmm, how does it end up with the old image then? that's odd
2025-04-24 12:50:13 Cogitri: This is a new runner on kubernetes, a different deployment
2025-04-24 12:51:21 I could try to set it explicitly
2025-04-24 12:51:59 That'd be nice - if there's something I can do, send me a message :)
2025-04-24 12:52:15 You are right that the docker runners do set the pull policy to never, due to docker throttling / rate limiting
2025-04-24 12:53:40 Hmm, there is not even a setting for that on the kubernetes executor
2025-04-24 12:54:27 Oh, there is, it's just documented on a different page
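For context, the knob lives in the runner's config.toml; an illustrative fragment (not a complete config, and the values here are examples):

    cat <<'EOF'
    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        # re-check the registry so moving tags like latest-$ARCH get updated
        pull_policy = "always"

    [[runners]]
      executor = "docker"
      [runners.docker]
        # the docker runners pin this to "never" due to docker hub rate limits
        pull_policy = ["never"]
    EOF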
2025-04-24 13:02:21 Cogitri: I've set the pull policy, can you try again?
2025-04-24 13:05:59 Oh, I can just rerun the job, I suppose
2025-04-24 13:06:19 But a different runner may pick it up
2025-04-24 13:06:54 yup, it did
2025-04-24 13:07:12 It succeeded at least now
2025-04-24 13:13:40 OK, I'll see if it starts failing again at some point, when it happens to be scheduled on the one runner that has the old image :)
2025-04-24 13:14:41 Ah, seems like I forgot to set an env variable in the new container, so I can actually test it right now
2025-04-24 13:45:52 It's annoying that the kubernetes executor does not show what image is used
2025-04-24 13:46:40 I guess because it's not trivial to get, since it's the kubernetes node the pod runs on that pulls it
2025-04-24 13:56:29 Yeah, might be a bit more involved
2025-04-24 13:56:45 But it works now, my job ended up on the same node and pulled the new image :)
2025-04-24 14:03:04 ok, interesting, good to know
2025-04-24 14:06:26 Cogitri: I don't see any recent jobs using the kubernetes executor
2025-04-24 15:25:25 https://gitlab.alpinelinux.org/Cogitri/apk-polkit-rs/-/jobs/1822607
2025-04-24 15:25:48 Ah, I'm stupid, that's the docker executor
2025-04-24 18:39:48 ikke: looks like the arm builders' fs might have gone read-only
2025-04-24 18:40:24 `fatal error: opening dependency file build-debug/src/njs_vmcode.dep: Read-only file system` on 3.22 armhf building njs
2025-04-24 18:41:32 3.22 aarch64, `fsync() failed: I/O error`
2025-04-24 18:53:26 yup :(
2025-04-24 18:54:42 nu_: ^
2025-04-24 19:10:30 I wonder if it would help to upgrade to alpine 3.20
2025-04-24 19:12:59 Server is running again
2025-04-24 19:13:21 thanks! so close ... untrusted signature error?
2025-04-24 19:14:36 where
2025-04-24 19:14:48 asteroid-*
2025-04-24 19:15:20 regarding icinga
2025-04-24 19:16:14 yeah, asteroid-* and libmspack on 3.22 armhf
2025-04-24 19:17:07 emacs-memoize on 3.22 aarch64, right before asteroid-*
2025-04-24 23:29:08 not sure if mentioned already, but the riscv64 3.22 builder uses 3.20 distfiles
2025-04-24 23:29:09 https://build.alpinelinux.org/buildlogs/build-3-22-riscv64/main/librtlsdr/librtlsdr-2.0.2-r0.log
2025-04-25 05:25:36 What's up with the 32-bit arm builders? Both armhf and armv7 are failing with ERROR: Unable to read database state: package file format error, independent of the package they're trying to build
2025-04-25 05:28:14 And now all edge arm builders are stuck on "pulling git"
2025-04-25 05:47:46 disk issue, fs turning read-only; infra has been looking into it
2025-04-25 05:53:52 now it seems to be stuck, like it's in memory contention
2025-04-25 05:56:01 ptrc: I wonder how it ends up using v3.20?
2025-04-25 05:56:04 build-3-22-riscv64 [~]# grep DISTFILES_MIRROR /etc/abuild.conf
2025-04-25 05:56:04 DISTFILES_MIRROR=https://distfiles.alpinelinux.org/distfiles/v3.22
2025-04-25 05:56:48 ncopa: .abuild/abuild.conf
2025-04-25 05:56:54 yeah, i found it
2025-04-25 05:58:32 we should also add a kernel signing key
2025-04-25 05:59:30 should be fixed now
2025-04-25 07:55:20 nu_: can you add my wg key so i can access the bmc?
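For the record, the clone described above is essentially one command (device names here are assumptions; verify with lsblk or nvme list first, since swapping if= and of= destroys the source):

    # Clone the suspect NVMe onto the replacement, block for block.
    dd if=/dev/nvme1n1 of=/dev/nvme2n1 bs=4M conv=fsync status=progress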
2025-04-25 08:36:16 im bootstrapping go on build-3-22-ppc64le now
2025-04-25 08:37:08 oh, it is already bootstrapped
2025-04-25 08:38:54 bootstrapping go on build-3-22-riscv64
2025-04-25 08:57:14 and on build-3-22-x86_64
2025-04-25 09:27:54 yea
2025-04-25 09:28:09 if you send it :)
2025-04-25 09:28:52 ikke: o no
2025-04-25 09:29:59 nu_: we're trying to update the bios fw to see if that helps
2025-04-25 09:30:18 i think it would be better to just put a new one in and test
2025-04-25 09:30:27 it can be a defective piece
2025-04-25 09:30:44 i think atm there are no more nvme slots left
2025-04-25 09:30:55 shall i put in 2x sata ssds (1tb each)?
2025-04-25 09:31:01 it would be the quickest
2025-04-25 09:52:47 clandmeter wanted to try to swap the disk order to see if the problem travels with the disk or not
2025-04-25 09:53:39 can do
2025-04-25 11:37:18 Yes please
2025-04-25 11:37:47 Our FAE has the login of the BMC now
2025-04-25 11:37:58 In case he needs to check
2025-04-25 12:02:24 is it also ok if i get another 4TB nvme and dd the content from the current one? this way we can eliminate both the faulty nvme and the faulty slot in one step
2025-04-25 12:02:59 clandmeter, ikke
2025-04-25 12:28:27 Not sure
2025-04-25 12:28:33 Check with ncopa
2025-04-25 12:28:51 Regarding the release
2025-04-25 12:29:09 dd will take some time
2025-04-25 12:29:46 how long do you expect it to need?
2025-04-25 12:30:08 i mean, the current state is probably worse
2025-04-25 12:31:01 we could also try upgrading the kernel to a newer version, in case there is some problem in the driver
2025-04-25 12:34:12 dd would be about 4 hours
2025-04-25 12:35:03 thats a pessimistic estimate
2025-04-25 12:37:33 ok
2025-04-25 12:38:14 i think we cannot do anything on that machine now anyway
2025-04-25 15:32:21 ok, lets do it tonight then
2025-04-25 15:32:28 if you have time
2025-04-25 15:33:58 I'd be curious to find out if the firmware upgrade itself helped
2025-04-25 15:34:07 if we make other changes now, we're not really sure what fixed it
2025-04-25 15:36:57 you could also just swap the drives
2025-04-25 15:47:42 im doing the dd now on the arm machine itself
2025-04-25 16:58:53 dd is complete. no io errors. avg 1.1GB/s
2025-04-25 17:02:23 ikke, clandmeter: booting now with the dd'd new nvme
2025-04-25 17:05:08 booted
2025-04-25 17:05:15 all yours ^^
2025-04-25 18:02:07 wow, these nvme drives are so fast
2025-04-26 08:31:36 been looking at "Checking you are not a bot" for a while now...
2025-04-26 08:44:59 Does it stay if you refresh?
2025-04-26 13:31:37 the edge loongarch64, 3.22 x86 and x86_64 builders appear to be stuck
2025-04-26 13:33:04 same package for >12h
2025-04-26 15:47:15 hi
2025-04-26 15:47:23 help
2025-04-26 20:44:51 ncopa: build-edge-armv7 and build-edge-armhf had corrupted apk-related files: /lib/apk/db/triggers, /lib/apk/db/scripts.tar, /etc/apk/world
2025-04-26 20:45:07 armv7 had a corrupt APKINDEX.tar.gz for community as well
2025-04-26 20:46:05 For armv7 I moved the corrupt files out of the way and set the default world for builders
2025-04-26 20:46:24 It somehow resulted in all packages being uninstalled (/etc/apk/world was empty)
2025-04-26 20:46:30 I fixed it with apk.static
2025-04-26 20:48:06 I recreated the index with apk index --rewrite-arch armv7 --no-warnings --quiet --description armv7 --output APKINDEX.tar.gz *.apk
2025-04-26 20:48:17 and then abuild-sign APKINDEX.tar.gz
2025-04-26 20:48:31 for armhf I only had to move the corrupt files out of the way and recreate world
2025-04-27 08:12:51 fixed renovate leaking zombie processes (by enabling docker init)
2025-04-27 11:09:05 ikke: disk stable?
2025-04-27 13:48:42 clandmeter: until now, yes
2025-04-27 18:38:34 Trying to upgrade gitlab, but apparently grpc does not like gcc 14 yet. I need to upgrade to a newer alpine base image to get go 1.23, which is now required
2025-04-28 06:09:29 im bootstrapping go on build-3-22-s390x now
2025-04-28 06:13:47 ikke: do you want help with grpc/gcc?
2025-04-28 06:14:30 I think 1.72 fixes it, but the ruby gem is not available yet
2025-04-28 06:16:17 can we use clang instead?
2025-04-28 07:23:14 ncopa: I'd have to check how much work it is to switch
2025-04-28 10:19:02 ncopa: I don't think we can use clang, since the compiler should match ruby
2025-04-28 10:32:08 ok
2025-04-28 10:36:31 I've opened https://github.com/grpc/grpc/issues/39394, hopefully it will be available soon
2025-04-28 11:18:10 librrd on loongarch64 and rrdtool-dev on armv7 have "bad signature"
2025-04-28 11:18:15 see https://gitlab.alpinelinux.org/fossdd/aports/-/pipelines/319946
2025-04-28 12:01:29 edge?
2025-04-28 12:05:06 ah, i see why
2025-04-28 12:05:19 it was rebuilt but pkgrel was not bumped. f8effb65279bbe3d1e98cf082d7e3b9cb412b496
2025-04-28 12:05:21 im fixing it
2025-04-28 12:25:07 thanks!
2025-04-29 01:10:39 I posted these in the devel channel, but Clayton from pmOS told me you had an infra channel, so I figured I'd post here just in case:
2025-04-29 01:10:41 build-edge-aarch64 seems to be completely stuck, can anyone give it a kick? mono was started on April 22 and lf on April 27.
2025-04-29 01:10:41 Sounds like it needs some monitoring to announce in here if a build has taken over x hours.
2025-04-29 13:28:40 seems like deu2-dev1 is struggling. I don't appear to have a login there
2025-04-29 13:29:26 gitlab is also down
2025-04-29 14:09:10 Seems like the containers just restarted
2025-04-29 15:31:34 Not sure yet what caused it, but memory and CPU usage were very high
2025-04-30 03:59:14 it seems new aports MRs aren't getting auto-assigned to maintainers
2025-04-30 03:59:35 as in, newly opened MRs at aports
2025-04-30 05:16:34 mio: thanks for reporting, should be fixed now
2025-04-30 05:18:55 ikke: thanks!
2025-04-30 05:19:55 I'm a bit puzzled why git.a.o is still giving 502 responses regularly
2025-04-30 05:19:59 The load is all but gone now
2025-04-30 05:20:14 Something with uwsgi, I suppose
2025-04-30 05:35:22 Ok, it was set to exit / restart after 60 seconds of idle time
2025-04-30 10:34:56 bootstrapping openjdk8 on armv7
2025-04-30 15:21:18 I think git.a.o should be quiet now
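The 502s fit uwsgi's idle handling: if the backend exits after a quiet minute, nginx gets 502 until it is respawned. A sketch of the kind of ini options involved (the actual cgit config may differ):

    # Illustrative uwsgi ini fragment; with these set, the instance exits
    # after 60 idle seconds, and requests arriving before the respawn see 502.
    cat <<'EOF'
    [uwsgi]
    idle = 60
    die-on-idle = true
    EOF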