2024-10-01 07:46:37 ptrc: main as default? 2024-10-01 07:51:42 https://pkgs.alpinelinux.org/contents 2024-10-01 07:51:53 The main repo is already selected 2024-10-01 08:23:01 ah 2024-10-01 08:23:24 need to update that part as well 2024-10-02 18:31:49 i dont think every maintainer got an email from pkgs.a.o. seems i didnt handle the exceptions correctly 2024-10-02 18:32:30 clandmeter: did you see the issue created today? 2024-10-02 18:32:52 yes it was created after the fix :) 2024-10-02 18:33:11 ah ok, because I received a similar e-mail after the issue was created 2024-10-02 18:33:39 similar email? 2024-10-02 18:33:41 from pkgs? 2024-10-02 18:33:45 yes 2024-10-02 18:33:53 the email should be before 2024-10-02 18:34:03 else i dont know how this user could have known 2024-10-02 18:34:05 Let me check 2024-10-02 18:34:34 right, 6 minutes before 2024-10-02 18:35:24 oh the issue was first :) 2024-10-02 18:35:33 but i did fix it already locally 2024-10-02 18:35:42 i saw the mail coming in and noticed the error 2024-10-02 18:36:03 ok :) 2024-10-02 22:03:54 ikke: the update failed kind of, looks like python oomed 2024-10-02 22:04:18 so i set init = true which will always process all indexes 2024-10-02 23:41:45 not sure if it's already been reported, but the gitlab history button returns 500 error on some aports directories. noticed it recently due to new apkbrowser's "Git repository" link now points to gitlab.a.o instead of git.a.o 2024-10-02 23:42:10 e.g. 
https://gitlab.alpinelinux.org/alpine/aports/-/commits/master/community/libwacom compared to https://git.alpinelinux.org/aports/log/community/libwacom 2024-10-03 06:53:26 there seems to be an issue with Anitya and version numbers for getmail6, upstream releases are of this format 6.19.04, but Anitya thinks this == 6.19.4 and therefore nags that the package is not the latest version 2024-10-03 06:54:33 https://github.com/getmail6/getmail6/tags 2024-10-03 06:56:29 You just upgraded it to 6.19.05 though 2024-10-03 06:58:11 yes, but guess it will nag in a few hours again 2024-10-03 06:58:22 Ok 2024-10-03 06:58:46 Maybe more than a few hours, depending on how quickly fortify-headers is fixed 2024-10-03 07:00:09 "getmail6 current: 6.19.04-r0 new: 6.19.4" 2024-10-03 07:00:16 was the last nag 2024-10-03 07:00:16 (fortify issues are preventing 5 archs from proceeding to testing/) 2024-10-03 07:00:18 Ok 2024-10-03 07:03:02 As for me, Anitya nags me to upgrade Perl to an odd-numbered dev version, but not sure how you would solve that without a special exception for Perl that ignores dev versions 2024-10-03 07:06:19 ah that's also annoying, I guess I will only get one nag per release, but you will get it every time they do a new dev release? 2024-10-03 07:07:08 Thankfully Perl dev releases are not made that often 2024-10-03 08:37:57 HRio: yes i noticed some version issues 2024-10-03 08:38:12 I am just using apk version comparison logic 2024-10-03 08:44:43 i am wondering how often we need to send these emails 2024-10-03 08:44:47 every day, every week? 2024-10-03 08:46:26 If it's going to keep sending emails for all outdated packages, at most once a week 2024-10-03 08:48:17 Previously it would just send one email per version 2024-10-03 09:39:21 it will only send email when a pkg is flagged 2024-10-03 09:39:37 so when pkg is updated the flag is reset 2024-10-03 09:42:51 it could be an option to include the previous flagged packages, but only send on new.
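The getmail6 mix-up above comes down to how a version comparator treats numeric segments: if each dot-separated component is parsed as an integer, the leading zero is lost and 6.19.04 compares equal to 6.19.4, so a checker that stores the normalized spelling keeps flagging the package. A minimal sketch of that behaviour (illustrative only, not Anitya's or apk-tools' actual algorithm):

```python
def parse(v):
    """Split a dotted version into integer components: '6.19.04' -> [6, 19, 4]."""
    return [int(part) for part in v.split(".")]

def cmp_numeric(a, b):
    """Compare versions component-wise as integers; returns -1, 0, or 1.
    Leading zeros disappear during parsing."""
    pa, pb = parse(a), parse(b)
    return (pa > pb) - (pa < pb)

# Numerically, 6.19.04 and 6.19.4 are indistinguishable...
assert cmp_numeric("6.19.04", "6.19.4") == 0
# ...but as strings they differ, so a tracker that stores the
# normalized form "6.19.4" will keep nagging a package whose
# pkgver uses the upstream spelling "6.19.04".
assert "6.19.04" != "6.19.4"
```

Handling this cleanly would mean either treating numerically-equal versions as up to date, or carrying the upstream spelling through unchanged instead of normalizing it.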
2024-10-03 09:43:04 previously it would send one email per pkg 2024-10-03 13:37:30 I think the fortify upgrade is causing Go test to fail: https://gitlab.alpinelinux.org/fabricionaweb/aports/-/jobs/1542801 2024-10-03 13:40:51 better for #-devel? 2024-10-03 13:42:47 Not in there for the time being, there will probably be others that fail on the same thing 2024-10-03 13:43:12 ACTION remembers yyjson, goes to try a rebuild 2024-10-03 13:50:25 yyjson already has -Wno-stringop-overflow from the last time a fortify upgrade was attempted 2024-10-03 14:08:32 Anyway, it seems this has been reported: https://github.com/jvoisin/fortify-headers/issues/68 2024-10-03 14:11:12 Hehe, as usual, ppc64le is not affected (looking at the Go MR) 2024-10-03 14:53:03 same issue as https://github.com/jvoisin/fortify-headers/issues/68 2024-10-03 14:54:11 do we have smaller testcases? 2024-10-03 14:59:38 Are you asking for other aports that fail with the strongop-overflow error? 2024-10-03 14:59:44 Aports simpler than Go and the kernel 2024-10-03 15:00:07 Probably an isolated test case? 2024-10-03 15:00:39 If so, then yyjson, after removing -DCMAKE_C_FLAGS and -DCMAKE_CXX_FLAGS 2024-10-03 15:01:15 That was a reply to simpler aports, not isolated test cases 2024-10-03 15:03:25 I think even libindi (!72922) has -Wstringop-overflow, but it's not treated as an error 2024-10-04 06:33:14 ACTION found that several previous problems in new pkgs.a.o have been fixed \o/ 2024-10-04 08:37:17 Oh I see pkgs.a.o has been switched to apkbrowser. Nice! 2024-10-04 08:52:24 Yes 2024-10-04 09:01:54 huh, abuuld segfaults https://tpaste.us/qYMw 2024-10-04 09:02:02 abuild* 2024-10-04 09:03:11 It's not abuild that segfaults 2024-10-04 09:04:02 It's modpost that is segfaulting 2024-10-04 09:07:55 right 2024-10-04 09:08:37 but first time see segfaults when building kernel, i.e. 
more than 30 years 2024-10-04 09:16:13 first time ofc if my memory still work :) 2024-10-04 09:18:18 looks like fortify-headers are buggy 2024-10-04 09:29:26 as expected, downgrading fortify-headers fixes this problem 2024-10-04 09:31:02 ah, looks like ncopa did something about this 2024-10-04 09:35:14 looks like I have to add cron task to do `apk upgrade` every 5 minutes ;) 2024-10-04 10:24:18 apk add apk-cron 2024-10-04 10:24:26 but will only do it daily iirc 2024-10-04 10:28:34 this was joke, I prefer manual upgrade 2024-10-04 16:29:05 Oh I see apkbrowser for Alpine is maintained on https://gitlab.alpinelinux.org. Why a fork, why not take upstream and help out improving that? 2024-10-04 16:29:13 CC carlolandmeter, idk if he is in the channel here 2024-10-04 16:36:34 clandmeter: ^ 2024-10-04 16:36:42 also the footer sounds very much wrong 2024-10-04 16:36:49 all rights reserved for Alpine Linux? 2024-10-04 16:37:11 and the original licence has Martijn Braam on it 2024-10-04 16:37:19 i really don't think we can do that 2024-10-04 16:58:25 what is apkbrowser 2024-10-04 16:58:35 pkgs.a.o 2024-10-04 16:58:46 ah, web app 2024-10-04 16:59:31 to my eyes I don't see diffs with old 2024-10-04 16:59:56 Old one is now at pkgs-old.a.o 2024-10-04 17:01:15 hm, what is different, I wonder 2024-10-04 17:02:27 New one is written in Python, and iirc, also used by pmOS, Chimera, and OpenWrt 2024-10-04 17:03:12 No more manual flagging: https://pkgs.alpinelinux.org/flagging 2024-10-04 17:05:54 written in python? what a degradation :) 2024-10-04 17:06:44 Old one was in Lua, i think the reasoning is, maybe it is easier to get contributors for Python 2024-10-04 17:09:23 not easier but maybe more new programmers who know only python. I don't think that means better quality 2024-10-04 17:09:47 Hopefully no one rewrites it in Rust 2024-10-04 17:10:02 heh, I agree here 2024-10-04 17:10:53 maybe rewrite in dart/flutter ;) 2024-10-04 18:16:36 celeste: OpenWRT? Really? Got a link? 
2024-10-05 02:07:42 https://forum.openwrt.org/t/183206 https://lists.openwrt.org/pipermail/openwrt-devel/2024-January/041989.html 2024-10-05 10:15:02 ptrc: originally it was created by alpine, pmos converted it and reused lots the code like templates and other parts. but in any case i just removed the copyright stuff as it has no function anyways. 2024-10-05 10:16:22 PureTryOut: there are now 3-4 copies of the original design, all following its own path. i dont see a problem with that. 2024-10-05 10:18:12 chimera has a lot of changes as well, like using the new binary db. 2024-10-05 10:20:34 and the reason the switch to python is that lua-turbo has issues and not that easy to fix. I assume that is the reason pmos converted it into python. 2024-10-05 10:58:52 I'd have preferred we all collaborated on the original tbh. Now we're all individually making the same fixes and changes besides the few distro specific ones. 2024-10-05 10:58:52 I guess we'll consider the Alpine one the new upstream then and submit changes there to make it usable for pmOS out-of-the-box as well 2024-10-05 11:23:35 I can add one of you to the project if you like 2024-10-06 13:39:51 I don't need merge rights persé if that's what you mean, just someone to merge PR's we might shoot in 😉 2024-10-06 16:36:47 ikke: what is the reason you marked as Draft !73075 2024-10-06 16:37:19 CI failure 2024-10-06 16:37:51 aha, now it is passed. I will merge it if you are ok with this 2024-10-06 16:37:58 yes 2024-10-06 16:38:04 thanks 2024-10-06 16:38:43 It helps with hiding these MRs from the list of open MRs to review 2024-10-06 16:39:00 so once the failure is solved, MR authos can mark the MRs as ready 2024-10-06 16:39:05 authors* 2024-10-06 16:39:44 merged. lets pray 2024-10-07 07:42:23 https://lwn.net/Articles/993083/ food for thought. 
this is some of the reasons I don't like dependencies in packages 2024-10-07 07:42:51 and I fear alpine going down this path 2024-10-07 09:43:00 what do we do if we don't want to fulfill a user's request in a pkg? simply ignore it, or answer and risk being harassed and wasting time 2024-10-07 09:45:05 when I answer with simple reasoning I usually get more questions and what not 2024-10-07 11:18:31 sounds like a discussion for #alpine-security instead of infra 2024-10-07 11:19:24 but in general, this is why alpine does not automatically start services just because the package was installed, like debian and ubuntu do 2024-10-07 11:26:07 i am fairly confident that very few of those linux servers are Alpine 2024-10-07 11:53:14 we don't even have a service for cups-browsed 2024-10-07 11:53:24 people would have to set it up manually 2024-10-07 12:03:05 yes, you both are right. I just told my feelings out loud 2024-10-07 20:14:56 Receiving a lot of spam user registrations on gitlab the last couple of days, mostly unconfirmed users with semi-random domains 2024-10-07 20:24:01 "registration only via postcard" :) 2024-10-08 08:51:50 i think we can upgrade the loongarch64 machine to use dl-cdn now, and upgrade to the latest kernel? 2024-10-08 08:52:09 it still uses dev.a.o 2024-10-08 08:52:57 clandmeter: do you think we can reboot che-bld-2? 2024-10-08 08:53:15 hum, maybe after qt5 is built 2024-10-08 09:10:09 what machine is running the CI for loongarch64? 2024-10-08 10:02:54 ncopa: 172.16.24.5/24 2024-10-08 10:03:56 root@172.16.24.5's password: 2024-10-08 10:04:03 my ssh key is not there 2024-10-08 10:04:11 maybe we could upgrade the kernel and reboot it?
2024-10-08 10:09:38 will do in a bit 2024-10-08 10:21:09 rebooting che-ci-3 now 2024-10-08 10:30:12 There's also che-ci-2, on 172.16.24.4 2024-10-08 10:40:40 i think we may have performance issues with shared runner on x86_64 2024-10-08 10:41:00 I noticed the host responding slowly 2024-10-08 10:41:03 https://gitlab.alpinelinux.org/admin/runners/145#/jobs 2024-10-08 10:41:07 getting time outs 2024-10-08 10:41:38 maybe someone is borrowing it to mine bitcoins 2024-10-08 10:42:29 Nothingn I can see at least 2024-10-08 10:53:39 then its probably just a tiny rootkit :) 2024-10-08 10:56:43 Speaking of the CI runner 2024-10-08 10:57:12 The new 10-core aarch64 one builds Rust in half the time the 24-core one took 2024-10-08 10:57:58 That's ncopa's macbook 2024-10-08 10:58:44 Ok, quite surprising (to me) that a Macbook is able to build Rust faster than a server 2024-10-08 10:58:51 ncopa: fyi, it;s building scudo_malloc now 2024-10-08 10:59:12 the server has many cores, but each core is not that fast 2024-10-08 11:00:02 cely: thats Apples M3 cpu in a VM 2024-10-08 11:00:26 Ok 2024-10-08 11:00:43 cool that is is so fast 2024-10-08 11:01:08 i have seen M1 do crazy fast things under certain workloads 2024-10-08 11:01:15 i think it was some perl related tests 2024-10-08 11:01:20 Maybe that's why i've heard of people building GHC on riscv64 emulated on Apple's CPUs 2024-10-08 11:01:39 could be 2024-10-08 11:04:38 ncopa: oh, you got M3? 
2024-10-08 11:05:12 yup, for my k0s work 2024-10-08 11:05:39 please employ me there :D 2024-10-08 11:05:53 JK 2024-10-08 11:06:00 https://www.mirantis.com/careers/ 2024-10-08 11:18:59 cely: apple silicon is really fast but in my tests two years ago emulation of riscv64 on it is not so fast as I expected 2024-10-08 11:19:19 maybe I should try again 2024-10-08 11:19:55 emulating under linux, not macos I mean 2024-10-08 11:20:48 Ok 2024-10-09 17:33:06 the CI machines are busy 2024-10-09 17:33:16 yup 2024-10-09 17:33:36 https://imgur.com/a/HAoRMFK 2024-10-09 17:33:45 chromium, donet8, donet6, rebuild of stuff (32 packages) against py-numpy 2024-10-09 17:33:49 and my llvm19 work 2024-10-09 17:33:59 What can we do about it except sit and wait? 2024-10-09 17:34:38 The queue is decreasing already, used to be 61 pending 2024-10-09 17:49:29 yeah 2024-10-09 17:49:32 nothing we can do 2024-10-09 18:20:16 back to 40 jobs in the queue 2024-10-09 20:50:49 Fun, another gitlab critical CVE 2024-10-10 07:17:20 thank you for taking care of that ikke <3 2024-10-10 07:19:33 stupid idea for this problem: I am working on llvm19, which includes rebuild clang19 and lots other stuff. For some arches, the llvm19 passes, for some it fails. very time i try to push a fix for something, it will rebuild stuf that already works. is there something we can do to avoid needless rebuilds? 2024-10-10 07:20:21 we could do some reproducible builds magic 2024-10-10 07:21:43 ncopa: would be nice, but also sounds very tricky to get right 2024-10-10 07:21:52 maybe we can create a hash of the APKBUILD+dependency chain. 
after the build is done, we store the artifacts in a cache storage 2024-10-10 07:21:58 key/value storage 2024-10-10 07:22:13 where the hash of the APKBUILD+deps is the key 2024-10-10 07:22:32 and the artifact is stored for a limited time 2024-10-10 07:22:36 let's say 24 hours 2024-10-10 07:23:00 cache is per ci-host 2024-10-10 07:23:06 Artifacts per pipeline 2024-10-10 07:23:29 before the CI starts the build, it calculates the hash from APKBUILD+deps, it first tries to fetch from the key/value storage cache 2024-10-10 07:23:38 if it's missing it builds it all 2024-10-10 07:23:44 if it's there it's skipped 2024-10-10 07:23:49 Right 2024-10-10 07:24:01 But wouldn't it also need the actual packages? 2024-10-10 07:24:19 that is what i call the "artifacts" 2024-10-10 07:24:25 Ok 2024-10-10 07:24:29 you store the actual packages in the cache 2024-10-10 07:24:52 the tricky part is how to calculate the hash 2024-10-10 07:25:01 We have storage on our mirror infra we could use 2024-10-10 07:25:32 Maybe some object storage frontend 2024-10-10 07:25:40 yes 2024-10-10 07:26:07 that is just an idea at this point 2024-10-10 07:26:49 we need to be able to calculate the hash so that if anything changes that would result in a different binary, we get a different hash 2024-10-10 07:27:11 so it needs to include the dependency chain 2024-10-10 07:27:49 for example 2024-10-10 07:28:27 we could install the makedepends, trace the dependency chain and get the hash for each package in the dependency chain 2024-10-10 07:28:33 and hash the APKBUILD 2024-10-10 07:29:22 so that if nothing changes in the APKBUILD, nor anything else in the dependency chain, we get the exact same hash, and can fetch the packages from object storage 2024-10-10 08:32:36 Before you removed the other aports, leaving only llvm19, the artifacts weren't uploaded, as they exceeded the max size 2024-10-10 21:55:47 sounds vaguely like ccache with a shared cache? 2024-10-11 07:54:39 shared ccache sounds like a relatively easy way to solve this.
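The cache-key scheme discussed above (hash the APKBUILD together with the resolved dependency chain, then look the key up in a key/value store before building) could be sketched roughly like this; the helper and its inputs are hypothetical, and a real implementation would trace makedepends via apk rather than take a precomputed list:

```python
import hashlib

def cache_key(apkbuild_text, dep_chain):
    """Derive a cache key from the APKBUILD content plus the (name, version)
    pairs of every package in the dependency chain. If any input changes,
    the key changes, so stale artifacts are never reused."""
    h = hashlib.sha256()
    h.update(apkbuild_text.encode())
    # Sort so the key does not depend on dependency resolution order.
    for name, version in sorted(dep_chain):
        h.update("{}={}\n".format(name, version).encode())
    return h.hexdigest()

apkbuild = "pkgname=llvm19\npkgver=19.1.0"
key1 = cache_key(apkbuild, [("gcc", "13.2.1-r1"), ("cmake", "3.29.0-r0")])
key2 = cache_key(apkbuild, [("cmake", "3.29.0-r0"), ("gcc", "13.2.1-r1")])
key3 = cache_key(apkbuild, [("gcc", "13.2.1-r2"), ("cmake", "3.29.0-r0")])
assert key1 == key2  # ordering of deps doesn't affect the key
assert key1 != key3  # a dependency upgrade invalidates the cache
```

The CI job would then fetch the stored packages when the key is present and fall back to a full build (followed by an upload under that key) when it is not.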
we already support ccache in abuild, so I suppose we could just set up shared storage for sccache and plug that in 2024-10-11 07:55:06 should offload the CI machines on rebuilds 2024-10-11 15:48:01 seems like nld-bld-3 (aka nld8-dev1.alpinelinux.org) does not have dmvpn running? 2024-10-11 15:49:05 It's running, but not working 2024-10-11 15:51:23 ok. its weekend for me now. have a nice weekend! 2024-10-11 15:51:53 o/ 2024-10-11 16:25:32 gitlab sometimes cannot make up its mind about how many users there are 2024-10-14 10:36:07 im reconfiguring build-edge-loongarch64. it now gets ip via dhcp, and configured it to be .10 in dnsmasq 2024-10-14 10:37:09 i suppose we want the build-3-21-loongarch64 to run on same as the build-edge-loongarch64 lxc host? 2024-10-14 10:37:17 on che-bld-2 2024-10-14 10:39:12 We have only one host designated as builder, so I suppose so 2024-10-14 10:39:24 1 builder, 1 dev, 2 ci 2024-10-14 11:08:19 im bootstrapping aports on the 3.21 builders now 2024-10-14 21:21:09 huh, what's happening with pkgs.a.o 2024-10-14 21:21:13 it's constantly out of date 2024-10-14 21:21:32 today it's wasi-compiler-rt: https://pkgs.alpinelinux.org/package/edge/main/x86_64/wasi-compiler-rt 2024-10-14 21:21:43 18.1.8 on pkgs.a.o, in reality 19.x 2024-10-15 12:44:29 seems like ci runners are busy again 2024-10-15 12:51:38 32 pending jobs 2024-10-15 13:20:57 i can start up the 3.21 builders now, right? 2024-10-15 13:24:59 I think so 2024-10-15 13:25:06 Is there enough space? 2024-10-15 13:29:18 i hope so 2024-10-15 13:29:20 :) 2024-10-15 13:29:27 i can check 2024-10-15 13:29:41 for riscv64, yes. 700+G 2024-10-15 13:30:03 but it seems like the riscv64 builders does not mount /tmp as tmpfs 2024-10-15 13:30:18 maybe that would help with performance 2024-10-15 13:31:40 so we want mount /tmp via /etc/fstab in container or via lxc config? 
Usually lxc config 2024-10-15 13:47:49 But don't have a strong opinion 2024-10-15 14:10:46 we have a disk space problem for s390x 2024-10-15 14:11:11 on 50G free 2024-10-15 14:15:35 Deleting caches on the various builders should help a bit, but it becomes tighter and tighter 2024-10-15 15:12:50 we have 222G on x86_64 builder 2024-10-15 15:13:05 We can expand the lvs there 2024-10-15 15:13:09 lv* 2024-10-15 15:14:01 👍 2024-10-15 15:14:47 vg0 1 4 0 wz--n- <6.55t 5.45t 2024-10-15 15:15:49 on arm, we have 232.6G for 3 builders 2024-10-15 15:16:27 I can clean it up a bit 2024-10-15 15:32:12 s390x is the most problematic machine at the moment 2024-10-15 15:33:23 330G on arm after clean 2024-10-15 15:33:25 that's also alarming 2024-10-15 15:33:30 110G per builder is challenging 2024-10-16 04:43:52 ikke: whenever you have a moment, could you maybe check the 3.21 ppc64le builder? it's been on main/py3-pexpect for hours, it might be stuck. thanks! 2024-10-16 05:04:02 yeah, it was deadlocked 2024-10-16 05:04:49 thanks 2024-10-16 09:58:45 something wrong with the gitlab currently? I'm seeing: ERROR: Preparation failed: adding cache volume: set volume permissions: create permission container for volume "runner-xrw9mwbw1-project-5-concurrent-0-cache-3c3f060a0374fc8bc39395164f415a70": Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? (linux_set.go:95:120s) 2024-10-16 10:14:28 fabled: that happens when the CI host is very busy 2024-10-16 10:58:11 ncopa: fyi, the loongarch64 3.21 builder is temporarily stopped. It's constantly building and failing libssh2 anyway, but trying to determine what the difference is 2024-10-16 11:03:22 ok 2024-10-16 11:06:22 maybe try this? https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/73580 2024-10-16 12:48:05 the CI runners are busy again 2024-10-16 18:46:47 mps: FYI.
kernel headers update breaks busybox build: https://lists.busybox.net/pipermail/busybox/2024-October/090967.html 2024-10-16 18:50:37 omg, who removed CBQ from kernel! (I see Jamal removed but ...). Long ago I based my "TC traffic control - Linux QoS" on it 2024-10-16 18:51:08 more than 20 years ago 2024-10-16 18:51:57 ncopa: TBH this is the first time I see that something can't be built with new kernel-headers 2024-10-16 18:52:29 and I think it is a clear break of the kernel's promise of a stable API 2024-10-16 18:52:52 so we can no longer rely on this promise 2024-10-16 18:53:59 for my 'guide' ref https://arvanta.net/mps/linux-tc.html 2024-10-16 18:54:31 ncopa: now I see you were right 2024-10-16 18:58:12 though this is already solved for iproute2 2024-10-16 18:58:42 busybox should also 2024-10-16 19:03:05 fix for fedora is here https://bugs.busybox.net/attachment.cgi?id=9760&action=edit 2024-10-16 19:03:19 maybe alpine should pick this patch 2024-10-16 19:03:29 ncopa: ^ 2024-10-16 19:08:05 anyway I will upgrade linux-tools !265962 2024-10-16 19:08:26 uf, again, !73606 2024-10-16 19:08:42 if no one has objections ofc 2024-10-16 23:39:05 ikke: main/zd1211-firmware failed to fetch from the sourceforge source url on the x86_64 builder, but was fine on aarch64 builder. just mentioning it in case it comes up again 2024-10-16 23:39:38 x86_64 builder: https://build.alpinelinux.org/buildlogs/build-3-21-x86_64/main/zd1211-firmware/zd1211-firmware-1.5-r2.log 2024-10-16 23:40:06 aarch64: https://build.alpinelinux.org/buildlogs/build-3-21-aarch64/main/zd1211-firmware/zd1211-firmware-1.5-r2.log 2024-10-16 23:40:40 was able to fetch when testing locally as well, so not sure what happened there 2024-10-17 09:14:00 Something wrong with both builders and CI.
Builders (edge) can't upload packages to main, and all CI jobs are failing before they even start building anything 2024-10-17 09:36:45 I'll check in a bit 2024-10-17 10:45:18 ohg 2024-10-17 10:46:48 PureTryOut: 62a7de08570e3db2121c0a35e67f51ab3d81a11f 2024-10-17 11:28:09 What about it? 2024-10-17 11:32:15 Take a good look 2024-10-17 11:32:50 Oh wow haha 2024-10-17 11:33:00 (I already fixed it) 2024-10-17 11:33:10 thanks haha, sorry for that 2024-10-17 11:34:23 commit title wrong as well, derp 2024-10-17 11:35:01 yup :) 2024-10-17 11:35:46 It happens 2024-10-17 11:49:22 CI still doesn't work at all though :( 2024-10-17 11:50:34 did you rebase your branches? 2024-10-17 11:51:41 oh, ha, ofc 2024-10-17 16:11:46 what do we need to make loongarch64 CI not allowed to fail anymore? 2024-10-17 16:12:13 https://tpaste.us/bQaM 2024-10-17 16:14:09 should I push that? 2024-10-17 16:15:39 done 2024-10-17 16:16:39 ha 2024-10-17 16:16:47 thanks! 2024-10-17 16:17:02 you were faster than me: https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/73690 2024-10-18 05:03:28 py3-pexpect may be deadlocked on the 3.21 riscv64 builder, has been building for over 2.5h now 2024-10-18 05:46:36 Seems like it, yes 2024-10-18 05:50:47 in the case of ppc64le it did eventually pass, just mentioning in case you want to skip the test to avoid another deadlock elsewhere 2024-10-18 05:51:44 (pass on retry sometime later, after you terminated the process) 2024-10-18 05:52:02 not sure if it will manifest on more arches 2024-10-18 05:53:10 it already passed on x86_64, so maybe it is okay until next upgrade 2024-10-18 08:21:35 Gitlab seems to be down 2024-10-18 08:36:58 some sort of issues... seems to be somewhat working now? got 500 for various pages earlier 2024-10-18 09:55:21 yeah, gitlab has issues 2024-10-18 10:20:25 seems to be working now? 2024-10-18 10:21:40 ok 2024-10-18 10:21:56 load has been quite high 2024-10-18 10:22:07 how many loongarch servers did they send us? 2 or 3?
2024-10-18 10:22:18 im talking with docker people now 2024-10-18 10:22:18 4 2024-10-18 10:22:26 oh.. 2024-10-18 10:32:56 i have re-organized the ip addresses of the loongarch machines a bit 2024-10-18 10:33:03 should probably document it in netbox 2024-10-18 10:33:41 We're a bit behind in documenting this setup 2024-10-18 10:35:35 I found a file on che-ctr-1 with the ip addresses 2024-10-18 10:35:40 i have updated it 2024-10-18 10:35:59 for dnsmasq? 2024-10-18 10:37:37 both. there is a /root/ipaddresses.txt 2024-10-18 10:37:43 oh ok 2024-10-18 10:37:51 and I also updated /etc/dnsmasq.d/lan.conf 2024-10-18 10:38:55 i havent updated the address for celie yet in dnsmasq. i think we should move the dev box IPs to .100 and upwards 2024-10-18 19:31:07 are the loongarch64 builders currently undergoing maintenance? 2024-10-18 19:31:48 seems to waiting on "pulling git" for some time 2024-10-18 23:11:24 the x86 package builder is not yet unstuck? 2024-10-19 14:28:53 3.21 s390x builder appears to be stuck 2024-10-19 14:29:07 checking 2024-10-19 14:29:13 thanks 2024-10-19 14:33:05 perl-server-starter apparently 2024-10-19 14:34:19 yes 2024-10-19 18:05:09 Gitlab is 404'ing 🤔 2024-10-19 18:07:53 PureTryOut: maintenance 2024-10-19 18:08:00 Ah ok 2024-10-19 18:08:07 posted in #-devel 2024-10-19 18:10:38 It's booting again 2024-10-19 18:12:29 ack 2024-10-20 05:10:04 3.21 s390x builder might be stuck (strongswan) 2024-10-21 18:50:40 ACTION taking popcorn :) !73898 2024-10-22 19:43:08 https://www.phoronix.com/news/Russian-Linux-Maintainers-Drop 2024-10-22 19:43:26 I hope alpine will not follow this nonsense 2024-10-22 20:08:21 FYI i have updated usa2-dev1 (s390x) but not rebooted it 2024-10-22 20:08:30 ok, thanks 2024-10-22 20:09:00 how many s390x machines do we have? 2024-10-22 20:09:09 for s390x we should have 2 2024-10-22 20:09:29 do you know where the other is? 
2024-10-22 20:09:37 ip addr 2024-10-22 20:09:59 148.100.88.62/24 2024-10-22 20:11:01 https://netbox.alpin.pw/dcim/devices/?device_type_id=6 2024-10-22 20:11:22 gitlab runner 2024-10-22 20:11:30 yup 2024-10-22 20:11:40 maybe we should move the lxc dev contaienrs to there if we run out of space 2024-10-22 20:12:02 160G 2024-10-22 20:12:32 51.8G on the other 2024-10-22 20:13:16 Means we really need to do something before 3.22 2024-10-22 20:14:53 drop s390x :) 2024-10-24 06:18:13 It seems https://gitlab.alpinelinux.org/bratkartoffel/aports/-/jobs/1572983 has gotten stuck, probably some process started during the build is still running despite the build being terminated 2024-10-24 06:35:21 https://gitlab.alpinelinux.org/ayakael/aports/-/jobs/1572696 also looks to be going around in a loop, seems it constantly fails to do a background save 2024-10-25 14:15:07 Can someone please terminate the build of webkit2gtk on the Loongarch 3.21 builder? 2024-10-25 14:16:09 I don't think it will succeed, it hangs in CI for me, and i've pushed a commit that does complete for me in CI 2024-10-25 14:24:13 I think it's been building for more than 5.5 hours now, so it's probably hung, probably while compiling llint/LowLevelInterpreter.cpp or linking libjavascriptcoregtk 2024-10-25 14:54:25 Will do in a bit 2024-10-25 15:02:04 Thanks 2024-10-25 15:10:13 done 2024-10-25 16:09:22 cely: is https://gitlab.alpinelinux.org/Celeste/aports/-/pipelines/268171 still relevant? rv64 job runs >9h 2024-10-25 16:11:18 I'll cancel it 2024-10-25 16:12:22 Done, i was hoping to get a result out of that, but i guess it will succeed 2024-10-25 16:13:06 We'll know when the 3.21 builder reaches that, assuming webkit2gtk isn't upgraded before that 2024-10-25 20:24:05 looks like the ci runners except rv64 are waiting on repos to become available again 2024-10-25 20:25:24 Should be restored soon 2024-10-25 20:27:30 okay, thanks! 2024-10-25 20:29:30 oh the mail server at gitlab.alpinelinux.org is down? 
2024-10-25 20:29:49 connect to 2024-10-25 20:29:49 gitlab.alpinelinux.org[2a01:7e01:e001:15a:1::2]:25: Connection refused 2024-10-25 20:30:01 But we didn’t find the actual issue yet 2024-10-25 20:30:12 so it could happen again 2024-10-25 21:26:18 fossdd[m]: I see, the IP has changed (and now set statically). I need to update DNS 2024-10-25 21:29:18 ah 2024-10-26 12:00:30 Sorry to bother you all on the weekend. DNS query over quic(quic://dns.adguard.com) always timeout on s390x CI: https://gitlab.alpinelinux.org/lindsay/aports/-/jobs/1575850 . 2024-10-26 12:00:32 The test passes successfully in qemu-user chroot on my machine, so I'm not sure if it's a network issue with the CI runner. 2024-10-26 12:44:05 is https://build.alpinelinux.org/ broken or are edge builders not building 2024-10-26 13:00:52 omni: They have not been enabled yet 2024-10-26 13:01:27 ah 2024-10-26 13:07:28 any plan on when? 2024-10-26 13:11:41 I'm about to start them again 2024-10-26 13:11:48 !74149 2024-10-26 13:12:46 oh 2024-10-26 13:18:57 omni: I already upgraded to 0.69.5 rust-bindgen and tested it locally with building linux-asahi kernel. works fine 2024-10-26 13:19:09 is test so important? 2024-10-26 13:25:55 don't we want to always run tests when available? 
and loongarch64 is failing on building the test suite, before running tests, with 0.69.5 2024-10-26 13:28:09 looks like the older linux-raw-sys-crate 2024-10-26 13:28:16 a lot of pkgs don't run tests for different reasons 2024-10-26 13:31:20 mostly when not available or when broken 2024-10-26 13:31:51 this also falls in 'different reasons' 2024-10-26 13:31:55 and too often for not very good reasons, like "one test is failing on this architecture, disable all tests for all architectures" 2024-10-26 13:32:32 and then forget about it 2024-10-26 13:33:04 or wait for bug report 2024-10-26 13:33:48 I imagine that running as many tests as possible enables us to catch issues before they hit users (out of which a small subset may report them) 2024-10-26 13:34:18 I'm not against this 2024-10-26 13:34:42 but in some pkgs test are buggy 2024-10-26 13:35:23 tests* 2024-10-26 13:37:10 btw, if someone wants to take maintainership of rust-bindgen I would be grateful. I'm very bad with rust and in near future don't plan to improve my knowledge about it 2024-10-26 13:37:44 and all 'my' pkgs where rust is used 2024-10-26 13:38:54 and linux-edge because it will need rust soon 2024-10-26 13:44:48 ok 2024-10-26 13:46:21 ACTION don't want to waste time on BigCo programming lang 2024-10-26 13:54:37 edge builders are running again 2024-10-26 14:00:54 \o/ 2024-10-26 14:13:16 thanks ikke 2024-10-28 21:57:53 some issues with gitlab? 2024-10-28 21:58:02 keeps timing out a request to open a new MR 2024-10-28 21:58:52 ptrc: via the webif?
2024-10-28 21:58:57 yeah 2024-10-28 21:59:06 https://ptrc.gay/KvOEcugk 2024-10-28 21:59:31 Has been reported to happen when your fork is a bit out of date 2024-10-28 21:59:41 not sure if that's the case 2024-10-28 22:00:07 might be like 100 commits behind 2024-10-28 22:00:39 `git fetch` is also incredibly slow as well over ssh 2024-10-28 22:01:15 actually both are 2024-10-28 22:01:52 load is a bit highish 2024-10-30 01:24:18 Just mentioning it here for the record, i probably should've ordered my commits differently, but i made sure to push libxml2 with --legacy and the libxslt upgrade togther 2024-10-30 01:26:05 So, while the libxslt upgrade looks like it came before libxml2 --legacy, they were pushed together, and if libxslt got rebuilt against libxml2 without --legacy, that probably means abuild could've gotten the dependency order resolution wrong 2024-10-30 01:29:03 but lesson learnt, next time i'll try to make sure libxslt comes after libxml2 in `git log` order so there is no confusion about this 2024-10-30 01:29:41 Anyway, i think we should consider putting a notice in libxml2 that libxslt needs to be upgraded together with it 2024-10-30 01:32:15 In this round of upgrades, there were separate MRs for them, but since they are so closely connected, i think they should always go into 1 MR (and if libxslt has no upgrade, then perhaps it needs to be pkgrel bumped with the libxml2 upgrade instead) 2024-10-30 07:41:57 Hmm 2024-10-31 11:26:36 git.a.o 502 2024-10-31 11:26:45 yeah, will look at it in a bit 2024-10-31 11:26:47 probably due to load 2024-10-31 11:27:00 right now for me it loads fine 2024-10-31 11:32:53 it is back! thanks
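On the libxml2/libxslt ordering concern above: what matters to the builder is dependency order, not `git log` order, since a package should be rebuilt after everything it makedepends on. A toy illustration of that kind of resolution (a topological sort over hypothetical makedepends data, not abuild's actual implementation):

```python
def build_order(makedepends):
    """Topologically sort aports so every package is built after its
    in-tree makedepends. Raises ValueError on a dependency cycle."""
    order, done, visiting = [], set(), set()

    def visit(pkg):
        if pkg in done:
            return
        if pkg in visiting:
            raise ValueError("dependency cycle at {}".format(pkg))
        visiting.add(pkg)
        for dep in makedepends.get(pkg, ()):
            if dep in makedepends:  # only order against in-tree aports
                visit(dep)
        visiting.discard(pkg)
        done.add(pkg)
        order.append(pkg)

    for pkg in sorted(makedepends):
        visit(pkg)
    return order

# libxslt makedepends on libxml2, so it is built after it regardless
# of the order the commits appear in the push.
order = build_order({"libxml2": set(), "libxslt": {"libxml2"}})
assert order.index("libxml2") < order.index("libxslt")
```

If that ordering ever went wrong in practice, keeping the two upgrades in one MR (or pkgrel-bumping libxslt alongside the libxml2 upgrade, as suggested above) remains the safer convention.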