2026-04-01 05:28:29 ^ Issue with the host, linode is working on it 2026-04-02 16:25:36 what's a bad tree object and why is it fatal? 2026-04-02 16:28:57 Is that a trick question? 2026-04-02 16:30:04 https://gitlab.alpinelinux.org/alpine/aports/-/jobs/2285722 2026-04-02 16:31:22 I guess I can just restart it, but I'm not in a rush and am waiting for another riscv64 CI job to succeed (hopefully), and that architecture is a bit limited in resources, not just per job, right? 2026-04-02 16:32:57 It's an error returned by git 2026-04-02 16:33:09 From the looks of it, it's a transient error 2026-04-07 11:02:33 What keeps github.com/alpinelinux/aports up to date? 2026-04-07 11:06:07 gitlab push synchronization 2026-04-07 11:06:09 built-in feature 2026-04-07 11:06:51 (repo mirroring) 2026-04-07 11:10:49 Since codeberg.org/alpinelinux/aports has become more official, it would seem good to sync that as well. 2026-04-07 11:14:47 I think we should be able to set it up, but it will require a user that can push to that repo. When we set it up, gitlab generates an ssh key pair 2026-04-07 11:18:56 I think the ssh key pair can be added to the repository as a "deploy key", which avoids needing a dedicated user. 2026-04-07 11:19:17 I think achill has access to that 2026-04-07 11:20:19 yes, I can look at it in the evening if you have time 2026-04-07 11:20:33 Later in the evening 2026-04-08 22:33:40 Is build-edge-s390x stuck? 2026-04-09 05:20:35 Sertonix[m]: I've kicked it 2026-04-09 14:38:07 got banned by go-away Request Id a70fdc74fce43abcbd8f3defc61a4053 2026-04-09 14:41:06 oh, it's back now 2026-04-09 14:41:46 yeah, it was just due to gitlab restarting 2026-04-10 11:36:59 I'm tempted to merge !98913 to see if tests pass on build-3-23-x86* 2026-04-10 13:47:30 ikke: would it be possible to trigger a manual pipeline run on https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab-runner-helper/?
I'm using this image for the loongarch64 runner in pmOS and the version mismatch between gitlab-runner (18.10) and the gitlab-runner-helper image (18.9) on edge causes CI failures because the CLI flags changed :/ 2026-04-10 13:48:17 afaict this would fix itself in two days when the pipeline runs automatically, but until then it would keep failing 2026-04-10 13:49:23 Yeah, I can trigger it 2026-04-10 13:50:53 thanks! 2026-04-10 13:52:27 It's running 2026-04-10 14:15:32 went fine 2026-04-10 16:33:30 Cleaned up the builder 2026-04-11 15:58:28 Is it possible to check whether build-3-23-x86, when compiling firefox-esr, is low on free memory, or whether the issue is the address space in a single process? 2026-04-11 15:59:49 https://tpaste.us/voBO 2026-04-11 16:01:19 The lowest the available memory reached was ~200G 2026-04-12 13:47:11 build-edge-s390x does look stuck 2026-04-12 14:06:10 Kicked it 2026-04-13 08:45:17 i think the loongarch64 CI may have problems. apk-tools tests fail, libuv tests fail. they pass on other architectures 2026-04-13 08:45:36 and I was not able to reproduce the libuv failure on my local loongarch64 2026-04-13 09:16:13 What kind of failures? 2026-04-13 09:24:24 fs_something test 2026-04-13 09:24:28 but it passes now 2026-04-13 10:04:00 upstream dovecot modified the dovecot-2.4.3.tar.gz tarball 2026-04-13 10:04:16 is it enough to delete it from distfiles.a.o? 2026-04-13 10:04:45 we usually update the filename when changing the checksum 2026-04-13 10:07:10 they seem to have updated the manpages between their old 2.4.3 and the new 2.4.3 2026-04-13 10:07:25 yes, i think that is what they said 2026-04-13 10:07:47 i wonder if we could do some sort of block storage cache for distfiles 2026-04-13 10:08:01 where we store files by hash 2026-04-13 10:15:36 so that they would not have simple filenames?
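A minimal sketch of the by-hash idea floated above (a hypothetical layout, not existing abuild or distfiles.a.o behaviour: store each distfile under its sha512 and keep the original filename as a symlink, so cleanup by name stays possible and a re-rolled upstream tarball with the same name simply gets a new hash next to the old one):

```shell
#!/bin/sh -e
# Hypothetical content-addressed distfiles store; all paths are made up.
store=by-hash
mkdir -p "$store"

# put <file>: move the file into the store under its sha512,
# leaving a symlink with the original name behind.
put() {
    hash=$(sha512sum "$1" | cut -d' ' -f1)
    mv "$1" "$store/$hash"
    ln -s "$store/$hash" "$1"
}

printf 'dummy tarball contents\n' > dovecot-2.4.3.tar.gz
put dovecot-2.4.3.tar.gz

# The named symlink now points into by-hash/; a second upstream tarball
# with the same name but different contents would coexist under a new hash.
readlink dovecot-2.4.3.tar.gz
```

TTL-based cleanup (the "2-3 years" idea) could then walk `by-hash/` by mtime, while the symlinks keep the name-to-hash mapping visible.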
2026-04-13 10:20:16 ncopa: I usually keep the old filename as reference (rename it) rather than removing it 2026-04-13 10:22:39 old hash: 0f9925f, new hash: 2b97ab9 2026-04-13 10:22:45 from the differences in the tarballs 2026-04-13 10:26:29 !100689 2026-04-13 10:27:45 it can be interesting to be able to compare 2026-04-13 10:28:58 omni: I have both old and new tarballs locally and diffoscope output 2026-04-13 10:34:42 same result, but I wonder why it is so slow 2026-04-13 10:35:58 diffoscope? 2026-04-13 10:36:37 yes 2026-04-13 10:38:01 compared to `tar xf dovecot-2.4.3-distfiles.tar.gz && mv dovecot-2.4.3 dovecot-2.4.3-distfiles && tar xf dovecot-2.4.3.tar.gz && diff -rup dovecot-2.4.3-distfiles dovecot-2.4.3` 2026-04-13 10:39:11 Not sure 2026-04-13 13:48:05 ncopa: I have already been considering redesigning the abuild distcache to use hashes 2026-04-13 13:48:38 So far it isn't more than an idea 2026-04-13 14:09:32 It should at least still include the original filenames to make cleaning up easier (unless you also design a system to take care of that) 2026-04-13 14:18:47 was thinking of a TTL of 2-3 years or something 2026-04-13 14:19:16 ikke: That is basically the reason I didn't do it already. 2026-04-13 14:19:33 For edge, the ttl should be lower 2026-04-13 15:28:00 The hashes are in the build files. There would not be a problem in determining the referenced hashes from an aports checkout. 2026-04-13 18:00:48 I opened a draft but don't know when I will have the time to do it properly: https://gitlab.alpinelinux.org/alpine/abuild/-/merge_requests/494 2026-04-14 10:21:26 build-edge-riscv64 is finally done with community. now it's testing: 1 / 968 2026-04-14 10:22:04 ikke: i think I will add my p550 machine as a ci builder, to offload the build-edge-* and upcoming build-3-24-* 2026-04-14 10:22:42 Ok, so then we would stop the ci runner there, you mean?
2026-04-14 10:22:46 i have a startpro64 here on my desk as well which is supposed to be a CI builder eventually, but I have just not had time to find a working kernel 2026-04-14 10:23:11 ikke: not necessarily, but it would schedule fewer jobs there, right? 2026-04-14 10:23:38 we could stop it there too if you think that would help 2026-04-14 10:23:45 Both approaches work 2026-04-14 10:39:55 im changing the RUNNER_LIMIT to 1 on the scaleway riscv64 CI runners 2026-04-14 10:39:58 they are weak 2026-04-14 10:40:01 ok 2026-04-14 10:57:22 runner is up https://gitlab.alpinelinux.org/admin/runners/265 2026-04-14 11:20:52 ack 2026-04-14 11:40:46 would it be possible to fetch sources before installing dependencies? 2026-04-14 11:41:19 thinking it might shave some time and perhaps free a little bit of resources from CI jobs 2026-04-14 11:41:36 if issues with fetching sources are caught early 2026-04-14 11:42:12 due to missing/broken files/checksums or upstream issues 2026-04-14 11:43:08 could make a difference for aports with a large number of dependencies 2026-04-14 12:07:15 oh, uhm, I just got logged out of gitlab? 2026-04-14 12:34:15 Not sure if it's the case, but there could be packages that need the dependencies to fetch certain sources (custom fetch function, for example) 2026-04-14 12:34:36 Even outside of aports, it would be a breaking change then 2026-04-14 12:39:25 wouldn't custom fetch functions be separate from $source and $sha512sums?
2026-04-14 12:40:40 I'm mainly thinking of our CI here, and allowing for faster iterations when working on your MR 2026-04-14 12:41:33 Yes, but moving fetch before deps makes it impossible to use dependencies for fetch 2026-04-14 12:43:42 I don't follow, but maybe I just don't understand how sources from $source are fetched, haven't looked 2026-04-14 12:45:16 Add tool-x to depends; Write custom fetch function that uses tool-x 2026-04-14 12:45:29 Fetch fails because tool-x is not installed yet 2026-04-14 12:45:54 Regardless of what is in $source 2026-04-14 12:47:33 ok, I think I've just never thought about adding a fetch() 2026-04-14 12:48:12 It's not common, but it's supported 2026-04-14 12:51:20 ok, it's not that important to me, it was just an idea 2026-04-14 13:44:01 I have been considering making overwriting of fetch() unsupported to allow fetching in more restricted sandboxes. 2026-04-14 13:50:18 wouldn't it be "nice" if fetch() was the only function allowed to make network connections unless options="net" 2026-04-14 13:50:55 and perhaps a net_prepare and a net_check to toggle those, and build() would never be allowed to make network connections 2026-04-14 14:00:36 yeah that would be nice 2026-04-14 16:34:19 I would hope we can eventually have networking and the rest in separate sandboxes, with everything that is transferred between them being fixed by checksums. Maybe that means we can't avoid something like fetchdepends. 2026-04-14 16:35:57 (And tests that need networking would require creating a snapshot of the sandbox. Running tests in one copy and creating .apk files in the other) 2026-04-14 16:37:19 Unfortunately it seems difficult to do that without producing too much overhead 2026-04-14 18:28:13 what about running package(), and freezing the output, before running check()?
2026-04-14 18:30:01 if we're worried about anything in check() infecting what's produced in build() 2026-04-14 18:31:20 but running tests in a snapshot of the sandbox and then discarding that, sure 2026-04-14 19:29:48 I have also thought about sandboxing the builds. been thinking of using crun and OCI images instead of bubblewrap. 2026-04-14 19:29:55 too many ideas. too little time 2026-04-14 19:39:51 but.. bubblewrap is a sandboxing tool whereas crun is an OCI runtime..? 2026-04-14 20:05:12 running tests in a separate env is certainly a very good idea 2026-04-14 20:05:48 ncopa: maybe we should open an issue for that. afaik some people at postmarketos would also rather use some other unprivileged sandboxing tool 2026-04-14 20:06:41 i don't really see the benefits/downsides of the different options, but it looks like some do 2026-04-14 20:11:52 with OCI, you could possibly throw things at k8s and not care 2026-04-14 20:21:57 My research regarding sandboxing has been that there certainly isn't a single one that fits all use cases well. 2026-04-14 20:24:46 sydbox-oci is interesting, but doesn't build on all our architectures 2026-04-15 05:04:54 drats. build-3-2[0-2]-riscv64 are down 2026-04-15 05:05:06 I hope they haven't been down for too long 2026-04-15 05:06:46 clandmeter: can you help power cycle the pioneer box?
nld-bld-1 i think 2026-04-15 05:06:59 problem is that I have tagged releases 2026-04-15 05:07:04 i am tagging releases 2026-04-15 05:09:25 i think we may need to drop support for riscv64 for 3.22 and older 2026-04-15 05:10:11 i think the builder has been down since 27 March 2026-04-15 05:10:25 and we need to tag releases today 2026-04-15 05:57:17 ncopa: I still had an ssh session, but it froze when I executed a command 2026-04-15 05:57:27 serial console is not working either 2026-04-15 08:18:55 i wonder if we could connect something to the MCU UART 2026-04-15 08:22:48 i think we can power cycle it via the MCU UART 2026-04-15 08:25:41 clandmeter: I think we should get a couple of those: https://ftdichip.com/products/ttl-232r-rpi/ and plug them into the MCU UART so we can remotely power cycle the machines 2026-04-15 08:37:09 im rebooting nld-bld-2, it is very slow for some reason 2026-04-15 10:35:28 ncopa: you can already do that 2026-04-15 10:35:40 i can share the account of the powerplug 2026-04-15 11:58:10 this is stupid: Open 999+ Merged 999+ Closed 999+ All 999+ 2026-04-15 11:58:26 I am actually interested in the real numbers 2026-04-15 12:17:11 No, I have graphs in Zabbix that keep track of it as well 2026-04-15 12:17:19 oh, sorry, misread 2026-04-15 12:22:35 I'm just complaining about the dumbing down of the interface 2026-04-15 12:22:40 it's not your fault 2026-04-15 12:25:36 bah, i think the bld-nld-1 died again 2026-04-15 12:26:22 does not look like we will be able to do any new 3.22 and older releases for riscv64 2026-04-15 12:26:33 oh, it is back 2026-04-15 13:59:37 is the CI s390x builder busy or unavailable? 2026-04-15 14:02:03 Busy, 11 pending jobs 2026-04-15 14:03:23 k, thanks 2026-04-15 16:00:59 build-3-20-riscv64 is now running from my hifive premier p550 2026-04-15 16:01:11 same as the CI 2026-04-15 16:01:26 will see how much slower it is than pioneer 2026-04-15 16:02:05 but hopefully it is more reliable 2026-04-16 06:27:50 interesting finding.
building rust was faster on the hifive premier p550 (3h 56 min) than the milk-v pioneer (4h 19 min) 2026-04-16 06:57:21 rust build is single threaded? 2026-04-16 07:07:14 it doesn't scale as well with threads, yes 2026-04-16 07:07:34 the final parts of the build for cargo and rust are mostly single threaded 2026-04-16 07:19:40 that would explain it 2026-04-16 07:22:19 i think that applies to many builds 2026-04-16 13:56:14 the alpine-netboot-3.23.4-s390x.tar.gz appears to be broken 2026-04-16 19:02:45 just an observation, lxc-clone of build-3-23-loongarch64 to 3-24 takes an unexpectedly long time 2026-04-16 19:03:12 not sure what's going on, but I suspect disk io is insanely slow 2026-04-16 19:03:18 same with ppc64le 2026-04-16 19:03:57 The loongarch hosts should have nvme disks, right? Would not expect IO to be slow 2026-04-16 19:04:24 but it is 2026-04-16 19:04:32 it has taken 15 mins? 30 mins? 2026-04-16 19:04:37 and still not done 2026-04-16 19:05:13 i have been able to do the entire operation for the other architectures and started the bootstrap 2026-04-16 19:05:29 and loongarch64 and ppc64le are still copying data 2026-04-16 19:05:56 could also be it is slow due to the current load 2026-04-16 19:17:01 it had the distfiles of all history 2026-04-16 19:17:50 the ppc64le 2026-04-16 19:58:22 it helped to delete 250G of data from distfiles/ 2026-04-16 19:59:54 I can imagine 2026-04-16 20:28:53 where can I find the msg.a.o mosquitto config? 2026-04-16 20:29:13 I intend to work a bit on the messaging, so build.a.o shows 'offline' 2026-04-16 20:29:19 instead of idle 2026-04-16 20:31:19 i found it 2026-04-17 08:27:40 I'd like to upgrade alpine-msg to alpine 3.23. Is it ok that I do that? should I update docs somewhere? netbox? 2026-04-17 08:28:48 Fine with me.
If we track that container in netbox, you can update the `platform` field to alpine 3.23 2026-04-17 08:30:06 will do 2026-04-17 08:30:29 i wonder how the mqtt-exec clients will react when the server disappears and comes back 2026-04-17 08:30:54 I'm planning to work on fixing the build server status, so it says offline when it is offline 2026-04-17 08:33:34 I think the mqtt-exec service will fail and be restarted 2026-04-17 08:34:18 As long as it's not offline for too long, it should work fine (supervisord will give up if restarting too many times) 2026-04-17 08:34:54 ncopa: How do you intend to mark a builder as offline? It's something I've been thinking of adding as well, but it can be tricky 2026-04-17 08:51:54 stop_post() { ... } 2026-04-17 08:52:19 stop_post() { su -s /bin/sh -c "mosquitto_pub -h $mqtt_broker -t $will_topic -r -m $will_payload" $exec_user; } 2026-04-17 08:52:32 will_payload="offline" 2026-04-17 08:53:25 Ok, so only if explicitly stopped 2026-04-17 08:53:36 Ack 2026-04-17 08:56:36 will_payload will set the state when the client unexpectedly loses its connection to the broker 2026-04-17 09:27:12 I also wanted to tackle builders that are stuck for quite some time, but that's a different problem to solve 2026-04-17 10:02:15 i have been thinking of something that allows us to watch logs in realtime 2026-04-17 10:02:21 or semi-realtime 2026-04-17 10:06:38 assuming the build logs are streamed to files, seems doable with something like vector (or fluentbit) watching them and sending them somewhere else. usually for me that somewhere is victorialogs 2026-04-17 10:07:21 i've built a lot of observability stacks lately :p 2026-04-17 10:07:30 or pipelines, rather 2026-04-17 10:21:35 i was thinking of logging to a file and having a parallel job doing rsync --archive --partial --inplace something 2026-04-17 10:27:05 seems a bit strange to me to create your own solution for watching/reading/sending log files when purpose-built ones exist 2026-04-17 10:28:13 but...
not my bikeshed to paint, so anyways :) 2026-04-17 10:28:53 i'd like to keep the dependency chain minimal because we occasionally bootstrap new architectures 2026-04-17 10:29:28 right now the build infra uses MQTT (mosquitto) to track the state of the builders and to pass messages 2026-04-17 10:29:47 i am open to using off-the-shelf software to solve this 2026-04-17 10:31:45 let me rephrase it: I'd love to discuss how to modernize the build infra so we can reliably track builder state, monitor the builds, and prepare for a future redesign of the build infra 2026-04-17 10:31:57 ~ # rc-update 2026-04-17 10:31:57 Error relocating /sbin/rc-update: rc_set_user: symbol not found 2026-04-17 10:33:07 fair enough 2026-04-17 10:33:20 ncopa: pmos is suggesting buildbot 2026-04-17 10:33:59 i wonder what happened with rc-update after the upgrade to 3.23 2026-04-17 10:37:10 ~ # apk info --who-owns /lib/librc.so.1 2026-04-17 10:37:10 ERROR: /lib/librc.so.1: Could not find owner package 2026-04-17 10:38:05 I wonder if this is a bug in apk-tools 2026-04-17 10:38:27 it does not appear to have deleted the files in /lib when upgrading 2026-04-17 10:42:29 find /usr /lib -type f | xargs apk info --who-owns 2026-04-17 10:42:57 any idea how to delete all the files that have no owner 2026-04-17 10:44:45 ERROR: /lib/apk/db/scripts.tar: Could not find owner package 2026-04-17 10:44:50 I suppose you don't want that to be removed 2026-04-17 10:44:53 yeah 2026-04-17 10:45:33 unless I keep /etc/apk/world, nuke it all, and reinstall 2026-04-17 10:54:40 ok, it's solved 2026-04-17 10:54:56 mosquitto is running 2026-04-17 10:55:03 but the builders have not re-attached to it 2026-04-17 11:21:30 it does not look like the builders reconnect to msg.a.o 2026-04-17 11:27:59 it looks like the builders are actually picking up the builds 2026-04-17 11:28:10 but build.alpinelinux.org does not pick up the status 2026-04-17 11:29:53 We need to add it to build-server-status 2026-04-17 11:30:52
https://gitlab.alpinelinux.org/alpine/infra/build-server-status/-/blob/master/backend/mqtt.go?ref_type=heads#L60 2026-04-17 13:55:37 ncopa: Do you want me to make that change? 2026-04-17 13:59:05 Do we want it to behave the same as idle? (resetting counters and activity?) 2026-04-17 14:15:15 ncopa: what builders did you add the will topic to? 2026-04-17 14:16:56 or payload 2026-04-17 16:06:40 could the libyuv tarball be copied from edge distfiles to 3.24 distfiles? 2026-04-17 16:07:09 it is fetched from googlesource.com and will therefore never have the same checksum 2026-04-17 16:07:23 ugh 2026-04-17 16:10:45 copied 2026-04-17 16:12:11 ikke: currently it is will_topic="build/$(hostname)" 2026-04-17 16:12:19 will_retain=yes 2026-04-17 16:12:46 I'm thinking we should change it to a status topic: build/$(hostname)/status 2026-04-17 16:13:19 for now I don't think we need any changes to the backend/mqtt.go 2026-04-17 16:13:36 I have a PoC that works with the current setup 2026-04-17 16:16:05 I was also thinking of doing something like catting the contents of the libyuv archive, piping it through sha512sum, then fetching the source tarball in prepare() and verifying the contents against the checksum there 2026-04-17 16:17:50 oogly, but so is GOOG 2026-04-17 16:35:18 ncopa: right, I later noticed there is a default 2026-04-17 20:51:51 i have updated the /etc/conf.d/mqtt-exec.aports-build on the 3-19 -> 3.23 + edge builders 2026-04-17 20:51:56 not 3.24 yet 2026-04-17 20:52:12 they should now publish 'offline' whenever they are offline 2026-04-18 09:55:38 where is build-edge-armv7? 2026-04-18 09:58:44 did it last build something 2026-04-15? 2026-04-18 10:22:58 something is wrong with !100798, the security upgrade commits are not in 3.20-stable as far as I can see 2026-04-18 10:27:50 now they are, through !100957, and I had to manually trigger 3.20-stable 2026-04-18 10:27:56 algitbot: hello?
2026-04-18 10:28:06 https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/100798 2026-04-18 10:28:12 https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/100957 2026-04-18 11:02:00 I might be able to have a look at it tonight unless someone beats me to it 2026-04-18 11:04:44 build-edge-armv7 and/or builders not always being triggered by merges? 2026-04-18 11:56:10 could the spice-protocol tarball be added to 3.24 distfiles for the rebuild please? upstream ssl cert expired 4d ago, unable to fetch the tarball since https://distfiles.alpinelinux.org/distfiles/v3.23/spice-protocol-0.14.5.tar.xz 2026-04-18 12:59:18 Done 2026-04-18 13:09:05 thanks! 2026-04-18 21:31:13 build-edge-armv7 \o/ 2026-04-19 00:18:36 the output in the activity table at build.a.o for build-edge-armv7 looks odd 2026-04-19 01:39:04 now I see it on build-edge-riscv64 2026-04-19 01:39:16 is it intentional to show test output there? 2026-04-19 06:28:54 it's not expected 2026-04-19 10:51:43 back when I asked, the output didn't look too bad, just odd, now.. 2026-04-19 10:51:58 is it just from the edge builders? 2026-04-19 11:07:17 uh-oh! 2026-04-19 11:07:30 logs are not being saved! 2026-04-19 11:11:54 for example, missing 0.9.0-r0 log here https://build.alpinelinux.org/buildlogs/build-edge-aarch64/community/yaml-cpp/ 2026-04-19 11:19:20 I think the latest uploaded buildlog I can find is https://build.alpinelinux.org/buildlogs/build-edge-x86_64/community/intel-media-driver/intel-media-driver-26.1.6-r0.log 2026-04-19 11:19:33 41 commits back in the master branch 2026-04-19 11:19:50 so nothing since then has uploaded build logs? 2026-04-19 11:20:19 ikke, ncopa: ^^ 2026-04-19 12:01:28 ncopa made some changes, probably related 2026-04-19 12:03:36 the buildrepo variable from conf.d/mqtt-exec.aports-build is missing 2026-04-19 12:38:28 omni: logs are uploaded again 2026-04-19 12:39:45 thanks! 2026-04-19 12:40:13 and the intermediate ones are lost? 2026-04-19 12:41:04 not salvageable by editing some other logs?
(which would probably be tedious if possible) 2026-04-19 12:42:33 omni: I don't think so 2026-04-19 12:42:39 what do you mean by editing some other logs? 2026-04-19 12:43:13 since the output showed up at build.a.o, I thought it may have been stored in other logs somewhere 2026-04-19 12:44:06 and that you could cut snippets by hand and produce the missing build logs 2026-04-19 12:44:16 possibly not worth the effort 2026-04-19 12:44:18 Yeah, it's there, but like you said, tedious 2026-04-19 12:45:38 ok, good to know that they at least exist somewhere 2026-04-19 12:46:34 and if not superseded by rebuild/upgrade logs, we'll get logs for the 3.24 builds eventually 2026-04-19 12:53:56 ugh 2026-04-19 12:54:17 what did I mess up this time? 2026-04-19 12:54:23 the upload of the logs? 2026-04-19 12:54:25 yes 2026-04-19 12:54:29 buildrepo= was missing 2026-04-19 12:54:36 meaning build output was sent directly to mqtt 2026-04-19 12:54:55 (specifically the -l argument to buildrepo) 2026-04-19 12:55:36 I assume that affects all builders 2026-04-19 12:55:52 i have edited the conf.d/mqtt-exec 2026-04-19 12:55:58 i have edited the conf.d/mqtt-exec.aports-build 2026-04-19 12:56:07 yes, I've restored it there on all builders 2026-04-19 12:56:12 thanks 2026-04-19 12:56:14 sorry 2026-04-19 19:19:40 where are the built 3.24 packages stored?
2026-04-19 20:35:18 On the builders, until a repo is finished 2026-04-19 20:35:21 uh-oh, now I'm seeing it on build-edge-ppc64le again 2026-04-19 20:35:54 the build log output on build.a.o 2026-04-19 20:38:07 ikke: ok, I'd like to know if rust 1.94.1-r1 is built on the 3.24 builders, it doesn't look like it from the build logs 2026-04-19 20:39:05 The builders would keep trying to build rust if it wasn't 2026-04-19 20:43:12 ok, I thought they could just as well have moved on to other aports, and it's quite opaque to me when the log looks like this https://build.alpinelinux.org/buildlogs/build-3-24-x86_64/main/rust/rust-1.94.1-r1.log 2026-04-19 20:51:54 let's go then 2026-04-19 23:42:17 hello guys, I wonder if built packages use some kind of object file cache like ccache. saw that you had problems with packages like electron. wouldn't it save build bot resources to recompile just the files that actually changed? 2026-04-19 23:44:38 already suggested that, quite a while ago 2026-04-19 23:48:00 were there problems related to that? 2026-04-19 23:49:11 there are problems in the whole builder design, something that's intended to be fixed 2026-04-19 23:55:46 ccache mainly works when building the exact same version multiple times. Otherwise a small change in a define or directory name can easily invalidate most of the cache. 2026-04-19 23:56:59 264-[16:13:44] if *any* of the inputs to the build change, including compiler versions or any dependency version, you have to rebuild anyway 2026-04-19 23:57:15 welp, it was in alpine-devel 2026-04-19 23:57:37 2026-03-19.log, time: 16.05 2026-04-19 23:59:19 IMO, it's worth trying, but only after fixing the build infra completely 2026-04-19 23:59:45 ccache is very good for local development, that is for sure 2026-04-20 00:02:32 as you said before, a compiler update will purge the cache, and most huge packages get updated roughly at the same interval as clang/gcc do, no?
2026-04-20 00:10:53 I checked: clang and chromium were built with a 1w+ delay from each other, so it should work, yeah. maybe just download/upload ccache files directly from the APKBUILD? that won't require major infrastructure changes 2026-04-20 00:23:01 Having APKBUILDs do networking is something which we try to avoid. 2026-04-20 00:26:14 sccache with a builder-local SCCACHE_DIR? 2026-04-20 01:08:35 In my experience sccache was slow and unreliable, but that might not be the case on the builders. For builders it might be important that multiple sccache instances using the same directory are not supported. A reason for me to not bother fixing rootbld with sccache 2026-04-20 11:00:32 ikke: can you help me with the CI? I don't know what I have to do to fix it. https://gitlab.alpinelinux.org/ncopa/build-server-status/-/jobs/2314236 2026-04-20 11:04:12 it is extremely useful to see when a build server is offline 2026-04-20 11:05:48 Ah, you used a fork. That means there are no runners assigned 2026-04-20 11:06:59 You need to assign a project runner to your fork in the CI/CD settings 2026-04-20 11:12:44 maybe I should have pushed the branch to the origin repo instead of forking it? 2026-04-20 11:13:36 That would've been the easiest 2026-04-20 11:21:52 I have fixed it 2026-04-20 11:22:52 I have various improvements to build.a.o.
I discovered that you cannot really delete servers from there (eg the current `build-edge-aarch64/status` that is listed under `host`) 2026-04-20 11:23:52 I think we should have a build/<hostname>/state topic that can be either 'online', 'offline' or 'lost' 2026-04-20 11:24:43 the builders that support that will get a badge 2026-04-20 11:24:48 EOL build servers that do not publish it should still work, just without a badge (so we know which servers have it implemented) 2026-04-20 11:25:42 we set the mqtt-exec will to 'lost', stop_post to 'offline' and start_post to 'online' 2026-04-20 11:56:06 ncopa: if there are plans to redo the mqtt topics, then i could suggest: 2026-04-20 11:56:08 builder/3-24-x86_64/errors, builder/3-24-x86_64/status, builder/3-24-x86_64/builds 2026-04-20 11:56:58 having an issue on gitlab to discuss could also attract more suggestions 2026-04-20 11:59:37 a mix-n-match of redis/mqtt can be used to simulate something like a /sys,/proc tree 2026-04-20 12:01:26 if they get normalized over time, then other CI/build systems can use the endpoints 2026-04-20 12:01:38 kinda standard 2026-04-20 12:04:54 right. I wonder which project I should put the issue in 2026-04-20 12:05:02 I do not have RPIs, but I use an old mobile attached to a server via usb/RNDIS to control some aspects of the server during and post boot 2026-04-20 12:05:15 I am working on the mqtt topics, and will also try to work on the logging 2026-04-20 12:05:42 i guess it comes under "infra" 2026-04-20 12:06:15 the current plan is to keep backwards compat, so we can migrate slowly 2026-04-20 12:06:20 in small steps 2026-04-20 12:08:27 back-compat can be done by retaining old topics while also adding the new endpoints, and gradually moving builders to them 2026-04-20 12:09:16 adding new endpoints can also help in designing nice analytics around it 2026-04-20 12:09:48 pls do add timestamps also :-) 2026-04-20 12:12:53 ikke: I suppose there is no easy way to run end-to-end testing with docker compose from CI?
https://gitlab.alpinelinux.org/ncopa/build-server-status/-/jobs/2314256 2026-04-20 12:13:30 https://gitlab.alpinelinux.org/alpine/infra/build-server-status/-/merge_requests/31/diffs?commit_id=4d6f3101ccaa5acb1e720c541643c3ed50bf11eb 2026-04-20 12:17:04 if timestamps are not practical, then maybe add "seconds-elapsed" since the last msg 2026-04-20 14:33:17 ikke: I'm a bit eager to get https://gitlab.alpinelinux.org/alpine/infra/build-server-status/-/merge_requests/31 pushed to prod. Do you mind if I do that? 2026-04-20 14:38:33 ncopa: if you believe it's working, go ahead 2026-04-20 18:03:05 do I need to restart the service or something after the new image is created? 2026-04-20 18:35:25 ncopa: re: ^ which project? suggestion: https://gitlab.alpinelinux.org/alpine/infra/infra/v2 OR https://gitlab.alpinelinux.org/alpine/infra/v2 2026-04-20 18:36:03 here we can gather specs/howtos/rfcs 2026-04-20 18:36:18 and it can have its own wiki pages 2026-04-20 18:37:17 it should not contain code, but can link to other projects or POCs under /infra 2026-04-20 19:11:09 ncopa: pull the image on deu5-dev1 (~/build-server-status) and restart the compose service 2026-04-20 19:29:35 ncopa: I've deployed it now 2026-04-20 19:32:01 thanks! 2026-04-20 19:33:43 I had to hard refresh the browser, because it still wanted to connect via websockets 2026-04-20 19:40:08 aarch64 now has the online marker 2026-04-20 19:40:57 i'm testing it 2026-04-20 19:41:13 Right 2026-04-20 19:41:15 it works as expected, with one minor error 2026-04-20 19:41:20 The badge background is a bit too light for me 2026-04-20 19:41:35 It now appears like the text is bleeding into the background 2026-04-20 19:41:42 huh 2026-04-20 19:42:19 so the text is difficult to read?
2026-04-20 19:42:29 https://paste.pictures/8MhYYV5h2b.png 2026-04-20 19:42:50 that's how it looks for me as well 2026-04-20 19:43:03 IMHO, good enough 2026-04-20 19:43:11 Without knowing it's a badge, it's not really clear 2026-04-20 19:43:24 too low contrast (the badge background, not the text) 2026-04-20 19:44:06 alright, may have a look at that later 2026-04-20 19:44:52 it's relatively easy to test locally now. (make e2e-up; make e2e-down) 2026-04-20 19:45:48 Perhaps something like #caeed4? 2026-04-20 19:46:16 what is still "broken" is when a builder is "lost" (non-clean shutdown) and comes back. It will still show up as "lost", because it is the openrc service that sets it to "online". 2026-04-20 19:46:40 so it will continue as "lost" until the service starts up again (reboot, or manual stop/start) 2026-04-20 19:47:27 Will have to change mqtt-exec to properly fix it. I think it's low prio. 2026-04-20 19:47:46 may even be good to get an indication that something was uncleanly shut down, so we can have a look at it 2026-04-20 19:48:35 for this to work properly we need to reconfigure the builders: 2026-04-20 19:49:48 will_topic="build/$(hostname)/state" 2026-04-20 19:49:48 will_payload="lost" 2026-04-20 19:49:48 start_post() { su -s /bin/sh -c "mosquitto_pub -h $mqtt_broker -t build/$(hostname)/state -r -m online" "$exec_user"; } 2026-04-20 19:49:48 stop_post() { su -s /bin/sh -c "mosquitto_pub -h $mqtt_broker -t build/$(hostname)/state -r -m offline" "$exec_user"; } 2026-04-20 19:50:45 The build/<hostname>/state topic is owned by mqtt-exec.aports-build.
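Collected into one place, the builder reconfiguration quoted above amounts to a conf.d fragment roughly like this (a sketch; the `mqtt_broker` value is an assumption based on the msg.a.o host mentioned earlier, the rest is as quoted in the discussion):

```shell
# /etc/conf.d/mqtt-exec.aports-build (sketch)
mqtt_broker=msg.alpinelinux.org   # assumed broker hostname

# Retained MQTT last-will: the broker publishes "lost" on our behalf
# if the client disconnects uncleanly.
will_topic="build/$(hostname)/state"
will_payload="lost"
will_retain=yes

# Clean state transitions published by the OpenRC service hooks:
start_post() { su -s /bin/sh -c "mosquitto_pub -h $mqtt_broker -t build/$(hostname)/state -r -m online" "$exec_user"; }
stop_post()  { su -s /bin/sh -c "mosquitto_pub -h $mqtt_broker -t build/$(hostname)/state -r -m offline" "$exec_user"; }
```

Because both the will and the hooks publish with `-r` (retain), a subscriber that connects later still sees the last known state, which is what lets build.a.o distinguish online/offline/lost without polling.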
but we don't need to change all builders at once, like we did before 2026-04-20 19:53:20 we can now also start adding new subtopics and improve the mqtt layout, without needing to reconfigure all builders at once 2026-04-20 20:04:36 I have a fix for the badge, I think I will just push it 2026-04-20 20:07:22 https://gitlab.alpinelinux.org/alpine/infra/build-server-status/-/merge_requests/32 2026-04-20 20:13:23 should be fixed now, but I don't know how to delete the cached css 2026-04-20 20:21:21 i think the image is not rebuilt if there are only changes in the css? 2026-04-20 20:30:05 oh interesting, there is a bug. apparently riscv64 does not publish the state after mqtt-exec starts up. probably a race of some sort 2026-04-20 20:33:24 One thing riscv64 is good at is finding race conditions 2026-04-20 20:37:05 indeed :) 2026-04-20 20:37:25 to fix it i think we need to change mqtt-exec. I may do so at a later time 2026-04-20 20:37:48 i wonder if we should have the state indicator as a dot in front of the host name 2026-04-20 20:38:03 green/grey/red dot 2026-04-20 20:38:19 instead of the full text "online/offline/lost" 2026-04-20 20:38:36 to save space on the page 2026-04-20 20:56:24 green/grey/red dot - +1 2026-04-20 20:57:38 the problem though is that one may confuse the green dot with build "success"? 2026-04-20 20:57:42 if there are plans to attach an rpi to the physical servers, an LED bulb can be done too, kinda an "air gap indicator" 2026-04-20 20:58:30 I do that for fun, with a "Samsung SM-J201F" 2026-04-20 20:59:10 well, then 2 dots 2026-04-20 21:26:35 instead of a dot, use a monitor icon or a font icon 2026-04-21 00:05:42 build-edge-armhf/status foo 2026-04-21 06:11:36 refresh, and delete the local cache 2026-04-21 15:31:09 can someone help fix this issue?
'/var/cache/distfiles/dovecot-2.4.3.tar.gz.part' saved 2026-04-21 15:31:10 https://gitlab.alpinelinux.org/alpine/aports/-/jobs/2315698/viewer 2026-04-21 15:31:27 please :) 2026-04-21 15:34:43 https://tpaste.us/avlE 2026-04-21 15:42:55 The upstream source code has changed, you need to update the checksum 2026-04-21 15:45:51 sorry, I was calculating the checksum over the redirect page 2026-04-21 19:19:53 yeah, upstream changed the tarball. I don't know which checksum is the correct one 2026-04-21 19:30:55 The one I currently get is in the sha512sums, so a bit confused 2026-04-21 19:31:06 (directly curled, nothing cached) 2026-04-21 19:34:45 maybe we have the wrong one on distfiles? 2026-04-21 19:35:27 It fails on CI, no distfiles involved 2026-04-21 19:36:40 maybe someone complained to them and they moved the old one back on the upstream mirror? 2026-04-21 19:38:47 https://dovecot.org/mailman3/archives/list/dovecot@dovecot.org/thread/V37BEIZTNQODMWIHMTQK5DSOM5RDJ7NF/
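The fix on the aports side for a re-rolled tarball like this is to refresh the checksum (normally via `abuild checksum`). A self-contained sketch of what that verification amounts to, using a dummy file instead of the real dovecot tarball, so nothing here reflects the actual hashes involved:

```shell
#!/bin/sh -e
# Dummy stand-in for a distfile; the real flow compares the downloaded
# tarball against the sha512sums= block in the APKBUILD.
printf 'pretend tarball\n' > dovecot-2.4.3.tar.gz

# Record the checksum the way an APKBUILD would pin it.
expected=$(sha512sum dovecot-2.4.3.tar.gz | cut -d' ' -f1)

# Later: verify. A re-rolled upstream tarball changes the hash and makes
# this comparison fail, which is the CI error discussed above.
actual=$(sha512sum dovecot-2.4.3.tar.gz | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH: upstream re-rolled the tarball?" >&2
fi
```

One pitfall from the thread is worth repeating: compute the checksum over the actual file, not over an HTTP redirect page (`curl -L` or `wget` follows redirects; fetching without following them can hash an HTML stub instead).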