2025-05-01 05:12:58 Hello, could i be added to the alpinedev group for uploading to dev.a.o/archive? (i gather this is the preferred way of fixing source-file not-found issues, instead of pointing the source to distfiles.a.o)
2025-05-01 08:35:48 cely: I added you there
2025-05-01 08:55:04 Thanks :)
2025-05-01 10:42:48 usa-che-1 shared runner is out of disk space https://gitlab.alpinelinux.org/haesbaert/aports/-/jobs/1830933
2025-05-01 10:46:33 checking
2025-05-01 10:51:05 hmm, there is 63G available atm
2025-05-01 12:50:03 bootstrapping jdk8 on armhf and aarch64
2025-05-01 12:59:56 cc1plus: out of memory allocating... https://build.alpinelinux.org/buildlogs/build-edge-armv7/community/neochat/neochat-25.04.0-r0.log
2025-05-01 13:02:07 32 bits, which means it may also have run out of address space
2025-05-01 13:02:22 ah OK
2025-05-01 13:03:23 The builder has >250G memory
2025-05-01 13:03:27 looks like that successful build of nats-server at the same time was not uploaded?
2025-05-01 13:03:41 It only uploads once the entire repo has finished
2025-05-01 13:03:42 https://pkgs.alpinelinux.org/packages?name=nats-server&branch=edge&repo=&arch=&origin=&flagged=&maintainer=
2025-05-01 13:03:51 ah, that explains it
2025-05-01 13:04:28 do you know why armhf is listed two times in the link?
2025-05-01 13:04:55 and with different versions
2025-05-01 13:07:16 There were some storage issues, I had to recreate the index. Probably both the old and the new version are in the package repo
2025-05-01 13:07:38 ah
2025-05-01 14:19:25 bootstrapping openjdk11 on loongarch64
2025-05-01 14:40:45 Oh wait, i think for openjdk11 you have to bootstrap openjdk11-loongarch
2025-05-01 14:42:00 Or at least if you didn't bootstrap that, you'll have to bootstrap it also, iirc openjdk11-loongarch is able to build openjdk11 but not vice versa
2025-05-01 14:46:00 I suppose we would not get the server variant otherwise?
2025-05-01 14:46:25 It did manage to get built quite quickly
2025-05-01 14:46:56 I think openjdk11 will just fail to build openjdk11-loongarch, as it lacks some feature
2025-05-01 14:47:18 And why is/was openjdk11-loongarch necessary again?
2025-05-01 14:48:19 Oracle doesn't want to accept the Loongarch patches for the Server variant, and relying on just the Zero variant is very slow
2025-05-01 14:48:53 right, what I mentioned above
2025-05-01 14:48:57 From what i remember, building community/abcl, the difference was between half an hour vs half a day
2025-05-01 14:50:32 and would I install -loongarch from edge then instead of -bootstrap?
2025-05-01 14:51:43 Yes, that should do it
2025-05-01 14:51:46 ack
2025-05-01 14:52:03 Thanks
2025-05-01 14:52:39 Only jdk11 has this issue, others you can bootstrap whichever is convenient, but bootstrapping -loongarch should be faster
2025-05-01 14:57:21 Finished
2025-05-01 15:06:59 Speaking of OpenJDK, could you have a look at the openjdk8-jre-base .apk built for 3.22 x86_64? I'm specifically thinking about the /usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server/libjvm.so file (this is the path it has in the edge .apk)
2025-05-01 15:08:23 There's a "libjvm.so: No such file or directory" error in the 3.22 Octave build log, that's why i'm asking
2025-05-01 15:10:50 file usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server/libjvm.so
2025-05-01 15:10:55 usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server/libjvm.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=fee12924f7c5dccef1515b2ea70412c94fa4e22b, stripped
2025-05-01 15:12:04 Hmm, ok, not sure why Octave couldn't find it
2025-05-01 15:12:42 I guess i'll wait for the next retry and see if it still happens
2025-05-01 15:27:55 Can't reproduce the libjvm.so not found error in edge on the CI
2025-05-01 15:29:00 and i see the octave APKBUILD has !check for x86 due to: "x86 libjava.so cannot find libjvm.so"
2025-05-01 15:30:10 Now i'm wondering if it could be caused by /usr/lib/jvm/default-jvm on the builder pointing to some other version (not java-1.8-openjdk)
2025-05-01 15:31:56 which could be the case if a higher jdk version was also installed (java-common.trigger just sorts the dirs, and symlinks the highest version available to default-jvm)
2025-05-01 15:37:20 Hmm, never mind, maybe it will be fine after jdks higher than 8 are bootstrapped on x86_64
2025-05-01 15:37:50 /usr/lib/jvm/default-jvm -> java-8-openjdk
2025-05-01 15:40:22 Yeah, thanks for looking into it, i'll check Octave again after jdk>8 are available for it to use, i think there's a possibility it will build fine then
2025-05-02 14:30:16 Seems like the excessive requests to cgit have finally stopped
2025-05-02 14:32:15 awesome
2025-05-02 14:33:29 I switched from drop (502) to block (403)
2025-05-02 14:44:24 ikke: yeah generally when those scrapers scrape and then see a 502 they'll just try again later
2025-05-02 14:44:49 I've heard some AI bots try again when the return code isn't 200
2025-05-02 14:53:36 it's interesting to me that none of the countries where dos attacks have been litigated have prosecuted this
2025-05-02 15:22:07 that's because it's AI and AI gets to be immune in the eyes of the law™
2025-05-02 19:12:29 bootstrapping ghc on aarch64
2025-05-03 10:57:34 bootstrapping openjdk21 on x86_64
2025-05-03 13:44:33 uh, dev.a.o is out of space?
2025-05-03 13:44:45 like, 100% used 0 bytes left
2025-05-03 13:51:20 Now there should be plenty of space left
2025-05-03 14:18:40 thanks!
2025-05-05 07:16:13 ikke: py3-loky might need releasing from edge ppc64le, sorry about that. the package had already rebuilt on other arches, but the tests may be a little fragile
2025-05-05 11:40:56 mio: I've killed the build
2025-05-05 13:59:13 ikke: thanks!
2025-05-06 13:51:02 im gonna bootstrap openjdk17 on build-3-22-x86_64 now
2025-05-06 15:26:46 someone is sending spam via the alpine-devel list https://lists.alpinelinux.org/~alpine/devel/%3C2349dc138a0c4f7cbd8f53ef3aa65ccc%40semperincolumem.com%3E
2025-05-06 15:27:40 Yes
2025-05-06 15:28:16 i got a warning from linode that they will block something
2025-05-06 15:34:05 Yup, they already have
2025-05-06 15:34:10 I need to respond
2025-05-06 16:13:48 I responded to them
2025-05-06 16:22:22 hello, is openjdk21 still bootstrapping on loongarch64?
2025-05-06 16:22:27 (3.22)
2025-05-06 16:22:44 sorry, no
2025-05-06 16:22:52 started the builder again
2025-05-06 16:23:10 great, thanks :)
2025-05-06 20:23:25 blaboon: ping
2025-05-06 20:23:49 We have an issue with one of our vps
2025-05-06 20:24:01 Port 25 is blocked
2025-05-06 20:24:18 24913954
2025-05-06 20:24:26 Ticket
2025-05-06 20:45:39 i just removed the outbound block from that instance
2025-05-06 20:46:05 i informed our support staff too, so you'll probably see some followup in the support ticket at some point
2025-05-06 21:17:39 Thanks
2025-05-07 07:12:48 im bootstrapping openjdk21 on build-3-22-aarch64
2025-05-08 09:54:20 loongarch64 and riscv64 CI fail to resolve x.org: https://gitlab.alpinelinux.org/fossdd/aports/-/pipelines/321644
2025-05-08 10:38:09 on loongarch64:
2025-05-08 10:38:09 >>> pixman: Fetching https://www.x.org/releases/individual/lib/pixman-0.46.0.tar.xz
2025-05-08 10:38:09 wget: bad address 'www.x.org'
2025-05-08 10:50:10 when, really, x.com is the bad address
2025-05-08 17:05:44 ipv4 vs ipv6 thing? www.x.org doesn't have a AAAA record
2025-05-08 17:07:29 x.com doesn't (need to) exist
2025-05-09 15:12:07 3.20 x86_64 ran out of space, `fatal error: error writing to /tmp/ccmmFlpj.s: No space left on device` while building linux-lts
2025-05-09 15:15:33 build-3-20-x86_64 builder that is
2025-05-09 15:16:27 ncopa: ^ (sorry, not sure who to notify)
2025-05-09 16:37:37 Strange, I've recently expanded the volume
2025-05-09 16:46:06 no idea, just reporting the sights ... gcc failing on 3.22 x86_64 with a similar error
2025-05-09 18:21:21 thanks for notifying. i suppose it is /tmp on tmpfs
2025-05-09 18:24:17 no. it's not
2025-05-09 18:26:55 ikke: im expanding it
2025-05-09 18:31:15 Thanks
2025-05-09 19:27:16 ncopa: thanks!
2025-05-09 22:22:50 Is the ppc64le CI down? https://gitlab.alpinelinux.org/sertonix/aports/-/jobs/1845156
2025-05-10 07:48:08 looks like ppc64le CI is down indeed
2025-05-10 08:50:40 im bootstrapping openjdk17 on build-3-22-aarch64
2025-05-10 19:07:12 bootstrapping openjdk8 on build-3-22-x86
2025-05-10 19:23:29 bootstrapping openjdk21 on build-3-22-s390x
2025-05-11 15:43:18 bootstrapping ghc on build-3-22-x86_64
2025-05-11 16:34:45 bootstrapping openjdk8 on build-3-22-s390x
2025-05-11 16:42:53 bootstrapping openjdk8 on build-3-22-loongarch64
2025-05-11 18:34:03 bootstrapping openjdk21 on build-3-22-loongarch64
2025-05-11 18:34:44 bootstrapping openjdk11 on build-3-22-aarch64
2025-05-12 05:39:31 bootstrapping openjdk17 on build-3-22-s390x
2025-05-12 05:39:40 ncopa: can you start ^ again when finished?
2025-05-12 05:47:21 n/m, it already finished
2025-05-12 07:30:29 we need to get the ppc64le back online. any idea how? who do we contact?
2025-05-12 09:56:30 ncopa: I can open a ticket on the osuosl support page
2025-05-12 10:18:40 would be great. thanks!
2025-05-12 10:23:55 Sent an email to the support address
2025-05-12 17:49:42 It's back :-)
2025-05-12 17:53:00 :-)
2025-05-12 18:02:06 could someone also prod the 3.22 armv7 builder please? thanks
2025-05-12 18:03:50 done
2025-05-12 18:04:05 thanks!
2025-05-13 10:01:05 Repology seems to be failing to update after go-away was deployed on cgit: https://repology.org/repository/alpine_edge
2025-05-13 10:02:03 Also see https://github.com/repology/repology-updater/issues/1495
2025-05-13 10:28:12 armhf/aarch64 edge builders are stuck
2025-05-13 10:29:38 i'll have a look
2025-05-13 10:34:12 re repology and go-away. I wonder if it would make sense to have a static website with the aports tree, that is automatically `git pull`ed on git push events
2025-05-13 10:34:44 that way we dont waste cgit cpu time to generate the aports tree
2025-05-13 11:12:46 Yeah, that sounds sensible
2025-05-13 13:08:03 we could just set up a small script that listens on mqtt and `wget --mirror`s the actual cgit page, i guess?
2025-05-13 13:10:51 i was thinking a mqtt listener that simply does if [ -d aports ]; then git -C aports pull; else git clone ...; fi
2025-05-13 13:11:11 ah, do they expect just the plaintext files?
2025-05-13 13:11:17 i think so
2025-05-13 13:11:49 https://github.com/repology/repology-updater/blob/master/repos.d/alpine/alpine.yaml#L63
2025-05-13 13:27:37 actually, does repology use a specific user agent?
2025-05-13 13:27:40 we could just whitelist that
2025-05-13 13:27:59 https://github.com/repology/repology-updater/blob/c93777454f35b591822c6cb1161a90a9461ca537/repology/fetchers/http.py#L36
2025-05-13 13:28:00 yeah
2025-05-13 13:28:05 that sounds waaaaay simpler
2025-05-13 13:36:22 Nod
2025-05-13 14:17:23 good idea
2025-05-13 14:18:21 ikke: how do we add the user agent to the allowlist?
2025-05-13 14:25:06 It's defined in /srv/compose/cgit/config/go-away on deu1-dev1
2025-05-13 14:25:30 You could copy and adjust one of the existing desired crawlers / bots
2025-05-13 14:25:43 But I can do that in a bit as well
2025-05-13 14:50:43 I added - 'userAgent.contains("repology-fetcher/0 ")'
2025-05-13 14:51:11 do I need to rebuild or restart the container?
2025-05-13 14:52:04 why dont they use the github repo?
2025-05-13 14:52:47 ncopa: you can restart the go-away container
2025-05-13 14:53:26 clandmeter: excellent question
2025-05-13 14:56:23 From the Github issue, it's the link checker that's affected?
2025-05-13 14:56:30 > It's not the updater, it's linkchecker
2025-05-13 15:09:09 Apparently there is a different user agent for that
2025-05-13 19:09:04 ikke: I assume you cleaned up the go-away config. Thank you!
2025-05-13 19:10:42 ncopa: yes, the link checker was still blocked
2025-05-13 19:21:22 it works now. https://github.com/repology/repology-updater/issues/1495#issuecomment-2877638457
2025-05-13 20:11:00 Great
2025-05-13 20:29:37 bootstrapping openjdk11 on build-3-22-x86_64
2025-05-14 05:04:26 ncopa: fyi, I've requested an extra ppc64le VM from osuosl which was approved. It can act as a CI host
2025-05-14 05:10:32 bootstrapping openjdk11 on build-3-22-s390x
2025-05-14 18:19:51 durrendal: We have one server available to set up as a mirror, maybe we can work together on it to automate the process?
2025-05-14 18:56:47 bootstrapping openjdk17 on build-3-22-loongarch64
2025-05-14 20:33:47 bootstrapping openjdk17 on build-3-22-ppc64le
2025-05-14 22:13:40 ikke: slow reply, hectic day today. I would love to help out though! Do you have a deadline, or specific time you want the server up by?
2025-05-15 16:31:27 would it be possible to have the tarballs of paper-gtk-theme-2.1.0-r2 and pcc-libs-20230603-r0 copied over to 3.22 distfiles or someplace where the 3.22 builders can access them? the upstream source has been returning 404 for more than 2 weeks, and in the case of paper-gtk-theme the github repo no longer exists
2025-05-15 16:33:47 ack
2025-05-15 16:34:06 could you give the filenames?
2025-05-15 16:34:30 https://distfiles.alpinelinux.org/distfiles/v3.21/paper-gtk-theme-2.1.0.tar.gz
2025-05-15 16:34:38 https://distfiles.alpinelinux.org/distfiles/v3.21/pcc-libs-20230603.tgz
2025-05-15 16:34:45 thanks!
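The mqtt listener sketched on 2025-05-13 could look roughly like the shell loop below. This is only a sketch: the broker address and topic here are hypothetical placeholders, not the real alpine infra values, and it assumes mosquitto-clients and git are available.

    #!/bin/sh
    # Minimal sketch of the mqtt-triggered aports checkout discussed on
    # 2025-05-13. BROKER and TOPIC are hypothetical placeholders.
    BROKER=mqtt.example.org
    TOPIC=git/aports/push
    REPO=https://gitlab.alpinelinux.org/alpine/aports.git
    DEST=/srv/www/aports

    # mosquitto_sub prints one line per published message; every push
    # event triggers a pull (or an initial clone).
    mosquitto_sub -h "$BROKER" -t "$TOPIC" | while read -r _event; do
        if [ -d "$DEST" ]; then
            git -C "$DEST" pull --ff-only
        else
            git clone "$REPO" "$DEST"
        fi
    done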
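For reference, the go-away allowlist entry added on 2025-05-13 presumably lives inside a rule along these lines. Only the quoted condition string is taken verbatim from the chat; the surrounding structure is an assumed sketch of a go-away rule, not the actual deployed config.

    # Hypothetical shape of the allowlist rule; only the condition
    # string is verbatim from the chat above.
    rules:
      - name: allow-repology-fetcher
        action: pass
        conditions:
          - 'userAgent.contains("repology-fetcher/0 ")'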
2025-05-15 16:35:25 done
2025-05-15 16:36:04 thanks, appreciated :)
2025-05-15 17:00:35 sorry, could you also add pcc as well? just unlocked after pcc-libs rebuilt https://distfiles.alpinelinux.org/distfiles/v3.21/pcc-libs-20230603.tgz
2025-05-15 17:01:00 from the same upstream as pcc-libs
2025-05-15 17:01:30 done
2025-05-15 17:01:52 thanks!
2025-05-15 17:02:31 Maybe it would be good to open issues for those packages if upstream is gone
2025-05-15 17:03:34 sure, on it
2025-05-15 17:15:12 durrendal: there is no hard deadline that we know of
2025-05-16 00:17:02 ikke: okay great! Do we have any references on how the existing mirror servers are set up? Would love to read through what we have currently.
2025-05-16 00:18:28 Also let me know what time generally is most convenient for you, I'll try and bend my schedule around it if I can :)
2025-05-16 05:11:12 durrendal: https://gitlab.alpinelinux.org/alpine/infra/compose/alpine-mirror-sync is the meat of it, combined with traefik in front
2025-05-16 13:08:03 ikke: thanks a ton! I'll dig through this as soon as I've got some time today. My initial thought just glancing at the compose file is that it should be fairly straightforward to come up with a generic deployment mechanism for anything using docker compose.
2025-05-16 13:08:39 secrets could be handled through ansible-vault initially, though it'd be better to use something like openbao to source those
2025-05-16 13:09:20 I'm certain there's more to the deployment than just pulling in the compose file though
2025-05-16 15:08:28 yeah, but just slightly
2025-05-17 21:56:31 > Apparently the production builders have lua-term pre-installed, but its files are either incomplete or in a non-default location. In this way luarocks skips the installation and the compiler doesn't find the files later.
2025-05-17 21:57:16 any idea what could need lua-term on the builders?
2025-05-17 21:57:32 in the dependency graph there's just lua-busted
2025-05-17 21:57:47 but lua-aports doesn't touch either
2025-05-18 03:02:45 ptrc: it's not pre-installed
2025-05-18 14:26:16 Working on deploying renovate-bot on a kubernetes cluster: https://gitlab.alpinelinux.org/alpine/infra/k8s/cluster-nld12/-/tree/master/deployments/apps/renovate-bot
2025-05-18 14:26:32 But deploying it through CI/CD, not manually
2025-05-18 14:27:21 And it has been deployed here: https://gitlab.alpinelinux.org/alpine/infra/k8s/cluster-nld12/-/jobs/1857474
2025-05-18 15:51:49 awesome!
2025-05-19 11:54:42 LSP cmd connects to lsp server via rpc.connect using hostname
2025-05-19 11:54:42 test/functional/plugin/lsp_spec.lua:5445: test/functional/plugin/lsp_spec.lua:5452: server must receive `initialize` request
2025-05-19 11:55:09 that error on build-3-22-x86_64 when building neovim happens because ::1 is missing in /etc/hosts. I have fixed it now
2025-05-19 12:57:36 looks like loongarch64 CI cannot look up www.x.org? https://gitlab.alpinelinux.org/alpine/aports/-/jobs/1858905#L137
2025-05-19 14:00:17 I'm bootstrapping openjdk11 on build-3-22-ppc64le
2025-05-20 16:41:06 Yo dawg, I heard you liked kubernetes and renovate, so I put renovate in kubernetes, so that renovate can update renovate running in kubernetes
2025-05-20 16:49:09 hah
2025-05-21 13:27:26 OSError: [Errno 28] No space left on device
2025-05-21 13:27:36 How can i clean up the x86_64 ci?
2025-05-21 13:44:13 Which runner?
2025-05-21 13:54:30 https://gitlab.alpinelinux.org/admin/runners/235
2025-05-21 13:55:06 https://gitlab.alpinelinux.org/alpine/aports/-/jobs/1861967
2025-05-21 14:08:41 You can ssh into x86-64.ci.alpinelinux.org
2025-05-21 14:08:53 And then run docker system prune -af
2025-05-21 14:23:54 ssh: Could not resolve hostname x86-64.ci.alpinelinux.org: Name does not resolve
2025-05-21 15:09:00 sorry, it's with an _
2025-05-21 15:09:04 root@x86_64.ci.alpinelinux.org
2025-05-22 04:58:38 cz.a.o has 275GB of space left
2025-05-22 14:12:41 uh so gitlab is down
2025-05-22 14:16:52 oof, checking
2025-05-22 14:17:46 apparently it's up again
2025-05-22 14:19:47 ah yeah
2025-05-22 14:26:34 something caused heavy load
2025-05-22 15:08:40 you aren't supposed to use _'s in hostnames
2025-05-22 15:08:52 ahuh
2025-05-22 16:38:09 some packages were built with old libprotobuf on the ppc64le builder. (maybe old version was installed by mistake?)
2025-05-22 16:38:17 i have cleaned those up now
2025-05-22 16:39:17 ok, thanks
2025-05-22 16:59:21 ncopa: just set up a new kubernetes cluster for CI
2025-05-22 16:59:30 k0sctl makes that extremely easy :)
2025-05-22 17:03:48 there is also a tool, k0smotron, where you can use kubernetes to manage multiple clusters (eg spin up and take down)
2025-05-22 17:04:10 eg a cluster of clusters
2025-05-22 17:04:40 heh
2025-05-22 17:04:42 kubeception
2025-05-22 17:04:48 but i think k0smotron may have limits on which infra it supports
2025-05-22 17:05:57 i wanted to run k0s on bare metal, managed by tinkerbell
2025-05-22 17:06:04 but thats a bit offtopic :)
2025-05-22 17:07:54 That would be cool
2025-05-22 17:22:04 first architectures are done with 3.22 builds
2025-05-22 17:24:30 \o/
2025-05-22 17:30:25 thanks for triaging and fixing the remaining builds!
2025-05-22 17:32:57 \o/
2025-05-23 11:52:53 nu_: ^
2025-05-23 17:37:45 the arm builders are online but seem to be having trouble building, builds fail but no build logs
2025-05-23 17:44:52 Apparently no ipv4
2025-05-23 18:04:10 okay, thanks for checking
2025-05-23 18:37:11 checking
2025-05-23 18:40:18 should be working now
2025-05-23 18:41:16 builders are back now, thanks!
2025-05-25 12:47:47 are all arm* and aarch64 builds on the same machine? I forget
2025-05-25 12:48:33 nodejs has failed to build a couple of times now on aarch64
2025-05-25 12:48:57 c++: fatal error: Killed signal terminated program cc1plus
2025-05-25 13:34:42 omni: yes, a single host as builder
2025-05-25 13:37:42 right, and the builds have passed now
2025-05-25 14:41:17 could someone check on the 3.22 armv7 and x86_64 builders? they seem to be stuck
2025-05-25 21:01:48 please disregard previous message, the builders are fine now
2025-05-26 05:02:49 mio: I didn't ;-)
2025-05-26 05:03:25 :)
2025-05-26 09:07:31 the aarch64 CI builder has storage issues, sometimes runs out of space but I guess is cleaned up after runs?
2025-05-26 10:18:37 omni: the builder has 700G+ free
2025-05-26 10:24:31 huh, ok, that's odd, because several times the past couple of days I've seen aarch64 CI jobs fail due to running out of space
2025-05-26 10:27:45 Oh, CI host
2025-05-26 10:27:48 that's different
2025-05-26 10:28:25 We have multiple CI hosts, so it depends on what runner it takes
2025-05-26 10:39:23 ah, ok
2025-05-26 10:56:44 Should be cleaned up now
2025-05-26 11:12:55 👍
2025-05-27 09:36:46 I would like to run `apk dot --errors` and publish the result somewhere visible
2025-05-27 09:37:36 ideally it should be run on builders, or master mirror when new packages are uploaded
2025-05-27 09:42:19 cd
2025-05-27 09:42:28 https://tpaste.us/kn8Z
2025-05-27 10:38:07 I have opened an issue for the 3.22/main dependency graph: https://gitlab.alpinelinux.org/alpine/aports/-/issues/17191
2025-05-27 10:48:13 sorry that was wrong channel
2025-05-27 11:33:20 ncopa: Be aware that the output of apk dot --errors can (and does) differ between arches
2025-05-27 11:55:29 yeah, i know, i was thinking that every arch should build its own svg graph
2025-05-27 11:55:52 the idea is to make those visible, early
2025-05-27 11:57:35 i wonder if it is possible to filter the apk dot --errors. If we could filter the graph to only include errors where a given list of packages is involved, then we could do it from CI
2025-05-27 11:57:42 after the .apk is generated
2025-05-27 12:00:17 i suppose something like: if apk dot --errors | grep -w "$pkgname"; then generate_svg && upload_svg_artifact && exit_with_error; fi
2025-05-28 08:24:03 hi o/ per ncopa's suggestion, i'm volunteering to help out with infra if you guys have anything that needs taking a look at
2025-05-28 08:35:44 lotheac: hey, thank you for offering your help. What are you experienced in?
2025-05-28 08:37:21 "everything" :-) i've done sysadmin/sre/etc. stuff as my dayjob since 2008. tcp/ip, observability, kube, etc.
2025-05-28 08:38:01 used to have a hobby OS project https://unleashed.31bits.net/
2025-05-28 08:41:44 most recently at $CLIENT i've redone SSO infra, created mesh-VPN infra (tailscale/headscale), created autoscaling kube clusters & metrics/logging onto them on hetzner cloud
2025-05-28 08:43:32 maybe i should have just linked my cv :)
2025-05-28 08:46:31 Heh
2025-05-28 08:49:05 guess it's not a bad idea anyhow, have some lists of technologies there https://lotheac.fi/cv.html
2025-05-28 10:00:20 looks like the CI is having some sort of problem: https://gitlab.alpinelinux.org/ncopa/alpine-conf/-/jobs/1872144
2025-05-28 10:01:59 passed after 3-4 retries
2025-05-28 10:17:08 Hey everyone :) leso-kn here with a quick infra question: I recently submitted the paged-markdown-3-pdf aport which is based on Lua. The Gitlab CI pipeline passed so it was merged, but on the buildozers the build fails due to a missing lua module. I know that there is some lua tooling for the builders, so I was wondering if the required file might have been removed or is installed in a non-standard location?
2025-05-28 10:20:10 Apologies for the formatting, just getting started with IRC. The module is lua-term (pre-installed on the builders) and the missing file which should be provided by it is /usr/share/lua/5.1/term/colors.lua
2025-05-28 10:24:51 Leso: These are the only lua packages installed on the builders: https://tpaste.us/8B7b
2025-05-28 10:26:00 And in CI these: https://tpaste.us/r5Jj
2025-05-28 10:29:59 Thanks for the quick reply! Hmm, very interesting, according to the build log the bundler found a pre-installed lua-term v0.7-1 – the version that is packaged for alpine, while the latest available via luarocks would be 0.8-1
2025-05-28 10:31:00 > see the require error near the end in https://build.alpinelinux.org/buildlogs/build-edge-aarch64/testing/paged-markdown-3-pdf/paged-markdown-3-pdf-0.1.3-r0.log
2025-05-28 10:40:54 I cannot find term/colors.lua anywhere on the builder (not even in the src/.luarocks dir)
2025-05-28 10:42:02 So not sure why luarocks says 0.7-1 is installed
2025-05-28 10:47:36 Exactly, the compilation error is triggered because the 'lua-term' module is detected as installed, but 'term/colors.lua' is missing from the filesystem. The lua-term module should provide it (see https://pkgs.alpinelinux.org/contents?name=lua5.1-term&branch=edge&repo=main&arch=x86_64)
2025-05-28 10:48:11 Do the builders happen to have '/usr/lib/luarocks/rocks-5.1/lua-term/0.7-1/rock_manifest' or '/usr/lib/luarocks/rocks-5.4/lua-term/0.7-1/rock_manifest'? These would mark lua-term as installed
2025-05-28 10:50:13 https://tpaste.us/PQaQ
2025-05-28 10:50:24 That's without any deps installed
2025-05-28 10:51:37 (this is on build-edge-aarch64)
2025-05-28 10:56:13 okay, that's very strange. In that case the build should succeed, perhaps there were some leftovers from a different build when it got merged 2 weeks ago. Would you have a moment to trigger the build manually before I submit a MR to re-enable the aport?
2025-05-28 10:56:29 testing/paged-markdown-3-pdf
2025-05-28 10:57:49 Seems to pass now
2025-05-28 10:58:20 Nice :) That's a bit concerning, but cool, I'll submit a MR then
2025-05-28 10:58:38 lrc 1.0.1-1 depends on lua-term (not installed)
2025-05-28 10:58:40 Installing https://luarocks.org/lua-term-0.8-1.rockspec
2025-05-28 11:01:27 Perfect, that's how it should be
2025-05-28 11:25:44 pass https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/84874
2025-05-28 11:25:53 Thanks a lot for your time :) @ikke
2025-05-28 11:28:28 No problem
2025-05-28 14:03:42 anybody know what this is? https://gitlab.alpinelinux.org/razzydm/razzy-dm
2025-05-28 14:08:06 and this: https://gitlab.alpinelinux.org/XalayanMarie/abdera.amyth
2025-05-28 14:12:32 dunno why, but gitlab feels unusually sluggish
2025-05-28 14:41:37 it's probably due to "AI"
2025-05-28 14:44:32 The average number of requests has increased over time
2025-05-28 14:52:45 time to set up countermeasures? :)
2025-05-28 15:10:48 I already have some countermeasures, but I fear the moment I have to go any further
2025-05-28 15:11:47 how so?
2025-05-28 15:12:50 Because it most likely will affect legitimate users as well
2025-05-28 15:13:35 seems like a tradeoff that cannot be helped in the world we live in
2025-05-28 15:17:28 i get the reluctance though, but it seems to me most projects on publicly-accessible software forges have already made that tradeoff with anubis et al.
2025-05-28 15:18:03 I already deployed something for git.a.o
2025-05-28 15:18:51 cool :)
2025-05-28 15:19:16 did it help?
2025-05-28 15:19:19 yes
2025-05-28 15:20:49 from ~40r/s to now ~1r/s
2025-05-28 15:22:36 it's an unfortunate development for the internet at large, but 40x resource usage is enough in my book to false-positive some legitimate users
2025-05-28 15:23:28 depending on the value of "some" ;)
2025-05-28 15:35:59 fwiw - anarcat said (for tor's GL) he just banned alibaba at the ip level, blocked fb/meta's crawler in robots.txt, and added a crawl-delay to robots.txt
2025-05-28 15:36:26 to address their slowdown
2025-05-28 15:51:34 invoked: we were at some point flooded by random ips with random user agents
2025-05-28 15:51:44 No manual filter would help
2025-05-28 15:52:20 random user agents kills the anubis approach
2025-05-28 15:52:30 which i didn't think would last long anyway
2025-05-28 15:53:11 what do you mean, kills that approach?
2025-05-28 15:54:13 'If the client has a User-Agent that does not contain "Mozilla", the client is allowed through.'
2025-05-28 15:54:29 that's the first test in anubis
2025-05-28 15:54:51 It's not completely random
2025-05-28 15:55:02 ah
2025-05-28 15:55:10 mostly old chrome versions
2025-05-28 17:27:46 is build-edge-aarch64 intentionally not building?
2025-05-28 17:28:04 the kubernetes runners are a bit flaky today
2025-05-28 17:28:06 https://gitlab.alpinelinux.org/fossdd/aports/-/jobs/1872822
2025-05-28 17:28:12 dial tcp 10.96.0.1:443: connect: no route to host
2025-05-28 17:33:36 ok
2025-05-28 17:35:52 yeah
2025-05-28 17:36:28 ERROR: Job failed (system failure): prepare environment: setting up credentials: Post "https://10.96.0.1:443/api/v1/namespaces/gitlab-ci/secrets": dial tcp 10.96.0.1:443: connect: no route to host. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
2025-05-28 17:49:49 Yeah, those hosts have not proven to be very stable
2025-05-28 18:05:35 where are they hosted?
2025-05-28 18:06:45 cloudon
2025-05-28 18:10:49 do you mind if I take a look?
2025-05-28 18:12:04 no
2025-05-28 18:12:16 I'll pm you
2025-05-28 18:12:26 👍
2025-05-29 05:02:33 what's the kube control plane topology? are etcd and apiserver set up in a HA configuration?
2025-05-29 05:09:55 i'd be interested in making that stuff more reliable :)
2025-05-29 05:13:53 lotheac: I know k0s is in use, don't know the exact config, but it's likely (at least partially) in gitlab
2025-05-29 05:15:04 cheers, i'll try to find it
2025-05-29 05:15:07 lotheac: https://gitlab.alpinelinux.org/alpine/infra/k8s
2025-05-29 05:15:10 thanks!
2025-05-29 05:16:23 looks like single controller
2025-05-29 05:16:53 cluster-nld13/k0sctl.yaml has four nodes with role: controller+worker though
2025-05-29 05:17:04 https://gitlab.alpinelinux.org/alpine/infra/k8s/cluster-nld12/-/blob/master/terraform/k0sctl.yaml.tftpl?ref_type=heads#L10 (at least for nld12)
2025-05-29 05:17:08 i haven't used k0s but i'm assuming that means a HA cplane
2025-05-29 05:17:44 yeah, that's a "HA" workerplane
2025-05-29 05:18:57 ("HA" because 4 controllers probably isn't a great idea... I mean it's better than 1, but...)
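For context on the k0sctl files being discussed: a dedicated three-controller HA control plane in a k0sctl.yaml would look roughly like the sketch below. All addresses and key paths are made-up placeholders; the real cluster definitions live in the infra repos linked above.

    # Rough k0sctl.yaml shape for a HA control plane; placeholders only.
    apiVersion: k0sctl.k0sproject.io/v1beta1
    kind: Cluster
    metadata:
      name: cluster-ci
    spec:
      hosts:
        # three dedicated controllers instead of controller+worker
        - role: controller
          ssh: {address: 10.0.0.11, user: root, keyPath: ~/.ssh/id_ed25519}
        - role: controller
          ssh: {address: 10.0.0.12, user: root, keyPath: ~/.ssh/id_ed25519}
        - role: controller
          ssh: {address: 10.0.0.13, user: root, keyPath: ~/.ssh/id_ed25519}
        - role: worker
          ssh: {address: 10.0.0.21, user: root, keyPath: ~/.ssh/id_ed25519}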
2025-05-29 05:19:01 a bit unclear to me which cluster the gitlab runners are in, can't find manifests for those
2025-05-29 05:19:40 k0s/k0sctl is pretty easy to pick up if you have k8s experience
2025-05-29 05:19:52 i figured as much :)
2025-05-29 05:20:27 kubeadm is what i'm most used to
2025-05-29 05:24:50 i don't suppose it would be overly difficult to add more cplane nodes to that cluster, but i obviously don't have the whole picture here
2025-05-29 05:25:23 i mean there is already a LB there (although dunno if it's the LB for the apiserver or something else), etc
2025-05-29 05:29:19 My guess is it's a tradeoff between having to rebuild a controller (which should be easy with the config all in tf/k0s/etc) if there's an outage, or burning money on multiple controllers
2025-05-29 05:31:09 well, you'd generally want a HA cplane for things like k8s upgrades anyway to avoid downtime. those machines running the cplane don't generally need to be very beefy
2025-05-29 05:33:06 i have a couple nanopi r6s's under my desk doing that for my personal cluster, and at one of my clients' infra 3 minimal-sized aarch64 cloud nodes per cluster... at 3.79 EUR/apiece/month
2025-05-29 05:35:05 upgrades aside, failure resiliency would be nice when the cloud node fails
2025-05-29 05:35:28 i would say that's worth a few dozen dollars a month :-)
2025-05-29 05:36:01 lotheac: In many cases, we have to make do with what we get
2025-05-29 05:36:21 if I were to guess, nld13 is the ci runners which is the 4 controller+worker nodes, so it should be HA
2025-05-29 05:36:41 if so, i wonder what the failure was
2025-05-29 05:36:49 The current one is not yet in gitlab but similar to nld13, except one controller
2025-05-29 05:36:52 ikke: yeah, that's understood of course.
2025-05-29 05:37:03 It means no dedicated controllers
2025-05-29 05:37:11 no loadbalancer
2025-05-29 05:37:23 why does it mean that?
2025-05-29 05:37:35 cost?
2025-05-29 05:37:44 keep in mind that just 3 months ago (or less?) alpine got basically evicted from one of their main infra hosters
2025-05-29 05:37:58 right, i had forgotten about that
2025-05-29 05:38:09 lotheac: No, they are not general cloud providers, we cannot just buy random resources
2025-05-29 05:38:34 but they generally are cloud stuff and not bare metal?
2025-05-29 05:38:54 It depends, but in many cases bare-metal
2025-05-29 05:39:21 i guess i meant more like who owns the metal
2025-05-29 05:39:36 and the space it's in
2025-05-29 05:39:48 The companies that sponsor us the HW
2025-05-29 05:40:51 an external LB is not a requirement for a HA cplane if you have some say in how the hardware is networked
2025-05-29 05:40:55 In the case of that CI cluster, it happens to be VMs, but we just receive those VMs
2025-05-29 05:40:59 eg. kube-vip etc.
2025-05-29 05:41:10 (or any other virtual-ip solution)
2025-05-29 05:42:16 I just learned the other day k0s supports an internal loadbalancer like you see with nld13
2025-05-29 05:42:34 i learned that just now :)
2025-05-29 05:43:45 so are you saying that there is a certain fixed number of VMs that you receive for that use, and you don't want to dedicate more than one of them to running the cplane, in the interest of having more resources to run the actual workloads?
2025-05-29 05:44:37 lotheac: yeah, the VMs themselves are beefy (at least in bare specs), and it would be a pity to dedicate one just as a controller.
2025-05-29 05:46:20 then, how about running the cplane itself somewhere else on less beefy stuff, and connecting the worker nodes over wg or somesuch?
2025-05-29 05:46:48 For CI, HA is not necessarily important, but it depends on the failure mode
2025-05-29 05:47:22 If a runner is unavailable, but we have other runners to pick up the job, that's fine
2025-05-29 05:47:29 less babysitting and manual recovery of nodes is pretty good for qol ;)
2025-05-29 05:48:05 but if a runner picks up jobs and that fails for some technical reason, that would be annoying
2025-05-29 05:48:17 lotheac: that's why for nld13, I have multiple controllers
2025-05-29 05:49:25 sure. i'm just trying to understand why not for the CI cluster too
2025-05-29 05:49:46 I tried initially, but the cluster was not usable
2025-05-29 05:50:02 I then learned that for multiple controllers you need a loadbalancer
2025-05-29 05:50:13 and later I learned that you can have an internal loadbalancer
2025-05-29 05:50:35 hehe :)
2025-05-29 05:51:10 Fairly new to k8s, so I'm learning as I go
2025-05-29 05:51:48 thanks for the description, sounds reasonable enough. i'm offering to help if you want it anytime :)
2025-05-29 05:52:04 yes, that's very appreciated
2025-05-29 05:52:36 I am commissioning nld13 (the k8s cluster is already running as described in that k0sctl file)
2025-05-29 05:53:45 are you using any config management for the nodes themselves aside from k0sctl?
2025-05-29 05:54:12 Not at the moment, was looking a bit into ansible
2025-05-29 05:54:29 But, the config of those nodes is also very minimal
2025-05-29 05:54:43 in my experience it's better not to, unless you need that to create the cluster in the first place. because you can use the control-plane itself to manage stuff even if it needs host netns etc
2025-05-29 05:55:11 right
2025-05-29 05:55:50 one weird hack i did recently was create static pods for kube-vip (the virtual ip thing for the apiserver address) using a daemonset that runs on cplane nodes only and just writes to /etc/kubernetes/manifests/ when stuff in my kube git repo changes
2025-05-29 05:56:09 it's a bit strange but it is pretty nice to avoid ansible etc.
2025-05-29 05:56:57 of course there is a chicken-and-egg problem there that needs resolving at initial cluster creation time, but adding new cplane nodes needs no such hackery
2025-05-29 05:58:48 another thing i did semi-recently (on a cloud cluster) was spinning up tailscale containers as a daemonset using host netns, which gives me connectivity to new autoscaled nodes etc. without having to ansible
2025-05-29 05:59:48 if circumstances dictate, it'd also be possible to run the cluster internal network on top of a vpn like that and you could have nodes from different providers in the same cluster. of course latency is not optimal, but it might not matter
2025-05-29 05:59:54 Do you have experience with having the control plane living somewhere else?
2025-05-29 06:00:23 not in practice, but i've been thinking about that
2025-05-29 06:01:10 just need to establish an internal network first to build the cluster on. that could be tailscale or just plain wg
2025-05-29 06:03:08 semantically the control plane being somewhere else doesn't matter much to the worker nodes that are being added (or removed) into the cluster. they spin up and run the necessary initialization procedure to join the cluster by talking to the apiserver
2025-05-29 06:03:40 so you would just need to make sure they can talk to the apiserver from where you're spinning them up -- potentially by just creating a wg link first
2025-05-29 06:54:06 we could have a control plane in some reliable cloud. 3 (smallish) nodes. the worker nodes can be anywhere, and dont need to be able to route to each other. at least not for CI
2025-05-29 06:55:29 yeah
2025-05-29 06:56:40 i would try to make it so that worker nodes _can_ communicate though, there might be assumptions baked into things
2025-05-29 07:49:55 Would we have a single cluster for all CI hosts? (Currently we have 2 separate sponsors providing us compute)
2025-05-29 07:52:39 i find fewer clusters tend to be easier to manage but you could do it either way (even multiarch/heterogeneous nodes etc)
2025-05-29 07:55:10 if the usecases for separate sponsors' nodes are identical i think it makes sense to put them in the same cluster. unless there is some security boundary that prevents that
2025-05-29 07:58:27 Having 3 control nodes per cluster would quickly add up
2025-05-29 08:06:22 that's also a good point in favor of fewer clusters
2025-05-29 08:35:04 lotheac: FYI in case you didnt know. I am also on the k0s team.
2025-05-29 08:42:40 ah, I did not :) that makes sense
2025-05-29 08:43:20 So using k0s makes a lot of sense for us :)
2025-05-29 08:45:30 yeah definitely some synergies
2025-05-29 08:46:21 I have no experience with alternatives, but setting up a new cluster with k0sctl is minutes of work
2025-05-29 08:46:24 maybe i should try out k0s then :)
2025-05-29 08:47:09 it's not hard with kubeadm either… haven't tried rancher but have been exposed to some cloud vendor managed stuff as well
2025-05-29 08:48:33 For now, I have chosen to install / manage kubernetes clusters myself instead of using cloud managed options just to get a better feel of what it entails
2025-05-29 08:49:05 k0s still abstracts a lot though
2025-05-29 08:59:42 that's definitely better anyway. the cloud vendors doing managed kube overcharge for simple stuff
2025-05-29 09:00:44 and generally seek to lock you in :)
2025-05-29 09:02:08 nod
2025-05-29 09:04:12 Not sure if it's feasible for you to do so, but perhaps you could work on a terraform setup similar to what is done for nld12 for a 3 node controller setup with a loadbalancer?
2025-05-29 09:38:52 sure. i don't much like terraform but i can do that. which cloud provider would you like to work with?
2025-05-29 09:39:10 linode
2025-05-29 09:39:36 alright, i'll take a stab after i'm done with dinner :)
2025-05-29 11:16:25 i wonder how much it would help to add a cache in front of gitlab.a.o. Would be interesting to see access log stats, which pages are the 20 most visited, or how many pages are visited more than once.
2025-05-29 14:04:47 For me, trying to look at the commit list of a certain file/directory in gitlab always results in an error 500
2025-05-29 14:05:07 Request ID: 01JWE5PXANP44Z3YYXQ9V59S2M
2025-05-29 14:06:07 "exception.message": "4:Deadline Exceeded."
2025-05-29 14:06:36 apparently gitaly is not able to get the data within 30 seconds
2025-05-29 14:14:08 Oh
2025-05-29 14:19:32 It comes up more often, but not sure what we can do about it
2025-05-29 14:19:46 It's specifically aports that has this issue
2025-05-29 14:45:05 Kladky: use cgit service: git.a.o
2025-05-29 14:47:14 Is this git command helpful? git commit-graph write --changed-paths
2025-05-29 14:47:20 Thanks, but at that point, it'd be more convenient to just cd aports and git log from there
2025-05-29 14:57:31 qaqland: gitlab itself is already generating commit graphs
2025-05-29 14:59:30 It may help to use --changed-paths, but I'd need to test it first
2025-05-29 15:38:19 ikke: https://gitlab.alpinelinux.org/lotheac/cluster-ci/
2025-05-29 15:39:12 quite a different approach than cluster-nld12 initialization had tbh
2025-05-29 15:41:07 Thanks, I'll take a look later. Note that for communication between the control nodes, we could use a private lan in linode
2025-05-29 15:41:49 my poc doesn't, it uses the tailnet for everything
2025-05-29 15:42:03 (allowing each node, whether cplane or not, to be anywhere)
2025-05-29 15:42:26 well, aside from the lb placing constraints on the cplane nodes having to be in linode
2025-05-29 15:43:23 I have to think about it. We do have our own automeshing VPN (dmvpn)
2025-05-29 15:44:09 though setting it up is slightly more manual
2025-05-29 15:44:24 i'm also not really fond of having the apiserver and the k0s apis available on the public internet at all (but that's what the LB does). some creative routing would help make that internal within the meshvpn
2025-05-29 15:45:33 i haven't used dmvpn but i guess not much reason why it couldn't be usable here as well even if it's a bit manual; the cluster is not going to have much node churn
2025-05-29 18:26:24 lotheac: If we want to make the control plane only available over VPN, it does mean we would need to host a loadbalancer ourselves, right?
2025-05-29 18:26:49 (It's something I have been thinking about)
2025-05-30 00:43:24 ikke: yeah, essentially. k0s gives multiple options there though.
2025-05-30 00:44:38 technically i suppose it would also be possible to have only the k8s api available on a public lb, while having the rest of the control plane only on vpn too
2025-05-30 00:46:03 k0s is a bit different here than what i was used to, since it expects not just the kube apiserver lb, but also konnectivity and the k0s api
2025-05-30 00:46:24 https://docs.k0sproject.io/v1.33.1+k0s.0/networking/#required-ports-and-protocols
2025-05-30 00:46:48 and https://docs.k0sproject.io/v1.33.1+k0s.0/high-availability/#load-balancer
2025-05-30 00:48:29 and the "multiple options" i mentioned are the builtin loadbalancers you pointed out earlier: https://docs.k0sproject.io/v1.33.1+k0s.0/cplb/ & https://docs.k0sproject.io/v1.33.1+k0s.0/nllb/ -- although there's no reason why we couldn't use another virtual-ip solution on top of the vpn as well (options there depending on what layer that vpn operates on)
2025-05-30 00:54:11 i didn't find an option with linode to make the LB available only on a vpc/private interface; seems their nodebalancers always assume you want to connect to them over the internet. some other providers allow private-only LBs. the reason i mention this is, exposing the k8s api on the internet is attack surface that does not generally need to be there.
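Following the cplb link above: k0s's built-in control plane load balancer is enabled in the k0s config roughly as sketched below, which avoids an external (public) LB entirely. The virtual IP here is a placeholder; the linked docs have the authoritative schema.

    # Sketch of CPLB per the k0s docs linked above; the VIP is a placeholder.
    spec:
      network:
        controlPlaneLoadBalancing:
          enabled: true
          type: Keepalived
          keepalived:
            vrrpInstances:
              - virtualIPs: ["10.0.0.100/24"]
                authPass: example
            virtualServers:
              - ipAddress: 10.0.0.100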
2025-05-30 00:55:21 that notwithstanding -- i have had setups where there is a cloud-managed LB, but it has been firewalled or otherwise configured such that it is only successfully routed to from inside the vpn
2025-05-30 01:50:49 https://lotheac.fi/s/k0s-meshvpn.drawio.svg this might help visualize what i'm talking about
2025-05-30 01:51:30 it's a bit messy but hopefully it helps
2025-05-30 02:11:54 the poc i linked yesterday is essentially "option 2"
2025-05-30 09:03:04 looks like gitlab broke
2025-05-30 09:03:43 I dont appear to have access to that server
2025-05-30 09:05:16 yeah gitlab is on its knees
2025-05-30 09:18:34 You should have access, but port 2222
2025-05-30 09:19:34 aha. thanks
2025-05-30 09:21:03 Seems to have been a load spike
2025-05-30 09:21:14 But it recovered again
2025-05-30 09:21:43 load avg is 16-18
2025-05-30 09:21:58 we need 15x more cpu power to handle the current load
2025-05-30 09:22:11 Not necessarily an increase in requests
2025-05-30 09:34:50 how can i run `gitlab-ctl tail`?
2025-05-30 09:57:23 We do not use omnibus
2025-05-30 09:57:35 Which log do you want to tail?
2025-05-30 09:58:13 /srv/compose/gitlab/log
2025-05-30 09:58:41 Nginx is in s6/nginx/current
2025-05-30 10:29:49 wanted to investigate what is eating the CPU
2025-05-30 10:35:10 At the moment it seems to be gitaly running git gc on a project
2025-05-30 10:35:48 But that seems to be at most 2 cores
2025-05-30 10:57:04 question, the x86_64 tag makes it only run on x86_64? https://gitlab.alpinelinux.org/alpine/alpine-conf/-/blob/master/.gitlab-ci.yml?ref_type=heads#L9
2025-05-30 10:57:22 if I remove it, it will run on any arch with docker-alpine?
2025-05-30 10:58:09 x86_64 is currently busy and I'd prefer to run the tests https://gitlab.alpinelinux.org/alpine/alpine-conf/-/merge_requests/256
2025-05-30 10:58:40 never mind. I pushed it without waiting for CI
2025-05-30 11:42:29 ok i have an emergency with the ppc64le machine now
2025-05-30 11:42:34 its unresponsive
2025-05-30 11:42:38 we need it up ASAP
2025-05-30 11:43:13 very unfortunate
2025-05-30 11:47:52 who do I have to call to powercycle the ppc64le machine?
2025-05-30 12:07:24 ok will do the release without ppc64le
2025-05-30 12:23:35 /o\
2025-05-30 12:52:46 so, i have a problem
2025-05-30 12:52:56 I tagged the release
2025-05-30 12:53:10 when ppc64le comes back up, I have a short window
2025-05-30 12:53:49 while its building the release on ppc64le I will have to log in, and git checkout 3.22-stable
2025-05-30 12:54:05 before it starts to build the packages that were pushed after the branch was done
2025-05-30 12:54:26 otherwise it will start building those packages and upload them to v3.22
2025-05-30 12:54:40 Oof
2025-05-30 12:55:14 You could disable the ssh key on dl-master
2025-05-30 13:16:38 good idea
2025-05-30 13:16:42 maybe I should do that
2025-05-30 13:17:36 i disabled the ppc64le key on dl-master
2025-05-30 14:14:12 ikke: are there any sponsors we should add/remove from the release notes?
2025-05-30 14:14:35 I was thinking about that, but probably for the next release
2025-05-30 14:26:49 ok good thanks
2025-05-30 15:13:27 ikke: can you please help me with "Invalidate /alpine/latest-stable/* on dl-cdn" on https://gitlab.alpinelinux.org/alpine/aports/-/issues/17188
2025-05-30 15:17:30 ncopa: ack
2025-05-30 15:18:36 It's running
2025-05-30 15:23:10 thanks!
2025-05-30 15:29:57 ikke: can you please also help me post something on mastodon?
2025-05-30 15:34:34 ncopa: https://fosstodon.org/deck/@alpinelinux/114597512294140956
2025-05-30 15:39:31 i dont remember who has access to the bluesky account?
2025-05-30 15:39:40 you?
2025-05-30 15:41:12 ha! that is what I was afraid of
2025-05-30 15:42:01 it's back
2025-05-30 15:44:31 im on it
2025-05-30 15:44:33 thanks!
2025-05-30 16:27:47 I don't think I have the password for x.com/alpinelinux, seems it has been changed
2025-05-30 16:48:19 which email did we use for twitter? social at a.o?
2025-05-30 18:48:26 do we need to do anything to sync the pkgs.a.o 3.22 repo? https://pkgs.alpinelinux.org/packages?name=&branch=v3.22&repo=&arch=x86_64&origin=&flagged=&maintainer=
2025-05-30 18:51:22 no, it should fetch the branches automatically
2025-05-30 18:51:38 hmm
2025-05-31 04:56:17 Are there any consistency checking tools for APKINDEX, e.g. to check whether all packages in the current APKINDEX are fully synchronized?
2025-05-31 04:57:03 our mirror sometimes loses a few packages
2025-05-31 08:23:16 there is https://mirrors.alpinelinux.org/
2025-05-31 11:02:10 fossdd|m: the link is not helpful, i ran into an internal issue with a local mirror site :|
2025-05-31 11:25:44 qaqland: It's something I could add to repo-tools
2025-05-31 12:02:32 ikke: thx :)
2025-05-31 12:04:44 qaqland: do you want to check it through the local filesystem?
2025-05-31 12:27:48 ikke: yes, want to do verification after completing rsync
2025-05-31 13:35:22 qaqland: are you able to test https://gitlab.alpinelinux.org/alpine/infra/repo-tools/-/merge_requests/1?
2025-05-31 14:11:57 ikke: I can't but I have contacted two friends from different schools. it may take some time
2025-05-31 14:12:15 ok
2025-05-31 14:12:28 It very simply just checks if the package file exists, nothing more
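A minimal version of that existence check can be sketched with plain tar and awk against a local mirror tree, assuming the usual <repo>/<arch>/APKINDEX.tar.gz layout and the <pkgname>-<version>.apk file naming used by the index.

    #!/bin/sh
    # Sketch of the APKINDEX consistency check discussed above: list every
    # package named in the index and report .apk files missing on disk.
    repo=${1:?usage: apkindex-check <path/to/repo/arch>}
    tar -xzOf "$repo/APKINDEX.tar.gz" APKINDEX |
    awk -F: '/^P:/ { p = $2 } /^V:/ { v = $2 }
             /^$/ && p { print p "-" v ".apk"; p = "" }
             END { if (p) print p "-" v ".apk" }' |
    while read -r apk; do
        [ -f "$repo/$apk" ] || echo "missing: $repo/$apk"
    done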