2022-08-01 11:14:25 linux 5.19 is probably the last in the 5.x series, Linus announced that the next will probably be 6.0
2022-08-01 22:02:39 ah
2022-08-01 22:02:40 https://twitter.com/Linode_Status/status/1554223973696606210
2022-08-01 22:02:47 > Our team is investigating a connectivity issue in our London data center.
2022-08-01 22:03:15 pkgs.a.o dead :C
2022-08-01 22:03:43 yeah, that's the host i'm talking about
2022-08-02 03:38:28 Fun times
2022-08-02 10:53:11 cleaned up .cache/yarn and .cargo on all arm*/aarch64 builders for 3.15, 3.16 and edge
2022-08-02 20:00:17 ikke: could you deploy aports-turbo changes when you're not too busy? https://ptrc.gay/ffDBrVKE
2022-08-02 20:02:58 ptrc: can you remind me thursday, have more time then
2022-08-02 20:03:03 sure thing :)
2022-08-04 09:02:25 Lesigh
2022-08-04 11:29:59 67G free atm
2022-08-04 14:02:23 ptrc: fyi, I'm working on aports-turbo atm
2022-08-04 14:03:12 was about to remind you a while ago, but saw that you also merged some mrs, thanks :)
2022-08-04 14:03:59 np, I just noticed that the branch drop-down first item is still disabled
2022-08-04 14:04:17 Fixing that
2022-08-04 14:12:30 doesn't it default to edge?
2022-08-04 14:12:42 as in, when there's no param in the url
2022-08-04 14:15:01 yeah, https://gitlab.alpinelinux.org/alpine/infra/aports-turbo/-/blob/a7ce1829/aports.lua#L18
2022-08-04 14:15:36 i mean, i don't know if there's a good reason to even allow showing packages from all branches
2022-08-04 14:15:38 Yes, but wasn't the idea that you can go back to that state?
2022-08-04 14:15:54 oh, right
2022-08-04 14:16:02 It used to be that way
2022-08-04 14:16:08 was too heavy on the db
2022-08-04 14:16:35 so the dbs were split per version
2022-08-04 17:05:43 clandmeter: apparently `deploy: replicas: 4` does work with docker compose, unlike what the documentation mentions
2022-08-04 17:06:06 (as the replacement for scale: 4)
2022-08-05 04:41:36 ikke: yes i think it depends on compose version?
2022-08-05 04:41:48 clandmeter: probably
2022-08-05 04:41:58 as set in yml
2022-08-05 04:41:59 I noticed that docker compose config translated it to that
2022-08-05 04:42:10 not the actual binary
2022-08-05 04:42:28 even with version: "2.4" it translated it internally
2022-08-05 04:42:31 2.4 vs 3.x
2022-08-05 04:42:51 and complained that it was deprecated :)
2022-08-05 04:43:08 2.4 is deprecated?
2022-08-05 04:43:20 no
2022-08-05 04:43:24 the scale setting
2022-08-05 04:44:14 it mentions this on 3.x i guess?
2022-08-05 04:44:17 or also on 2.4?
2022-08-05 04:44:22 also when you use 2.4
2022-08-05 04:44:41 that is weird
2022-08-05 04:44:52 as scale is specific to 2.2 and higher iirc
2022-08-05 04:44:57 https://gitlab.alpinelinux.org/alpine/infra/docker/aports-turbo/-/blob/master/docker-compose.prod.yml
2022-08-05 04:45:48 we should probably also update configs on traefik
2022-08-05 04:46:25 i did some modifying at work to get better ssl and header support
2022-08-05 04:46:29 secure headers
2022-08-05 04:46:32 ah ok
2022-08-05 04:46:48 i can share it with you if you like
2022-08-05 04:47:03 clandmeter: this is partial docker compose config output with that set: https://tpaste.us/Pjeb
2022-08-05 04:47:36 clandmeter: it might be that it's docker compose 2 that started doing that
2022-08-05 04:47:45 looks ok to me if it works
2022-08-05 04:47:52 it works
2022-08-05 04:48:04 but with version: "3" and setting replicas also works
2022-08-05 04:48:07 so it will spin up 4 replicas ?
2022-08-05 04:48:10 yes
2022-08-05 04:48:16 both work
2022-08-05 04:48:16 good
2022-08-05 04:48:27 the latter doesn't give you warnings
2022-08-05 04:48:31 i think i used the old config as else i needed to pass it on the cmdline
2022-08-05 04:48:43 yes, but now it doesn't anymore
2022-08-05 04:49:08 most of the 3.x stuff is not needed for me anyways so i always stick to 2.4
2022-08-05 04:49:31 https://tpaste.us/6E6B
2022-08-05 04:49:36 I just don't like the warnings :P
2022-08-05 04:49:44 this is running atm
2022-08-05 04:49:47 im ok with 3
2022-08-05 04:49:55 sooner or later we need to migrate anyways
2022-08-05 04:50:02 if it works
2022-08-05 04:50:07 yup
2022-08-05 04:50:18 The documentation says docker compose would ignore that part
2022-08-05 04:50:21 but it apparently doesn't
2022-08-05 04:50:25 i just got back from schiphol :)
2022-08-05 04:50:30 what a mess :)
2022-08-05 04:50:56 https://docs.docker.com/compose/compose-file/compose-file-v3/#deploy
2022-08-05 04:51:20 "The following sub-options only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run, except for resources."
2022-08-05 04:51:34 nod
2022-08-05 04:52:28 is there any pref/need to switch to podman?
2022-08-05 04:54:12 I was thinking about that
2022-08-05 04:54:27 security would be one reason
2022-08-05 06:47:06 your problem is that you dont get replicas? maybe consider running kubernetes?
2022-08-05 07:16:35 ACTION taps the sign
2022-08-05 07:16:36 https://xeiaso.net/blog/do-i-need-kubernetes
2022-08-05 07:56:15 ncopa: no, we do get replicas
2022-08-05 07:56:49 ncopa: we use that for aports-turbo because it's just a single process
2022-08-05 11:28:12 ikke: https://lists.alpinelinux.org/lists/%7Ealpine returns 404
2022-08-05 11:36:37 Try to refresh.
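A minimal sketch of the compose setup discussed above, using `deploy: replicas:` instead of the older `scale:` key. The service name and image are placeholders, not the real aports-turbo configuration; only the `deploy.replicas` part reflects the conversation:

```yaml
# docker-compose.yml (sketch) — with the v3 schema, docker compose up
# honours deploy.replicas despite the docs saying it is swarm-only
version: "3"
services:
  web:
    image: example/aports-turbo:latest   # placeholder image
    deploy:
      replicas: 4   # compose starts 4 containers for this service
```

`docker compose config` can be used to see how the file is normalized before deciding which schema version to keep.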
For some reason nginx likes to return 404 from time to time
2022-08-05 12:14:57 ok thanks
2022-08-05 18:01:57 ok, gitlab 15.1 image got built :)
2022-08-05 18:32:57 https://gitlab-test.alpinelinux.org
2022-08-08 07:00:45 ikke: mosquitto upstream stopped ignoring sigpipe in the client library \o/ https://github.com/eclipse/mosquitto/commit/0c9d9f21633c5dbb482893a9d6bdf40111829925
2022-08-08 07:01:02 so after the next release we can remove the workarounds from mqtt-exec etc
2022-08-08 07:01:21 Nice!
2022-08-08 09:02:35 congrats! I don't mind backporting that patch
2022-08-08 10:00:03 nmeum: nice work, seems like the smallest things take the most time (years in this case) to get fixed :)
2022-08-09 08:53:18 this gitlab mailing list integration upgrade is rough
2022-08-09 08:53:25 I'm not sure I want to finish reimplementing it or commit to long-term maintenance
2022-08-09 08:53:30 let's discuss possible solutions
2022-08-09 08:55:37 option 1) get rid of gitlab (:
2022-08-09 08:55:44 god, I wish
2022-08-09 08:57:08 some ideas: (2) get some volunteers to help finish & maintain it; (3) get some maintainers to review patches on the lists; (4) deprecate submitting patches via email
2022-08-09 08:57:49 What kind of project is it?
Maybe I can devote a bit of time to it
2022-08-09 08:58:00 what kind of language
2022-08-09 08:58:39 Go
2022-08-09 08:58:48 https://git.sr.ht/~sircmpwn/hashiru
2022-08-09 08:59:06 starting to get burnt out on it so I just yeeted a broken WIP commit onto master for posterity
2022-08-09 09:00:30 regarding option (4), I'm pretty tired of pushing against gitlab and I'm prepared to accept defeat, but it will likely mean that I step down as the maintainer of many packages
2022-08-09 09:01:07 3 is hard
2022-08-09 09:01:30 though lots of people still use the mailing lists for submitting patches, I can't continue being the only one spending days maintaining an integration between the two systems
2022-08-09 09:01:43 I'd like to try 2
2022-08-09 09:02:38 aight
2022-08-09 09:02:57 you can get an idea of the current thrust of the work by reading over this: https://git.sr.ht/~sircmpwn/hashiru/commit/750b1f1afabb387e90ed51ddade7c3605d7c916f
2022-08-09 09:03:10 I was working on the webhook which submits a gitlab MR when a patch comes in
2022-08-09 09:09:28 also, re: (3) being difficult
2022-08-09 09:09:59 I mean, finding people willing to do it who also have push access
2022-08-09 09:10:07 it does not feel great when the minor inconvenience of reviewing patches on a mailing list is held in higher regard than the major inconvenience of tasking one person with the development and maintenance of a complex software band-aid
2022-08-09 09:10:33 it would be nice if we were able to identify some kind of compromise
2022-08-09 10:20:51 ddevault: It's not a nice experience to act as a human bridge between 2 systems
2022-08-09 10:21:27 it doesn't have to be two systems
2022-08-09 10:21:37 no need to manually copy patches to gitlab, just review them on the list
2022-08-09 10:21:49 It's not just reviewing, it's also CI
2022-08-09 10:22:04 And the feedback CI gives
2022-08-09 10:23:35 well, the CI can probably be run on builds.sr.ht as well
2022-08-09 10:24:19 then alpine would need to run 2 separate workflows, two different CI images / systems and all that
2022-08-09 10:24:24 same scripts
2022-08-09 10:24:25 That means either 2 systems, or moving to builds.sr.ht completely, which is another discussion
2022-08-09 10:24:44 or, it may be possible to write a simpler API integration which submits gitlab pipelines when patches come in
2022-08-09 10:24:54 rather than the whole merge request synchronization dance
2022-08-09 10:25:36 pipelines are tied to commits, so it would need to push to gitlab either way
2022-08-09 10:25:44 well, maybe
2022-08-09 10:25:52 imo it breaks workflow if half the stuff is on gitlab and the other half somewhere else
2022-08-09 10:26:01 could make a few small edits to the build scripts wherein the pipeline runs against master, then fetches and applies the patch from the mailing list before testing it
2022-08-09 10:26:07 and i don't think alpine would want to move to a mail-patches-only workflow
2022-08-09 10:26:27 how does this "break the workflow"
2022-08-09 10:26:54 the mailing lists are still in use, abandoning them breaks people's workflows to a much more significant extent
2022-08-09 10:27:13 but I'm sick of fighting this gitlab crap so if that's how it has to be then so be it
2022-08-09 10:27:50 if this discussion goes even slightly in the direction of "mailing list patches aren't worth supporting" then I'm going to concede that point immediately and take my leave
2022-08-09 10:28:10 I'm not rehashing that fight again
2022-08-09 10:28:16 if you're an alpine developer who reviews stuff, you can't easily go through the whole list, mark as duplicate, reference in a comment easily, etc.
2022-08-09 10:28:25 ddevault: I have always been a proponent of keeping the MLs alive
2022-08-09 10:28:28 sure you can.
2022-08-09 10:28:31 and supported the effort
2022-08-09 10:28:34 i'm not saying that mailing lists are bad / not worth it, the opposite actually
2022-08-09 10:29:04 i'm saying that having two *separate* systems would be an inconvenience
2022-08-09 10:29:38 right now the "inconvenience" is significantly greater than what you describe and is entirely borne by me
2022-08-09 10:30:31 fwiw, if there's a way i could help support the gitlab<->ml integration, i'd be more than happy to help
2022-08-09 10:30:41 And I also offered my help
2022-08-09 10:30:46 sure
2022-08-09 10:30:50 send some patches
2022-08-09 10:30:54 via email, please :)
2022-08-09 10:30:59 which project?
2022-08-09 10:31:05 Linked earlier
2022-08-09 10:31:06 see links above
2022-08-09 10:31:16 ah, sure
2022-08-09 10:34:41 hm, maybe it would be possible to simplify this even further and use the existing email gitlab integration? it would be just a daemon forwarding any incoming new patches to the aports mr inbox and forwarding the responses back to the appropriate thread (if it wouldn't be possible to just CC: back the ML in the patch itself)
2022-08-09 10:35:07 if I recall correctly, I don't think it forwards responses back, and it expects patches as attachments, not the standard git send-email format
2022-08-09 10:35:13 but it would be better than nothing
2022-08-09 10:37:49 ah
2022-08-09 10:38:03 i got a response "Unfortunately, your email message to GitLab could not be processed. You are not allowed to perform this action. If you believe this is in error, contact a staff member."
2022-08-09 10:38:05 ikke: ?
2022-08-09 10:38:26 hmm, not sure
2022-08-09 10:38:34 Need to check
2022-08-09 10:38:47 ptrc: did you try to send it to the main aports project?
2022-08-09 10:39:01 this is from the "Create merge request" button
2022-08-09 10:39:17 mhm, i thought it's gonna choose my fork automatically, maybe not?
2022-08-09 10:39:46 wouldn't be an issue with the official bridge though
2022-08-09 10:48:21 ddevault: what would be the email format for git.sr.ht repos? the clone section only says "You can also use your local clone with git send-email." :/
2022-08-09 10:50:40 https://git-send-email.io
2022-08-09 10:51:03 sorry, email *address* format
2022-08-09 10:51:28 but i guess the one from the guide with the repo name replaced should work
2022-08-09 10:52:23 ~sircmpwn/public-inbox@lists.sr.ht is fine
2022-08-09 10:52:26 or just sir@cmpwn.com
2022-08-09 16:32:41 0..
2022-08-09 16:34:35 0?
2022-08-09 16:35:19 that was still in the chat buffer :P
2022-08-09 16:35:37 :)
2022-08-09 16:36:01 wondering what is taking so much space on usa9
2022-08-09 16:36:05 hovering at 95 since release
2022-08-09 16:36:10 just slowly reached it
2022-08-09 16:36:46 Still 400G free in the vgs
2022-08-09 16:36:58 sadly usa10 is not usable
2022-08-09 16:37:06 still not resolved the reboots
2022-08-09 16:38:59 what was the issue again
2022-08-09 16:39:05 (yes, my memory sucks)
2022-08-09 16:39:28 i assume it reboots at random with the above usa10 down/up things
2022-08-09 16:40:19 yes, and even more often when it does not alert
2022-08-09 16:40:54 average uptime is 7h
2022-08-09 16:41:03 over the last 30 days
2022-08-09 16:41:13 reboots without any indication why
2022-08-09 16:42:05 already tried to switch to linux-edge
2022-08-09 16:44:49 I _can_ try to stop the ci container to see if that helps
2022-08-09 16:45:07 sure
2022-08-09 16:47:20 any clue why in general there are only a couple of hours of logs in /var/log/messages(.0)?
2022-08-09 16:48:02 for me it's a lot, so i assume something deletes them on the server?
2022-08-09 16:48:28 I've noticed that everywhere
2022-08-09 16:48:56 the default is 200KB rotation
2022-08-09 16:49:04 i pass -s0
2022-08-09 16:49:20 where?
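For reference, the git send-email workflow mentioned above can be set up once per clone. This is a sketch; the list address is the one given in the conversation, the rest is generic git usage:

```shell
# One-time setup inside a local clone of the project (sketch):
# point git send-email at the sr.ht list from the conversation above
git config sendemail.to "~sircmpwn/public-inbox@lists.sr.ht"

# Then send the most recent commit as a patch to the configured list
git send-email -1
```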
2022-08-09 16:49:24 conf.d/syslog
2022-08-09 16:49:29 right
2022-08-09 16:49:36 `busybox syslogd -h` has the options
2022-08-09 16:49:54 all the logging infrastructure is horrific
2022-08-09 16:50:03 but sadly good logging integration requires good service management
2022-08-09 16:50:10 so there's not much to fix
2022-08-09 16:51:20 the choices are generally 'unbounded unrotated logs' or 'no logs', but at least for /var/log/messages you can raise that or pass -b10 or something or both
2022-08-09 16:51:45 yes, set -s 1024 -b 14 now
2022-08-09 18:58:52 clandmeter: What is left for setting up the new mirror infra? Is there something I can do?
2022-08-09 18:59:09 each release puts more pressure on NLD3 storage
2022-08-09 19:07:28 what are the changes
2022-08-09 19:08:37 1. We get a newer generation set of servers
2022-08-09 19:08:52 2. We get at least one server with lots of storage for our mirror infra
2022-08-09 19:09:12 i mean to the mirror setup so they don't all have to be on one server
2022-08-09 19:09:31 The idea was to use anycast
2022-08-09 19:09:47 but that depends on if we can get more nodes
2022-08-09 19:09:53 hmm
2022-08-09 19:10:04 my head doesn't understand how that would allow splitting the files
2022-08-09 19:10:14 the anycast would just 404 if they're not there
2022-08-09 19:10:26 That would not be about splitting any files
2022-08-09 19:10:30 those would be complete mirrors
2022-08-09 19:10:35 what NLD3 now provides
2022-08-09 19:10:40 well.. then all the things are still on the same server
2022-08-09 19:10:42 doesn't gain much
2022-08-09 19:10:52 This is separate from the build infra
2022-08-09 19:10:57 aware
2022-08-09 19:11:07 i mean not having to store everything of everywhere in one place
2022-08-09 19:11:18 Well, we are stuck with that for the time being
2022-08-09 19:11:26 everyone syncs from us using rsync
2022-08-09 19:11:31 from rsync.a.o
2022-08-09 19:11:35 mhm
2022-08-09 19:11:42 That's not something you easily refactor
2022-08-09 19:12:47 at some point, we _can_ think about moving older versions somewhere else (like we had ancient.a.o)
2022-08-09 19:13:01 but as older versions are a lot smaller, that does not add up to a lot
2022-08-09 19:14:00 But as we seem to have at least one large-storage node from equinix, that should not matter
2022-08-09 19:14:13 Solving this for the builders is another issue
2022-08-09 19:14:34 But that's something we can change while still aggregating everything on the mirror infra
2022-08-09 19:42:21 psykose: I think one thing we will need in the future is something that can accept (signed) packages from the builders, re-sign them with canonical keys and generate apk indexes
2022-08-09 19:43:02 yeah
2022-08-09 19:43:05 makes sense
2022-08-09 19:43:32 (with the canonical ubuntu keys? /s)
2022-08-09 19:43:36 ncopa mentioned that in the last conference
2022-08-09 19:43:41 psykose: :D
2022-08-09 19:43:59 :)
2022-08-09 22:30:13 ikke: could you upgrade/fix/whatever the special zabbix plugin and move it to community and also backport it, since otherwise it's impossible to upgrade it without adding openssl3 to deu1
2022-08-10 04:26:42 psykose: It's deprecated, since zabbix now supports proper plugins for agent2 that do not need to be compiled directly into the agent. I've tried to convert it, but it was not working yet atm.
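The busybox syslogd rotation flags discussed a bit earlier (-s / -b) live in /etc/conf.d/syslog on Alpine. A sketch using the values from the conversation:

```shell
# /etc/conf.d/syslog (sketch)
# -s 1024: rotate /var/log/messages once it reaches 1024 KiB
#          (the busybox default of 200 KiB is why only a few hours of logs survived)
# -b 14:   keep 14 rotated files instead of the default
SYSLOGD_OPTS="-s 1024 -b 14"
```

After changing it, `rc-service syslog restart` applies the new options.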
2022-08-10 06:49:13 ikke: all 3 servers are already running
2022-08-10 06:49:23 Right
2022-08-10 06:49:27 but thats it
2022-08-10 06:49:29 running :)
2022-08-10 06:49:42 Did we get confirmation that we could keep using all 3?
2022-08-10 06:49:49 i think so
2022-08-10 06:49:52 Ok
2022-08-10 06:50:00 i think we also need to add more servers to replace the others
2022-08-10 06:50:04 Yes
2022-08-10 06:58:10 looks like the nld one has zfs configured
2022-08-10 07:00:29 ikke: i guess we need to think about how we want to set them up
2022-08-10 07:00:49 do we want to do it based on containers or just the plain os
2022-08-10 07:01:35 or do you want to use some config mgmt
2022-08-10 07:39:46 ikke: anycast ip is added to the 3 servers
2022-08-10 07:40:10 but im unsure how to test it :)
2022-08-10 07:56:17 Setup a webserver where each server has some different content
2022-08-10 07:56:51 And then try to access the anycast ip from different locations
2022-08-10 08:15:25 yes thats the thing
2022-08-10 08:15:28 one is in asia
2022-08-10 08:15:35 i dont have access to something in asia
2022-08-10 08:15:58 but i think its not working correctly
2022-08-10 08:20:35 if i ssh to the elastic ip from usa5, it will end up on nld
2022-08-10 08:21:06 i have a feeling we will need to setup bgp
2022-08-10 08:28:36 Hmm, ok
2022-08-10 08:29:06 To setup BGP, we need to know where to connect to
2022-08-10 08:32:04 let me first try to figure out if it really works or not
2022-08-10 08:32:22 accessing it from a dallas server does seem to work
2022-08-10 08:32:28 but that could be related to being in the same dc
2022-08-10 08:33:08 ok
2022-08-10 08:33:15 it looks like it works this way
2022-08-10 08:38:31 yup, i spun up some linodes in the same region and they all connect to the related server
2022-08-10 09:49:16 that is crude
2022-08-10 09:49:30 apk upgrade just broke my system's kernel
2022-08-10 09:53:35 How so?
2022-08-10 09:59:31 not sure what happened, but the running kernel is not the installed kernel
2022-08-10 10:01:21 that's normal?
2022-08-10 10:02:55 when you upgrade, you won't have the same kernel until you reboot
2022-08-10 11:29:23 panekj: I don't think clandmeter is unaware of that, so I wonder what he means
2022-08-10 21:59:55 what i mean is, the kernel located in boot is still the old kernel
2022-08-10 22:00:00 but the modules are updated
2022-08-11 18:51:37 psykose: making progress with the zabbix agent2 plugin
2022-08-11 18:51:58 ah, right, that thing
2022-08-11 18:51:59 thanks!
2022-08-11 18:52:06 let me know what you've done and how it goes
2022-08-11 18:52:43 https://gitlab.alpinelinux.org/alpine/infra/zabbix-agent2-plugins
2022-08-11 18:52:53 that's still the old code that needs to be built in-tree
2022-08-11 18:55:49 how does it work now
2022-08-11 18:56:23 a separate binary that reads from a socket
2022-08-11 18:56:32 and is started by the agent (after you add it to the config)
2022-08-11 18:58:16 mmhmm
2022-08-11 18:58:43 It's a common strategy, as dynload/dlopen is not really a thing
2022-08-11 19:01:19 I suppose it's not kept running, just executed when data is needed
2022-08-11 19:01:25 I don't see it in the process list
2022-08-11 19:01:48 makes sense
2022-08-11 20:09:11 https://gitlab.alpinelinux.org/alpine/infra/zabbix-agent2-plugins/-/merge_requests/1
2022-08-11 20:13:37 looks alright at a glance
2022-08-11 20:20:10 merged
2022-08-11 20:20:20 Now I need to update that package
2022-08-11 20:51:20 psykose: !37516
2022-08-11 21:03:42 lookin good
2022-08-12 07:08:17 ikke: how do i know if i would add a conflicting A record to the gitlab tf?
2022-08-12 08:17:03 for now i just added the hosts to linode dns directly, if you have time please add them to tf
2022-08-12 15:23:23 ikke: how big are the aports-turbo .sqlite databases
2022-08-12 15:24:31 23G in total
2022-08-12 17:23:52 ACTION sends a big hug to ikke
2022-08-12 17:24:27 😊
2022-08-12 17:25:20 nld should be in sync i believe
2022-08-12 17:25:28 the other 2 are ssllooww
2022-08-12 17:28:44 clandmeter: you need help adding the dns records to linode tf?
2022-08-12 17:29:00 yes i think that makes sense
2022-08-12 17:29:07 ill config the other two traefiks
2022-08-12 17:29:35 Note that, because you already added them to linode, we need to manually import them
2022-08-12 17:29:44 into the tf state
2022-08-12 17:29:56 i was a bit unsure about how to get it done
2022-08-12 17:30:38 https://gitlab.alpinelinux.org/alpine/infra/linode-tf/-/blob/master/domains_mirrors.tf
2022-08-12 17:30:48 Just copy the resource for an A record and adjust
2022-08-12 17:33:20 Do these machines also have IPv6 setup?
2022-08-12 17:38:51 yes they do
2022-08-12 17:39:21 ok, nice
2022-08-12 17:39:22 but im not sure about the anycast ip and how that would work
2022-08-12 17:39:34 I would expect it to work somewhat similarly
2022-08-12 17:40:19 How did you setup the anycast ip for ipv4?
2022-08-12 17:40:28 add it to lo
2022-08-12 17:40:49 aha, interesting
2022-08-12 17:41:10 i guess the routing is via the other ipv4 address
2022-08-12 17:42:20 I don't see it in the route table
2022-08-12 17:51:09 ok the setups should be ok
2022-08-12 17:51:15 although i cannot test it :)
2022-08-12 17:56:03 ikke: i modified the sync script on us and sg
2022-08-12 17:56:11 removed the delayed updates
2022-08-12 17:56:22 cause i think that will break if the sync is not complete
2022-08-12 17:56:39 so for the init sync let it run without that switch
2022-08-12 17:57:39 and we need to monitor if the mqtt-exec thingy actually works :)
2022-08-12 17:57:43 i need to go now
2022-08-12 18:25:22 alright
2022-08-13 07:55:30 gbr2 upgraded to alpine 3.16
2022-08-13 09:04:17 ikke: did or will you add the entries to tf?
2022-08-13 09:09:15 I did not do it yet
2022-08-13 09:16:39 ikke: are you ok with the hostname setup i chose?
2022-08-13 09:27:46 Didn't check it in detail yet
2022-08-13 09:41:27 I'll look at it in a bit
2022-08-13 11:41:06 psykose: I've fixed the agent2 setup on deu1
2022-08-13 13:23:43 ikke: sorry to annoy you, but I don't have anyone except you to ask: why doesn't this skip arm64: `[[ $arch = */${_carch}/ ]] && continue`
2022-08-13 13:23:51 in an ash script
2022-08-13 13:24:28 in an APKBUILD actually
2022-08-13 13:25:06 isn't it $CARCH
2022-08-13 13:25:17 test script is https://tpaste.us/Z4xl
2022-08-13 13:25:41 panekj: ^
2022-08-13 13:27:20 panekj: for kernels we set _carch differently from $CARCH
2022-08-13 13:28:03 on arm64 CARCH is aarch64 though the actual ARCH is arm64
2022-08-13 13:28:58 Is [[ even supported by ash? Maybe with the bash compat enabled
2022-08-13 13:28:59 */ doesn't expand
2022-08-13 13:29:04 But we avoid it
2022-08-13 13:29:31 mps: add set -o xtrace under #!
2022-08-13 13:29:35 hah, I thought it was supported
2022-08-13 13:29:57 Use case / esac
2022-08-13 13:30:09 ok, will recode it
2022-08-13 13:33:01 `case "$arch" in $_carch) continue ;; esac`
2022-08-13 13:33:57 uhm, this doesn't work either
2022-08-13 13:35:37 because $arch is not equal to $_carch
2022-08-13 13:35:57 if you add the xtrace opt and run it, it will show you what it actually evaluates
2022-08-13 13:36:22 oh, think I found the bug
2022-08-13 13:36:34 panekj: yes
2022-08-13 13:37:05 arch is actually ./arch/arm64/
2022-08-13 13:39:58 btw, we don't use linux-edge4virt on any infra machine?
2022-08-13 15:24:06 mps: not that I'm aware of
2022-08-13 15:24:13 most use lts, and in a few cases edge
2022-08-13 15:38:36 i'll restart deu1/7 in a bit
2022-08-13 15:42:01 ikke: ok, fine. I'm working on 'removal' of linux-edge4virt. will try to make linux-edge work fine in a VM
2022-08-13 20:24:49 ikke: did you check?
2022-08-13 20:25:11 clandmeter: I've been kinda busy today
2022-08-13 20:25:18 Just back
2022-08-13 20:27:33 np, just checking
2022-08-13 20:39:11 so nld.t1, usa.t1 and sgp.t1?
2022-08-13 20:40:42 and t1.a.o
2022-08-13 20:40:53 Looks fine to me
2022-08-13 20:41:36 not sure if it makes sense to host everything under *.mirror.a.o?
2022-08-13 20:50:42 clandmeter: https://gitlab.alpinelinux.org/alpine/infra/linode-tf/-/merge_requests/26
2022-08-13 20:59:21 Note that the pipeline now says they will be created, as they have not been imported yet.
2022-08-13 21:06:08 depends on what you want to use it for?
2022-08-13 21:16:22 This is fine
2022-08-13 21:16:33 I just like to overly group things
2022-08-14 09:02:41 ikke: so this mr will overwrite what is in linode?
2022-08-14 09:03:33 It depends on how exactly linode handles it, but right now it would try to add new records, as it's not aware of the existing records
2022-08-14 09:05:24 lets try?
2022-08-14 09:06:04 Sure. Note that after it's merged, you still need to run the final job manually, it will not start automatically
2022-08-14 09:06:43 which final job?
2022-08-14 09:06:50 In the pipeline for master
2022-08-14 09:07:13 Branch pipelines only show what will change, the pipeline for master contains the job that actually applies it
2022-08-14 09:07:46 you mean the deploy part?
2022-08-14 09:08:32 yes
2022-08-14 09:08:56 If you click on the play button, it will execute that job
2022-08-14 09:09:03 Which you just did :)
2022-08-14 09:09:28 yes i was just reviewing previous steps
2022-08-14 09:09:52 interesting, it succeeded :)
2022-08-14 09:10:20 It just created additional records
2022-08-14 09:10:57 So we now have those duplicated
2022-08-14 09:11:24 yup
2022-08-14 09:11:27 i see them, :)
2022-08-14 09:11:55 The ones with a default TTL are the ones that you created manually
2022-08-14 09:12:06 nod
2022-08-14 09:12:11 let me delete those
2022-08-14 09:13:12 gone
2022-08-14 09:13:31 interesting that it accepts them with the same value
2022-08-14 09:13:39 just a different ttl
2022-08-14 09:13:44 which is actually the same
2022-08-14 09:15:00 We should also clean up old records at some point
2022-08-14 09:15:04 a lot of stale ones
2022-08-14 10:07:05 there is an MR for it right?
2022-08-14 10:07:15 For some
2022-08-14 10:33:18 I've been on holiday for a while so I might have missed things. The riscv64 repos seem quite out of date, is the builder just slow or did something happen to it?
2022-08-14 10:33:35 Glad there is CI for it now though, even if it's emulated and just slow
2022-08-14 10:36:54 PureTryOut: it's blocking / hanging on rust all the time
2022-08-14 10:37:14 maybe now on gomplate
2022-08-14 10:38:22 but it being emulated does not help
2022-08-14 10:40:25 gomplate is still building
2022-08-14 10:46:32 yeah ok
2022-08-14 10:47:01 any chance on getting hardware at some point? some vendor that might want it?
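Circling back to the APKBUILD arch check from the 2022-08-13 exchange: POSIX ash has no `[[ ]]`, and the eventual bug was that $arch held a path like ./arch/arm64/. A sketch of the case/esac form that handles that (the function name and sample values are made up for illustration, not the actual APKBUILD):

```shell
#!/bin/sh
# should_skip: emulates the loop being debugged above; $1 is a path
# such as ./arch/arm64/, $2 the kernel arch (_carch) to skip.
should_skip() {
    case "$1" in
        */"$2"/) echo skip ;;   # glob pattern match works in plain ash
        *) echo keep ;;
    esac
}

should_skip ./arch/arm64/ arm64   # prints: skip
should_skip ./arch/x86/ arm64     # prints: keep
```

Quoting `"$2"` inside the pattern keeps any glob characters in the variable from being interpreted; the surrounding `*/` and trailing `/` do the path matching.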
2022-08-14 10:47:06 In other news, rv64 now supports rust
2022-08-14 10:47:26 PureTryOut: from what I understood, there simply does not exist any suitable hw yet
2022-08-14 10:47:35 everything is like rpi-grade hw
2022-08-14 10:47:38 darn
2022-08-14 10:47:45 I have one here
2022-08-14 10:50:42 oh, which one?
2022-08-14 10:50:54 starfive
2022-08-14 10:51:21 https://rvspace.org/
2022-08-14 10:52:40 PureTryOut: btw, there is also #alpine-riscv64 if you're interested
2022-08-14 10:53:00 oh yes, definitely am. Thanks!
2022-08-14 10:53:50 Are there more arch specific channels like that?
2022-08-14 10:54:39 I think the roma riscv notebook is on the horizon
2022-08-14 10:54:48 PureTryOut: not that I'm aware of
2022-08-14 10:56:33 server grade riscv could be expected next year
2022-08-14 10:58:47 ok thanks
2022-08-14 10:59:12 mps: https://www.theregister.com/2022/07/01/riscv_roma_laptop/ this one?
2022-08-14 10:59:42 yes
2022-08-14 11:00:08 ok interesting
2022-08-14 11:16:21 mps: For some reason, my wg tunnels never work after boot, I always have to restart them
2022-08-14 11:16:26 did you notice that?
2022-08-14 11:16:39 (apparently due to it not being able to resolve wg.a.o on boot)
2022-08-14 11:16:49 hm
2022-08-14 11:17:19 I usually start it from /etc/local.d/
2022-08-14 11:18:00 but on my other machines it works from /etc/network/interfaces, not related to alpine infra
2022-08-14 11:19:04 for alpine infra I keep keys and config in a 'secure' place, not on the rootFS
2022-08-14 12:55:29 do 'we' have an x86_64 qemu VM on infra which is booted with the ovmf -bios parameter
2022-08-14 13:24:08 I don't think we have any x86_64 vms
2022-08-14 13:34:23 ikke: ok, thanks for the info
2022-08-14 13:35:04 I can't start the alpine x86_64 virt iso with ovmf
2022-08-14 13:35:27 armv7 and aarch64 work fine
2022-08-14 19:57:44 ikke: the aarch64 ci seems to take like 5 minutes just to self-upgrade or install makedepends
2022-08-14 19:58:34 hmm
2022-08-14 20:03:42 ci vm is not busy or something
2022-08-14 20:44:19 same goes for the other arm ones really
2022-08-14 20:44:24 just looks like some 10kb/s networking
2022-08-14 20:46:28 https://tpaste.us/4oZX
2022-08-14 20:46:49 that's on the aarch64 runner host
2022-08-14 20:48:02 dunno what you want me to say :) https://gitlab.alpinelinux.org/alpine/aports/-/jobs/802238
2022-08-14 20:50:59 psykose: I saw it
2022-08-14 20:51:17 Just trying to figure out what's going on
2022-08-14 20:53:15 Mind if I restart docker on the host?
2022-08-14 20:56:52 go for it
2022-08-14 20:56:58 nothing important rn
2022-08-14 23:45:26 still the same, if you changed anything
2022-08-15 08:40:42 linux-tools fails on some arches because there is eudev-dev in cache, iiuc
2022-08-15 08:55:51 nah, eudev-dev is pulled in through pcsc-lite. That one however is in cache. Remind me, when are we switching to rootbld again? 😅
2022-08-15 09:02:07 PureTryOut: when it is actually suitable to run on the builders
2022-08-15 09:02:56 in what way is it not?
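The /etc/local.d/ workaround for the wireguard-at-boot problem mentioned above could look roughly like this. This is a hypothetical sketch, not the actual script used: the interface name wg0, the retry counts, and the use of wg-quick are all assumptions.

```shell
#!/bin/sh
# /etc/local.d/wg.start (sketch): wait until the endpoint hostname
# resolves before bringing the tunnel up, since DNS may not be ready
# when networking starts at boot.

wait_for_dns() {
    # $1 = hostname, $2 = max attempts, one try every 2 seconds
    _i=0
    while [ "$_i" -lt "$2" ]; do
        nslookup "$1" >/dev/null 2>&1 && return 0
        _i=$((_i + 1))
        sleep 2
    done
    return 1
}

if wait_for_dns wg.alpinelinux.org 30; then
    wg-quick up wg0   # interface name is a placeholder
else
    echo "wg.alpinelinux.org did not resolve, not starting tunnel" >&2
fi
```

For this to run at boot, the `local` service must be enabled (`rc-update add local default`) and the script must be executable.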
2022-08-15 11:29:49 we're missing "net" on probably almost all go/rust aports
2022-08-15 11:30:13 (or a proper way of fetching stuff)
2022-08-15 11:52:56 Is that a problem though? Easy enough to add if a builder fails on it
2022-08-15 11:53:01 And CI could be changed first to catch it
2022-08-15 13:32:47 ikke i think the new t1's are ready
2022-08-15 13:32:51 can you add them to monitoring?
2022-08-15 13:33:52 when you think they are healthy, we can decommission the old mirrors
2022-08-15 13:35:05 we probably need to reconfig fastly
2022-08-15 13:35:35 or maybe just point the current hostnames via cname to the new one
2022-08-15 13:36:03 and we need to update rsync.a.o to point to t1.a.o
2022-08-15 13:39:34 I think we can just remove all the *.alpinelinux.org mirrors and keep only dl-cdn and rsync.a.o
2022-08-15 13:47:42 They are still used
2022-08-15 13:48:02 nl.a.o dl-*.a.o
2022-08-15 13:48:11 I see them regularly popping up
2022-08-15 14:02:36 clandmeter: maybe we should make an announcement somewhere that these mirrors are deprecated?
2022-08-15 14:03:14 lets just keep the records, but remove them from resources?
2022-08-15 14:04:18 From what resources?
2022-08-15 14:04:31 mirrors.a.o i.e.
2022-08-15 14:04:35 Yes
2022-08-15 14:04:46 thats it mostly i guess
2022-08-15 14:04:51 Ahuh
2022-08-15 14:05:07 and whenever we bump into it we replace it with cdn
2022-08-15 14:06:17 i think this was already our procedure, so not much changes regarding that.
2022-08-15 14:06:20 just mirrors for now
2022-08-15 14:07:01 would be nice to get some stats per region
2022-08-15 14:18:35 Install vnstat?
2022-08-15 14:29:33 what about zabbix?
2022-08-15 14:37:38 be sure to sweep the wiki for those subdomains
2022-08-15 14:41:12 clandmeter: I'll add it
2022-08-15 15:32:12 ikke: you could set CI to run rootbld
2022-08-15 15:45:34 Not how it works currently
2022-08-15 17:36:29 clandmeter: any idea why /dev/null is only readable by root on usa.t1 and sgp.t1?
2022-08-15 17:36:50 is it even a character device? 2022-08-15 17:37:01 i've had some weird stuff where something deleted devnull before 2022-08-15 17:37:37 yes, it is 2022-08-15 17:39:24 strange 2022-08-15 17:39:32 something probably just chmod'd it 2022-08-15 17:40:39 same for /dev/full 2022-08-15 17:50:52 clandmeter: I've added the t1 servers to Zabbix 2022-08-15 17:51:05 clandmeter: Is there anything specific you wanted to monitor? 2022-08-15 19:33:15 ikke: some regular stats i guess? 2022-08-15 19:33:19 like bw 2022-08-15 19:33:22 Yes, I have the basic linux monitoring 2022-08-15 19:33:24 disk usage should not be an issue 2022-08-15 19:33:27 :) 2022-08-15 19:33:32 clandmeter: I may hope not 2022-08-15 19:34:40 i was just wondering how well the bw would be distributed 2022-08-15 19:35:23 Probably still lopsided 2022-08-15 19:35:26 But we'll see 2022-08-15 19:36:38 clandmeter: do you have an idea why permissions in /dev may be messed up slightly? 2022-08-15 19:36:49 zabbix agent failed because it could not read /dev/null 2022-08-15 19:36:52 on sgp and usa 2022-08-15 19:37:14 interesting 2022-08-15 19:37:26 well these servers are generated from beta images 2022-08-15 19:37:50 so anything special you bump into we should add to a ticket 2022-08-15 19:37:58 then i will follow that up 2022-08-15 19:38:11 But it's not consistent 2022-08-15 19:38:23 what is the real problem? 2022-08-15 19:38:34 i mean what permissions does it have? 2022-08-15 19:38:39 and what is expected 2022-08-15 19:38:44 660 root:root 2022-08-15 19:38:50 666 root:root 2022-08-15 19:39:01 and nld does not have this issue? 2022-08-15 19:39:04 No 2022-08-15 19:39:09 hmm 2022-08-15 19:39:31 I also noticed that /dev/console has group tty on nld, but not on the others 2022-08-15 19:39:37 and probably more differences 2022-08-15 19:40:29 clandmeter: what was your idea to use as backend for dl-cdn? 2022-08-15 19:40:33 all 3 servers? 2022-08-15 19:40:55 ok can you write them down somewhere?
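A quick diagnostic for the /dev/null problem above: the expected state is a character device with mode 666 owned by root:root; the sgp.t1/usa.t1 hosts showed 660, which is why the non-root zabbix agent could not open it.

```sh
# Verify /dev/null is a character device with the expected 666 mode.
stat -c '%a %U:%G %F' /dev/null
# expected: 666 root:root character special file

# If something chmod'd it (as apparently happened on sgp.t1/usa.t1),
# restore as root:
#   chmod 666 /dev/null /dev/full
```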
then i will do a fresh install of alpine on a new machine and see if its from the image or post install 2022-08-15 19:40:57 or just t1.a.o and let anycast do the rest 2022-08-15 19:41:06 good question 2022-08-15 19:41:21 not sure if anycast can cause issues 2022-08-15 19:41:32 what happens if one of them is out of sync 2022-08-15 19:41:56 I suppose that is something we should monitor 2022-08-15 19:42:24 we could try the anycast hostname and see what happens 2022-08-15 19:42:33 https://zabbix.alpinelinux.org/zabbix.php?action=map.view&sysmapid=7 2022-08-15 19:42:53 I didn't adjust the bottom part yet 2022-08-15 19:43:04 is it master or dl-master? 2022-08-15 19:43:12 Both records exist 2022-08-15 19:43:25 i thought dl-master was the "official" naming 2022-08-15 19:43:45 as master could be.... confusing in other concepts 2022-08-15 19:43:53 and master is bad naming ; 2022-08-15 19:43:56 ;-) 2022-08-15 19:44:26 anyways just keep it, master should be working. 2022-08-15 19:44:57 I guess we cname rsync to t1. 
2022-08-15 19:45:10 Yes, I think so 2022-08-15 19:46:09 remove the other hosts from mirrors and cname them to t1 also or if they didnt support rsync we could cname them to cdn 2022-08-15 19:46:52 Maybe we start with moving rsync 2022-08-15 19:50:41 before we do that, first let them sync for a day or 2 2022-08-15 19:50:46 then i will check if all is ok 2022-08-15 19:51:08 i made some modifications to the compose setup for mirror 2022-08-15 19:51:19 i hope i didnt break anything 2022-08-15 19:51:28 i think it was semi broken 2022-08-15 19:53:58 Ok 2022-08-15 19:54:10 I just added standard mirror monitoring to the hosts in zabbix 2022-08-15 19:54:18 :D 2022-08-15 19:54:42 Need to set a macro 2022-08-15 20:04:42 clandmeter: nld.t1 was last updated 7h ago apparently 2022-08-15 20:05:00 ok let me check 2022-08-15 20:05:50 same with the other 2 2022-08-15 20:06:02 15:00 utc was last time 2022-08-15 20:07:09 that is weird 2022-08-15 20:07:13 i see it has new files 2022-08-15 20:07:37 what do you reference for time? 2022-08-15 20:07:42 last-update file 2022-08-15 20:07:49 last-updated* 2022-08-15 20:08:10 what is the uptime time on that one? 2022-08-15 20:08:17 updatetimer* 2022-08-15 20:08:28 every hour or so? 2022-08-15 20:08:34 5m 2022-08-15 20:08:50 when the file gets updated? 2022-08-15 20:09:02 or when you check it? 2022-08-15 20:09:07 when I check it 2022-08-15 20:09:26 last updated is 1300 2022-08-15 20:11:01 and its set to 20:00 atm 2022-08-15 20:11:08 so i guess it gets updated every hour 2022-08-15 20:11:19 there is an hourly cron indeed on dl-master 2022-08-15 20:11:37 i have a cron that does a complete sync every hour 2022-08-15 20:11:43 i guess that is failing 2022-08-15 20:11:51 currently it specifically updates the repo that is updated 2022-08-15 20:11:56 right 2022-08-15 20:11:59 not the whole mirror 2022-08-15 20:12:05 based on msg.a.o announcements?
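The staleness check being discussed (comparing the mirror's last-updated file against the clock) can be sketched as a small probe; the hostname, path, and 2-hour threshold here are assumptions for illustration, not the actual Zabbix item:

```sh
# Hedged sketch: warn if a mirror's last-updated epoch timestamp is too old.
url="https://nld.t1.alpinelinux.org/alpine/last-updated"   # assumed location
age=$(( $(date +%s) - $(wget -qO- "$url") ))
[ "$age" -lt 7200 ] || echo "mirror stale: last sync ${age}s ago"
```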
2022-08-15 20:12:09 yup 2022-08-15 20:12:23 just need to figure out why it does not update every hour 2022-08-15 20:12:47 i would prefer to have an additional container that could trigger a full sync in the mqtt-exec container 2022-08-15 20:13:36 Not easy without network, I suppose 2022-08-15 20:14:04 interesting, it does work when i run run-parts manually 2022-08-15 20:14:15 maybe it fails due to missing tty or something 2022-08-15 20:14:41 What would require a tty? 2022-08-15 20:14:53 docker compose exec 2022-08-15 20:15:01 ah, you need -T 2022-08-15 20:15:08 thats what i have :) 2022-08-15 20:15:37 heh 2022-08-15 20:15:45 crond is not running 2022-08-15 20:15:50 ok, that does not help 2022-08-15 20:16:04 but what _is_ running then? 2022-08-15 20:16:32 what do you mean what is running? 2022-08-15 20:16:50 oh, you run this on the host I guess 2022-08-15 20:17:05 yes 2022-08-15 20:17:18 btw, do you know tmux command set synchronize-panes? 2022-08-15 20:17:18 its the only interface to run commands in a container 2022-08-15 20:17:49 ACTION mumbles something about a socket :P 2022-08-15 20:19:10 did you start crond on usa already? 2022-08-15 20:19:27 I did not 2022-08-15 20:19:43 I'm fumbling with Zabbix 2022-08-15 20:19:52 zabbix is not in default runlevel 2022-08-15 20:20:16 oh, right 2022-08-15 20:20:18 forgot that 2022-08-15 20:21:37 ok they all run crond now :) 2022-08-15 20:21:49 need to add that to my list of fixes 2022-08-15 20:22:00 i guess we enable that on default installs iirc 2022-08-15 20:23:50 https://zabbix.alpinelinux.org/zabbix.php?action=dashboard.view&dashboardid=12 2022-08-15 20:24:01 clandmeter: yes, we do 2022-08-15 20:26:04 clandmeter: should we make a ticket on gitlab to keep track of these issues?> 2022-08-15 20:26:21 which issues? 
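The -T detail above matters because cron provides no TTY, so `docker compose exec` must be told not to allocate one. A hypothetical host-side crontab entry for the hourly full sync (compose path, service name, and script name are invented for illustration):

```sh
# /etc/crontabs/root on the t1 host (illustrative):
# -T disables pseudo-TTY allocation, which cron cannot provide.
0 * * * *	cd /srv/mirror && docker compose exec -T sync /usr/local/bin/full-sync
```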
2022-08-15 20:26:48 equinix alpine installation issues 2022-08-15 20:27:54 yes 2022-08-15 20:27:58 i just created the first one :) 2022-08-15 20:28:06 https://gitlab.alpinelinux.org/clandmeter/alpine-disk-image/-/issues 2022-08-15 20:28:51 if you can add the permission issues, then ill check that later. 2022-08-15 20:29:38 Yes, was working on it 2022-08-15 20:31:25 there is another strange behaviour i see different from nld and the other two 2022-08-15 20:31:49 when i do docker compose exec, the output on nld is nice with correct line breaks 2022-08-15 20:32:03 on the other two its scrolling from left to right without linebreaks 2022-08-15 20:32:57 ah the version is different 2022-08-15 20:33:22 ah and i upgraded nld to latest alpine version 2022-08-15 20:44:26 ikke: are you ok if I upgrade those other two boxes? 2022-08-15 20:44:46 clandmeter: fine with me 2022-08-15 20:44:59 you only changed dev/null right? 2022-08-15 20:45:02 yes 2022-08-15 20:46:33 i think i see whats going on 2022-08-15 20:46:37 there is a new mdev.conf 2022-08-15 20:48:05 they replace the whole file with just one line 2022-08-15 21:03:04 clandmeter: btw, I'm thinking about setting up our own registry 2022-08-15 21:03:15 just for infra images initially 2022-08-15 21:03:19 you mentioned it before yes 2022-08-15 22:05:52 clandmeter: did you do that? 2022-08-15 22:06:02 yup 2022-08-15 22:06:06 ok 2022-08-15 22:06:38 equinix has a bug with mdev.conf, issue is filed. 2022-08-15 22:06:43 i fixed it manually 2022-08-15 22:06:48 i hope 2022-08-15 22:06:52 ok 2022-08-16 02:08:01 ikke: was the anitya-watch script from aports-turbo also updated? it flagged libsoup3 as outdated, even though that version is pre-release 2022-08-16 02:19:06 https://apps.fedoraproject.org/datagrepper/v2/id?id=4673d17c-4791-4719-bfcc-ee0a3ff17bf1&is_raw=true&size=extra-large 2022-08-16 02:19:07 https://pkgs.alpinelinux.org/flagged?origin=libsoup3 2022-08-16 07:30:52 ikke: usa5 crash again?
2022-08-16 07:31:25 Looks like it 2022-08-16 07:31:47 I was just thinking how it has been stable lately 🤔 2022-08-16 07:32:05 wonder what we should do about riscv rust 2022-08-16 07:33:12 we should move the builder to lxc 2022-08-16 07:33:17 and update qemu 2022-08-16 07:33:33 see if that makes a difference 2022-08-16 07:33:44 i guess the issue is qemu related? 2022-08-16 07:50:01 looks like usa5 is still working 2022-08-16 07:52:02 something is wrong with network 2022-08-16 07:54:33 clandmeter: it gets in some deadlocked state. Often I can still login via console, but then certain commands make it completely blocked 2022-08-16 07:54:50 i dont think that's the case now 2022-08-16 07:54:53 Ok 2022-08-16 07:55:08 see some weird msg in ringbuffer 2022-08-16 07:55:14 something with the bond 2022-08-16 07:55:30 and its not related to james 2022-08-16 07:55:49 yikes now the kernel is acting up 2022-08-16 07:56:49 interesting, ringbuffer initially didnt mention anything 2022-08-16 07:56:56 but trying some stuff made it crash 2022-08-16 07:57:47 Heh 2022-08-16 09:49:51 ikke: now that we got those 3 t1 boxes up and running 2022-08-16 09:49:57 we need to think about migrating the rest of the infra 2022-08-16 09:50:04 Indeed 2022-08-16 09:50:37 i guess the mirrors are up to date now 2022-08-16 09:50:44 i dont see any red flags 2022-08-16 10:04:19 ikke: did you verify if all devices are ok now? 2022-08-16 10:20:16 clandmeter: yes, that looks better 2022-08-16 10:21:25 https://zabbix.alpinelinux.org/zabbix.php?action=dashboard.view&dashboardid=12 2022-08-16 13:01:59 `+NAME = Hurr durr I'ma ninja sloth` https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v6.0-rc1 :D 2022-08-16 13:19:56 Heh 2022-08-16 15:37:34 ptrc: it should be all deployed at the same time, part of the same source 2022-08-16 19:56:11 clandmeter: https://gitlab.alpinelinux.org/alpine/infra/mirrors/-/merge_requests/3 2022-08-16 21:21:22 looks like already gone :) 2022-08-16 21:26:10 ?
2022-08-16 21:27:03 aah 2022-08-17 07:53:31 ikke: https://feedback.equinixmetal.com/networking-features/p/global-ipv6-anycast-support 2022-08-17 08:09:53 So no support yet, then 2022-08-17 08:10:12 yes no support yet 2022-08-17 08:10:27 😉 2022-08-17 13:55:17 uh CI on https://gitlab.alpinelinux.org/alpine/infra/docker/build-base isn't running for me 2022-08-17 13:58:15 for example, https://gitlab.alpinelinux.org/PureTryOut/build-base/-/jobs/806045 2022-08-17 13:58:26 > This job is stuck because of one of the following problems. There are no active runners online, no runners for the protected branch, or no runners that match all of the job's tags: docker-alpine x86_64 2022-08-17 15:37:49 i think we may need alpinelinux/alpine-gitlab-ci:3.15-x86 2022-08-17 15:38:07 Hmm, ok. Specific reason? 2022-08-17 15:38:21 seems like it fails to downgrade? https://gitlab.alpinelinux.org/J0WI/aports/-/jobs/806043 2022-08-17 15:38:39 ERROR: libcrypto1.1-1.1.1q-r0: trying to overwrite etc/ssl1.1/cert.pem owned by ca-certificates-bundle-20220614-r2. 2022-08-17 15:39:49 Ok, not specific to x86 2022-08-17 15:41:04 i think we need alpine-gitlab-ci:3.[345]-* 2022-08-17 15:41:18 docker images for version branch 2022-08-17 15:41:48 Right, I've been trying to avoid that if possible :) 2022-08-17 15:42:17 But if necessary, I'll work on that 2022-08-17 15:42:34 other option is to add a replaces in libcrypto1.1 2022-08-17 15:44:28 PureTryOut: you need to enable shared runners here: https://gitlab.alpinelinux.org/PureTryOut/build-base/-/settings/ci_cd 2022-08-17 15:44:32 I can do that for you 2022-08-17 15:57:12 ikke: do you have other ideas how to solve it? I think the proper thing is to have docker images per release branch 2022-08-17 15:57:15 ncopa: oh thanks, that's a TIL. That wasn't needed for aports 🤔 2022-08-17 15:57:33 PureTryOut: was this an old fork?
2022-08-17 15:57:35 PureTryOut: someone probably enabled it for you 2022-08-17 15:57:48 New forks should have it enabled by default 2022-08-17 15:58:20 aports? Yeah think so, from first day we switched to Gitlab from Github 2022-08-17 15:58:26 ncopa: well, it was nice being able to properly downgrade 2022-08-17 15:58:34 PureTryOut: build-base 2022-08-17 15:58:42 Oh no that was newly made 2022-08-17 15:58:47 hmm, ok 2022-08-17 15:59:41 oh, it's because projects under the docker namespace use group runners 2022-08-17 15:59:50 not shared runners 2022-08-17 16:00:44 ncopa: having images per release complicates things a bit, making sure it's supported for every new release etc 2022-08-17 16:00:55 understand 2022-08-17 16:01:26 But if necessary, we can make it happen 2022-08-17 16:01:49 do you have other ideas how to fix it? 2022-08-17 16:02:07 other than the replaces=? 2022-08-17 16:02:16 we just add a replaces=ca-certificates-bundle in libcrypto1.1? 2022-08-17 16:06:11 i think we may need a replaces in ca-certificates-bundle as well 2022-08-17 16:07:04 It's specifically an issue between edge and 3.15 2022-08-17 16:07:17 oh, its edge.. 2022-08-17 16:07:22 yes 2022-08-17 16:07:31 3.16 does not have /etc/ssl1.1 2022-08-17 16:07:32 i saw alpine....:latest-x86 2022-08-17 16:07:39 https://pkgs.alpinelinux.org/contents?file=certs&path=%2Fetc%2Fssl1.1&name=&branch=v3.16 2022-08-17 16:07:49 and thought it was 3.16 2022-08-17 16:11:10 ERROR: ca-certificates-bundle-20220614-r2: trying to overwrite etc/ssl1.1/cert.pem owned by libcrypto1.1-1.1.1q-r0. 2022-08-17 16:11:10 ERROR: ca-certificates-bundle-20220614-r2: trying to overwrite etc/ssl1.1/certs owned by libcrypto1.1-1.1.1q-r0. 2022-08-17 16:11:31 yup. we need a replaces 2022-08-17 16:12:22 ncopa: I think alpine is the exception where :latest does not mean actual latest :P 2022-08-17 16:18:18 ok, i have another workaround: apk add --upgrade .....
|| apk fix 2022-08-17 16:19:28 i think we can do that in our alpine-gitlab-ci image 2022-08-17 16:20:33 right 2022-08-18 00:12:20 that's not a very good workaround because it's useful the other 95% of the time on edge to find things that actually conflict 2022-08-18 00:12:31 and fix them 2022-08-18 15:25:52 cleanup time ^ :) 2022-08-20 07:25:21 what happened with abuild rootpkg. I always get `make: *** No rule to make target 'install'. Stop.` 2022-08-20 07:26:52 nothing happened to it 2022-08-20 07:27:43 ok, what is cause then 2022-08-20 07:28:08 No idea? 2022-08-20 07:28:32 just did 'apk upgrade' nothing else 2022-08-20 07:54:47 that sounds like broken package() 2022-08-20 07:55:26 what you are trying to rootpkg doesn't have 'install' in Makefile 2022-08-20 07:55:27 or builddir 2022-08-20 08:11:29 panekj: tried with more pkgs 2022-08-20 08:12:46 what package 2022-08-20 08:13:08 haproxy for example 2022-08-20 08:14:02 https://img.ayaya.dev/f30EO4qfqXDM 2022-08-20 08:16:28 I see, but doesn't work in my dev lxc 2022-08-20 08:17:10 something in your environment? 2022-08-20 08:17:38 nothing changed for long 2022-08-20 11:08:48 Newbyte: ping 2022-08-20 11:09:54 #13877 2022-08-20 11:10:55 ping for what 2022-08-20 11:11:43 psykose: ^ 2022-08-20 11:11:53 issue above 2022-08-20 11:11:56 you can respond in the issue page? 2022-08-20 11:12:18 and you think I don't know that? ;p 2022-08-20 11:12:22 apparently? 2022-08-20 22:40:00 mps: fyi, you forgot to change the filename of the asahi tarball 2022-08-21 07:27:50 ptrc: I see. forgot to copy local changes to dev lxc, thank you for noticing 2022-08-25 18:06:51 gitlab security announcement 2022-08-25 18:08:07 https://about.gitlab.com/releases/2022/08/22/critical-security-release-gitlab-15-3-1-released 2022-08-25 18:10:41 time for a bunch of upgrades 2022-08-25 18:14:46 I don't have access to a pc :-) 2022-08-25 18:16:07 if only i did :p 2022-08-25 18:16:47 what is 'pc'? 
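The `apk add --upgrade ... || apk fix` workaround discussed above (for the etc/ssl1.1/cert.pem ownership conflict between edge and 3.15 images) can be sketched like this; the package names are illustrative, not the actual alpine-gitlab-ci package list:

```sh
# Try the normal upgrade first; when a file-ownership conflict such as
# "trying to overwrite etc/ssl1.1/cert.pem" aborts it, fall back to
# `apk fix`, which reinstalls the affected packages.
apk add --upgrade build-base git || apk fix
```

As noted in the log, this hides genuine file conflicts on edge the other 95% of the time, so it is a CI convenience rather than a general fix.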
2022-08-26 07:56:50 Linode had issues again in gbr 2022-08-26 07:58:55 yep 2022-08-26 07:58:59 didnt last too long 2022-08-26 11:34:46 uh, CI seems broken. It can't find any changes and thus all arches are failing. https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/38246/pipelines 2022-08-26 11:35:56 no, someone just pushed a syntax error without checking it 2022-08-26 11:36:20 (is fixed) 2022-08-26 11:39:38 ah ok 2022-08-31 14:29:58 Hmm, we now have a deadline for the equinix legacy bare-metal servers. So we need to make an effort to migrate our infra 2022-08-31 14:30:27 November 30th 2022-08-31 14:30:53 clandmeter: ^