2021-10-01 08:37:46 where can i find the build-edge-riscv64? 2021-10-01 08:38:02 would it be an idea to add it to build-edge-riscv64.alpin.pw? 2021-10-01 09:32:22 ikke: clandmeter: https://github.com/naggie/dsnet 2021-10-01 09:32:57 wireguard tool to manage hub 2021-10-01 09:33:20 and we have !26020 MR 2021-10-01 09:34:34 I put it on hold to check and test it later today 2021-10-01 11:19:39 ncopa: its a docker container 2021-10-01 11:30:00 ncopa: the host is usa5-dev1.alpinelinux.org 2021-10-01 11:31:26 gre interface should be 172.16.0.9 2021-10-03 07:10:15 I got this on x86_64 developer lxc https://tpaste.us/mqvg 2021-10-03 07:12:11 same from aarch64 lxc 2021-10-03 07:14:33 looking on url I see certificate is not expired 2021-10-03 07:18:56 hmm, strange. FF connects ok 2021-10-03 07:20:16 looks like problem is on some of the mirrors of the https://invisible-mirror.net/ 2021-10-03 07:29:23 Does it still have an old certificate bundle? 2021-10-03 07:29:38 I mean, the container 2021-10-03 07:30:20 An old LE root cert expired 3 days ago 2021-10-03 07:37:33 looks like, but not sure because FF says that cert expires on Oct 30 2021 2021-10-03 07:38:02 openssl s_client shows that cert expired 2021-10-03 07:38:25 That cert, or one of the certs in the chain? 2021-10-03 07:38:36 cert 2021-10-03 07:39:07 openssl s_client -connect 160.153.42.69:443 -servername invisible-mirror.net 2021-10-03 07:39:39 'notAfter=Oct 30 17:19:46 2021 GMT' 2021-10-03 07:40:01 It's not Oct 30 yet 2021-10-03 07:40:13 yes 2021-10-03 07:40:31 but download doesn't work 2021-10-03 07:40:45 What does the chain look like? 2021-10-03 07:42:01 https://tpaste.us/REqo 2021-10-03 07:42:21 https://tpaste.us/9P97 2021-10-03 07:43:38 how did you got chain 2021-10-03 07:45:15 I'm not sure is the problem on their or our side 2021-10-03 07:45:17 openssl s_client returned it, below the part you pasted 2021-10-03 07:45:25 ah 2021-10-03 07:45:36 I think they have the expired intermediate in their bundle 2021-10-03 07:47:29 hmm, could we add locally this intermediate to overcome this 2021-10-03 07:47:46 there will be more sites with this issue I think 2021-10-03 07:48:32 I didn't thoroughly analyzed this 2021-10-03 07:49:01 The operators of invisible-mirror.net need to fix this 2021-10-03 07:49:20 I thought that should be fixed on clients 2021-10-03 07:49:40 hmm, should I write to upstream to fix this 2021-10-03 07:49:42 They serve the expired intermediate 2021-10-03 07:49:57 clients typically only have root certificates 2021-10-03 07:51:03 yes, I had to add commercial certs to one of my client servers few days ago 2021-10-03 07:52:16 ikke: iirc you have good urls with explanation of this? 
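For anyone debugging the same thing, the chain a server actually sends can be split out and checked certificate by certificate; a rough sketch (the awk splitting is just one convenient way to do it):

    # dump every certificate the server presents (leaf first) into cert1.pem, cert2.pem, ...
    openssl s_client -connect invisible-mirror.net:443 \
        -servername invisible-mirror.net -showcerts </dev/null 2>/dev/null |
      awk '/BEGIN CERTIFICATE/{n++; f="cert" n ".pem"} /BEGIN CERTIFICATE/,/END CERTIFICATE/{print > f}'
    # print subject/issuer/expiry for each one; an expired intermediate shows up here
    for c in cert*.pem; do
        echo "== $c"
        openssl x509 -in "$c" -noout -subject -issuer -enddate
    done

An expired certificate in the middle of that list points at the server's bundle rather than at the client.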
2021-10-03 07:52:41 Not from the top of my head 2021-10-03 07:53:00 would be nice to have them when reporting this issue to upstreams when we see these problems 2021-10-03 07:53:30 ok, I have basic one 2021-10-03 07:53:58 https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/ 2021-10-03 07:54:39 so, FF fixed this somehow in code, I presume 2021-10-03 07:56:08 hmm, /etc/ssl/certs/ca-cert-DST_Root_CA_X3.pem exists 2021-10-03 07:57:44 yes, but have this 'Not After : Sep 30 14:01:15 2021 GMT' 2021-10-03 07:57:58 So it's expired 2021-10-03 07:58:17 hmm, should we upgrade it 2021-10-03 07:58:23 yes 2021-10-03 07:58:29 That one should be removed 2021-10-03 07:58:35 I thought ours was new enough, but apparently is not 2021-10-03 07:58:53 our certs lags behind 'a lot' 2021-10-03 07:59:32 I wanted to take care for them but didn't had enough time, and without time for such things is not good idea 2021-10-03 08:03:22 we are using mozilla ones? 2021-10-03 08:05:14 yes 2021-10-03 08:05:27 make update in the ca-certifactes project updates them 2021-10-03 08:06:28 looking this https://github.com/archlinux/svntogit-packages/blob/packages/nss/trunk/PKGBUILD 2021-10-03 08:06:43 https://gitlab.alpinelinux.org/alpine/ca-certificates 2021-10-03 08:06:57 yes, I have it opened 2021-10-03 08:07:39 so we download them and keep in our archives 2021-10-03 08:08:02 in gitlab 2021-10-03 08:08:10 Seems like make update does not remove the DST root 2021-10-03 08:10:48 I still think the issue is that they provide an expired intermediate 2021-10-03 08:11:08 also 2021-10-03 08:11:30 but how we could fix the 'Internet' 2021-10-03 08:11:57 We can and should not 2021-10-03 08:12:22 We need to make sure our ca-certificates is up-to-date 2021-10-03 08:12:33 all 'clients' had more than a year to fix this on their side but I know not all done this 2021-10-03 08:12:46 agree 2021-10-03 08:14:29 openssl s_client -connect invisible-mirror.net:443 -showcerts 2021-10-03 08:14:46 so who of two of us will fix this 2021-10-03 08:15:18 I can make an MR to update ca-certificates 2021-10-03 08:15:58 good, and thanks 2021-10-03 08:16:34 alpine would die without you 2021-10-03 08:23:01 looks like wireguard could solve this CA mess for internet 2021-10-03 08:25:43 question is then: who will hijack it as happened with CA 2021-10-03 08:26:17 The problem is how to establish trust 2021-10-03 08:26:46 heh, I don't trust current CA system 2021-10-03 08:28:33 I have all this 'solutions' in my head but didn't even tried to write anything because I know it is doomed, clean and simple solutions doesn't return big money 2021-10-03 08:29:41 how many times CA system is intentionally made broken for someone benefits 2021-10-03 08:30:32 (better to think about breakfast now :) ) 2021-10-03 08:57:15 mps: if you look here: https://crt.sh/?caid=183267 You see that this CA has 3 intermediate certificates, where 2 are expired. invisible-mirrors apparently offers one of the expired ones. 2021-10-03 09:08:17 ikke: yes, I saw this with s_client, but this site is more comprehensive and detailed. thanks for url, good to have it in notes 2021-10-03 15:42:53 ikke: I reported problem upstream, and Thomas (ncurses developer) fixed it. 2021-10-03 15:43:52 cool 2021-10-04 16:06:30 looks like a big outage at fb 2021-10-04 16:06:57 anything else except fb/wa/insta? 
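Back on the client side of the expired-root discussion above: on Alpine the offending root is also installed as an individual file, so its dates can be checked directly, and the same host can be re-tested once ca-certificates has been updated:

    # the split-out root mentioned above; notAfter shows whether it has expired
    openssl x509 -in /etc/ssl/certs/ca-cert-DST_Root_CA_X3.pem -noout -subject -enddate
    # after upgrading ca-certificates, re-check verification against the problem host
    openssl s_client -connect invisible-mirror.net:443 -servername invisible-mirror.net \
        </dev/null 2>&1 | grep 'Verify return code'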
2021-10-04 16:12:44 Only noticed FB 2021-10-04 16:12:48 (ie, whatsapp) 2021-10-04 16:14:37 sometimes bad news are good news :D 2021-10-04 16:15:38 i just noticed my whatsapp msgs didnt work 2021-10-04 16:16:00 now i read all fb related services are MIA 2021-10-04 16:16:41 "I'm seeing similar DNS errors for many non-Facebook sites." 2021-10-04 17:19:15 auch: https://www.reddit.com/r/sysadmin/comments/q181fv/looks_like_facebook_is_down/hfda42z/?utm_source=reddit&utm_medium=web2x&context=3 2021-10-04 17:19:44 apparently a BGP issue, and the people who know how to fix it cannot access it 2021-10-04 17:24:11 sounds fun 2021-10-04 17:30:54 there are 3 types of nerds: (1) those with console access, (2) those with authentication knowledge, and (3) those who understand BGP :-) 2021-10-04 17:44:21 https://twitter.com/GossiTheDog/status/1445063880963674121 2021-10-04 17:48:30 reddit comment has been deleted now 2021-10-04 18:00:15 ikke: not just the comment, the poster's reddit account is deleted 2021-10-04 18:04:04 oh lol 2021-10-04 18:04:22 It was not the most representable username 2021-10-04 18:07:24 I'm also assuming (if he is indeed a FB employee) he didn't have management approval to post that statement and is now trying to "hide" from any mole hunt lol 2021-10-04 18:15:31 ahuh 2021-10-04 18:15:37 though, probably a bit too late for that now 2021-10-04 18:20:54 depends on whether he used "login with Facebook" for his Reddit account lol 2021-10-04 18:42:48 hi, jirutka noted that the master mirror is not syncing with dl-cdn.alpinelinux.org 2021-10-04 18:52:20 🀐 2021-10-04 18:58:16 Ariadne: does jirutka read this channel? 2021-10-04 18:58:19 oh 2021-10-04 18:58:26 Now I understand 2021-10-04 18:58:31 i do not believe so 2021-10-04 18:58:42 he has stopped using irc entirely as far as i know 2021-10-04 18:58:44 I thought you pinged him, but you just relayed the message 2021-10-04 19:01:44 ok, dl-t1-2 is synced again 2021-10-05 06:23:55 ikke: what happend? 2021-10-05 07:17:44 clandmeter: I was figuring out why a package was not synced so I temporarily disabled crond, but ofcourse if forgot to enable it again 😢 2021-10-05 07:18:02 haha 2021-10-05 07:18:07 happens :) 2021-10-05 07:18:38 atleast you didnt break routing :) 2021-10-05 07:18:45 yet 2021-10-05 07:18:48 :P 2021-10-05 11:37:48 mps: whats the status of wg? 2021-10-05 11:38:57 clandmeter: works for me about two weeks, iirc 2021-10-05 11:39:10 it does not work for me 2021-10-05 11:39:15 very stable, no one problem noticed 2021-10-05 11:39:38 heh, you didn't created your keys, I think :) 2021-10-05 11:40:46 exactly, i have no config 2021-10-05 11:40:49 :) 2021-10-05 11:40:58 so please enable me, or tell me what to do 2021-10-05 11:40:59 we have wiki, let me find it 2021-10-05 11:41:15 im on windows 2021-10-05 11:41:25 https://gitlab.alpinelinux.org/alpine/infra/infra/-/wikis/Alpine-wireguard-VPN 2021-10-05 11:42:08 clandmeter: huh, I thought you are my friend ;p 2021-10-05 11:42:21 windows, phew 2021-10-05 11:42:45 I forgot how to setup wg on windows 2021-10-05 11:44:30 first on the wg site is windows client https://www.wireguard.com/install/ 2021-10-05 11:45:13 here is one guide https://serversideup.net/how-to-configure-a-wireguard-windows-10-vpn-client/ 2021-10-05 11:46:11 can you add my pub key? 2021-10-05 11:46:20 sure 2021-10-05 11:46:29 post it to me or ikke 2021-10-05 11:47:16 and you will provide me an ip? 
2021-10-05 11:47:41 yes 2021-10-05 11:48:23 for now you have option to chose one except of allocated two currently ;) 2021-10-05 11:48:51 172.16.252.2 and 172.16.252.1 are already allocated 2021-10-05 11:49:01 ok .3 i guess 2021-10-05 11:49:07 np 2021-10-05 11:49:08 which mask? 2021-10-05 11:49:14 32 2021-10-05 11:49:24 /32 2021-10-05 11:49:29 https://netbox.alpin.pw/ipam/prefixes/44/ip-addresses/ 2021-10-05 11:49:47 the SaveConfig is linux only? 2021-10-05 11:50:05 I think so 2021-10-05 11:50:19 alpine will die without ikke 2021-10-05 11:52:52 mps: i pm'ed you my key 2021-10-05 11:52:56 let me know when you added it 2021-10-05 11:52:59 ok 2021-10-05 11:53:02 please :) 2021-10-05 11:53:22 clandmeter: your wish is my command :) 2021-10-05 11:54:58 ready 2021-10-05 11:55:10 try to test it 2021-10-05 11:57:36 is there an smtp server i can use to send email from *@alpinelinux.org? 2021-10-05 11:57:46 smtp.a.o 2021-10-05 11:58:03 mps it works 2021-10-05 11:58:11 but need to go into meeting now. 2021-10-05 11:58:13 will test later 2021-10-05 11:58:15 do I need auth? 2021-10-05 11:58:39 yes or add yourself to config 2021-10-05 11:58:43 clandmeter: ok, cul 2021-10-05 11:59:06 ncopa: i need to go into meeting, maybe there is some instruction on the host. 2021-10-05 11:59:12 else i can check later. 2021-10-05 11:59:46 I added few days ago daliass mxclient in testing 2021-10-05 12:00:59 that will probably not work, or you need to add your ip to spf 2021-10-05 12:01:16 clandmeter: im looking at the config on smtp.a.o now. thanks 2021-10-05 12:05:52 ikke: I added clandemeters IP https://netbox.alpin.pw/ipam/prefixes/44/ip-addresses/ is it ok? 2021-10-05 12:06:28 Yes, looks ok 2021-10-05 12:06:39 good 2021-10-05 12:07:49 anyone else wants to test wg? 2021-10-05 12:11:47 I want to try it 2021-10-05 12:12:28 ikke: would you do all this for yourself or you want me to add you? 2021-10-05 12:15:59 I think I can manage 2021-10-05 12:19:40 ok 2021-10-05 12:32:38 i need some help with smtp.alpinelinux.org when someone has time 2021-10-05 12:33:01 i think i have added myself as user with a password but i don't know how to configure my client 2021-10-05 12:34:12 what is you MUA (client)? 2021-10-05 12:34:53 claws-mail 2021-10-05 12:34:59 usual username@smtp.server.dom 2021-10-05 12:35:38 eh, two years ago I deleted my claws config 2021-10-05 12:36:01 should i use starttls? ssl? port 587? should it be user=ncopa or user=ncopa@domain? why does it seem to time-out instead of giving an immediate error message? 2021-10-05 12:36:10 I remember it have options somewhere to set this, but can't remember where 2021-10-05 12:36:43 i know where to find the options in claw. i just don't know what the options should be for this specific server. I have tried a few combinations already 2021-10-05 12:36:50 all this depends on the settings on the server 2021-10-05 12:36:59 thats why i'm asking for help 2021-10-05 12:37:17 who manages smtp.a.o 2021-10-05 12:53:26 clandmeter and I 2021-10-05 14:39:39 Mail is sent through: smtp.alpinelinux.org; Secured connection on port 587 using TLS 2021-10-05 14:39:42 ncopa: ^ 2021-10-05 14:39:48 thats my setting via gmail 2021-10-05 14:40:49 i think the username should be clandmeter@alpinelinux.org not sure though 2021-10-05 19:51:39 mps: what is the solution for using wg on multiple places at the same time? 
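For reference, a client-side config along the lines of the wiki page linked above would look roughly like this; the endpoint, port and routed range are placeholders (the real values are on the wiki), only the /32 address comes from the netbox allocation discussed above:

    [Interface]
    PrivateKey = <client private key>
    Address = 172.16.252.3/32

    [Peer]
    PublicKey = <hub public key>
    Endpoint = <hub endpoint>:<port>
    AllowedIPs = 172.16.0.0/16
    PersistentKeepalive = 25

On Linux that file can be brought up with wg-quick; the Windows client imports the same format through its GUI.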
2021-10-05 20:04:50 clandmeter: I think different IPs and probably different keys 2021-10-05 20:05:14 right, so each device should have its own config 2021-10-05 20:07:13 well, I doubt it will work with two or more devices with same IP ;) 2021-10-05 20:07:19 at the same time 2021-10-05 20:07:58 but you can have router and many devices behind it 2021-10-05 20:09:52 with NAT then 2021-10-05 20:15:30 yes 2021-10-05 20:15:56 ok, we can reserve one subnet for clandmeter 2021-10-05 20:20:37 We have 172.21.0.0/16 for developer subnets 2021-10-05 20:21:09 I'm kidding 2021-10-05 20:21:15 I'm not :) 2021-10-05 20:21:47 so every developer can have /24 net 2021-10-05 20:21:48 But I guess that would then just use dmvpn at that point 2021-10-06 07:18:29 yesterday I got bounce from mail.alpinelinux.org 2021-10-06 07:49:35 kunkku: explain? 2021-10-06 08:32:01 : host mail.alpinelinux.org[147.75.101.119] said: 451 4.7.1 Service unavailable - try again later (in reply to MAIL FROM command) 2021-10-06 08:37:45 kunkku: greylisting? 2021-10-06 08:39:12 shouldn't greylisting be 450 2021-10-06 08:40:47 i think any 4x code is a temp failure and sender should try again later 2021-10-06 08:41:14 well, yes 2021-10-06 08:41:29 A server employing greylisting temporarily rejects email from unknown or suspicious sources by sending 4xx reply codes ("please call back later"), as defined in the Simple Mail Transfer Protocol (SMTP). 2021-10-06 08:41:52 but 'nice' option is to say why failed 2021-10-06 08:42:27 do we know why it said 451 yesterday? 2021-10-06 08:42:43 i suspect greylisting 2021-10-06 08:47:31 i got try again later errors when trying to send via smtp.a.o 2021-10-06 08:54:53 hmm 2021-10-06 08:54:56 seems to be something wrong 2021-10-06 09:03:48 hmm 2021-10-06 09:03:52 im getting ssl errors 2021-10-06 09:04:05 can it be ca-certs that is outdated? 2021-10-06 09:12:02 there seems to be a non repo version of sasl installed 2021-10-06 09:23:42 the alpine version seems to be old too 2021-10-06 09:29:28 yes im looking in upgrading it 2021-10-06 09:30:04 looks like the regular sasl version is similar so i guess we can use it. 
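For the submission settings discussed above (port 587 with STARTTLS), a manual smoke test against the server looks something like this; the username form is whatever ends up in the server's sasldb:

    # open the submission port and upgrade to TLS, then talk SMTP by hand
    openssl s_client -connect smtp.alpinelinux.org:587 -starttls smtp -crlf -quiet
    # in the session: EHLO test.example, then AUTH LOGIN, answering the base64 prompts;
    # the base64 values can be produced with e.g.:
    printf '%s' 'user@alpinelinux.org' | base64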
2021-10-06 09:43:12 ncopa: should be ok now 2021-10-06 09:44:07 kunkku: i got your eamil 2021-10-06 09:44:10 email* 2021-10-06 09:47:12 hmm looks like postfix still has issues 2021-10-06 10:11:45 I get 454 4.7.0 Temporary authentication failure: generic failure 2021-10-06 10:13:18 yes sasl has issues 2021-10-06 10:13:23 this upgrade is very painfull 2021-10-06 10:13:29 sucks all my time which i dont have 2021-10-06 10:13:53 it complains it cannot open sasldb2 2021-10-06 10:14:07 looks like postfix changed its db types 2021-10-06 10:14:10 and removed some 2021-10-06 10:14:21 i can try have a look at it if you want 2021-10-06 10:14:26 if you are busy 2021-10-06 10:17:20 if you can check why it cant open sasldb2 that would be great 2021-10-06 10:18:22 clandmeter: yes, berkeley db is removed 2021-10-06 10:18:29 ct 6 10:13:46 smtp mail.warn postfix/smtpd[1150]: warning: SASL authentication failure: Could not open /etc/sasl2/sasldb2 2021-10-06 10:18:29 Oct 6 10:13:46 smtp mail.warn postfix/smtpd[1150]: warning: unknown[37.0.11.164]: SASL LOGIN authentication failed: generic failure 2021-10-06 10:18:53 not a very informative error 2021-10-06 10:20:26 the real errors could be shown in the startup phase 2021-10-06 10:21:04 heh 2021-10-06 10:24:46 i was missing cyrus-sasl, but i think thats just the tools 2021-10-06 10:24:51 still same error 2021-10-06 10:25:02 there is no verbose error msg after start 2021-10-06 10:26:11 file permissions? 2021-10-06 10:27:08 I didnt touch the file, so that would be strange 2021-10-06 10:27:22 its owned by postfix 2021-10-06 10:30:17 i giveup for now, i think normal operation should be ok, just no sasl auth 2021-10-06 10:35:55 i htink i know what theproblem is 2021-10-06 10:37:56 931b34134f484677b0459692cb0163ddad1304dd 2021-10-06 10:38:27 the current database is in berkley db format, the update changes it to gdbm 2021-10-06 10:40:48 I had to run postmap on a number of db files when I upgraded my servers 2021-10-06 10:41:20 problme here is that it is the sasldb2 that is in berkley format 2021-10-06 10:41:48 yes 2021-10-06 10:41:53 that was my thinking as well 2021-10-06 10:41:57 i was looking at gitlog 2021-10-06 10:42:11 kunkku: sasldb is seperate from postfix 2021-10-06 10:42:29 ncopa: you can read it with strings and maybe rebuild it 2021-10-06 10:42:39 I think bdb was dropped from postfix too some time ago 2021-10-06 10:42:46 yes it was 2021-10-06 10:42:58 thats was also a headache, i didnt know and had to convert it. 2021-10-06 10:43:05 i thought it would be an easy upgrade... 2021-10-06 10:43:18 but it was old, i should have known. 2021-10-06 10:45:56 i copied it to txt just so we dont loose it 2021-10-06 10:48:21 clandmeter: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#Deprecation_of_Berkeley_DB_.28BDB.29 :P 2021-10-06 10:48:32 been there done that :p 2021-10-06 10:49:01 lots of stuff in postfix changed, at least also in the config. 2021-10-06 10:49:36 clandmeter: how do you copy to text? 
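The fix that eventually works (found a bit further down) is converting the old Berkeley DB file to GDBM; spelled out as a sequence, roughly (paths per the log messages above, tool package names may vary):

    # move the old Berkeley-format file aside
    mv /etc/sasl2/sasldb2 /etc/sasl2/sasldb2.bdb
    # dump it with the Berkeley DB tools and load the result into a fresh GDBM file
    db_dump /etc/sasl2/sasldb2.bdb | gdbm_load - /etc/sasl2/sasldb2
    # restore the ownership postfix expects and check the users survived
    chown postfix /etc/sasl2/sasldb2
    sasldblistusers2 -f /etc/sasl2/sasldb2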
2021-10-06 10:49:42 strings 2021-10-06 10:49:52 kind of :) 2021-10-06 10:50:12 the txt is on the sasl dir 2021-10-06 10:50:21 i found it 2021-10-06 10:50:34 not sure what the "on" is for 2021-10-06 10:50:59 apparently you are supposed to be able to db_dump input.db | gdbm_load output.db, but it does not appear to work 2021-10-06 11:00:28 so i have managed to get postfix to actually read a new db, with only my passwd in it 2021-10-06 11:00:39 but I still get auth error 2021-10-06 11:02:08 finally 2021-10-06 11:02:30 i figured out how to migrate the db: db_dump sasldb2.db | gdbm_load - sasldb2 2021-10-06 11:05:57 yay! and now I can send 2021-10-06 11:06:04 finally 2021-10-06 11:07:54 clandmeter, ikke: can you please test if you can send from your alpinelinux.org email? 2021-10-06 11:09:22 clandmeter: im deleting the .txt file 2021-10-06 11:12:11 postfix can use text for username:password 2021-10-06 11:12:24 or better yet, dovecot 2021-10-06 11:13:15 ncopa: you could write wiki article about your success for 'pure souls' who can hit this problem 2021-10-06 11:16:03 hmm, I'm wrong, postfix can't use plain text db for auth 2021-10-06 11:19:39 what is the greylisting delay on the server? 2021-10-06 11:42:18 i would like to start work on getting a build-3-15-riscv64 up and running 2021-10-06 11:42:27 where and how can I set that up? 2021-10-06 13:15:56 ncopa: i think we should start using lxc 2021-10-06 13:16:19 the only thing needed is to have qemu-openrc setup 2021-10-06 13:16:46 btw, i will receive the new rv64 next week 2021-10-06 13:41:49 clandmeter: do you have storage and memory for it? 2021-10-06 13:42:11 maybe we should use it for build-3-15-riscv64? 2021-10-06 14:01:23 do we want to use it as a builder? 2021-10-06 14:01:34 ncopa: we still need an m.2 2021-10-06 14:01:42 i can probably buy one 2021-10-06 14:02:48 the memory is onboard 2021-10-06 14:02:58 i already arranged a small case 2021-10-06 14:04:42 i'm willing to contribute for storage if we want use it as a builder 2021-10-06 14:04:46 but i dont know if we want that 2021-10-06 14:05:05 i think we should keep the builder on qemu for now 2021-10-06 14:05:14 and use the board for CI 2021-10-06 14:05:23 so we can start run tests on it 2021-10-06 14:05:42 but im open for better suggestions 2021-10-06 14:17:25 well we will need storage for it. something like 1tb of nvme would be okish. maybe also need to uSD card to boot from, but maybe i can find one here. 2021-10-06 14:45:01 hmm lxc is behaving differently on the riscv host 2021-10-06 14:48:35 lxc-create: build-3-15-riscv64: confile.c: set_config_personality: 1249 Invalid argument - Unsupported personality "riscv64" 2021-10-06 14:51:05 riscv host as in where we run the edge docker builder 2021-10-06 15:03:06 ncopa: as you are the official problem solver today 2021-10-06 15:03:27 take a look at: usa5-dev1 vs nld5-dev1 2021-10-06 15:03:48 try to run: lxc-create -n build-3-15-riscv64 -t alpine -- -a riscv64 -r edge 2021-10-06 15:04:19 the newer lxc version will fail 2021-10-06 15:05:27 lxc-create: build-3-15-riscv64: confile.c: set_config_personality: 1249 Invalid argument - Unsupported personality "riscv64" 2021-10-06 15:05:29 yup 2021-10-06 15:05:39 but the other one works 2021-10-06 15:06:19 https://github.com/lxc/lxc/blob/master/src/lxc/confile.c#L3186 2021-10-06 15:06:54 if the arch was never added to this list, why did it work before? 
:) 2021-10-06 15:07:19 or maybe we had a patch before 2021-10-06 15:07:29 so it does work in 3.13 but not in 3.14 2021-10-06 15:07:52 exactly 2021-10-06 15:08:38 sounds like a regression in lxc 2021-10-06 15:09:21 could be i added a fix before 2021-10-06 15:09:33 but i cant find it if i did 2021-10-06 15:10:53 there is no riscv64 in lxc-4.0.6 2021-10-06 15:11:13 so maybe they didnt check for supported architectures previously 2021-10-06 15:11:30 there was a commit regarding this part 2021-10-06 15:11:39 but its not like they added riscv64 2021-10-06 15:11:53 i guess we can add it and see what happens 2021-10-06 15:11:57 backport it to .14 2021-10-06 15:12:03 yeah 2021-10-06 15:12:57 if that works, lets just move back to using lxc for riscv64 2021-10-06 15:13:06 its a bit more in line with our other builders 2021-10-06 15:13:46 I wonder if we will bump into issues where ash does not load the default profile 2021-10-06 15:13:59 due to qemu-user 2021-10-06 15:21:19 i need to go somewhere now, if ncopa or ikke (or anyone else) can add a patch to include riscv64 in confile.c on master and stable i can try to setup a builder later. 2021-10-06 15:21:34 im adding a patch to test 2021-10-06 15:21:56 or more specifically, im building the liblxc.so.1 first and will try with LD_PRELOAD 2021-10-06 15:50:00 clandmeter: build-3-15-riscv64 was created 2021-10-06 15:50:44 container was created, but it needs to be configured (ip address, qemu etc) and started 2021-10-06 16:33:29 Yes 2021-10-06 16:33:40 You need to set the IP manually 2021-10-06 22:51:39 ikke: why does gitlab send mails with X-Spam: Yes? 2021-10-07 04:23:59 Hello71: huh, good question 2021-10-07 06:42:06 Hello71: i guess cause smtp.a.o has been upgraded and now acts different 2021-10-07 07:27:14 Hello71: i added a config, let me know if it solves the issue. 2021-10-07 07:36:36 clandmeter: thanks BTW for upgrading smtp.a.o 2021-10-07 07:40:24 looks like it works, mails are no longer checked by rspamd from gitlab. 2021-10-07 07:40:59 ikke: if you need to send out email from a trusted device, please add it to smtpd_milter_map 2021-10-07 07:41:21 OK 2021-10-07 23:02:35 why is aarch64 ci lagging so far behind today? 2021-10-07 23:03:07 i saw a few times it wasn't working on a single MR in alpine/aports 2021-10-07 23:04:01 there are like 10 MRs now that only have aarch64 left to build, all others are done 2021-10-08 07:54:52 ikke: ping 2021-10-08 07:55:04 pong 2021-10-08 07:55:17 unmatched should arrive today 2021-10-08 07:55:25 i wonder what to do with it 2021-10-08 07:56:06 should we dedicate it to CI only? 2021-10-08 07:57:10 We don't think it's suited as an actual builder, right? 2021-10-08 07:57:22 im not sure 2021-10-08 07:57:51 Maybe we can do some testing? 2021-10-08 08:00:47 Ariadne mentioned performance is similar to some older arm cpu, dont remember which one. 2021-10-08 08:01:05 its about on par with a53 2021-10-08 08:01:16 it would be nice if we could catch test errors 2021-10-08 08:01:24 rpi3 performance 2021-10-08 08:01:54 for the rest we can just keep using qemu-user 2021-10-08 08:01:59 until we get more boards 2021-10-08 09:29:55 what i really would want to do is to create a build cluster. "build this package on riscv64", the scheduler finds a free worker node and sends the build job, and gets the apks back in return 2021-10-08 09:46:15 ncopa: you keep repeating yourself for 5 years now ;-) 2021-10-08 09:46:35 i know...
unfortunately i have not been able to allocate time to actually implement it 2021-10-08 09:47:12 cant we utilize some existing solution? 2021-10-08 09:47:16 and the idea is changing over the years. now its more turning into a lightweight kubernetes clone 2021-10-08 09:47:22 we could use kubernetes i guess 2021-10-08 09:47:25 at least some framework that would make it easier to set it up 2021-10-08 09:48:21 we could try something with k0s i guess? 2021-10-08 09:48:50 but we would still need to design something to build on top of it 2021-10-08 09:49:26 yes 2021-10-08 09:49:29 https://kubernetes.io/docs/concepts/workloads/controllers/job/ 2021-10-08 09:50:16 how "lightweight" can it be? 2021-10-08 09:50:21 kubernetes can find a free owkrer node and execute a given container 2021-10-08 09:50:38 how much overhead will it take on a worker? 2021-10-08 09:51:08 approx 1GB disk 2021-10-08 09:51:18 disk is cheap 2021-10-08 09:51:49 and less than 1G ram 2021-10-08 09:52:55 i guess around 500MB ram 2021-10-08 09:53:31 hum.. we dont have builds of all our needed architectures 2021-10-08 09:54:01 we don't have ppc64le, s390x 2021-10-08 09:55:16 what actually runs on a worker? 2021-10-08 09:56:29 kubelet, containerd, runc (or crun) 2021-10-08 09:56:39 well, k0s too i guess 2021-10-08 09:56:56 then there is a network plugin needed also 2021-10-08 09:57:18 kube-router https://github.com/cloudnativelabs/kube-router/releases/tag/v1.3.1 2021-10-08 09:57:41 which seems to support armv6, armv7, aarch64, ppc64le, s390x 2021-10-08 09:57:47 so it should be possible to make it happen 2021-10-08 09:57:58 but not riscv64? 2021-10-08 09:58:02 currently not 2021-10-08 09:59:08 kube-router uses alpine 2021-10-08 09:59:19 https://github.com/cloudnativelabs/kube-router/blob/master/Dockerfile 2021-10-08 10:08:50 ok, but still those upstream proejects dont support riscv64, will need some work like we did with our own CI. 2021-10-08 10:09:29 so i think the problme here is that those upstream projects, kube-router, k0s etc depends on riscv64 on alpine 2021-10-08 10:09:46 so bootstrap problem if we depend on those 2021-10-08 10:34:09 Would be nice if we could have go in main at some point, but according to Ariadne, it has too many regressions 2021-10-08 10:35:21 i mean we can do it 2021-10-08 10:35:34 but it’s gonna potentially be a pain point 2021-10-08 13:13:48 talked with Jussi, a cooworker at mirantis, and he had some pretty good ideas on a build infra worker, more specifically, how to get the build artifacts out from the worker 2021-10-08 13:14:28 i'm not sure kubernetes is the right tool. of the 10 features listed on the home page, 5 seem vaguely useful, 3 useless, and 2 arguable 2021-10-08 13:15:06 Maybe something like nomad? 
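For concreteness on the Kubernetes Job page linked above, a single package build expressed as a Job would look roughly like this; the image, command and arch label are illustrative only:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: build-foo-riscv64
    spec:
      backoffLimit: 0              # do not auto-retry a failed package build
      template:
        spec:
          restartPolicy: Never
          nodeSelector:
            kubernetes.io/arch: riscv64
          containers:
          - name: abuild
            # placeholder image and command
            image: registry.example/alpine-builder:edge
            command: ["abuild", "-r"]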
2021-10-08 13:15:18 kubernetes is overkill 2021-10-08 13:15:46 the problem is that we'd get a ton of dependencies that needs to be bootstrapped if we use kubernetes 2021-10-08 13:15:51 its simply not gonna fly 2021-10-08 13:16:03 and the "workers" job is relatively simple 2021-10-08 13:16:17 tell a controller that the worker is available to take jobs 2021-10-08 13:16:26 title 2021-10-08 13:16:29 Ops 2021-10-08 13:16:45 Sounds like gitlab-runner 2021-10-08 13:17:35 when it gets assigned a build job, it would get what package to build from what git commit, on which branch (maybe not needed), together where an url where to upload the artifacts 2021-10-08 13:17:36 Polls for jobs, has several executors, and can upload artifacts 2021-10-08 13:17:51 yup similar to gitlab-runner 2021-10-08 13:18:17 the upload artifacts woudl be minio url or s3 buckets or similar 2021-10-08 13:18:39 the controller could precreate a bucket and ship the url(s) where to upload the artifacts 2021-10-08 13:18:48 and maybe also the log 2021-10-08 13:19:19 uploading it would only be an http post 2021-10-08 13:19:32 the idea is to keep the worker node software extremely simple 2021-10-08 13:19:52 We could look if it's feasible to reuse gitlab-runner 2021-10-08 13:20:04 ...without go code preferible 2021-10-08 13:20:19 Right 2021-10-08 13:20:24 is there any document that explains how a gitlab-runner works? 2021-10-08 13:20:31 it depends on docker, doesnt it? 2021-10-08 13:20:44 Not necessarily 2021-10-08 13:20:55 It's one of the executors 2021-10-08 13:21:11 im thinking that our build worker nodes would not need docker, containerd, kubelet or anything, just runc or crun, to execute the container 2021-10-08 13:21:31 after all there are limited number of container images the worker needs to run 2021-10-08 13:21:42 Right 2021-10-08 13:21:45 only a single, in different version (for different branches) 2021-10-08 13:22:19 so we technically dont need containerd (which fetches container images from a registry) 2021-10-08 13:22:51 but hm.... while thinking of it. we would need a way to tell the worker which image it should use (eg alpine edge or 3.x-stable based) 2021-10-08 13:23:04 and we need a way to update those 2021-10-08 13:23:26 which is what containerd solves 2021-10-08 13:23:56 I think it's a lot herder to find something pre-existing that's not written in go or rust 2021-10-08 13:24:08 Harder* 2021-10-08 13:26:03 https://docs.gitlab.com/runner 2021-10-08 13:26:03 if prioritization and resource limiting is not required, each worker host could simply run two 3.10 builders, two 3.11 builders, and so on, and each container would fetch jobs from its respective queue. the problem with this model is that the host could potentially run up to 10 jobs at once 2021-10-08 13:26:40 i think what we are looking for here could be generically described as "docker job queue" 2021-10-08 13:27:01 One thing to take into account is building packages in dependency order 2021-10-08 13:27:36 One job might rely on an earlier one they might not me finished yet 2021-10-08 13:27:49 i think gitlab runner is not an appropriate tool because (from my understanding) the internal api is not stable. we want to submit jobs not from gitlab 2021-10-08 13:28:21 Right, that's something I was wondering about 2021-10-08 13:28:29 The API 2021-10-08 13:30:59 if we want to parallelize jobs across hosts, that can't be handled in the workers, it needs to be handled in the top-level scheduler. 
otherwise if you send a job "rebuild icu" to a worker, it will be crunching on its own forever. i guess what we really want is two queues: "package build queue", and "docker job queue", plus a "scheduler" to move/convert items from the former to 2021-10-08 13:31:01 the latter 2021-10-08 13:31:12 Hello71: the runner had no strict version dependencies on gitlab 2021-10-08 13:31:20 right 2021-10-08 13:31:31 So I assume there must be some stele api 2021-10-08 13:31:42 Stable* 2021-10-08 13:38:00 the nice thing about Nomad is that its not just for containers, it handles VMs and exec also 2021-10-08 13:38:41 we want parallelize jobs across hosts, so we need a top level scheduler/controller 2021-10-08 13:40:13 right, and Nomad is a client/server arch with server(s) managing scheduling 2021-10-08 13:40:31 nomad is implemented in go? 2021-10-08 13:41:05 Yes 2021-10-08 13:41:10 yupe, single binary on a box 2021-10-08 13:42:11 if you run in with "-dev" then the single process is both a server and client, normally these are separate. Clients are where the jobs run, Servers manage scheduling and resource allocation 2021-10-08 13:44:44 you write job files to define what you want (and within that indicate things such as the task driver to use, i.e. Qemu, Docker, Exec) 2021-10-08 13:44:55 https://www.nomadproject.io/docs/job-specification 2021-10-08 14:00:27 that doesn't say anything about about queuing, but https://www.hashicorp.com/blog/replacing-queues-with-nomad-dispatch does 2021-10-08 14:04:41 it still seems quite "heavy" though. yes, it's a "single binary", but so is k3s. 2021-10-08 14:07:54 was just pointing it out as a Kubernetes alternative - it has less functionality than K8s and partly because of this it's simpler (less moving parts to go wrong). 2021-10-08 14:08:21 based on my experience with "enterprise-ready" software, i am concerned about debuggability. if, for example, jobs suddenly stop moving, or aren't distributed as expected, how hard is it to find out the reason? 2021-10-08 14:09:13 will it do the systemd thing where it just says "job failed" and you have to dig through a dozen layers of c^Hgo to find out the reason 2021-10-08 14:10:04 it does sound like it has the appropriate functionality though, and is at least reasonably well-contained and easy to manage in the happy case 2021-10-08 14:10:04 Hello71: fully agree, hence my "less moving parts to go wrong" 2021-10-08 14:11:42 The ppc64le builder is being weird, apk can't get a lock 2021-10-08 14:12:05 does nomad workers support riscv64? 
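To make the nomad-dispatch article above concrete: the queue-like pattern is a parameterized batch job that gets dispatched once per package; the driver, script path and meta keys below are all illustrative:

    job "build-package" {
      datacenters = ["dc1"]
      type        = "batch"

      parameterized {
        meta_required = ["pkg", "branch"]
      }

      group "build" {
        task "abuild" {
          driver = "exec"
          config {
            # hypothetical wrapper that checks out aports and runs abuild -r
            command = "/usr/local/bin/build-one.sh"
            args    = ["${NOMAD_META_pkg}", "${NOMAD_META_branch}"]
          }
        }
      }
    }

Enqueuing a build is then roughly: nomad job dispatch -meta pkg=icu -meta branch=edge build-package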
2021-10-08 14:12:24 i would prefer to have the workers without golang 2021-10-08 14:12:49 i guess the controller/scheduler could be implemented in go or whatever 2021-10-08 14:14:19 the idea here is that the scheduler/controller, storage, staging repository, signing logic etc can run on whatever, x86_64, and could use docker, kubernetes, whatever 2021-10-08 14:15:14 but the worker code should be as light as possible, without many dependencies, and should preferably only be C code, shell, or maybe lua/python 2021-10-08 14:16:02 the worker only needs to: register itself somewhere, wait for build jobs, run the build job, and upload the build artifacts and build log somewhere 2021-10-08 14:16:26 ideally it should also send a live stream of the build log if possible, but I havent solved that part yet 2021-10-08 14:18:30 ncopa: re riscv, no idea, best to ask the Alpine Nomad maintainer :-) 2021-10-08 14:22:51 seems that like many alpine packages, alpine nomad is not really maintained 2021-10-08 14:23:11 then there is simplenetes: https://github.com/simplenetes-io/simplenetes 2021-10-08 14:23:32 but i think its generally a bad idea to implement such a thing in shell 2021-10-08 14:24:28 Yeah agreed 2021-10-08 14:25:31 this is why i kind of like arch linux where each maintainer signs their own packages; it forces some accountability on maintainers. much less of this merge and ditch. e.g. alpine nomad is 1.1.1, but there is according to upstream release notes a CVE fixed in 1.1.4, released over a month ago (but this is drifting off #-infra topic) 2021-10-08 14:26:37 nomad does not have riscv64, nor 32bit x86 support 2021-10-08 14:26:42 Hello71: how does that prevent maintainers disappearing into thin air? 2021-10-08 14:27:11 The arch model is more different anyway 2021-10-08 14:27:29 it kind of does indirectly: the maintainers need to be vetted, because all maintainers are (gpg) trusted to sign all packages 2021-10-08 14:27:49 re dependencies for the build workers 2021-10-08 14:28:38 im thinking: the scheduler generates the list of packages that need to be built. figures out which can be built in parallel and which cannot 2021-10-08 14:28:54 send the build request for each package to a worker 2021-10-08 14:29:06 Hello71: hmm, haven't noticed that the maintainer is AWOL, yeah no recent commits from him and he only maintains one other package. I'd volunteer to take over Nomad but not sure if I could commit to the time currently.
2021-10-08 14:29:08 get the built packages, sign those 2021-10-08 14:29:15 and upload to a staging http repo 2021-10-08 14:29:37 Hello71: yes, so you reduce the pool of maintainers 2021-10-08 14:29:47 and send the next build job to a worker, with url to the staging repo 2021-10-08 14:30:33 once all the packages are built and signed, the controller updates the index and uploads the packages to master mirror 2021-10-08 14:31:33 ncopa: from it's APKBUILD: "x86 run out of memory, mips64 & riscv64 limited by yarn/npm" 2021-10-08 14:31:56 I figure the workers are the most challenging, due to bootstrap requirements 2021-10-08 14:32:22 so sounds like it could run on those, I guess without an active maintainers those issues just haven't been investigated/fixed 2021-10-08 14:32:25 minimal: "run out of memory" doesn't sound hopeful for "simple software", and yarn/npm :| 2021-10-08 14:32:32 so if we would use nomad, we could need golang, npm, node, yarn built for any new future architecture 2021-10-08 14:32:42 So I assume that would probably written in C 2021-10-08 14:33:19 ncopa: the "joy" of modern software eh? :-( 2021-10-08 14:33:26 exactly 2021-10-08 14:34:04 i realized that if we would use lets say k0s, we'd need kube-router. which uses alpine 2021-10-08 14:34:18 which means it would be difficult to bootstrap a new architecture 2021-10-08 14:34:29 How do other projects handle this? 2021-10-08 14:34:39 they dont port to new architectures 2021-10-08 14:34:42 :) 2021-10-08 14:34:50 i think nomad should theoretically be buildable without node etc, just delete the web interface 2021-10-08 14:35:20 we only need the worker stuff to be portable 2021-10-08 14:35:32 im thinking something we write ourselves + crun 2021-10-08 14:35:39 Right 2021-10-08 14:35:49 crun? 2021-10-08 14:35:49 maybe containerd, but that requries golang 2021-10-08 14:35:57 Or runc? 2021-10-08 14:36:05 crun is runc implemented in C 2021-10-08 14:36:11 Aha 2021-10-08 14:36:20 to run container 2021-10-08 14:36:38 that way we can run the build in an OCI container 2021-10-08 14:36:45 Yes 2021-10-08 14:36:47 (which is a tarball with metadata as I understand) 2021-10-08 14:36:49 Hello71: was thinking the same re: node 2021-10-08 14:36:56 i think there are some non-trivial added features of these "orchestration" programs. from what i can remember, the main two are dealing with container failures, and worker auto-setup. i think the latter is not that important, not sure about the former 2021-10-08 14:38:07 and log handling 2021-10-08 14:38:10 true 2021-10-08 14:38:14 and fetching container images 2021-10-08 14:38:27 need to go. have a nice weekend! 2021-10-08 14:38:34 and thanks for an interesting discussion 2021-10-08 14:38:40 o/ 2021-10-08 14:39:29 i think that counts under "worker auto-setup" 2021-10-08 17:53:43 ugh, ppc64le builder has a RO filesystem again :/ 2021-10-08 18:00:09 and does not seem to come back after reboot 2021-10-08 18:07:49 ok, it's back 2021-10-10 11:36:17 Has there been decided on a replacement for aports turbo yet? 2021-10-10 11:37:16 Martijn Braam wrote a replacement in Python at one point and is UI-wise the exact same except for some improvements, we use it on https://pkgs.postmarketos.org. And I think Adelie also had something? 2021-10-10 11:40:33 PureTryOut: any specific reason you are asking this? 
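Going back to the worker design sketched above (poll for work, run it in an OCI bundle with crun, post the artifacts and log back), the worker side really can stay small; everything here, endpoints and bundle layout included, is hypothetical:

    #!/bin/sh
    # hypothetical controller API and per-worker identity
    CONTROLLER=https://build-controller.example/api
    WORKER=$(hostname)

    while :; do
        # ask for a job; assume the reply is "pkg bundle-url upload-url" or empty
        job=$(curl -fsS "$CONTROLLER/next-job?worker=$WORKER") || { sleep 30; continue; }
        [ -n "$job" ] || { sleep 30; continue; }
        set -- $job
        pkg=$1 bundle_url=$2 upload_url=$3

        # fetch a prepared OCI bundle (rootfs/ + config.json) and run it with crun
        rm -rf /var/tmp/job && mkdir -p /var/tmp/job && cd /var/tmp/job
        curl -fsS "$bundle_url" | tar -xz
        crun run --bundle . "build-$pkg" > build.log 2>&1 || echo "BUILD FAILED" >> build.log

        # upload the log and whatever apks the bundle wrote into ./packages (layout is an assumption)
        curl -fsS -T build.log "$upload_url/build.log"
        for apk in packages/*/*.apk; do
            [ -e "$apk" ] && curl -fsS -T "$apk" "$upload_url/${apk##*/}"
        done
    done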
2021-10-10 11:41:51 Because I find https://pkgs.postmarketos.org much nicer than https://pkgs.alpinelinux.org currently, and from what I understood nobody really touched aports turbo in forever and understands it properly 2021-10-10 11:42:05 clandmeter understands it properly 2021-10-10 11:42:23 Also I like apk-file as a tool but rather than having some nice json api to talk too, it just scrapes results of https://pkgs.postmarketos.org/contents from html directly 2021-10-10 11:42:36 so extending the thing with a simple api or whatever would help there 2021-10-10 11:42:48 Yes, an API would be nice to have 2021-10-10 11:43:02 Apparently there was one at some point 2021-10-10 11:43:49 And well there are some more limitations currently I would like to see improvements on, but I'm not sure if it wouldn't just be better to replace the thing entirely by a more modern version like the one Martijn wrote 2021-10-10 11:44:35 https://gitlab.com/postmarketOS/apkbrowser 2021-10-10 11:44:50 I don't see that one supports flags? 2021-10-10 11:44:57 flags? 2021-10-10 11:45:02 oh as in flagging outdated? 2021-10-10 11:45:03 https://pkgs.alpinelinux.org/flagged 2021-10-10 11:45:11 yes 2021-10-10 11:45:19 Yes that might be the one thing missing currently 2021-10-10 11:45:23 But we can add that of course 2021-10-10 11:45:25 nod 2021-10-10 11:46:21 As far as I know that's the only thing missing currently, but we can get Martijn to fix that since we (postmarketOS) pay him for these things anyway πŸ˜› 2021-10-10 11:48:02 And how is the performance? 2021-10-10 11:48:15 I'll get Martijn to join here, he can answer that better 2021-10-10 11:48:18 ok 2021-10-10 11:49:26 o/ 2021-10-10 11:49:30 \o 2021-10-10 11:49:39 So, ikke asked "how is the performance?" 2021-10-10 11:49:49 still limited by sqlite, seems fine 2021-10-10 11:49:53 Then you answer "good", and conversation done πŸ˜‚ 2021-10-10 11:50:30 heh 2021-10-10 11:50:56 performance seems absolutely fine on the postmarketOS deployment, but we have way less packages and traffic on that 2021-10-10 11:51:05 Yeah 2021-10-10 11:51:24 the good thing is you can deploy as much instances of it as you want since it's neatly stateless 2021-10-10 11:52:12 yeah, we do the same with aports turbo atm 2021-10-10 11:52:23 We have 4 instances running handling the traffic 2021-10-10 11:52:53 it's designed to have the same database format and same user interface as aports turbo, it just has the language swapped out for python 2021-10-10 11:53:12 Ok, and importing packages is done through a cron job? 2021-10-10 11:53:28 the indexing service has the most difference, seems like turbo shells out to a tar implementation and I use the built in tar support in python 2021-10-10 11:53:36 yes, still cron 2021-10-10 11:54:34 martijnbraam: where is the source? 2021-10-10 11:54:46 https://gitlab.com/postmarketOS/apkbrowser 2021-10-10 11:54:50 thanks 2021-10-10 11:55:49 Ok, built with flask. Any other dependencies? 
2021-10-10 11:56:09 nope 2021-10-10 11:56:24 I think, it's been a while since I messed with the deployment 2021-10-10 11:56:32 I didn't see a requirements.txt 2021-10-10 11:56:39 ah also requests 2021-10-10 11:56:46 ok, reasonable 2021-10-10 11:56:58 I mean, it's meant to run on alpine :) 2021-10-10 11:57:33 Something that would be nice to have (aports turbo doesn't have this either) is to be able to use https://alpinelinux.org/releases.json 2021-10-10 11:57:43 Note that this was originally made because aports turbo at the time wouldn't run on the latest stable Alpine release 2021-10-10 11:57:52 I understand this would not work for postmarket, but could be something optional 2021-10-10 11:57:58 instead of the branch config in the app itself? 2021-10-10 11:58:08 Yes 2021-10-10 11:58:22 what's releases.json normally used for? 2021-10-10 11:58:35 Anything that needs that kind of data 2021-10-10 11:58:44 I'm wondering if that's useful for pmos 2021-10-10 11:58:59 do you know of any users? 2021-10-10 11:59:11 Atm alpine secdb uses it 2021-10-10 11:59:24 Probably more, but I don't know from the top of my head 2021-10-10 11:59:25 ah right 2021-10-10 11:59:34 well it seems doable 2021-10-10 11:59:39 The goal is that we do not need to maintain this list for each project separately 2021-10-10 12:00:24 When a release is made, this file is updated 2021-10-10 12:00:46 https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite/-/blob/master/alpine-releases.conf.yaml 2021-10-10 12:01:41 hehe, is that just a yaml file that's directly translated into json so you won't have to edit json by hand? 2021-10-10 12:01:47 brb, food 2021-10-10 12:05:09 martijnbraam: I guess so :D 2021-10-10 12:07:59 Having something comparable would be nice in postmarketOS too. It contains quite a lot of data and I like to keep in sync and use the same stuff wherever possible 2021-10-10 12:55:54 I was already debating setting up apkbrowser on my local server with the alpine repositories just so I could have the so: and pc: search features :P 2021-10-10 12:56:04 heh 2021-10-10 12:56:28 Do you happen to already have a docker image for this? If not, I'll create one 2021-10-10 12:57:01 Docker isn't used on our infrastructure so no πŸ˜› 2021-10-10 12:57:06 :) 2021-10-10 12:57:06 I don't no 2021-10-10 12:57:09 no problem 2021-10-10 12:57:43 I didn't even know so: and pc: searches were a thing. What search box accepts those? 2021-10-10 12:57:59 PureTryOut: try searching with empty input 2021-10-10 12:58:21 Oh no, search for a non existing package 2021-10-10 12:58:24 http://pkgs.postmarketos.org/packages?name=asd&branch=master&origin= 2021-10-10 12:58:34 the so: and pc: searches work in the normal search box in apkbrowser 2021-10-10 12:59:05 yeah but in package search or file search? 2021-10-10 12:59:16 package search 2021-10-10 12:59:35 Ah cool 2021-10-10 12:59:48 it's funny that aports turbo does have that info in the database, but doesn't expose it :P 2021-10-10 12:59:51 Oh yeah it says it if your search result is empty, nice 2021-10-10 13:53:19 Ikke: how is aports-qa-bot ? did you enable the warn_protected_branches service ? 2021-10-10 14:11:48 maxice8: I haven't spent much time on it lately 2021-10-10 14:25:00 did you get aports-qa-proxy working ? 
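Since the question of a container image came up above: a first-pass Dockerfile for a Flask app like apkbrowser might look like this; the module name and the use of Flask's built-in server are guesses, and a real deployment would want a proper WSGI server in front:

    # packages match the dependencies mentioned above: python3 + flask + requests
    FROM alpine:3.14
    RUN apk add --no-cache python3 py3-flask py3-requests
    WORKDIR /app
    COPY . .
    # module name is a guess
    ENV FLASK_APP=apkbrowser
    EXPOSE 5000
    CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0"]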
2021-10-10 14:25:38 I've been mostly focussing on gitlab, so no 2021-10-10 14:25:47 I'll try to get that working 2021-10-11 14:28:11 ikke: sorry I have been busy today and was not able to help you set up build-3-15-* 2021-10-11 14:29:10 I think I have a script in /var/lib/lxc called setup-new-builder or similar, which you can use. you can use it to clone the current build-3-14-* lxc containers 2021-10-11 14:29:50 I think most of the builder hosts has this script. you can read the sources 2021-10-11 14:30:36 once it is cloned I bootstrap aports repo and abuild 2021-10-11 14:30:57 IIRC there is a script in .abuild/ for doing it 2021-10-11 14:31:04 With bootstrap.sh? 2021-10-11 14:31:08 no 2021-10-11 14:31:11 Ok 2021-10-11 14:31:19 just building the world from scratch 2021-10-11 14:32:03 its basically: ap recursdeps build-base | while read dir; do (cd $dir && abuild -rk)||break; done 2021-10-11 14:32:17 I think I need to set BOOTSTRAP=1 on some packages 2021-10-11 14:32:30 which basically disables the test suite 2021-10-11 14:32:38 I don't remember which package needs it. bison or simlar 2021-10-11 14:32:50 Ok 2021-10-11 14:32:54 after build-base is done I ask upgrade -U -a 2021-10-11 14:33:05 and do the same with abuild and aports-build 2021-10-11 14:33:06 Makes sense 2021-10-11 14:33:24 and finally also $(cat /etc/apk/world) 2021-10-11 14:33:51 and then its done to start listening on Matt messages 2021-10-11 14:33:54 mqtt 2021-10-11 14:34:31 OK, I'll try it and try to document it 2021-10-11 14:34:36 I try to start with the slowest machines first 2021-10-11 14:34:51 riscv64 will be the challenge here 2021-10-11 14:35:33 We'll start using lxc, right? 2021-10-11 14:35:47 yes, that's how I understand it 2021-10-11 14:36:24 I will be available on WhatsApp if you need me 2021-10-11 14:36:29 Ok 2021-10-11 14:37:38 you may need to git pull && ap .... to rebuild after openssl thing is reverted 2021-10-11 14:37:51 oh no.. I forgot to tag release of abuild 2021-10-11 14:38:25 I haven't checked if we have any -dbg packages in the build-base packages 2021-10-11 14:38:41 I planned to include the -dbg fix for abuild 2021-10-11 15:09:52 ikke: where do we want to run the rv builder? 2021-10-11 15:09:59 the current box has docker 2021-10-11 15:10:05 not sure that plays nice iwth ldc 2021-10-11 15:10:08 with lxc* 2021-10-11 15:10:17 clandmeter: good question 2021-10-11 15:11:18 or did we somewhat solve the mix of docker and lxc? 2021-10-11 15:12:03 I don't think we extensively looked into that yet 2021-10-11 15:13:25 I think the issue was mostly that we wanted to route them both on dmvpn, right? 2021-10-11 15:14:01 not sure anymore 2021-10-11 15:14:11 thought something related with iptables 2021-10-11 15:21:00 ikke: shall we temp run it on nld5-dev1? 2021-10-11 15:21:09 clandmeter: I was thinking the same 2021-10-11 15:21:54 i guess we need to upgrade it to 3.14 2021-10-11 15:22:01 as it has a fix for rv 2021-10-11 15:22:11 ok 2021-10-11 15:24:12 heh 2021-10-11 15:24:17 of course not needed 2021-10-11 15:24:27 it already has rv containers 2021-10-11 15:24:35 ah, right 2021-10-11 15:39:08 ikke: its running now 2021-10-11 15:39:16 do we have a list of what to setup? 2021-10-11 15:39:39 Is this the 3.15 builder? 2021-10-11 15:39:44 yes 2021-10-11 15:40:00 ncopa notes some things earlier today 2021-10-11 15:40:15 ssh root@172.16.4.206 2021-10-11 15:40:31 where did he note? 
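Putting ncopa's steps above together, bootstrapping a fresh builder boils down to something like this (run inside the new container as the build user; the test-skipping variable turns out later in the log to be ABUILD_BOOTSTRAP rather than BOOTSTRAP):

    cd ~/aports/main
    # skip test suites while the toolchain is still being rebuilt;
    # can also be set in /etc/abuild.conf as discussed
    export ABUILD_BOOTSTRAP=1
    ap recursdeps build-base | while read dir; do
        ( cd "$dir" && abuild -rk ) || break
    done
    # once build-base is done, upgrade, then repeat the loop for abuild and aports-build,
    # and finally for everything in $(cat /etc/apk/world)
    abuild-apk upgrade -U -a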
2021-10-11 15:41:31 in this channel 2021-10-11 15:42:08 "I think I have a script in /var/lib/lxc called setup-new-builder or similar, which you can use. you can use it to clone the current build-3-14-* lxc containers" 2021-10-11 15:42:23 "IIRC there is a script in .abuild/ for doing it" 2021-10-11 15:43:34 right, i know the script 2021-10-11 15:43:51 .abuild on which box? 2021-10-11 15:45:14 x86 has it 2021-10-11 15:45:35 https://tpaste.us/9PL7 2021-10-11 16:05:17 what would that script do? 2021-10-11 16:05:29 bootstap a builder? 2021-10-11 16:06:07 yes, start building the initial packages that are required for a builder 2021-10-11 16:06:37 right, that is what i have in my docker logic 2021-10-11 16:06:41 kind of i htink 2021-10-11 16:07:38 clandmeter: did you btw follow the conversation around pkgs.a.o? 2021-10-11 16:07:56 nope 2021-10-11 16:08:08 well, i did see an url from pmos 2021-10-11 16:08:20 didnt know they made an alternative in py 2021-10-11 16:08:22 martijnbraam created an alternative version 2021-10-11 16:08:24 right 2021-10-11 16:08:30 i looked at it 2021-10-11 16:08:34 i think we can use it 2021-10-11 16:08:37 tune it a bit 2021-10-11 16:08:48 it was my idea to do this in py too 2021-10-11 16:08:50 ok 2021-10-11 16:08:52 but no time... 2021-10-11 16:08:53 heh 2021-10-11 16:09:14 but the code looks simple, not sure it has all the bells and whistles we have 2021-10-11 16:09:28 flags are still missing 2021-10-11 16:09:40 Not sure what else 2021-10-11 16:10:20 i guess twisted has some more features 2021-10-11 16:10:27 so thats why the code is more simple 2021-10-11 16:10:34 flask is relatively simple 2021-10-11 16:10:37 but i didnt spend a lot of time checking it out 2021-10-11 16:10:42 ah flask 2021-10-11 16:10:59 i mix that with twisted 2021-10-11 16:11:27 twisted is an eventloop framework, right? 2021-10-11 16:11:44 i dunno, i just know it exists and i mixed the names :) 2021-10-11 16:11:49 heh 2021-10-11 16:12:06 but lets discus that anothe time pls 2021-10-11 16:12:10 sure 2021-10-11 16:12:15 for the builder 2021-10-11 16:12:20 i can run tmux on the host 2021-10-11 16:12:38 and lxc-attach 2021-10-11 16:12:45 run that script 2021-10-11 16:13:00 and then apk upgrade -Ua 2021-10-11 16:13:10 you can also keep an eye on it 2021-10-11 16:13:18 yes 2021-10-11 16:15:48 i guess it makes sense to write these steps down 2021-10-11 16:15:54 yes, that was my idea 2021-10-11 16:16:02 and maybe make a script to boostrap a new builder 2021-10-11 16:16:05 comapred to copy it 2021-10-11 16:16:20 did you use that create-new-builder script? 2021-10-11 16:16:26 no 2021-10-11 16:16:29 thats not possible 2021-10-11 16:16:35 there is no "old" builder 2021-10-11 16:16:39 right 2021-10-11 16:16:43 so there is nothjing to copy from :) 2021-10-11 16:16:46 that script can only copy, understood 2021-10-11 16:17:10 there are only 2 things that we need .abuild and .ssh 2021-10-11 16:17:23 the rest can be scripted 2021-10-11 16:17:33 ok 2021-10-11 16:17:37 the joy of qemu-user... 2021-10-11 16:17:47 git clone is slowww 2021-10-11 16:18:10 you can join tmux if you like 2021-10-11 16:18:19 i need to stop soon 2021-10-11 16:18:30 maybe you can continue if you have some time 2021-10-11 16:19:00 welcome 2021-10-11 16:19:06 :) 2021-10-11 16:19:42 i wonder if we run in trouble if profile is not sourced 2021-10-11 16:19:46 i needed to do that manually now 2021-10-11 16:19:58 but what if mqtt-exec spins? 2021-10-11 16:20:36 does it rely on .profile? 
2021-10-11 16:23:27 normally when you login /etc/profile is sourced 2021-10-11 16:23:35 im not sure which part is responcible for that 2021-10-11 16:23:36 yes, I'm aware 2021-10-11 16:23:49 sh (ash) does that 2021-10-11 16:23:58 but only for login shells 2021-10-11 16:24:03 does the shell doe it itself? 2021-10-11 16:24:06 yes 2021-10-11 16:24:08 ok 2021-10-11 16:24:13 try sh -l 2021-10-11 16:24:16 well thats not functioning 2021-10-11 16:24:43 probably because the 'login shell' detection is broken with qemu-user 2021-10-11 16:27:49 sh -l works 2021-10-11 16:33:04 clandmeter: reference that ash reads profile: https://git.busybox.net/busybox/tree/shell/ash.c#n14551 2021-10-11 16:33:26 if (login_sh): https://git.busybox.net/busybox/tree/shell/ash.c#n14647 2021-10-11 16:34:45 https://git.busybox.net/busybox/tree/shell/ash.c#n14475 login_sh = xargv[0] && xargv[0][0] == '-'; 2021-10-11 16:34:56 We need to know what xargv is 2021-10-11 16:38:53 yes thats the problem 2021-10-11 16:38:58 qemu sets argv 2021-10-11 16:39:30 Do you know what xargv[0] is? 2021-10-11 16:40:13 i think its what qemu passes 2021-10-11 16:40:16 check ps 2021-10-11 16:40:48 i think i saw dalias mention it 2021-10-11 16:41:04 You mean "/usr/bin/qemu-riscv64/bin/sh" 2021-10-11 16:41:18 yes, indeed 2021-10-11 16:41:23 i need to run now, sorry. 2021-10-11 16:42:06 dalias ln -s /bin/sh /tmp/-sh 2021-10-11 16:42:08 alright 2021-10-11 16:50:59 nope, all restaurants are closed, need to feed myself. mondays are so boring... 2021-10-11 16:51:13 :sadtrombone: 2021-10-11 16:53:19 btw, im missing gcc-gnat 2021-10-11 16:53:28 missing where? 2021-10-11 16:53:35 i thought i bootstrapped gcc to include it 2021-10-11 16:53:55 in the repo 2021-10-11 16:54:46 right 2021-10-11 16:55:08 oh 2021-10-11 16:55:15 maybe it got removed 2021-10-11 16:55:21 when i restarted the container 2021-10-11 16:55:30 and the next run of gcc would not build it 2021-10-11 16:56:06 next build* 2021-10-11 16:56:21 How to bootstrap gnat? 2021-10-11 16:56:33 boostrap gcc 2021-10-11 16:56:39 from bootstrap.sh 2021-10-11 16:56:47 so thats not that simple 2021-10-11 16:56:56 We can do that later, right? 2021-10-11 16:57:05 you can 2021-10-11 16:57:11 but you need to build gcc again 2021-10-11 16:57:39 maybe i have a copy somewhere 2021-10-11 16:58:12 not in your dev repo 2021-10-11 16:58:46 libgnat-10.3.1_git20210625-r0.apk 2021-10-11 16:59:11 its not the same snapshot 2021-10-11 16:59:29 10.3.1_git20210921-r1 2021-10-11 17:00:16 maybe it could still work, but i doubt it. 2021-10-11 17:01:52 i guess we can boostrap it on a copy of the edge builder 2021-10-11 17:03:18 ok 2021-10-11 20:46:16 ikke: i guess you didn't disable tests? 2021-10-11 20:46:22 clandmeter: no 2021-10-11 20:46:28 :) 2021-10-11 20:46:36 I think the idea was to run the test suite 2021-10-11 20:46:37 I think we should 2021-10-11 20:46:40 :) 2021-10-11 20:46:51 Not on qemu 2021-10-11 20:46:54 right 2021-10-11 20:47:11 Does it still run? 2021-10-11 20:47:15 so add BOOTSTRAP=1 to /etc/abuild.conf 2021-10-11 20:47:23 no, it failed on perl and was focussing on abuild first 2021-10-11 20:47:38 Ok 2021-10-11 20:47:45 I can take Look 2021-10-11 20:48:07 alright 2021-10-11 21:00:02 i think we should keep the default shell for root on our infra to ash. (hint mps_) 2021-10-11 21:00:15 :) 2021-10-11 21:00:38 dircolors: Command not found. 
2021-10-11 21:00:39 :) 2021-10-11 21:02:16 ikke: did mps write some info on how to add an wg account 2021-10-11 21:02:40 https://gitlab.alpinelinux.org/alpine/infra/infra/-/wikis/Alpine-wireguard-VPN 2021-10-11 21:02:44 That's what he wrote 2021-10-11 21:02:54 thats the client side iirc? 2021-10-11 21:02:58 yup 2021-10-11 21:03:08 ok, so we need something on the server too 2021-10-11 21:03:13 just a little readme.txt 2021-10-11 21:03:29 Would be good if we have these things documented 2021-10-11 21:04:26 /etc/wireguard/wg0.conf 2021-10-11 21:04:36 You've already opened it 2021-10-11 21:04:36 ok so the peer part, the key needs to be unique i ugess? 2021-10-11 21:04:50 I think as a best practice 2021-10-11 21:05:02 else there is no relation to the ip 2021-10-11 21:05:06 right 2021-10-11 21:05:24 and i dont see you in it 2021-10-11 21:05:32 i thought you also wanted to connect 2021-10-11 21:05:50 Yes, just didn't get to it yet :P 2021-10-11 21:06:13 :) 2021-10-11 21:06:17 story of our lives 2021-10-11 21:13:24 btw, using wg.a.o fails for me 2021-10-11 21:13:28 when you have ipv6 2021-10-11 21:14:19 because it's not setup to use ipv6? 2021-10-11 21:17:50 clandmeter: ikke: I already drink daily dose of brandy and wine so can't say anything sensible about wg on hub but you can look on our current wg hub 2021-10-11 21:18:30 i added my 2nd config 2021-10-11 21:18:35 how do i reload this thing? 2021-10-11 21:18:54 (my old friend died from lung cancer today and I'm somewhat of the 'computer things') 2021-10-11 21:19:17 ugh, my condoleances 2021-10-11 21:19:40 shir 2021-10-11 21:19:42 t 2021-10-11 21:19:52 that's a life 2021-10-11 21:19:57 sorry to hear that 2021-10-11 21:20:15 drink some more and enjoy the memories. 2021-10-11 21:20:18 yeah, that happens 2021-10-11 21:20:27 thanks both 2021-10-11 21:21:23 clandmeter: yes, memories are good and that are things I have to keep 2021-10-11 21:24:23 clandmeter: feel free to change shell to ash 2021-10-11 21:30:21 ikke: wrong var :) 2021-10-11 21:30:30 its ABUILD_BOOTSTRAP 2021-10-11 21:30:38 aha 2021-10-11 21:39:57 clandmeter: ncopa did mention that we should upgrade abuild before bootstrapping 2021-10-11 21:40:15 for which reason? 2021-10-11 21:40:54 https://tpaste.us/zlOj 2021-10-11 21:41:06 versioned cmd: provides for example 2021-10-11 21:41:35 I'm about to push abuild 3.9.0_rc2 2021-10-11 21:42:02 what is the reasoning behind versioning cmd? 2021-10-11 21:42:24 That apk does not complain when upgrading packages when you install cmd:foo 2021-10-11 21:42:40 aha 2021-10-11 21:43:07 https://gitlab.alpinelinux.org/alpine/abuild/-/merge_requests/115 2021-10-11 21:43:38 aha v2 2021-10-11 21:44:05 v2? 2021-10-11 21:44:24 i was about to write, aha but still no clue why... 2021-10-11 21:44:27 but that link explains it 2021-10-11 21:44:43 oh lol 2021-10-11 21:44:54 so my aha was upgraded :) 2021-10-11 21:45:42 Ok, let's hope I do not break the builders 2021-10-11 21:45:59 its too late to brake anything, so yes please. 2021-10-11 21:47:18 My first abuild release :) 2021-10-11 21:47:24 well, pre-release :P 2021-10-11 21:47:30 make us proud 2021-10-11 21:48:01 armhf built :) 2021-10-11 21:48:20 right 2021-10-11 21:48:24 but does it build again? 
:) 2021-10-11 21:48:29 :D 2021-10-11 21:51:02 I see you canceled the build 2021-10-11 21:51:20 you can use abuild-apk without sudo :) 2021-10-11 21:51:52 yeah but i always forget 2021-10-11 21:52:19 riscv64 is not yet ready 2021-10-11 21:53:54 looks like abuild is sane 2021-10-11 21:54:00 congrats 2021-10-11 21:54:01 few :) 2021-10-11 21:54:13 is that missing a p? 2021-10-11 21:54:19 or just a few are ok? 2021-10-11 21:54:40 more like a sigh of relieve 2021-10-11 21:56:04 wow 2021-10-11 21:56:09 did riscv64 not build for some time? Seems like nano was pushed a while ago 2021-10-11 21:56:11 rv64 is 5 days behind 2021-10-11 21:56:12 yeah 2021-10-11 21:57:18 our builders do suck, we need some tooling to handle these lockups. 2021-10-11 21:58:51 22 ports to be build before abuild 2021-10-11 21:58:59 and it includes a kernel :| 2021-10-11 21:59:32 well, that would mean good night 2021-10-11 21:59:44 yes, maybe more than 1 :) 2021-10-11 22:00:20 heh 2021-10-11 22:01:06 what was the reason for not having golang based utilities on a builder? 2021-10-11 22:01:19 golang is available on al arches iirc? 2021-10-11 22:01:22 yes 2021-10-11 22:01:30 or is it support? 2021-10-11 22:01:35 I guess so 2021-10-11 22:01:52 else nomad could be something to look into 2021-10-12 08:00:19 clandmeter: do we need script by which we can add new 'users' on wg hub, or I misunderstood your last night question 2021-10-12 08:40:13 mps_: would be nice to know how to add a peer and make it available. 2021-10-12 08:40:21 if we need to reload something 2021-10-12 08:40:53 wg0.conf is related to wg-quick? 2021-10-12 08:41:00 ok, I will write something short about this 2021-10-12 08:44:47 though it is simple, add user to /etc/wireguard/wg0.conf and run 'wg setconf wg0 /etc/wireguard/wg0.conf' 2021-10-12 10:40:06 clandmeter: rv64 is running again 2021-10-12 10:40:26 abuild 3.9.0_rc2 has been installed 2021-10-12 10:40:30 right, i added an issue for it to tsc 2021-10-12 10:40:38 I noticed 2021-10-12 10:42:04 I dont think rv64 is production quality yet, so thats why i question why we should offer stable releases. 2021-10-12 10:42:50 Ariadne: will you join tsc today? 2021-10-12 10:44:45 from the little experience i had with rv64, some things already segfaulted on me. 2021-10-12 10:49:50 mps_: would be nice if you could add a simple readme.txt on the host ~root/ dir so next time we know how to add it. 2021-10-12 11:06:27 clandmeter: ok, will do 2021-10-12 11:06:51 in motd ;) 2021-10-12 11:10:24 if we install perl there I could write script to check consistency of wg.conf file 2021-10-12 11:11:00 i.e. no duplicate IPs, keys 2021-10-12 12:01:01 clandmeter: on deu7-dev1 in /root dir run this `./add-wg-user.sh -i 172.16.252.4 -c 'Dev 3' -k 'aaaaaaaaaaaaaaaaa'` and you will get entry (record) for user 2021-10-12 12:01:28 -i IP address, -c comment -k pubkey 2021-10-12 12:02:30 ofc, someone versed is posix shell could make it a lot better, I think ;) 2021-10-12 12:15:36 mps: thx 2021-10-12 12:15:50 nmeum: why did they put this small fan on the cpu? 2021-10-12 12:15:58 sounds like a vacuum cleaner 2021-10-12 12:16:17 haha 2021-10-12 12:16:21 was asking myself the same thing 2021-10-12 12:16:22 clandmeter: you are welcome 2021-10-12 12:16:25 mine is also relatively loud 2021-10-12 12:16:26 very annoying 2021-10-12 12:16:44 did you receive your nvme ssd already? 
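A sketch of the peer record the script above generates and how it gets applied, using the placeholder values from the example invocation and assuming each peer maps to a single /32:

    # appended to /etc/wireguard/wg0.conf by add-wg-user.sh
    [Peer]
    # Dev 3
    PublicKey = aaaaaaaaaaaaaaaaa
    AllowedIPs = 172.16.252.4/32

    # apply the updated config without taking wg0 down
    wg setconf wg0 /etc/wireguard/wg0.conf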
:) 2021-10-12 12:16:46 and pretty useless 2021-10-12 12:16:52 you can't nowadays sell good computer without noisy cooler 2021-10-12 12:16:55 yes just received it 2021-10-12 12:17:11 i installed it in an older matx chassis i had around 2021-10-12 12:17:30 but the fan... haha what did they think... 2021-10-12 12:17:35 this story started with old days when amstrad had to add useless fan to their compures 2021-10-12 12:18:08 FreedomUSDK 2021.03.01 unmatched ttySIF0 2021-10-12 12:18:28 competitor bashed them that amstrad computers are bad because they don't have fan 2021-10-12 12:18:38 nmeum: why are there two serial devices? 2021-10-12 12:19:51 idk, I just used the USB serial to install the thing once and have been accessing the unmatched over ssh ever since 2021-10-12 16:35:40 ikke: we should probably add edge to pkgs.a.o 2021-10-12 16:35:46 for rv 2021-10-13 10:29:14 clandmeter: hi 2021-10-13 10:29:30 lo 2021-10-13 10:29:42 Saw you working on the riscv 3.15 builder :) 2021-10-13 10:29:58 working is a bit much 2021-10-13 10:30:00 aports-build should be bootstrapped now 2021-10-13 10:30:03 just checking in 2021-10-13 10:30:06 nod 2021-10-13 10:30:33 so upgrade + setting up aports-build I guess 2021-10-13 10:31:52 you want to build world on it? 2021-10-13 10:32:09 no idea, do we? 2021-10-13 10:32:23 thats what aports-build will do? 2021-10-13 10:32:27 yes 2021-10-13 10:32:30 :D 2021-10-13 10:32:41 not sure we need all of world 2021-10-13 10:32:48 but nice to write down what is needed to get going 2021-10-13 10:33:11 clandmeter: we could also coopt it as the builder for edge? 2021-10-13 10:33:22 boostrap world takes time 2021-10-13 10:33:27 yes 2021-10-13 10:33:35 and not build time, thatrs free 2021-10-13 10:33:41 probably lot of fixes 2021-10-13 10:33:44 right 2021-10-13 10:33:55 i doubht its worth the trouble 2021-10-13 10:34:07 but if you want to take care of it, thats fine. 2021-10-13 10:34:34 Not as a priority at least 2021-10-13 10:34:34 you can also rsync packages from edge builder 2021-10-13 10:35:07 but those do not have the new abuild 2021-10-13 10:35:18 working on x86(_64) now 2021-10-13 10:36:08 ikke: i have another question 2021-10-13 10:36:17 in case of ci 2021-10-13 10:36:33 ok 2021-10-13 10:36:34 do you think its possible to run build on qemu and check on another runner? 2021-10-13 10:36:42 sure 2021-10-13 10:36:48 or 2021-10-13 10:36:53 you mean abuild check 2021-10-13 10:36:56 it will need to build src 2021-10-13 10:37:15 You need to transfer the built files as an artifact 2021-10-13 10:37:27 which is possible, though it would probably need to be dymamically done 2021-10-13 10:37:34 right, but thats probably whole src tree 2021-10-13 10:38:06 and i bet that some projects have like xGB of src 2021-10-13 10:38:11 We could put all untracked files in the artifact 2021-10-13 10:38:14 yes 2021-10-13 10:38:16 so that would fail 2021-10-13 10:38:19 like ceph 2021-10-13 10:38:30 nod 2021-10-13 10:39:44 ok another crazy idea 2021-10-13 10:40:03 we can timeout check i guess? 2021-10-13 10:40:23 actually timeout when the whole build+check hangs 2021-10-13 10:40:36 For CI we already have a timeout 2021-10-13 10:41:39 lets say, if build or check time out, try on another CI host? 2021-10-13 10:42:54 so make the rv64 board only do CI jobs that qemu-user host cannot build. 
2021-10-13 10:43:16 We can add an on_fail job that has a tag explicitly targetting the board 2021-10-13 10:43:29 i think else the board is unable to keep up 2021-10-13 10:43:40 its like using a single PI for aarch64 2021-10-13 10:43:57 But I'm not sure if the pipeline is considered a success if the on_failure job succeeds 2021-10-13 10:44:06 I think it won't 2021-10-13 10:44:49 the CI logic cannot do magic on APKBULD veriables right? 2021-10-13 10:44:58 variables* 2021-10-13 10:45:09 What do you mean? 2021-10-13 10:45:18 what i notice is that golang stuff tend to have issues 2021-10-13 10:45:18 You man change behavior? 2021-10-13 10:45:28 they hang often 2021-10-13 10:45:43 You can trigger child pipelines where the gitlab-ci.yml is defined by the output of a script 2021-10-13 10:46:03 like if builddeps has go, use rv64 2021-10-13 10:46:17 We would need to use child pipelines 2021-10-13 10:46:41 right 2021-10-13 10:46:48 first to source the APKBUILD 2021-10-13 10:46:55 child to do something with it? 2021-10-13 10:47:36 im just thinking out load, i have no idea if all of this will benefit us in the end. 2021-10-13 10:47:38 The first will have a job that dynamically creates a ci yaml file 2021-10-13 10:47:50 the child pipeline then will use that generated file to do the actual job 2021-10-13 10:47:58 nod 2021-10-13 10:48:04 It would make CI more complicated 2021-10-13 10:48:16 yes, thats what i thought 2021-10-13 10:48:22 and its already complicated enough 2021-10-13 10:48:51 One thing I was thinking about on using child-pipelines is to only trigger jobs for relevant arches 2021-10-13 10:52:19 anyways, if you have an idea on how to use the rv64 board to improve build quaility, im all ears. 2021-10-13 11:04:15 clandmeter: another thing 2021-10-13 11:04:48 https://docs.gitlab.com/ee/administration/operations/fast_ssh_key_lookup.html maybe related to slow fetches / pushes over ssh 2021-10-13 11:06:48 right 2021-10-13 11:07:10 sounds something we could try 2021-10-14 01:55:13 Hi, I'm trying to write a How-To on the wiki, but when I click save it has a banner that basically says 'Scammers not welcome'. I haven't written anything in the wiki before, and my account is about 2 days old. 2021-10-14 04:31:07 ktprograms: let me take a look 2021-10-14 04:32:31 Apparently the rule kicks in when you create a new page but never edited a page before 2021-10-14 04:32:43 hold on 2021-10-14 04:33:25 kunkku: I think it should be possible now 2021-10-14 05:26:26 ikke: Thanks, it works now, and here's the page I wrote: https://wiki.alpinelinux.org/wiki/How_to_make_a_cross_architecture_chroot. 2021-10-14 12:12:48 alpine is unmatched 2021-10-14 12:12:59 or should i say vice versa? 2021-10-14 12:13:48 anyways it runs now from nvme which is nice 2021-10-14 12:15:17 nmeum: i messed up by thinking your setup-disk patch was already in aports, but its only part of alpine-conf master... 2021-10-14 12:34:48 clandmeter: no, I haven't backported it to master yet. but feel free to do so 2021-10-14 12:35:23 there is one issue with your patch for setup-disk 2021-10-14 13:02:05 do we know who ServerStatsDiscoverertraveler4 is? can we kick them out? 2021-10-14 13:03:43 maybe matrix related? 2021-10-14 13:04:31 https://matrix.org/docs/projects/other/server-stats-project 2021-10-14 13:06:40 That is Matrix related indeed 2021-10-14 13:06:50 "Traveler bot", so I suppose it joins every room it can find and links them together in some fancy graph and what not 2021-10-14 13:07:04 Can safely be kicked but then again I wonder, why? 
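The parent/child pipeline idea discussed above could look roughly like this in .gitlab-ci.yml; the generator script and job names are hypothetical, only the artifact/trigger mechanism is standard GitLab syntax:

    stages: [generate, build]

    generate-child-pipeline:
      stage: generate
      script:
        # hypothetical script: source the APKBUILD and emit jobs with the right
        # runner tags (e.g. the rv64 board for go packages, qemu-user otherwise)
        - ./scripts/gen-child-ci.sh > child-pipeline.yml
      artifacts:
        paths:
          - child-pipeline.yml

    child:
      stage: build
      trigger:
        include:
          - artifact: child-pipeline.yml
            job: generate-child-pipeline
        strategy: depend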
2021-10-14 13:07:58 https://github.com/mx-serverstats/server_stats#how-to-remove-the-bot-from-a-room 2021-10-14 13:11:41 clandmeter: what's the known issue? 2021-10-14 13:12:16 it assumes boot is on its own partition 2021-10-14 13:13:03 why? 2021-10-14 13:13:28 why what? 2021-10-14 13:13:29 I suppose it boots in EFI? That'd make sense then 2021-10-14 13:13:49 https://gitlab.alpinelinux.org/alpine/alpine-conf/-/blob/master/setup-disk.in#L365 2021-10-14 13:14:31 if you run setup-disk against a mountpoint with a single partition, it will not boot. 2021-10-14 13:14:32 ah well 2021-10-14 13:14:37 that could be changed I suppose 2021-10-14 13:14:40 good catch 2021-10-14 13:15:04 also: update-u-boot needs some love 2021-10-14 13:15:11 yes it does :) 2021-10-14 13:15:35 and now i understand how this uboot thingy works 2021-10-14 13:15:47 it searches for extlinux.conf and uses this device 2021-10-14 13:15:53 saerch order is nvme first 2021-10-14 13:16:27 so if i break my nvme, it will automatically fallback to the sd install. 2021-10-14 13:18:47 nmeum: i guess you have the same problem with not beeing able to reboot? 2021-10-14 13:19:00 clandmeter: yes, that is a linux limitation presently 2021-10-14 13:19:12 yes i read a bit up on it 2021-10-14 13:19:13 very annoying because I can't put the unmatched in our datacenter for this reason 2021-10-14 13:19:15 there are some hacks 2021-10-14 13:19:35 ikea can help :) 2021-10-14 13:19:45 haha 2021-10-14 13:19:59 I was hoping linux upstream will fix it eventually 2021-10-14 13:20:02 but haven't read up on it 2021-10-14 13:20:03 but maybe not so suitable in a dc 2021-10-14 14:55:17 nmeum: crap, there is no config to boot on power... 2021-10-14 14:55:46 this is as much of fun as the fan 2021-10-14 14:56:09 put it far away so you dont hear it, but not too far so you can still reach it to reboot :) 2021-10-14 14:56:36 hehe 2021-10-14 14:57:42 I just hope the linux folks will somehow make it possible to reboot the thing otherwise I will have to put it in some sort of storage room :S 2021-10-14 14:58:01 or remove the shitty fan 2021-10-14 18:00:14 ikke: can you restart tpaste? 2021-10-14 18:01:01 done 2021-10-14 18:04:49 Thx 2021-10-14 18:04:59 Will check to add health check 2021-10-14 18:12:25 any idea why it's happening? 2021-10-14 18:41:11 nope 2021-10-14 18:41:15 didnt look into it 2021-10-14 18:42:12 the previous discussion regarding pkgs.a.o, what has been disucssed? 2021-10-14 18:44:55 mostly whether we would be interested in replacing it 2021-10-14 18:45:14 was that discussion here? 2021-10-14 18:45:19 or in another channel? 2021-10-14 18:45:29 here 2021-10-14 18:45:36 ok ill search backlog 2021-10-14 18:52:08 ok 2021-10-14 18:54:06 i guess we almost use it as is 2021-10-14 18:54:11 but loose flagging 2021-10-14 18:54:24 it will have to be rewritten 2021-10-14 18:54:28 I would not mind to be honest 2021-10-14 18:54:39 to rewrite it? 2021-10-14 18:55:03 I don't mind switching and living without it for the time being 2021-10-14 18:55:22 I'm not sure how actively used it is 2021-10-14 18:55:39 i guess using py we will some more ppl wanting to chip in 2021-10-14 18:55:47 yes 2021-10-14 18:55:53 more mainstream language 2021-10-14 18:56:01 hint, PureTryOut 2021-10-14 18:56:15 martijnbraam: ping 2021-10-14 19:34:00 sorry, what do you mean with "loose flagging"? 
2021-10-14 19:35:09 pkgs has 2 features 2021-10-14 19:35:16 manual flag a pkg out ofdate 2021-10-14 19:35:26 and subscribe to fredora 2021-10-14 19:35:31 fedora* 2021-10-14 19:35:47 so it will automatically do that 2021-10-14 19:39:10 subscribe to fedora? you mean the Anitya service? 2021-10-14 19:39:26 I didn't know there already was integration with that, I kinda wanted to add that into the new thing πŸ€” 2021-10-14 19:39:48 you are not maintaining a pkg? 2021-10-14 19:40:03 I am, but I never get emails for that stuff 2021-10-14 19:40:08 I just check repology.org all the time 2021-10-14 19:40:12 hmm 2021-10-14 19:40:16 i think i do 2021-10-14 19:40:29 somewhere hidden away in my filters 2021-10-14 19:40:53 This is an automatic message send from pkgs.alpinelinux.org 2021-10-14 19:40:53 You are receiving this message because you are the maintainer of aport: 2021-10-14 19:41:30 Yeah I definitely never got that 2021-10-14 19:47:50 I get those just for one or 2 packages I maintain 2021-10-14 20:03:22 I would love proper support for Anitya though (although I wouldn't like to get emails from it, I can already see my inbox being spammed), but I personally would see it as a replacement for the manual flagging 2021-10-14 20:05:10 The integration is already there 2021-10-14 20:05:23 in aports-turbo 2021-10-14 20:05:27 (but it doesn't seem to work currently) 2021-10-14 20:06:04 https://pkgs.alpinelinux.org/flagged 2021-10-14 20:06:10 zoxide was automatically flagged 2021-10-14 20:06:20 xfce4-whiskermenu-plugin as well 2021-10-14 20:06:29 vim 2021-10-14 20:06:36 So just a few then 2021-10-14 20:06:48 more than a few 2021-10-14 20:07:10 See all the flags around 2am 2021-10-14 20:07:42 Well like I said, I have literally never received an email from it, and I maintain quite a lot of packages nowadays 2021-10-14 20:07:55 And well, luckily I don't receive notifications for it, because damn it would be so much spam 2021-10-14 20:08:25 It probably depends on whether it can match a package 2021-10-14 20:11:27 PureTryOut: so you dont want a notification 2021-10-14 20:11:48 then whats the point? 2021-10-14 20:24:42 I personally prefer an overview on a website. Notifications can be nice but at this point I maintain so many things that it would be spam 2021-10-14 21:41:12 its on pkgs.a.o :) 2021-10-15 05:59:31 Yes I realize that, I would use that over the notifications myself. If it were more reliable though, as filtering on myself as maintainer it literally only shows vulkan-headers right now while according to repology there is quite a bit more that I have to update 2021-10-15 05:59:53 According to the message vulkan-headers was flagged automatically but I definitely never got an email from that 2021-10-15 09:21:05 ikke: do you remember if dmvpn will work behind NAT? 2021-10-15 09:21:12 kunkku: ^ 2021-10-15 09:52:56 it will work 2021-10-15 09:53:33 but obviously when both spokes are behind NAT, shortcut routes will not work 2021-10-15 09:54:23 those packets get routed via the hubs 2021-10-15 15:43:09 kunkku: you around? 2021-10-15 17:12:36 ikke: looks like dmvpn starts now, but i cant ping anyting 2021-10-15 17:12:42 looks like no routes are added 2021-10-15 17:12:57 do you have a gre tunnel? 2021-10-15 17:13:02 or interface rather 2021-10-15 17:13:20 yup 2021-10-15 17:13:31 i cannot ping the hub 2021-10-15 17:13:39 hmm 2021-10-15 17:21:50 works 2021-10-15 17:22:10 ah, what was it? 
2021-10-15 17:23:08 the setup script did not finish completely 2021-10-15 17:23:23 because of missing kernel modules 2021-10-15 17:23:36 so after mps give me a good kernel, i didnt rerun it. 2021-10-15 17:23:46 ikke: can you ssh to 172.16.25.1? 2021-10-15 17:24:10 up 2021-10-15 17:24:11 yup* 2021-10-15 17:24:16 alpine-unmatched 2021-10-15 17:25:08 yup, when all imagination is lost :) 2021-10-15 17:25:15 :) 2021-10-15 17:25:30 welcome in my attic 2021-10-15 17:25:38 :) 2021-10-15 17:25:55 also I can ping it from my workstation 2021-10-15 17:26:11 i can add your key if you like 2021-10-15 17:26:32 not sure what I could do there 2021-10-15 17:26:55 maybe to look 'inside' of machine 2021-10-15 17:27:12 but this will require root 2021-10-15 17:27:39 it is intended for builder? 2021-10-15 17:27:56 mps: try now 2021-10-15 17:27:57 not atm 2021-10-15 17:28:06 It's too slow to be a builder 2021-10-15 17:28:35 mps: if you change shell ill kick you :p 2021-10-15 17:28:50 jk 2021-10-15 17:29:34 :) 2021-10-15 17:30:02 username is? 2021-10-15 17:30:08 9root 2021-10-15 17:30:16 -9 2021-10-15 17:30:30 yes 2021-10-15 17:32:30 mps: if you want to try something, just create yourself a container 2021-10-15 17:32:43 same goes for ikke ofc :) 2021-10-15 17:33:05 no idea for now 2021-10-15 17:33:07 πŸ‘ 2021-10-15 17:33:15 but thank you 2021-10-15 17:33:20 oh, and dont reboot without me knowing 2021-10-15 17:33:33 but i guess i mentioned that enough already :D 2021-10-15 17:33:45 would you mind to add linux-tools and pciutils 2021-10-15 17:34:07 np 2021-10-15 17:34:30 util-linux 2021-10-15 17:34:43 sorry for wrong naming 2021-10-15 17:34:48 i guess you mean that instead of linux-tools :) 2021-10-15 17:35:57 I wouldn't ask for inux-tools-iio and linux-tools-gpio 2021-10-15 17:36:12 and linux-tools-tmon 2021-10-15 17:36:38 not nice to play with them on remote machines 2021-10-15 17:38:35 how much you payed for all this? 2021-10-15 17:38:47 The board was sent to us 2021-10-15 17:39:02 ah, to alpine? 2021-10-15 17:39:26 yes 2021-10-15 17:39:41 I thought clandmeter bought it 2021-10-15 17:39:43 clandmeter did buy an m.2 disk though 2021-10-15 17:40:10 yes kind of 2021-10-15 17:40:22 i bought it but in the end not pay for it 2021-10-15 17:40:35 somebody else could not resist to donate to my paypal 2021-10-15 17:40:40 heh 2021-10-15 17:40:54 but stupid amazon send it back 2021-10-15 17:40:59 so i had to buy from another place 2021-10-15 17:41:04 nice, there are still idealist in this world 2021-10-15 17:41:06 which was 5 euro more expensive 2021-10-15 17:41:14 so in the end I payed 5 euro :) 2021-10-15 17:41:16 :D 2021-10-15 17:41:22 and ofc the chassis 2021-10-15 17:41:33 its donated by some company i know ;-) 2021-10-15 17:41:40 ;-) 2021-10-15 17:42:11 LVT we call it in NL :) 2021-10-15 17:42:12 company should be listed on sponsors? 2021-10-15 17:42:32 haha 2021-10-15 17:42:40 who knows, maybe its already on it ;-) 2021-10-15 17:43:13 I'll ask my other Dutch friends what LVT means 2021-10-15 17:50:02 ikke: you know this abbrev? 2021-10-15 17:50:49 I didn't, but I looked it up 2021-10-15 17:50:54 :) 2021-10-15 17:51:13 we use it often, but thats probably because of our type of business 2021-10-15 17:51:32 i think also in construction 2021-10-16 08:14:05 would be possible to give me temporary(?) access to a ppc64le lxc container? 
I would like to debug https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/26267#note_184653 but running the go test suite in an emulated s390x takes forever on my systems 2021-10-16 08:16:25 nmeum: I can take a look at it later today 2021-10-16 08:16:50 thanks 2021-10-16 08:17:56 s/s390x/ppc64le/ 2021-10-16 23:04:00 re: that latest list message, on bootstrapping 3.15 builders, I'm curious what does that process entail? 2021-10-16 23:04:32 nmeum: if you need a ppc64le machine I can help you out anytime today 2021-10-18 04:29:42 Thalheim: We bootstrap a new release from edge. We copy an existing stable builder, without the built packages, update it to edge with the bare minimum packages and set the repository to its own package repository (which is still empty). Then we start building everything necessary for build-base, abuild and aports-build 2021-10-18 04:29:56 after each step, we replace the installed packages with the ones that were just built 2021-10-18 11:23:30 thanks! 2021-10-18 17:42:36 clandmeter: do we want to switch the riscv64 edge builder to an lxc container? 2021-10-18 18:02:38 i dont see an immediate reason to do it 2021-10-18 18:03:15 I would prefer to use the cpu cycles on that host 2021-10-18 18:03:24 alright, fine with me 2021-10-18 18:03:44 so we run builder and ci on the same box 2021-10-18 18:03:54 is ci running? 2021-10-18 18:04:06 no 2021-10-18 18:04:15 any specific reason? 2021-10-18 18:04:33 Might cause a lot of CI failures? 2021-10-18 18:04:42 due to tests 2021-10-18 18:05:17 Unless we make sure we set ABUILD_BOOTSTRAP for riscv64 2021-10-18 18:05:42 clandmeter: btw, I've created a 4096-bit key for rv64 now as well 2021-10-18 18:05:51 nice 2021-10-18 18:06:00 Then in due time, we can switch the builder to use it when we switch the other edge builders as well 2021-10-18 18:06:05 yes think we can disable checks 2021-10-18 18:06:16 at least have some kind of build test 2021-10-18 18:06:51 if we can run lxc and docker on the same host, i have no problem with switching edge to lxc 2021-10-18 18:07:24 I did run docker and lxc on the same host, there is as far as I know no inherent limitation 2021-10-18 18:07:31 I guess our issue was combined with dmvpn 2021-10-18 18:07:34 but the current setup has not really broken anything except that it's different. 2021-10-18 18:07:54 so if you want to change it, its ok for me. 2021-10-18 18:08:20 btw, did we discuss rootbld for new stable builders?
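A loose sketch of the bootstrap flow ikke describes above; the container names and paths here are made up, and the real process iterates over rebuilding and re-installing packages until build-base, abuild and aports-build are self-hosted:

    # 1. copy an existing stable builder, leaving its built packages behind
    rsync -a --exclude 'home/buildozer/packages/' \
        /var/lib/lxc/build-3-14-x86_64/ /var/lib/lxc/build-3-15-x86_64/
    # 2. inside the copy: upgrade to edge with only the bare minimum installed,
    #    then point apk at the builder's own (still empty) repository
    echo /home/buildozer/packages/main > /etc/apk/repositories
    # 3. build everything needed for build-base, abuild and aports-build,
    #    after each step replacing installed packages with the freshly built ones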
2021-10-18 18:08:28 no, not yet 2021-10-18 18:08:52 i think we have suggested this now for the last 10 releases 2021-10-18 18:09:12 There is one challenge we need to solve 2021-10-18 18:09:33 rootbld does not work on CI, nor does it make sense 2021-10-18 18:09:45 We do want to limit network at the right time though to catch issues 2021-10-18 18:09:57 it does not make sense, thats absolutely right :) 2021-10-18 18:10:31 It would be nice if we could get some kind of hook from abuild 2021-10-18 18:10:56 Then we can use that hook to disable networking during build/check/package 2021-10-18 18:11:03 well it would make sense though in another way 2021-10-18 18:11:13 use a different executor 2021-10-18 18:11:30 but thats limits things as far i read the docs 2021-10-18 18:12:07 i think we can already do that if we put `abuild -r` into multiple stages 2021-10-18 18:12:22 right, that could also be an option 2021-10-18 18:12:36 thats how rootbld does it 2021-10-18 18:12:37 iirc 2021-10-18 18:12:56 but with docker containers, you basically already have a rootbld 2021-10-18 18:13:08 just without the 'disable networking at the right moment' 2021-10-18 18:13:45 i know 2021-10-18 18:13:48 both are containers 2021-10-18 18:14:07 can you disable network with docker exexcutor? 2021-10-18 18:14:23 I think rootbld just removes the dns servers 2021-10-18 18:14:34 from /etc/hosts 2021-10-18 18:14:48 i dont think so 2021-10-18 18:14:51 oh 2021-10-18 18:15:01 bubblewrap can disable networking 2021-10-18 18:16:14 You cannot shutdown the network interface in docker 2021-10-18 18:16:41 but you can assign none as network 2021-10-18 18:17:25 it will not assign the default network 2021-10-18 18:17:31 Yes, but that prevents networking all-together 2021-10-18 18:17:35 which is not very useful 2021-10-18 18:17:48 depends how you split abuild -r 2021-10-18 18:18:02 cant each part run in its own container? 2021-10-18 18:18:06 yes, but would be a hassle when you want to build lots of packages 2021-10-18 18:18:27 2 jobs per package 2021-10-18 18:18:37 with artifact transfer (so limited size) 2021-10-18 18:18:57 ok, cant you keep it on the same host? 2021-10-18 18:19:07 with the build dir 2021-10-18 18:19:26 You can cache files, but we need to make sure we don't polute non-related jobs 2021-10-18 18:23:35 truncating /etc/resolv.conf can be done in a docker container and might be effective 2021-10-18 18:24:38 can you kill the default route? 2021-10-18 18:25:21 nope 2021-10-18 18:25:30 ip: RTNETLINK answers: Operation not permitted 2021-10-18 18:29:04 unshare -Un abuild build? 2021-10-18 18:29:11 er, -cn 2021-10-18 18:29:47 requires privileged container 2021-10-18 18:29:51 I believe 2021-10-18 18:30:17 bb unshare does not have -c 2021-10-18 18:31:09 unshare: unshare failed: Operation not permitted 2021-10-18 18:49:58 ikke: i dont t hink you can edit resolv.conf? 2021-10-18 18:50:09 clandmeter: I could 2021-10-18 18:50:12 I just tested it 2021-10-18 18:50:19 i cant just tested :) 2021-10-18 18:50:27 strange 2021-10-18 18:50:42 oh i can 2021-10-18 18:50:43 what version of docker? 2021-10-18 18:50:44 oh 2021-10-18 18:50:47 but cant delete it 2021-10-18 18:51:02 right 2021-10-18 18:51:06 but you can make it empty 2021-10-18 18:51:19 apparently you need to set -c security.nesting=true 2021-10-18 18:51:36 to docker run I suppose? 
2021-10-18 18:51:50 i thought it was lxc 2021-10-18 18:55:31 it should work with --security-opt=seccomp=unconfined 2021-10-18 18:55:49 yes, if you remove all restrictions, ofcourse you can 2021-10-18 18:56:10 not --privileged 2021-10-18 18:56:19 still, you remove many seccomp restrictions 2021-10-18 18:56:33 not sure why they forbid unshare 2021-10-18 18:56:35 I'd rather create a new profile that allows unshare 2021-10-18 18:57:01 sure 2021-10-18 18:57:26 i guess they disabled it for "added security" 2021-10-18 18:58:33 maybe also because they think it's not that useful without mount 2021-10-18 19:35:41 i guess you can take the resolv.conf route and try to mimic as much as possible as what we do with bwrap 2021-10-18 19:48:40 ikke: have you looked into using shell executor? 2021-10-18 19:49:04 I have used the shell executor before 2021-10-18 19:49:26 that way you could simply use abuild 2021-10-18 19:50:51 But we also need to clean up ~/packages for example each time? And how about concurrency? 2021-10-18 19:52:09 i think currently rootbld mounts stuff from ~ 2021-10-18 19:52:42 we could add an option to not do that 2021-10-19 20:45:22 huh does our ppc64le ci host only have 8 cores? htop shows other cores as 'offline' 2021-10-19 20:45:48 nproc just returns 8 2021-10-19 20:51:42 oh, not CI, build vm 2021-10-19 21:24:20 https://github.com/htop-dev/htop/issues/757 2021-10-19 21:53:23 lol, the vertical screen is a thing with so many cores ;) 2021-10-19 21:55:15 yes, I ran into it on our ppc64le host 2021-10-19 21:55:18 not as many, but quite a bit 2021-10-19 21:58:49 Im rly thinking about buying extra screen to just rotate it 45 degrees and comfortable view web pages or console/code etc :/ 2021-10-19 21:59:32 45 degree? :D 2021-10-19 21:59:42 that would be diagonal 2021-10-19 22:00:45 omg, ye 90! 2021-10-19 22:01:29 but maybe 45 wouldnt be so bad :D 2021-10-19 22:01:50 hehe 2021-10-19 22:01:55 it would certainly be original 2021-10-19 22:02:17 well, just bend neck little bit 2021-10-20 07:12:16 45 would be ideal to use in the rain. 2021-10-20 08:02:36 fun now the arm builders are out of space 2021-10-20 09:21:55 i have cleaned up my lxc containers again and deleted some more distfiles and buildlogs 2021-10-20 09:22:35 Thanks 2021-10-20 09:23:01 I guess its problematic to run 6 llvm builds in parallel on same disk 2021-10-20 09:23:56 Yes, I can somehow imagine 2021-10-20 09:24:27 I wonder if it would be an idea to have the ~/packages on a network share 2021-10-20 09:24:57 on a host with much disk space 2021-10-20 09:25:23 That could make sense 2021-10-20 09:26:22 i have been thinking a lot on the build infra lately 2021-10-20 09:26:35 and what we currently do is very simple 2021-10-20 09:26:38 and fast 2021-10-20 09:26:44 which I like 2021-10-20 09:26:57 a full redesign will make it complicated 2021-10-20 09:27:46 if we had the ~/packages on a network share, we could do the signing and upload from a different host 2021-10-20 09:28:14 meaning that we would not need to have the signing keys on the builders 2021-10-20 09:28:37 Yes, that would be a good idea security wise 2021-10-20 09:31:14 we could even have the shared disk on master mirror 2021-10-20 09:31:51 then the "upload" to master could be a rename of the new files 2021-10-20 09:31:58 which is fast 2021-10-20 09:37:23 How would we handle signing? 
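Sketching the "resolv.conf route" mentioned above, i.e. splitting abuild into phases inside the docker-executor job and cutting DNS before the build; the split point is an assumption, and note this only removes name resolution, not direct-IP access:

    # network still available: fetch sources and install makedepends
    abuild deps fetch
    # docker bind-mounts /etc/resolv.conf, so it cannot be removed, but it can be emptied
    : > /etc/resolv.conf
    # phases that should no longer need the network
    abuild unpack prepare build check rootpkg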
2021-10-20 09:39:34 i wonder how much work it is of moving this all to gitlab ci/cd 2021-10-20 09:39:55 and if we want to depend so deeply onto gitlab 2021-10-20 09:40:19 I'm not sure if gitlab ci/cd is that suitable for build-the-world kind of jobs 2021-10-20 09:40:30 why not? 2021-10-20 09:50:56 what are the dependencies for a gitlab runner? 2021-10-20 09:51:03 go 2021-10-20 09:51:07 docker? 2021-10-20 09:51:18 Not necessarily 2021-10-20 09:51:27 but go at least 2021-10-20 09:51:33 for buildtime 2021-10-20 09:51:38 runtime atm bb and musl 2021-10-20 09:51:51 gitlab-runner has several executors 2021-10-20 09:52:00 docker is one of them, but you can also execute jobs directly on the host 2021-10-20 09:52:07 (shell executor) 2021-10-20 09:52:27 so we'd need a gitlab runner binary 2021-10-20 09:52:37 which could be corsscompiled i guess 2021-10-20 09:53:17 yes 2021-10-20 09:53:20 so it would be doable 2021-10-20 09:53:41 then the question is if we want be locked in to gitlab 2021-10-20 09:55:23 re "how would we handle signing" on a shared network mount 2021-10-20 09:55:44 builders would have their own signing key 2021-10-20 09:56:06 the signing host would have the official signing keys 2021-10-20 09:56:15 they share the network mount 2021-10-20 09:56:30 builder would sign the built packages (and index?) with the builders signing key 2021-10-20 09:56:51 the signing host would verify this signature and re-sign the packages with the official key 2021-10-20 09:56:55 and upload to dl master 2021-10-20 09:57:58 i think moving it to gitlab ci/cd will be complicated tbh 2021-10-20 09:58:40 well maybe not 2021-10-20 09:59:04 im thinking out loud: what happens if you git push, the ci pick it up and start build, and second after a new git push arrives 2021-10-20 09:59:20 i guess it would work if the second push would be blocked til the first is done with its job 2021-10-20 09:59:29 it would work but be slower than what we currently do 2021-10-20 10:00:11 what i'd really want is something that could distribute the build jobs so they run in parallel 2021-10-20 10:01:04 but that is a bit complicated 2021-10-20 10:02:04 the idea with a shared network mount was to save disk space on the builders, which is the current problem 2021-10-20 10:04:51 the benefits with our current approach is: 1) its simple, 2) its fast. downside is: 1) secret key management (limits who gets access to the host) 2) wasteful disk usage (full mirrors are stored on each builder) 2021-10-20 10:54:21 what are the biggest disk consumers? 2021-10-20 10:54:26 pkgs or distfiles? 2021-10-20 10:55:44 I think pkgs 2021-10-20 10:57:01 distfiles is shared between 3 arches for example on a(rm(hf|v7)|rch64) 2021-10-20 10:57:34 distfiles is atm 26G, just build-edge-aarch64 is 70G 2021-10-20 10:58:22 build-edge-aarch64/rootfs/home/buildozer/packages how much is that? 
2021-10-20 10:59:03 40G 2021-10-20 10:59:27 so there is 30G of other lingering files 2021-10-20 10:59:36 ~/go probably 2021-10-20 10:59:52 ~/.cache/go-build ~/.cache/yarn 2021-10-20 10:59:57 and maybe some src not cleaned ujp 2021-10-20 10:59:58 aports/*/*/src 2021-10-20 11:01:26 if we would want to use CI to build packages, we would need to calculate pkgs to build differently 2021-10-20 11:01:37 similar to buildrepo 2021-10-20 11:01:44 or even use buildrepo 2021-10-20 11:01:55 right but it needs ~/packages 2021-10-20 11:07:34 I assume that would be persisted on the host 2021-10-20 11:17:54 well, that would remove the advantage of using multiple runners to build packages 2021-10-20 11:30:25 i tested runc and crun this morning 2021-10-20 11:30:44 its like a chroot with a config.json basically 2021-10-20 11:32:06 and it can do special things with stdio 2021-10-20 11:36:23 one thing i find very cool is that it can run userns, as non-root 2021-10-20 11:36:51 which means we can avoid the abuild-apk suid root 2021-10-20 11:38:22 we could install the the dependencies in one crun run and do the actual build as non-root in a second run 2021-10-20 12:06:59 ncopa: how is that differently as to using bubblewrap? 2021-10-20 12:48:48 does same job, but runc/crun is designed to run oci (docker) containers 2021-10-20 12:54:19 i dont know how bubblewrap handle the lifcycle, but with runc you can create the intance, then set up the networking for it and finally run it 2021-10-20 12:54:27 and collect the run state 2021-10-20 12:54:45 basically what docker does 2021-10-20 12:55:41 09:24 I wonder if it would be an idea to have the ~/packages on a network share 2021-10-20 12:55:50 i suggested this previously for enabling -dbg more broadly 2021-10-20 12:57:41 clandmeter: the other difference is that runc/crun implements the OCI standard, which means it follows a standard to run the containers 2021-10-20 12:57:59 there is also a bwrap-oci wrapper that can do the same for bubblewrap apparently 2021-10-20 12:58:07 https://github.com/projectatomic/bwrap-oci 2021-10-20 12:58:37 i guess the main benefit is that we'd follow a standard, which means we could re use other tools 2021-10-20 12:59:10 if we'd use bubblewrap, I'd go for bwrap-oci 2021-10-20 13:21:07 ncopa: so you want to use an image? 2021-10-20 13:24:34 yes 2021-10-20 13:24:47 thats the idea, we create the images using standard tools 2021-10-20 13:25:09 at least we have the possibliity to run/create the images using standard tools 2021-10-20 13:38:07 I was just wondering what the adv over using images and just use apk to setup the container. apk is so fast you hardly notice it. 2021-10-20 13:38:28 no need to manage images, and world is alwayss uptodate 2021-10-20 13:40:40 maybe easier for others to setup a similar environment to test things in?? 2021-10-20 13:45:57 i dont think runc is *easy* to use? 2021-10-20 13:47:55 imho its nice to have each build start completely from scratch to have 100% pristine env. 2021-10-20 13:48:31 similar to what rootbld does 2021-10-20 13:57:39 i think the idea is that alpine would use runc or crun, and if there is a build issue, then package maintainers can use docker to diagnose the issue. the theory is that this would potentially be more consistent than alpine using bwrap and apk, and maintainers using docker 2021-10-20 13:59:34 it does raise the question though of why maintainers can't also use bwrap and apk. 
i think the answer to that depends on how well we can develop and package those tools for broad distribution as well as internal use 2021-10-20 14:00:43 And how easy it is to inspect / manipulate the state inside the chroot / container in case it's necesary 2021-10-20 14:03:43 theoretically with any namespace-based container you can use nsenter, as long as the container is not so minimal that it doesn't contain a shell 2021-10-20 14:06:05 in principle bwrap should be easier 2021-10-20 14:06:12 its build into abuild 2021-10-20 14:06:44 the downside is that you need alpine env to use it 2021-10-20 14:07:14 docker makes it easy to run on macos or windows 2021-10-20 14:13:13 the idea here is to re-use the standard container format and standardized runtime 2021-10-20 14:13:33 because i think it may give portability benefits in the long run 2021-10-20 14:13:53 then we *can* manage the build infra as docker images 2021-10-20 14:14:25 which means we *can* prepare and run them using things like docker or kubernetes 2021-10-20 14:14:35 but we dont need to 2021-10-20 14:16:25 we can still set up the rootfs from scratch with apk add --root $PWD/rootfs and then run crun on that rootfs 2021-10-20 14:18:31 but yes the idea is to use standard OCI containers 2021-10-20 14:18:57 even recnt lxc/lxd uses OCI containers nowdays 2021-10-20 14:19:12 instead of lxc containers, which are non-standard 2021-10-20 14:19:30 i have not read up on oci containers 2021-10-20 14:19:35 is that the format of the image? 2021-10-20 14:19:39 or it is more? 2021-10-20 14:19:55 i think there are runtime specs 2021-10-20 14:20:05 like what args a container runtime is supposed to take 2021-10-20 14:20:25 it means that you can run containerd with brwap-oci, crun or runc 2021-10-20 14:20:30 and it should just work 2021-10-20 14:21:19 then there is a spec for the container image format 2021-10-20 14:21:34 and there are specs for network plugins and storage plugins as well 2021-10-20 14:27:55 ok, well this part it the easiest part. 2021-10-20 14:29:07 or you want to implement this in our current builder logic? 2021-10-20 14:30:52 i dont know yet 2021-10-20 14:31:26 still seems like root permissions is needed 2021-10-20 14:31:55 to be able to install the deps 2021-10-20 14:41:22 step 1: setup up the container rootfs. needs root. installs build-base, abuild, git and creates the buildozer user account. this is equal for all packages/builds 2021-10-20 14:42:46 step 2: as non-root, in the container, git clone repository, enter aport and report back to the caller (which as root) what the makedepends are 2021-10-20 14:46:27 step 3: as root, in the container, install the makedepends from step 2 2021-10-20 14:47:21 step 4: as non-root perform the build and return the built packages 2021-10-20 14:48:40 step 5: as root, clean up the build env 2021-10-20 14:49:18 step 6: (re)sign the packages, update the index and upload to master mirror 2021-10-20 14:52:06 in step2 we also need to figure out if we need network connectivity for the build 2021-10-20 14:52:44 before step4 we can set up the network if needed, or run it with the hosts network namespace 2021-10-20 14:53:09 is step 5 necessary? 
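A very rough outline of steps 1-5 above in shell; the bundle layout, config.json contents and user handling are all glossed over, and the crun invocations assume the per-step command is set via process.args in the bundle's config.json:

    rootfs=bundle/rootfs

    # step 1 (root): pristine rootfs with the base build tooling
    apk add --root "$rootfs" --initdb -U --keys-dir /etc/apk/keys \
        -X https://dl-cdn.alpinelinux.org/alpine/edge/main \
        build-base abuild git
    # ... create the buildozer account inside "$rootfs" ...

    # step 2 (non-root, inside the container): clone aports, report $makedepends back
    crun run --bundle bundle report-deps

    # step 3 (root): install the reported makedepends into the rootfs
    apk add --root "$rootfs" $makedepends

    # step 4 (non-root, inside the container, network only if needed): the build itself
    crun run --bundle bundle build-pkg

    # step 5 (root): throw the whole environment away
    rm -rf "$rootfs"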
2021-10-20 14:53:33 well unless we want to keep the container forever 2021-10-20 14:53:53 the idea here is that we start from scratch for every build 2021-10-20 14:54:05 Oh, you mean basically cleanup the container 2021-10-20 14:54:09 yes 2021-10-20 14:54:12 ok 2021-10-20 14:54:48 the built packages could be preserved via a shared mount or similar 2021-10-20 14:55:24 with the above steps we dont need suid abuild-apk 2021-10-20 14:56:32 is that important? 2021-10-20 14:56:51 would be nice to drop 2021-10-20 14:57:51 I think that user who can run abuild-apk can run anything as root 2021-10-20 14:58:23 theorietically 2021-10-20 14:58:33 yes, i also brought this up before :p 2021-10-20 14:58:43 They can build and install a package that replaces /etc/passwd / /etc/shadow / /etc/group 2021-10-20 14:58:50 or just install a suid binary 2021-10-20 14:58:51 or any other filel 2021-10-20 14:58:57 exactly 2021-10-20 14:58:59 right 2021-10-20 14:59:56 with the above steps, we can avoid that the APKBUILD ever runs as root, nor have permissions to execute anything in the container 2021-10-20 15:00:01 as root that is 2021-10-20 15:01:15 actually, we could even do the pre-fetch of the install deps 2021-10-20 15:01:34 and run the actual install of the deps (as root) without network 2021-10-20 15:02:13 this sounds quite similar to arch makechrootpkg or xbps-src build 2021-10-20 20:16:07 rdfined: "Totally, 36 GiB can be reduced." on distfiles :-) 2021-10-20 20:16:13 rdfind* 2021-10-20 20:32:48 hmm, again :( 2021-10-21 10:26:21 77G free again on nld8 2021-10-21 10:26:37 37G atm on usa2 2021-10-21 12:12:04 I get periodic notification of 'Convert man pages to scdoc' passed from gitlab web ui 2021-10-21 12:12:11 not sure where they come from 2021-10-21 12:13:00 Usually it means you have an MR page open 2021-10-21 12:13:05 somewhere in a tab 2021-10-21 12:13:20 You get notifications when the pipeline finishes 2021-10-21 12:16:08 yeah i just dont know which of the tabs it is :) 2021-10-21 12:16:20 it also seems to be that the job is running in a loop 2021-10-21 15:48:30 i got an email from IBM about "getting a proper involvement and ensure that we continue with Alpine on P.". They ask what would be the key requirements for P. 2021-10-21 15:48:52 I got it yesterday didnt notice it until today 2021-10-21 15:50:27 I need to respond soonish. (today or latest tomorrow). Do you have any input on what would be the key requirements for power on alpine? 2021-10-21 16:05:58 Stable build platform 2021-10-21 16:06:24 Someone to contact in case we have some build issue on ppc 2021-10-21 16:13:39 :+1: I'll respond that 2021-10-21 18:56:55 ikke: do you mean someone to contact that can help us with build issue like if build or testsuite fails? or someone to contact if build platform fails 2021-10-21 18:57:09 the former 2021-10-21 19:40:22 i sent the email with cc to ikke, clandmeter and Ariadne (who have been doing some ppc64le work in the past) 2021-10-21 19:44:47 Looks good 2021-10-21 19:45:21 ncopa: seems like usa2 has ~270G in logs 2021-10-21 19:45:33 wel, distfiles +logs 2021-10-21 19:46:30 i think we can purge majority of those 2021-10-21 19:46:38 right 2021-10-21 19:46:57 i mean maybe keep distfiles/v3.x/ 2021-10-21 19:47:07 incase the x86_64 does not have it all stored 2021-10-21 19:47:30 That's where the majority of diskspace is used 2021-10-21 19:47:43 but the rest should be on distfiles and buildlogs server 2021-10-21 19:48:03 we upload the buildlogs right? 
2021-10-21 19:48:07 https://tpaste.us/kxwr 2021-10-21 19:48:09 yes 2021-10-21 19:48:22 then there is no point in keeping them there 2021-10-21 19:48:56 edge distfiles is ~10G 2021-10-21 19:49:58 find /var/cache/distfiles/ -type f -maxdepth 1 -mtime +7 -delete 2021-10-21 19:50:44 find /var/cache/distfiles/buildlogs -type f -mtime +7 -delete 2021-10-21 19:50:53 we could probably run something like that on all the builders 2021-10-21 19:51:04 from a cronjob or similar 2021-10-21 19:52:29 and maybe also a dedup program that create hardlinks of the duplicate distfiles in /var/cache/distfiles/ 2021-10-21 19:52:40 could be run weekly or similar 2021-10-21 19:52:56 I manually run rdfind occasionally 2021-10-21 19:53:08 we could runn rdfind weekly from cronjob i think 2021-10-21 19:53:19 and the delete job daily 2021-10-21 20:01:58 nowadays i usually use util-linux hardlink, it is much "lighter" than rdfind 2021-10-21 20:02:16 at least for cli, i didn't test speed 2021-10-21 20:05:35 rdfind is so complicated to use, i always think it will do something unexpected because i invoked it "wrong" 2021-10-21 20:14:56 ncopa: those delete commands did not help a lot btw 2021-10-21 20:15:05 40G free max 2021-10-21 20:32:31 so what is eating the space? 2021-10-21 20:33:06 3.x distfiles 2021-10-21 20:33:13 18053096 /var/cache/distfiles/v3.10 2021-10-21 20:33:13 18808540 /var/cache/distfiles/v3.11 2021-10-21 20:33:13 22346048 /var/cache/distfiles/v3.15 2021-10-21 20:33:13 25976412 /var/cache/distfiles/v3.12 2021-10-21 20:33:27 maybe a rdfind run will help? 2021-10-21 20:33:34 I think I already ran that today 2021-10-21 20:33:48 But not certain if it was on this host 2021-10-21 20:34:38 lets delete some of the distfiles 2021-10-21 20:34:44 anaylzuing 1200 files now 2021-10-21 20:34:44 du -s * | sort -n 2021-10-21 20:35:34 3G saved 2021-10-21 20:35:37 i dont think anyone will miss supertuxkart-1.1.tar.xz tessdata-4.0.0.tar.gz stellarium-0.20.1.tar.gz on s390x 2021-10-21 20:35:45 ahuh 2021-10-21 20:35:50 :) 2021-10-21 20:36:14 i have an idea 2021-10-21 20:37:42 for file in *; do if curl-command-that-returns-success-if-file-exists-on-remote https://distfiles... ; rm $file; done 2021-10-21 20:38:04 -I --fail 2021-10-21 20:38:20 -I for just a HEAD request 2021-10-21 20:38:34 wget --spider -q "$uri" 2021-10-21 20:39:58 better to check that content-length is same at least imo 2021-10-21 20:41:44 shouldnt be necessary. its its not the same we have an issue on x86_64 and all other arches as well 2021-10-21 20:42:01 sure but still good to check, no? 2021-10-21 20:42:14 what i want to catch is sources that are only built on s390x 2021-10-21 20:42:31 like s390-tools-2.12.0.tar.gz 2021-10-21 20:42:51 mm. 2021-10-21 20:43:45 ncopa: are you already working on something? 2021-10-21 20:43:53 yeah 2021-10-21 20:44:00 k 2021-10-21 20:47:37 currently running: for i in *; do echo -n .; if curl --silent -I --fail https://distfiles.alpinelinux.org/distfiles/v3.12/$i >/dev/ 2021-10-21 20:47:37 null; then rm $i; else echo $i; fi; done 2021-10-21 20:47:48 in v3.12 2021-10-21 20:48:03 i think we can do the same for the other v3.x 2021-10-21 20:48:10 its just very slow 2021-10-21 20:48:47 can imagine 2021-10-21 20:48:53 lots of http requests 2021-10-21 20:48:56 i need to go to bed. 
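Pulling the commands above together into something cron-able (split across daily/weekly entries as discussed); paths mirror the ones used above, and the per-branch loop is just a cleaned-up version of the pasted one-liner, removing a file only when distfiles.a.o already has it:

    #!/bin/sh
    # daily: drop distfiles and buildlogs untouched for a week
    find /var/cache/distfiles -maxdepth 1 -type f -mtime +7 -delete
    find /var/cache/distfiles/buildlogs -type f -mtime +7 -delete

    # weekly: hardlink duplicate distfiles (rdfind, or util-linux hardlink)
    rdfind -makehardlinks true /var/cache/distfiles >/dev/null

    # per stable branch: delete sources that are already archived upstream
    cd /var/cache/distfiles/v3.12 || exit 1
    for f in *; do
        curl -sIf "https://distfiles.alpinelinux.org/distfiles/v3.12/$f" >/dev/null \
            && rm -- "$f"
    done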
good night 2021-10-21 20:48:59 o/ 2021-10-21 21:45:51 tbh for me, what is needed is cheap ppc64le hardware 2021-10-21 21:46:17 secure computing for me is great and all, but secure computing should be available to all 2021-10-21 22:08:12 if you are using https it will be much faster to do multiple urls at once 2021-10-21 22:09:07 at least 2-3x faster, possibly more 2021-10-21 22:09:47 but if you don't mind spending few hours and few hundred MB and possibly unnecessarily deleting some files it's probably fine 2021-10-22 07:00:40 ncopa: why not offload distfiles to other server(s) and let abuild use this primarily and rsync local distfiles back to the distfiles server weekly and cleanup locally. it will use more bandwidth but this we have enough. 2021-10-22 07:01:04 distfiles is probably the only part we can save on space 2021-10-22 07:01:24 and maybe a clean $HOME 2021-10-22 08:27:01 we could also have a separate distfiles service on a separate distfiles host 2021-10-22 08:27:45 it would scan the aports tree regularily and make sure that all needed source files are there 2021-10-22 08:27:58 and that files not used by any aport (in edge) can be deleted 2021-10-22 08:28:08 and keep an archive for the stable branches 2021-10-22 08:28:24 even better, it could be a smart http proxy 2021-10-22 08:28:29 heh 2021-10-22 08:28:58 why we do what we currently do is becase: its simple 2021-10-22 08:29:57 The smart http proxy I suppose would have mapping of what the upstream of files would be based on what it scans on aports? 2021-10-22 08:31:25 the scan of aports would be to purge old (or keep forever if its in v3.x) 2021-10-22 08:32:16 it could something like, if no aport (in any arch) does not use the source file, and source file is older than 7 days, then delete 2021-10-22 08:32:44 smart http proxy would also let different builders (different archs building same file) fetch same file in parallel 2021-10-22 08:33:48 i mean the builder could get the file from proxy at the same time as the proxy server fetches from upstream and saves to disk 2021-10-22 08:34:36 right 2021-10-22 08:34:49 if the responsibility to keep the source archive is moved away from the build server itself, then build server could build the package and clean up download cache immediately 2021-10-22 08:34:54 but it does need to act as a proper proxy then 2021-10-22 08:34:59 yup 2021-10-22 08:35:24 abuild could have a DISTFILES_PROXY var 2021-10-22 08:35:53 if set, abuild would do: http_proxy=$DISTFILES_PROXY curl ... 2021-10-22 08:36:04 or similar 2021-10-22 08:45:58 why make it that complicated? 2021-10-22 08:46:22 we already have the tools in place, except to upload files to the distfiles server. 2021-10-22 08:48:14 but i agree, adding some logic to automatically clean it up would be nice. 2021-10-22 08:49:11 challenge here is to find a host that has enough storage space 2021-10-22 08:51:20 nod 2021-10-22 08:51:29 hmm, how that i think of it, that proxy logic is maybe not so bad idea :) 2021-10-22 08:52:28 not sure how such proxy would store files 2021-10-22 08:57:37 i guess if you would use a caching proxy server, you will need to use its http interface to invalidate (cleanup) cache objects. 2021-10-22 09:01:01 I was actually thinking of writing a http proxy server (in go or similar). manly because we want to store the files in a normal directory (i think) 2021-10-22 09:01:17 or maybe we dont need that 2021-10-22 09:01:35 could also be stored in minio backend? 2021-10-22 09:02:40 anyway... 
i think we need to manually manage the distfiles for now 2021-10-22 09:02:45 or maybe add a cronjob 2021-10-22 09:04:31 linode has object storage 2021-10-22 09:05:35 https://www.scaleway.com/en/docs/tutorials/setup-nginx-reverse-proxy-s3/ 2021-10-22 09:07:46 i guess thats not what we are looking for 2021-10-22 09:15:22 the fundamental problem is: we dont really need to store all the distfiles on all the builders. once the package is built we can delete the local distfiles. but we do want keep and archive on our distfiles archive server (distfiles.a.o) 2021-10-22 09:15:38 yes thatrs what im saying 2021-10-22 09:15:45 just rsync it weelky to distfiles 2021-10-22 09:15:53 cleanup local 2021-10-22 09:15:59 and use distfiles as primary source 2021-10-22 09:16:08 for edge we want keep it as long as there are any aport using it + some grace period 2021-10-22 09:16:21 for stable branches we want keep forever 2021-10-22 09:16:22 and add some script logic to distfiles to clean it up 2021-10-22 09:16:43 hmmm 2021-10-22 09:16:53 i guess that could work 2021-10-22 09:17:03 it should :) 2021-10-22 09:17:10 but it still needs lots of space 2021-10-22 09:17:19 on the distfiles.a.o server yes 2021-10-22 09:17:25 yes 2021-10-22 09:17:36 but its the only way to move forward without too much hassle 2021-10-22 09:17:41 but a distfiles server would only need big disk. no need to beefy cpu 2021-10-22 09:17:52 correct 2021-10-22 09:18:10 so its only possible with linode i think 2021-10-22 09:18:16 what we do may need to think about is races 2021-10-22 09:18:22 small linode with lots of space 2021-10-22 09:18:33 races of what? 2021-10-22 09:18:36 uploading? 2021-10-22 09:18:51 what happens if 5 different builders uploads the same file at the same time 2021-10-22 09:18:53 yes 2021-10-22 09:19:04 dont upload at the same time :) 2021-10-22 09:19:13 how do we sync the builders to do it? 2021-10-22 09:19:22 with cron? 2021-10-22 09:19:42 seperate them by an hour or so 2021-10-22 09:19:54 i was thiknking we could rsync the distfiles directly after the build is done 2021-10-22 09:20:08 so we rsync the distfiles at the same time as the build logs 2021-10-22 09:20:27 well that will give you race kind of issues 2021-10-22 09:21:36 but i guess each sync will have its own tmp dir 2021-10-22 09:22:02 which means we woudl upload the same file N times and delete N-1 copies 2021-10-22 09:22:09 its just wasteful 2021-10-22 09:22:15 yes, and it also slows down uplaods 2021-10-22 09:22:33 otoh, if we are going to rsync weekely 2021-10-22 09:22:45 they why bother with the builders at all 2021-10-22 09:23:25 ? 2021-10-22 09:23:59 we coudl have a weekly cronjob running on the distfiles server that would 2021-10-22 09:24:13 1) git pull latest git aports 2021-10-22 09:24:43 2) for each arch, abuild fetch every package 2021-10-22 09:25:03 3) clean up unused packages and report errors 2021-10-22 09:25:34 right, make it completely independent 2021-10-22 09:25:58 but thats problematic if source file disappears from upstream in the mean time 2021-10-22 09:26:10 its safer to rsync from the builders 2021-10-22 09:26:24 you could technically run abuild fetch on each commit 2021-10-22 09:26:35 yup 2021-10-22 09:26:47 for each arch 2021-10-22 09:27:30 its still non-optimal (and current approach is non-optimal as well) 2021-10-22 09:28:04 because what will happen: git push notification, builders git pul and start build, distfiles manager git pull and start fetch 2021-10-22 09:28:23 i think rsync from builders is the simplest solution atm. 
2021-10-22 09:28:50 if the builder start build before distfiles manager have finished download the file from upstream, the builder will get 404 from distfiles and will download the file from upstream 2021-10-22 09:29:36 which is why i think a smart http proxy would be the nicest thing to do. it would mean that we'd only download from upstrea exactly once. and all the builders will get it from the http proxy 2021-10-22 09:30:07 the http proxy would block the builders while it waits for data from upstream http 2021-10-22 09:30:11 do we have a volunteer to write it? 2021-10-22 09:30:15 no :) 2021-10-22 09:30:28 and yes, i think rsync from builders is the simplest 2021-10-22 09:30:34 i agree it sounds the best 2021-10-22 09:31:12 if there was a solution that could provide such feature, that would be nice. 2021-10-22 09:32:02 before pushing to aports download files to distfiles.a.o? 2021-10-22 09:33:06 i think adding a service to download files on git push could be nice to have. 2021-10-22 09:34:23 ncopa: can we reverse the logic of having distfiles.a.o as a backup url? 2021-10-22 09:35:04 i think now we can set an url as primary location, but not the other way around 2021-10-22 09:35:24 not sure what the abuild variable was called. 2021-10-22 09:35:41 DISTFILES_MIRROR 2021-10-22 09:35:48 heh, I started to write something about 'reverse logic' 2021-10-22 09:36:14 why would we want fetch from upstream before distfiles? 2021-10-22 09:36:19 so each builder would just do what it d oes now, but if src is gone use distfiles 2021-10-22 09:36:38 well if we push a commit 2021-10-22 09:36:52 if we have 10 builders, it would fetch the file 10x at the same time. 2021-10-22 09:37:06 from distfiles? 2021-10-22 09:37:11 yes 2021-10-22 09:37:40 i gues that network traffic within our building infra is fast? 2021-10-22 09:37:51 it is 2021-10-22 09:37:51 its normally within linode? 2021-10-22 09:38:07 ? 2021-10-22 09:38:33 i mean our build hosts and infra is normally within same network? 2021-10-22 09:38:37 maybe it isnt 2021-10-22 09:39:12 well, i guess if we want reduce bandwith between builders and distfiles, then that would be the way to go 2021-10-22 09:39:12 linode is not equinix 2021-10-22 09:39:22 yeah, i just realized that :) 2021-10-22 09:39:32 and we have our friends in br :) 2021-10-22 09:39:40 but thatrs so slow, it would not matter :)- 2021-10-22 09:39:56 oh.......... what if we added a distfiles to the fastly cdn? 2021-10-22 09:40:19 hmm 2021-10-22 09:40:30 it could be a relatively small cache 2021-10-22 09:40:44 you mean have a distfiles server and put cdn on top of it 2021-10-22 09:40:51 yup 2021-10-22 09:41:04 it can work i guess 2021-10-22 09:41:10 DISTFILES_MIRROR=http://cdn-distfiles.a.o/... 2021-10-22 09:41:44 but we should first build the distfiels server :) 2021-10-22 09:42:14 im manually deleting files from s390x builders' distfiles meanwhile 2021-10-22 09:42:24 but it sounds like a good idea 2021-10-22 09:42:30 should speed up downloads 2021-10-22 09:42:37 at least second time 2021-10-22 09:42:57 yes, it is good idea 2021-10-22 09:42:58 for sure for the stable builders 2021-10-22 09:43:08 cause edge probably already cached them 2021-10-22 09:43:22 no, they are different urls 2021-10-22 09:43:47 do they have to be different urls? 2021-10-22 09:44:01 we now use that logic 2021-10-22 09:44:08 but in this setup, not sure its still needed? 
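For reference, the abuild knob named above; as I understand it, abuild tries this URL before the upstream source location, so pointing the builders at a (CDN-fronted) distfiles host would mostly be a one-line change per builder (the URL here is illustrative):

    # /etc/abuild.conf on the builders
    DISTFILES_MIRROR=https://cdn-distfiles.alpinelinux.org/distfiles/edge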
2021-10-22 09:44:21 not really, but it makes it much simpler to keep stable distfiles forever 2021-10-22 09:44:40 we should never delete any of the distfiles from the stable branches 2021-10-22 09:44:51 but we add logic to clean it up 2021-10-22 09:44:53 thats why we store them under different directory on distfiles 2021-10-22 09:45:03 we dont clean up stable branches 2021-10-22 09:45:13 i mean we need a copy of the sources somewhere, for ever 2021-10-22 09:45:17 you mean forever? 2021-10-22 09:45:20 yes 2021-10-22 09:45:41 you want to support a package that has been upgraded? 2021-10-22 09:45:57 or only keep the current stable version? 2021-10-22 09:46:11 i think its good to keep the old version as well 2021-10-22 09:46:34 i think we need to see how that results in space 2021-10-22 09:46:58 we do hardlinks to dupes 2021-10-22 09:47:06 what about non supported releases? 2021-10-22 09:47:16 i think its still useful to keep 2021-10-22 09:47:55 for example, lets say there is an incident. we suspect a backdoor was planted but we dont know where it came from 2021-10-22 09:48:03 in an old, not updated server 2021-10-22 09:48:25 we might still want investigate where it came from, and not just update the server and hope it fixes it 2021-10-22 09:48:44 in that case its good to have the sources even if upstream has deleted them 2021-10-22 09:49:07 ok, so we need to check how much space we actually need to implement this distfiles server 2021-10-22 09:49:25 cause i have a feeling 1TB will not be enough 2021-10-22 09:50:18 186.9G /var/cache/distfiles/ 2021-10-22 09:50:27 i think that is our current distfiles.a.o 2021-10-22 09:50:52 buildlogs included 2021-10-22 09:51:07 we probably want archive the build logs as well 2021-10-22 09:51:12 but those can be compressed 2021-10-22 09:52:15 do you expect size to grow if we add other arches? 2021-10-22 09:53:54 minimally 2021-10-22 09:54:02 it would only grow with new packages 2021-10-22 09:54:06 (distfiles) 2021-10-22 09:54:09 logs would grow per arch 2021-10-22 09:56:15 we could use zfs as backend 2021-10-22 09:56:24 en enable compression 2021-10-22 09:56:28 and* 2021-10-22 09:56:37 sorry im dutch 2021-10-22 10:32:16 Are you apologizing for being dutch? :P 2021-10-22 10:38:03 :) 2021-10-22 11:05:22 ACTION have a lot of mails with 'Met vriendelijke groet' in signatures 2021-10-22 11:17:33 So many dutchians here πŸ˜‚ 2021-10-22 12:24:08 ikke: maybe for going dutch :p 2021-10-22 12:52:16 its ok. we like you even if you are dutch :) 2021-10-22 13:37:27 ok ill try to look into setting up a new distfiles server this weekend 2021-10-22 13:37:32 on linode 2021-10-22 13:37:55 i wonder why edge distfiles is in / 2021-10-22 13:37:58 and not in /edge 2021-10-22 13:38:46 ikke: i also think the buildlogs could be better organized, the ones coming from the build scripts not the actual buildlog 2021-10-22 13:39:22 maybe put them in a subdir and name the dir to the program that creates them 2021-10-22 13:39:44 i guess aports-build and buidlrepo 2021-10-22 13:40:15 you mean /var/cache/distfiles/buildlogs-* vs /var/cache/distfiles/buildlogs/? 2021-10-22 13:41:16 under buildlog are a lot of build logs 2021-10-22 13:41:39 like build-3-14-x86_64.v20210212-7558-gb67897e64f.log 2021-10-22 13:41:47 right 2021-10-22 13:42:01 ppl could wonder, where does it come from 2021-10-22 13:42:20 I don't think most people even know it exists? 
2021-10-22 13:42:39 most dont :) 2021-10-22 13:42:42 And the builders do not even upload it 2021-10-22 13:42:48 it only contains x86_64 2021-10-22 13:42:51 build.a.o 2021-10-22 13:43:02 ah ok 2021-10-22 13:46:28 clandmeter: i think moving edge distfiles to /var/cache/distfiles/edge makes sense 2021-10-22 13:49:14 btw, if all the builders share the same distfiles locally, we could just run the rsync from the host, that limits the amount of rsyncs we need to run. 2021-10-22 15:31:52 right now, distfiles just come from x86_64 2021-10-23 12:42:34 heh, linode network helper is so confusing when you forget about it 2021-10-23 12:48:31 :) 2021-10-23 12:48:53 You can disable it if you want 2021-10-23 12:49:11 ofc 2021-10-23 12:49:25 but the damage is already done :) 2021-10-23 12:49:44 i have zfs setup with dedup and compression 2021-10-23 12:49:55 lets see how zfs handles multiple copies 2021-10-23 12:50:25 ikke: i think you used rdfind on current distfiles? 2021-10-23 12:50:34 yes 2021-10-23 12:50:52 any way to find out how space it would use without it? 2021-10-23 12:51:06 much* 2021-10-23 12:51:31 I believe du has an option to count hardlinks multiple times 2021-10-23 12:52:35 yup 2021-10-23 12:52:37 -l 2021-10-23 12:52:43 about 100g more 2021-10-23 12:53:10 i rsycned without --hardlinks 2021-10-23 12:53:25 so it means it will probably include the 100g extra 2021-10-23 12:53:52 but using hardlinks is maybe not useful with dedub 2021-10-23 13:01:40 would it matter? 2021-10-23 13:06:35 doesn't dedup use huge ram? I heard it is not recommended 2021-10-23 13:21:24 well this server is going to do only one thing, that is to serve distfiles. so i guess if it used 75% of ram, thats fine. 2021-10-23 13:21:57 If that's enough to serve the files as well, fine with me :) 2021-10-23 13:22:07 I suppose that's zfs file cache? 2021-10-23 13:23:45 I've heard that zfs likes RAM :) 2021-10-23 13:24:37 It doesn't use the kernel file cache afaiu 2021-10-23 13:25:40 i think its call ARC 2021-10-23 13:26:31 Adaptive Replacement Cache 2021-10-23 13:27:20 http://dtrace.org/blogs/brendan/2012/01/09/activity-of-the-zfs-arc/ 2021-10-23 13:28:41 im a bit confused on the versioning of zfs 2021-10-23 19:34:06 when does the riscv64 repo get added to https://pkgs.alpinelinux.org? 2021-10-23 19:34:38 I suppose when I get motived enough to dig into it 2021-10-23 19:36:53 lol ok 2021-10-23 19:43:02 I suppose that moment is now :) 2021-10-23 20:14:28 Lol 2021-10-23 20:14:54 I'm looking into moving to the py one 2021-10-23 20:16:19 Maybe add some changes, but first need to understand the current changes. 2021-10-23 20:16:35 clandmeter: ok, good that I know, then I don't have to look into that 2021-10-24 09:15:23 ikke: https://tpaste.us/NM9q 2021-10-24 09:16:09 180G, 1.63 dedup? 2021-10-24 09:16:35 around 10G less compared to origin 2021-10-24 09:16:49 ok 2021-10-24 09:16:52 currently using 1G of memory 2021-10-26 06:44:17 ikke: i think it kind of makes sense to upload both log and src from the buildrepo plugin 2021-10-26 06:46:09 the problem to fix is that multiple builders will probably upload at the same time. 
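A few commands that fit the size questions above: `du -l` counts hardlinked files multiple times (so the difference against plain `du` is roughly what rdfind saved), `rsync -H` preserves those hardlinks when copying, and `zpool list` shows the dedup ratio once the data sits on the ZFS volume. The host name and paths are illustrative.

```sh
# size with hardlinks counted once vs. counted per link
du -sh  /var/cache/distfiles
du -shl /var/cache/distfiles     # -l: count hardlinked files multiple times

# keep the hardlinks when copying to the new distfiles server (host is an example)
rsync -aH /var/cache/distfiles/ distfiles.example.org:/var/cache/distfiles/

# on the zfs side, the DEDUP column shows the achieved ratio
zpool list
```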
2021-10-26 06:47:38 yes, indeed 2021-10-26 09:07:56 i was able to get my terraform project working with alpine vms 2021-10-26 09:09:07 the only thing I needed from 'cloud-init' was to set the hostname, so i made a tiny init script that mounts the cloud-init config iso with meta-data, fishes out the local-hostname and writes /etc/hostname 2021-10-26 09:09:41 so now I can spin up a cluster of alpine vms in minutes on my workstation 2021-10-26 09:09:46 less than two minutes actually 2021-10-26 09:10:00 cool 2021-10-26 09:10:07 with qemu / libvirt? 2021-10-26 09:10:10 yes 2021-10-26 09:10:24 and the terraform libvirt driver 2021-10-26 09:10:57 i also found out how to improve the ubuntu cluster by reading the docs and sources 2021-10-26 09:11:13 the problem was that i needed to set the hostname before the network starts 2021-10-26 09:11:20 so the ddns would use the proper hostname 2021-10-26 09:11:43 previously the hostname was set after the network, so the dhcp server never got the "right" hostname for ddns 2021-10-26 09:12:09 stack overflow and google told me the solution was to restart dhclient 2021-10-26 09:12:16 or reboot after cloud-init 2021-10-26 09:12:39 but i figured out that its possible to set local-hostname as meta-data, and then its set before the network :) 2021-10-26 09:14:55 ubuntu vm uses 1.5G for the OS. alpine vm uses less than 100MB :D 2021-10-26 09:18:48 minimal debian install for one arm64 SBC I tested on the weekend is 500 MB, alpine is 80 MB 2021-10-26 09:19:02 pretty cool :) 2021-10-26 09:19:20 im actually super happy. i now have alpine clusters for my k0s work 2021-10-26 09:19:38 though debian has alsa while I didn't add it as it is not essential 2021-10-26 11:09:18 c00l ncopa 2021-10-26 15:37:59 oh fun, the armv7 3.15 and edge builders are out of space 2021-10-26 15:38:41 yes, it's a struggle to make enough space free. They all share the same disk 2021-10-26 15:39:00 We are working on moving distfiles off the builders, which should alleviate it a bit 2021-10-26 15:39:12 means aarch64 is out as well 2021-10-26 15:42:10 30G free now 2021-10-26 15:42:44 I assume this is a constant struggle 😅 2021-10-26 15:43:02 yes, especially around release time 2021-10-26 15:43:09 and each release gets bigger and bigger 2021-10-26 15:48:14 are release images on those servers too? 2021-10-26 15:49:54 yes 2021-10-26 15:50:08 if yes then why not remove rc1, rc2 etc images from previous releases or move them somewhere else, mirrors will then be less overloaded too 2021-10-26 15:51:17 Something ncopa needs to decide I suppose 2021-10-26 15:51:46 At least something to discuss with him 2021-10-26 15:53:38 could somebody check how much space we could save by removing for example the rc1 images in alpine/v3.10/releases/ ? 2021-10-26 15:54:17 there were 7 rcs for every arch 2021-10-26 15:56:26 20G for all rc related files for 3.10 across all arches 2021-10-26 15:59:37 thought it would be more 2021-10-26 16:01:44 we can remove '_rc*' 2021-10-26 16:02:16 i think i have deleted some older *_rc* in the past 2021-10-26 16:02:37 i think after the release is out and announced we can delete the _rc* 2021-10-26 16:03:03 ncopa: I think you did it because I don't see it in previous releases 2021-10-26 16:04:15 so all 3.1x only got RC files 2021-10-26 17:13:16 ikke: did you cleanup distfiles on aarch64?
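A sketch of the tiny "fish the hostname out of the cloud-init ISO" init step described above. The device path, mount point and meta-data layout are assumptions (a NoCloud seed ISO usually carries a YAML meta-data file with a local-hostname key); the actual script in use may differ.

```sh
#!/bin/sh
# set the hostname from the cloud-init seed ISO before networking starts
# (sketch only; /dev/sr0 and the meta-data layout are assumptions)
mnt=/run/cidata
mkdir -p "$mnt"
mount -t iso9660 -o ro /dev/sr0 "$mnt" 2>/dev/null || exit 0
hn=$(awk -F': *' '$1 == "local-hostname" { print $2 }' "$mnt/meta-data")
if [ -n "$hn" ]; then
    echo "$hn" > /etc/hostname
    hostname -F /etc/hostname
fi
umount "$mnt"
```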
2021-10-26 17:16:01 ah its moved to edge ofc 2021-10-26 17:16:24 but still some source is in the root of distfiles 2021-10-26 17:25:38 clandmeter: I just cleaned up the ~/.cache dirs of buildozer 2021-10-26 19:09:46 freed another 27G on x86_64 builder distfiles with rdfind 2021-10-26 20:13:03 ikke: any idea what those tap intefaces are about on aarch64 builder? 2021-10-26 20:13:07 some test suite? 2021-10-26 20:13:32 good question. I would assume so 2021-10-27 08:16:09 Hi @ikke, hope you are doing well. I have a question, hope you could help me. s390x folks is migrating our 2 hosts' disks to new storage system. The problem is the UUID in /etc/fstab might change. There is an option to use /dev/disk/by-path/ but it seems that does not work on Alpine ? Do you have any suggestion ? 2021-10-27 08:17:25 tmhoang: you need to use udev for those 2021-10-27 08:18:09 PARTUUID is best solution 2021-10-27 08:18:34 ikke: could you provide some more details ? I'm fluent in this 2021-10-27 08:18:43 *not 2021-10-27 08:18:47 and FS LABEL in fstab 2021-10-27 08:20:41 udev would automatically create /dev/disk/* links ? Hmmm interesting. Wondering if Alpine use udev or eudev today 2021-10-27 08:21:00 mps: do you have an example ? 2021-10-27 08:21:13 tmhoang: hi 2021-10-27 08:21:16 long time no see 2021-10-27 08:21:30 tmhoang: we use eudev as implementation for udev, but maybe not for long 2021-10-27 08:22:49 i guess the proper solution would be to find out the new uuid? 2021-10-27 08:23:46 tmhoang: blkid and look for PARTUUID= 2021-10-27 08:24:16 clandmeter: Hi, hope you are doing well after all those time. If booting on new system, the new UUID is not correct, it would fail, right ? I then need to boot a live alpine system and fix that. I'm trying to not do that. 2021-10-27 08:24:36 then add it in kernel cmdline (APPEN) 'root=PARTUUID=xxxxxx' 2021-10-27 08:24:43 APPEND* 2021-10-27 08:24:44 yes, but you could probably do stuff from initrramfs 2021-10-27 08:24:53 if you have somebody onsite 2021-10-27 08:24:54 fstab is not for the root partition 2021-10-27 08:25:31 Only the boot partition is mentioned by UUID 2021-10-27 08:25:34 you would need to mount boot partition and update bootloader config 2021-10-27 08:25:52 iirc blkid is part of bb 2021-10-27 08:26:02 and libudev-zero could support /dev/disk/by-xxx 2021-10-27 08:26:31 clandmeter: yes, there is bb blkid and also separate pkg 2021-10-27 08:26:43 does root work with partuuid? 2021-10-27 08:27:17 clandmeter: you mean 'root' in kernel cmdline? if yes then yes, it works 2021-10-27 08:27:30 hmm ok 2021-10-27 08:27:55 i guess the partition keeps that same tmhoang? 2021-10-27 08:27:57 What bootloader is used? I don't see grub / syslinux 2021-10-27 08:28:07 'UUID' only works with initramfs and from fstab 2021-10-27 08:29:04 ikke: zipl bootloader, see /etc/zipl.conf for cmdline 2021-10-27 08:29:11 few days ago I made script to do this automatically for installing alpine on one SBC 2021-10-27 08:29:26 mps: I don't understand why root=PARTUUID= solve the problem ? https://paste.debian.net/1217030/ 2021-10-27 08:29:41 I mean, find partuuid and set it in extelinux.conf 2021-10-27 08:29:43 so rootfs is already my lvm path 2021-10-27 08:29:53 root=/dev/vg0/lv_root 2021-10-27 08:30:07 yeah for that host, rootfs is not the problem. But /boot is the problem. 
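For the fstab question above, the usual approach is to reference the filesystem rather than a device path; the device name below is only an example of what an s390x DASD partition might be called.

```sh
# show the identifiers mkfs/partitioning gave the /boot filesystem
blkid /dev/dasda1
#  -> /dev/dasda1: UUID="..." PARTUUID="..." TYPE="ext4"

# fstab can then use the filesystem UUID (or a label set with e2label)
# instead of a device path:
#   UUID=<uuid-from-blkid>   /boot   ext4   defaults   0 2
#   LABEL=boot               /boot   ext4   defaults   0 2
```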
2021-10-27 08:30:12 tmhoang: don't ask me about lvm, I never liked it 2021-10-27 08:30:16 tmhoang: nod 2021-10-27 08:30:43 tmhoang: we could remove /boot from fstab when swapping 2021-10-27 08:30:48 then it should boot and we can mount it later? 2021-10-27 08:31:25 ikke: sounds hacky :D ? I never did that, scary ? 2021-10-27 08:31:36 afaik /boot is only necessary for updating the bootloader 2021-10-27 08:31:43 not for booting 2021-10-27 08:31:47 ikke: the other builder uses UUID for both / and /boot 2021-10-27 08:31:54 tmhoang: ok 2021-10-27 08:32:14 I'll try with udev/eudev. Fallback plan is I will boot into a live system and fix the mounted rootfs - manual but less headache. ? 2021-10-27 08:32:30 why would the uuid change? 2021-10-27 08:32:53 because the underlying storage host id changed - what I was told. 2021-10-27 08:33:07 but that should not change the partition id? 2021-10-27 08:33:48 so in this system, PARTUUID and UUID are the same (is that normal?) in above paste 2021-10-27 08:34:05 yes i was also confused about partuuid 2021-10-27 08:34:15 afaik uuid is the partition id 2021-10-27 08:34:51 we are still 3.10 and 3.12 on those hosts, ikke. Should we upgrade ? Or ask ncopa ? 2021-10-27 08:34:55 tmhoang: heh 2021-10-27 08:34:59 wanted to just ask you about that 2021-10-27 08:35:04 Yes please 2021-10-27 08:35:33 if i clone a disk to another disk, the uuid should keep the same. 2021-10-27 08:35:49 afaik its not related to the disk itself? 2021-10-27 08:35:50 The uUID is a FS property 2021-10-27 08:35:59 exactly 2021-10-27 08:36:05 clandmeter: yea to my understanding but they say otherwise - let me double check 2021-10-27 08:36:26 im not sure how they clone things at their side 2021-10-27 08:36:37 tmhoang: do they only copy files over perhaps? 2021-10-27 08:37:04 ikke, clandmeter: no, not copying files :) they should have disk clone utilities for their s390x storages 2021-10-27 08:37:14 right 2021-10-27 08:37:21 how ? 1970s, maybe - but work 2021-10-27 08:37:26 so the partition table is copied as is 2021-10-27 08:37:52 which have the ids i guess (not an ptable expert) 2021-10-27 08:48:28 btw, does udev device-by-id work for root device? 2021-10-27 08:49:23 I don't think so 2021-10-27 08:49:45 I guess it needs to be something that nlplug-findfs supports 2021-10-27 08:52:39 seems to work ? https://dev.alpinelinux.org/~clandmeter/other/forum.alpinelinux.org/forum/kernel-and-hardware/persistent-device-names.html 2021-10-27 08:53:31 tmhoang: udev is not available inside initramfs 2021-10-27 08:53:39 it will be started after switch_root 2021-10-27 08:54:26 i guess the best solution is to just try it with current uuid's 2021-10-27 08:55:13 if it fails to boot you can get a prompt in initramfs (or append single) and find out the uuid and edit the bootloader config. 2021-10-27 08:55:30 initramfs uses mdev together with nlplug-findfs 2021-10-27 08:57:00 tmhoang: you can get into an interactive prompt for the bootloader? else after you find out the new uuid you could just modify it the next time you boot. 2021-10-27 09:00:05 I think we don't have the option for interactive booting. I'd rather boot a live system and fix uuid instead. 2021-10-27 09:00:30 OK udev not available in initramfs is the deal breaker then 2021-10-27 09:02:22 ikke: just a heads up, they want to do it on Thursday or Friday. What should I do to turn off the machines gracefully ? Just # poweroff ? 
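One way to settle the "will the identifiers survive the storage move" question above is to record them before and compare afterwards. This assumes the util-linux lsblk (the listed columns are not in the busybox applet); the file path is arbitrary.

```sh
# before the migration, record every identifier
lsblk -o NAME,FSTYPE,LABEL,UUID,PARTUUID > /root/ids-before.txt

# after booting on the new storage, anything that changed shows up here
lsblk -o NAME,FSTYPE,LABEL,UUID,PARTUUID | diff /root/ids-before.txt - \
    || echo "identifiers changed"
```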
2021-10-27 09:02:47 Yes, that should suffice 2021-10-27 09:02:54 thanks 2021-10-27 09:17:51 PARTUUID is in partition table data, and are different for gpt and msdos partition tables 2021-10-27 09:18:11 UUID is created by mkfs.* 2021-10-27 09:18:56 with gptdisk PARTUUID could be set manually for gpt tables 2021-10-27 09:19:14 for msdos tables this is possible fdisk 2021-10-27 09:19:46 for example I created msdos partuuid '0x01234567' 2021-10-27 09:20:21 it is under 'i' option from 'x' (expert) menu if fdisk 2021-10-27 11:38:59 clandmeter: today I will work on upgrade linux-edge, do we need anything to add for rv64 2021-10-27 13:06:50 mps: not that i know off 2021-10-27 13:06:59 did anything land for proper rebooting? 2021-10-27 13:23:21 clandmeter: I don't see anything related in changelog about this 2021-10-27 13:23:37 i dont think so 2021-10-27 13:23:55 one guy was working on this, but i think he will resubmit his patches 2021-10-27 13:24:00 I see more changes in 5.16 rc series 2021-10-27 13:24:30 https://forums.sifive.com/t/reboot-command/4721/30 2021-10-27 13:24:31 and I think 5.16 will be released next week 2021-10-27 13:53:37 ikke: something keeps pushing things in distfiles root on arm builder 2021-10-27 13:53:57 i checked abuild.conf but they seems okish 2021-10-27 13:54:47 maybe mqtt-exec.aports-build needs a restart 2021-10-27 13:56:00 i dont think so 2021-10-27 13:56:10 those conf files get sourced by abuild 2021-10-27 13:56:22 i think somebody has a shared distfiles 2021-10-27 13:56:27 oh 2021-10-27 13:56:32 ACTION watches in mps direction 2021-10-27 13:57:52 clandmeter: looks like I have it but I didn't set it 2021-10-27 13:58:12 strange 2021-10-27 13:58:16 lxc config did not set it 2021-10-27 13:58:26 didnt we bump into this some time ago? 2021-10-27 13:58:37 mps: which container is it? 2021-10-27 13:58:40 iirc yes, few weeks ago 2021-10-27 13:58:54 mps-edge-aarch64 2021-10-27 13:58:58 i saw linux and libreoffice 2021-10-27 13:59:04 so that pointed me to you :p 2021-10-27 13:59:22 yes also I saw these 2021-10-27 13:59:25 can you reboot your container and try again? 2021-10-27 13:59:33 see if the mountpoint is gone 2021-10-27 13:59:56 I wanted to clean /var/cache/distfiles but not sure will I delete something which should be kept 2021-10-27 14:00:12 yes dont touch it please 2021-10-27 14:00:17 because this I didn't cleaned it 2021-10-27 14:00:33 its ok, its not a big deal, but lets keep it to infra to clean that part. 2021-10-27 14:00:48 agree 2021-10-27 14:01:13 i dont think its wise to share this to user containers 2021-10-27 14:01:20 (though I'm also in infra team ;) ) 2021-10-27 14:01:20 or maybe with overlayfs 2021-10-27 14:01:40 yes, you are, but you manage it as a user ;-) 2021-10-27 14:02:05 the only person atm to touch it is ikke 2021-10-27 14:02:19 mps: did you reboot it? 2021-10-27 14:02:20 looks like some time ago I'm promoted :D 2021-10-27 14:02:38 clandmeter: no, I didn't 2021-10-27 14:02:50 if possible just hit reboot on the container 2021-10-27 14:02:56 ah wait 2021-10-27 14:02:59 that does not work 2021-10-27 14:03:01 i remember now 2021-10-27 14:03:08 i need to down and up it 2021-10-27 14:03:14 are you doing something? 2021-10-27 14:03:19 no 2021-10-27 14:03:27 ok then i will restart it 2021-10-27 14:03:32 ok 2021-10-27 14:04:08 ok should be up again 2021-10-27 14:04:19 can you check if the mountpoint is gone now? 
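The fdisk expert-menu trick mentioned above can also be done non-interactively; the disk identifier and device names here are examples only. On an MBR disk the resulting PARTUUID is the disk identifier plus the partition number.

```sh
# set the MBR disk identifier (same as fdisk: 'x', then 'i')
sfdisk --disk-id /dev/vda 0x01234567

# the first partition is then addressable as PARTUUID=01234567-01
blkid -o value -s PARTUUID /dev/vda1

# which the kernel can resolve on its own, without udev in the initramfs:
#   root=PARTUUID=01234567-01
```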
2021-10-27 14:05:19 looks like it is still there 2021-10-27 14:06:15 yes it is, I don't think I have 44GB in distfiles 2021-10-27 14:06:58 ok fixed 2021-10-27 14:07:05 please check again 2021-10-27 14:08:02 still is there 2021-10-27 14:09:29 wtf 2021-10-27 14:09:56 ah foek 2021-10-27 14:09:58 though there is more free space now, /dev/mapper/vg0-lv_root 875G 813G 18G 98% / 2021-10-27 14:10:26 hehe, I'm learning Dutch lang ;) 2021-10-27 14:10:30 its the same as last time i think 2021-10-27 14:10:41 its in global config 2021-10-27 14:10:48 ncopa: are you adding distfiles to /etc/lxc/alpine.common.conf ? 2021-10-27 14:37:45 i think i have done that in the past yes 2021-10-27 15:39:22 Maybe we could clean up our dev containers a bit: https://tpaste.us/7V8L 2021-10-27 15:47:29 clandmeter: strangely enough, my container does not have it mounted, even though alpine.common.conf is included 2021-10-27 15:51:32 oh, confusing 2021-10-27 15:51:41 /usr/share/lxc/alpine.common.conf 2021-10-27 15:53:39 mps: can I restart your container one more time? 2021-10-27 15:58:38 yes, do it 2021-10-27 15:59:52 ok, it should now now longer have /var/cache/distfiles mounted 2021-10-27 16:01:00 it doesn't now 2021-10-27 16:01:26 good 2021-10-27 17:20:27 ikke: please clean mine 2021-10-27 17:20:35 clandmeter: ok, will do 2021-10-27 17:20:37 Or else I try but to forget it later 2021-10-27 17:20:52 Lol autocorrect 2021-10-27 17:21:04 I made some space by removing src directories from aports 2021-10-27 17:21:12 libreoffice takes some space 2021-10-27 17:21:44 I didn't touch arm containers for some time 2021-10-27 17:21:45 clandmeter: fyi, I switched user containers to /usr/share/lxc/config/alpine.common.conf and restarted them 2021-10-27 17:22:03 Ok 2021-10-27 17:22:09 What about new ones 2021-10-27 17:23:49 clandmeter: fyi, we do still have unasigned space left on usa9 2021-10-27 17:25:25 lxc.include = $LXC_TEMPLATE_CONFIG/alpine.common.conf 2021-10-27 17:26:08 if we use lxc-create, it should use the correct path 2021-10-27 18:53:42 Right but new ones will include distfiles from host? 2021-10-27 18:57:22 I would not expect it 2021-10-27 20:52:12 "Ninety-nine problems but a bitch ain't one!" ;) 2021-10-27 22:40:53 Hi, dl-4 and dl-5 mirrors are late by 4-5 days now, are they ok? https://mirrors.alpinelinux.org/#mirror5 2021-10-28 06:26:07 ikke, clandmeter: Morning, it seems they did the move to new s390x storage system and the UUID did not change. They raised the UUID change issue because they had that with SLES before. 2021-10-28 06:36:44 ikke, ncopa: If you have some minutes, please check if the services in those 2 s390x servers are functioning correctly. They still keep a copy of the old servers if we want to rollback. 
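For reference, what the container side of the shared-distfiles issue above looks like: the include line comes from the per-container config, and a bind-mount line like the commented one, placed in a common include on the host, is what makes every container see the host's /var/cache/distfiles. The include path follows the log; the mount line itself is illustrative.

```sh
# excerpt of /var/lib/lxc/<name>/config (sketch)
lxc.include = /usr/share/lxc/config/alpine.common.conf

# a host-wide bind mount in a shared include would look like this, and is
# what got the host distfiles mounted into user containers:
# lxc.mount.entry = /var/cache/distfiles var/cache/distfiles none bind,create=dir 0 0
```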
2021-10-28 06:36:53 thanks a lot 2021-10-28 07:02:20 alexeymin: should be ok now, thx for reporting 2021-10-28 09:12:59 we need to fix space 2021-10-28 09:13:12 there is no more room to increase 2021-10-28 09:13:37 I was afraid 2021-10-28 09:13:44 of that 2021-10-28 09:14:28 its weird as i added 80G 2021-10-28 09:14:44 and it was full in a few minutes 2021-10-28 09:24:24 can imagine for all arches 2021-10-28 09:24:42 11G per arch 2021-10-28 09:24:50 i guess we need to delete old versions 2021-10-28 09:27:11 nld3 is also full 2021-10-28 09:28:01 nld3 is upstream for dl-cdn 2021-10-28 09:29:11 with the rate this is growing we will bump into issues faster and faster 2021-10-28 09:29:50 yes 2021-10-28 09:29:56 We need a long-term solution 2021-10-28 09:30:07 for our mirror infra, we just need more space 2021-10-28 09:30:18 1.5T is not that much 2021-10-28 09:30:47 most of our desktops have more storage :) 2021-10-28 09:31:46 Not mine :P 2021-10-28 10:17:08 maybe we can mount mout desktop :) 2021-10-28 10:17:12 over nfs 2021-10-28 10:17:13 :D 2021-10-28 10:17:21 I heard clandmeter wants to volunteer 2021-10-28 10:17:45 I can volunteer as well, but its going to be problematic when I power it off over the weekend 2021-10-28 10:18:19 right 2021-10-28 10:18:26 nfs is so 80s 2021-10-28 10:18:33 ipfs is the new hype 2021-10-28 10:18:37 smaba is 90s? 2021-10-28 10:18:38 yeah 2021-10-28 10:18:59 we need distributed repositories 2021-10-28 10:23:04 btw, i have checked that s390x.a.o seems to be ok. Where is the second s390x machine tmhoang mentioned? 2021-10-28 10:23:40 ncopa: s390x-ci.a.o 2021-10-28 10:24:00 yes i also checked the system 2021-10-28 10:24:02 but he already left 2021-10-28 10:24:13 did we verify no .makedepends* were left behind? 2021-10-28 10:24:54 sadly they did not upgrade the hosts 2021-10-28 10:25:10 i guess we can do that ourselves 2021-10-28 10:25:18 Without oob access? 2021-10-28 10:25:28 yes 2021-10-28 10:25:30 ok 2021-10-28 10:25:42 we dont have it, so we need to deal with it i guess? 2021-10-28 10:26:12 not sure we have proper way to contact tmhoang? 2021-10-28 10:26:27 I have an IBM e-mail for them 2021-10-28 10:26:58 yes me too 2021-10-28 10:27:35 let me write him an email 2021-10-28 10:30:01 ok done 2021-10-28 17:00:00 lesigh 2021-10-28 21:03:42 clandmeter: are you rebooting some linode servers? 2021-10-28 21:03:54 yup 2021-10-28 21:04:06 ok :) 2021-10-28 21:04:15 im syncing the new mirrors 2021-10-28 21:04:24 i think we need to start using them 2021-10-28 21:04:29 yes, makes sense 2021-10-28 21:04:34 i upgraded the storage to 2TB 2021-10-28 21:05:04 if we use those new ones we can probably kill the ones on equinix 2021-10-28 21:05:19 and also move distfiles 2021-10-28 21:05:26 that will give us some space 2021-10-28 21:05:30 nod 2021-10-28 21:05:33 sounds like a plan 2021-10-28 21:06:01 but i think i need to sync them twice 2021-10-28 21:06:21 twice? 
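When a mirror fills up again, a quick breakdown of where the space goes helps decide what to prune; the mirror root below is a placeholder, not the real path on nld3.

```sh
df -h /srv/mirror

# biggest release directories per branch/arch (path is a placeholder)
du -sh /srv/mirror/alpine/*/releases/* 2>/dev/null | sort -h | tail -n 20

# and what deleting the old release candidates would free, per the earlier discussion
du -ch /srv/mirror/alpine/*/releases/*/*_rc* 2>/dev/null | tail -n 1
```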
2021-10-28 21:06:23 as i thought i had updated the source mirror but it didn't 2021-10-28 21:07:03 yes i restarted the containers, and that doesnt seem to pick up new entries in .env 2021-10-28 21:07:23 so its using the original rsync.a.o 2021-10-28 21:07:31 but that one is also out of space 2021-10-28 21:08:00 yeah, it only reads .env when you create the containers 2021-10-28 21:08:13 or passes it through to the containers anyway 2021-10-28 21:10:21 but it takes a long time to sync 2021-10-28 21:10:46 last update 24 oct 2021-10-28 21:10:56 but it feels like its much more 2021-10-29 06:36:04 ikke: when do you have time to discuss mirror design? 2021-10-29 06:41:58 During lunch time or after work 2021-10-29 09:35:17 ikke: ok lets see when we are both available 2021-10-29 09:36:05 I'm available in ~40 minutes 2021-10-29 09:36:14 my idea is to remove all our "official" dl-X.a.o repos in favor of just dl-cdn 2021-10-29 09:36:24 and have geo based rsync.a.o 2021-10-29 09:37:33 if somebody does not want to use dl-cdn.a.o they could choose a community managed mirror 2021-10-29 09:38:18 dl-cdn will use geo based rsync as source 2021-10-29 09:40:24 if any of the geo based mirrors is overloaded we will probably need to add an additional mirror in that region 2021-10-29 09:40:50 algitbot: tell tpaste to behave 2021-10-29 09:41:02 algitbot: listen to me! 2021-10-29 09:55:08 I don't understand why it's happening. Related to traffic? 2021-10-29 09:55:19 i dont think so 2021-10-29 09:55:31 probably one of the instances is acting up 2021-10-29 09:55:41 so if it chooses that one, its 502 2021-10-29 09:56:31 But restarting does not help 2021-10-29 09:56:41 i think it helps for some time 2021-10-29 10:01:36 ikke: now 2 instances have lost the socket 2021-10-29 10:03:47 https://tpaste.us/mqRL 2021-10-29 10:19:13 I've restarted it a couple of times yesterday, but I don't see any difference in Zabbix 2021-10-29 10:19:28 i just restarted it 2021-10-29 10:19:30 it works ok now 2021-10-29 10:19:37 im hammering it 2021-10-29 10:19:40 but no change 2021-10-29 10:21:09 im running: while true; do curl -s -o /dev/null https://tpaste.us && echo OK ;done 2021-10-29 10:23:29 for i in $(seq 1 4); do docker-compose exec --index=$i http netstat -an;done |grep '0.0.0.0:8080' 2021-10-29 10:28:00 clandmeter: so regarding mirrors 2021-10-29 10:30:02 Ok shoot 2021-10-29 10:30:08 Thinking about it a bit 2021-10-29 10:31:30 I think what you proposed makes sense, just wondering if we need to keep something as a contingency 2021-10-29 10:32:54 But I suppose as a last resort we could just add multiple A / AAAA records to dl-cdn to balance things a bit 2021-10-29 10:33:38 clandmeter: we do need to keep a server which will redirect traffic from the old mirrors to dl-cdn, unless we add them as allowed entrypoints to fastly 2021-10-29 10:33:38 In case what happens? 2021-10-29 10:33:45 worst case :) 2021-10-29 10:33:56 Define worst 2021-10-29 10:34:07 Not the Dutch worst 2021-10-29 10:34:19 long-time outage of fastly 2021-10-29 10:35:35 yes we could have the geo mirrors do backup 2021-10-29 10:35:44 and spin up a few others 2021-10-29 10:35:49 actually 2021-10-29 10:35:59 we could setup backup on linode 2021-10-29 10:36:12 and in case we need it, spin up a new one from backup 2021-10-29 10:36:29 aren't backups tied to an instance? 2021-10-29 10:36:39 yes 2021-10-29 10:36:44 like gitlab 2021-10-29 10:36:55 you spin up a test from the backup right?
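The ".env is only read when you create the containers" point above means a plain restart keeps the old environment; the compose service name below is a placeholder.

```sh
# pick up new values from .env by recreating the container, not restarting it
docker-compose up -d --force-recreate mirror-sync   # service name is an example

# this alone does NOT re-read .env:
# docker-compose restart mirror-sync
```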
2021-10-29 10:36:58 yes 2021-10-29 10:37:13 but im wondering if the backup is also for the volume 2021-10-29 10:37:17 But you need to keep an instance running, which would be one of the geo servers I suppose 2021-10-29 10:37:28 yes 2021-10-29 10:37:34 its just to scale up if needed 2021-10-29 10:38:56 forget it, volumes are not backed up 2021-10-29 10:39:01 Can imagine 2021-10-29 10:40:32 i think if fastly goes haywire, thats something we can solve 2021-10-29 10:40:54 but if linode goes down, thats different cookie 2021-10-29 10:41:43 ahuh 2021-10-29 10:45:09 what will we do with the dl-X mirrors? 2021-10-29 10:45:35 I think we do need to keep the dns records 2021-10-29 10:45:51 nod 2021-10-29 10:46:00 The other day I saw nl.a.o hardcoded in a Dockerfile 2021-10-29 10:46:01 the problem is that they serve both http and rsync 2021-10-29 10:46:23 adding them to fastly will break rsync 2021-10-29 10:46:51 Can we see if they are used as rsync source? 2021-10-29 10:47:19 i guess we can from the logs 2021-10-29 10:47:27 but i would assume so 2021-10-29 10:48:00 we could add them to the geo mirrors 2021-10-29 10:48:03 and redirect http 2021-10-29 10:48:21 not sure how apk handles redirects? 2021-10-29 10:48:34 I think it handles it 2021-10-29 10:48:42 We already redirect some mirrors 2021-10-29 10:49:00 ok, so i guess that could work 2021-10-29 10:49:43 Maybe it would be nice if APK would give a warning when a mirror returns a permanent redirect 2021-10-29 10:49:49 when i see the traffic on fastly, thats pretty frightening... 2021-10-29 10:50:36 1k hits per second? 2021-10-29 10:50:41 anyways, it works so just ignore it :) 2021-10-29 10:50:58 yes the overall hits and traffic 2021-10-29 10:51:51 so i guess rsync.a.o will become a cname for rsync.geo.a.o 2021-10-29 10:52:54 can we let zabbix monitor traffic on linode? 2021-10-29 10:53:26 clandmeter: sure, if we install the agent everywhere 2021-10-29 10:53:41 we can do it also on linode 2021-10-29 10:53:46 but maybe its nice to have it local 2021-10-29 10:53:50 yeah 2021-10-29 10:53:53 central 2021-10-29 10:56:07 or have vnstat like on cz and other mirrors 2021-10-29 10:56:26 Either works 2021-10-29 10:56:39 Depends if we want alerts in case of sudden increase in traffic 2021-10-29 10:56:55 yes that seems more preferable 2021-10-29 10:58:31 so dl-master does close to 1TB total per month 2021-10-29 10:58:51 that's quite a bit 2021-10-29 10:59:04 https://cz.alpinelinux.org/.stats/ 2021-10-29 10:59:57 wow 2021-10-29 11:00:28 our t1 does 230TB 2021-10-29 11:00:33 per month 2021-10-29 11:03:04 thats like 290G/h 2021-10-29 11:06:45 yup, its doing a constant 200MiB/s 2021-10-29 11:09:05 i think we need to go back to the drawing board 2021-10-29 11:10:09 we can do only 5TB per linode 2021-10-29 11:10:41 so we need to spin 26 linodes to cope with that :) 2021-10-29 11:12:19 Heh 2021-10-29 11:15:58 im not sure whats going on 2021-10-29 11:16:06 https://dl-t1-2.alpinelinux.org/.stats/ 2021-10-29 11:16:51 i think we need to add some network monitoring to this t1 server 2021-10-29 12:40:27 small typo, its 46 linodes :) 2021-10-31 19:17:29 clandmeter: I was thinking regarding rv64 ci. We could probably turn failures for rv64 into warnings. People would still get feedback that the job failed, but it would not cause the entire pipeline to fail. 2021-10-31 19:19:40 nod 2021-10-31 19:23:33 would we run CI on usa5 as well, then? 2021-10-31 19:25:03 you didnt already? 
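Retiring the dl-X names by redirecting them to dl-cdn, as discussed above, could look roughly like the snippet below. The nginx block is purely illustrative, and whether every apk version copes gracefully with a permanent redirect is exactly the open question raised in the log.

```sh
# illustrative nginx config for a retired mirror hostname:
#   server {
#       server_name dl-4.alpinelinux.org;
#       return 301 https://dl-cdn.alpinelinux.org$request_uri;
#   }

# check the redirect the same way apk would hit the mirror:
curl -sI https://dl-4.alpinelinux.org/alpine/edge/main/x86_64/APKINDEX.tar.gz | head -n 3
```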
2021-10-31 19:25:17 btw, i increased size on nld3 for mirror 2021-10-31 19:25:22 seems we still had some space left 2021-10-31 19:29:01 I started a runner, but didn't put it into action yet 2021-10-31 19:31:22 lets enable it and see how it goes 2021-10-31 19:31:42 Will try to look into it next week 2021-10-31 19:38:15 ikke: does nld5 run a runner? 2021-10-31 19:38:37 Not that I'm aware of / recall 2021-10-31 19:38:53 nld7 / usa7 are the CI hosts 2021-10-31 19:38:56 runner_raid0 vg0 rwi-a-r--- 100.00g 2021-10-31 19:39:12 does not ring a bell 2021-10-31 19:40:09 https://tpaste.us/qPpK 2021-10-31 19:40:31 i guess its block device for a runner before 2021-10-31 19:40:40 but if its not used, i guess we can kill it 2021-10-31 19:41:17 I have no idea why we would create a separate volume for that 2021-10-31 19:41:50 to use it as a qemu disk? 2021-10-31 19:42:05 like we do for arm 2021-10-31 19:42:07 ah, like that 2021-10-31 19:42:15 I think we did have a vm there before 2021-10-31 19:43:26 ah there is a runner.sh 2021-10-31 19:43:41 i guess we can just kill it 2021-10-31 19:43:45 fine with me 2021-10-31 19:44:06 i think we need to define a policy regarding releases to hold on a mirror 2021-10-31 19:46:58 maybe something to bring up to the TSC? 2021-10-31 19:48:36 i guess we can first think about a policy 2021-10-31 19:48:46 if we cannot figure it out, ask tsc 2021-10-31 19:50:20 I'm fairly certain we can figure out something. It's just that this has more stakeholders so to speak 2021-10-31 19:53:19 i think we can first think about the policy and submit it to the tsc. 2021-10-31 19:53:31 nod 2021-10-31 19:53:51 we want to limit the time at tsc as much as possible 2021-10-31 19:53:59 yes, understand 2021-10-31 19:54:08 i guess the policy needs to follow our releases.json somehow 2021-10-31 19:54:44 btw,https://alpinelinux.org/releases could use some colors :) 2021-10-31 19:55:09 MRs welcome :P 2021-10-31 19:55:32 Was thinking about the website. Do we want to move that to docker as well? 2021-10-31 19:55:40 i think i can handle it without an MR :) 2021-10-31 19:55:51 I already have some setup locally 2021-10-31 19:55:56 sure 2021-10-31 19:56:05 there are some more containers that can be moved 2021-10-31 19:56:13 netbox can also be updated 2021-10-31 19:56:22 Do we want to convert nld3 to a docker host eventually? 2021-10-31 19:56:26 seems to have new shiny uui 2021-10-31 19:56:27 ui 2021-10-31 19:56:31 yes 2021-10-31 19:56:46 since 3.0 2021-10-31 19:57:08 we use upstream containers? 2021-10-31 19:57:20 no 2021-10-31 19:57:50 ok 2021-10-31 19:58:13 Or maybe we do :P 2021-10-31 19:58:29 I forgot, the plan was to switch to something simpler, but not sure if I got to it 2021-10-31 19:58:29 i remember we did initially 2021-10-31 19:58:43 and i made some work on our own 2021-10-31 19:58:52 but im not sure its more simple to maintain our own 2021-10-31 19:59:34 No, we still use upstream 2021-10-31 19:59:54 ok 2021-10-31 20:00:07 so should be easy as git pul and update? 2021-10-31 20:01:22 oh, that's the old dir, there is netbox-new :D 2021-10-31 20:01:27 which is our own 2021-10-31 20:02:12 lol 2021-10-31 20:02:16 make up your mind ;-)
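If the leftover runner volume above really is unused, reclaiming it is a check-then-remove; the names come from the pasted lvs output, but it is worth confirming nothing mounts or maps it first.

```sh
# confirm nothing uses the volume, then drop it
lvs vg0
lsof /dev/vg0/runner_raid0 2>/dev/null || true   # should print nothing
lvremove vg0/runner_raid0                        # asks for confirmation
```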