2021-11-01 15:44:39 blaboon: can you check my payment settings?
2021-11-01 16:24:17 clandmeter: just took a look. we currently credit you guys $1000/month, but with the recently-created storage volumes you are now using $1125 of services. so that resulted in a charge on today's invoice
2021-11-01 16:24:46 what about the outstanding credit?
2021-11-01 16:26:40 which outstanding credit?
2021-11-01 16:26:48 i guess i will have to remove those mirrors and host them somewhere else.
2021-11-01 16:27:02 it says $2,414.02 remaining
2021-11-01 16:27:07 In promotions
2021-11-01 16:27:26 oh yea, that's for the remainder of the year. but it's $1000/mo maximum
2021-11-01 16:27:48 not sure i follow
2021-11-01 16:28:01 2.5k for 2 months?
2021-11-01 16:28:11 but we are limited to 1k/m?
2021-11-01 16:28:57 should i remove the 3 linodes with the storage so i dont get into more trouble?
2021-11-01 16:29:52 the promo was initially created with a total of $13000 for 13 months. it is currently set up such that you can only use $1000 per month, so any unused credit at the end will basically disappear (and then we will have to set up a new promo)
2021-11-01 16:32:42 blaboon: please let me know what to do, i dont want to be charged :)
2021-11-01 16:34:04 since the charge for this month was pretty small i can just go in and zero it out. but for next month you will need to make sure you stay under the $1000 limit (or add a payment method if you want to use more)
2021-11-01 16:35:06 ok, understood, thanks for helping out. i will clean up the account.
2021-11-01 16:37:27 i just cleared out the account balance, so it's back at 0 now
2021-11-01 16:38:50 thanks!
2021-11-01 16:41:01 if you need any help figuring out what your current usage is, just let me know. i can do an invoice projection which will estimate your end-of-month usage based on your current services
2021-11-01 16:41:20 i just deleted the mirrors
2021-11-01 16:41:27 they have the 2TB volumes
2021-11-01 16:41:35 which seems to cost the most
2021-11-01 16:42:01 oh yea, that brought you all the way down to $405
2021-11-01 16:42:36 yes, storage is kind of expensive
2021-11-01 16:42:42 but our mirrors are close to 2TB
2021-11-01 17:34:42 hey, can somebody check the user mailing list? a user reported that his email did not appear on the list after he tried with two different email addresses. I tried to send an email there today; it does not show up either. ddevault
2021-11-01 19:07:28 Hmm, last e-mail 2 months ago, that's suspicious
2021-11-01 19:13:06 yes, my email did not show up there either. I assume there should be some 'general' email address mentioned that a user can contact if there is an issue with the mailing list, as probably not everybody is using irc
2021-11-01 19:14:47 ddevault: any idea why mails for ~alpine/users are not showing up?
2021-11-01 19:15:12 ~alpine/devel is still working
2021-11-01 19:25:21 checked the logs, but I don't find anything relevant there
2021-11-02 06:13:55 I'll look into it
2021-11-02 08:02:57 it looks like the box is a bit outdated
2021-11-02 08:02:58 it's on 3.13
2021-11-02 08:03:02 going to upgrade it and then troubleshoot further
2021-11-02 08:10:16 well, it seems to work now
2021-11-02 08:10:35 helby: try to resend your message?
2021-11-02 09:50:08 so storage is expensive. I wonder if we could ask cloudflare to sponsor R2 storage?
2021-11-02 09:59:19 ddevault: looks okay, message arrived within a second, thanks ;/
2021-11-02 10:06:59 helby: aight
2021-11-02 10:07:10 ncopa: how much storage and what for?
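Returning to the billing exchange at the top of the log: the promo behaves as a monthly cap, so the invoiced amount is simply usage minus credit, floored at zero. A minimal sketch using the figures from the chat (the function name is illustrative, not a Linode API):

```python
def monthly_charge(usage: float, credit_cap: float = 1000.0) -> float:
    """Amount invoiced after applying the capped monthly promo credit."""
    return max(0.0, usage - credit_cap)

# The situation from the chat: $1125 of services against a $1000/month credit
print(monthly_charge(1125))  # 125.0
# After deleting the 2TB mirror volumes, usage dropped to $405 -> fully covered
print(monthly_charge(405))   # 0.0
```

Any unused portion of the cap is lost each month, which is why the $2,414.02 of remaining promo credit could not absorb the overage.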
2021-11-02 10:18:00 ddevault: alpine mirrors
2021-11-02 10:18:24 so also a question of bandwidth
2021-11-02 10:18:27 yes
2021-11-02 10:18:36 if you can quantify it, I might be able to help
2021-11-02 10:18:39 We have hosts with the bandwidth, but they do not have storage
2021-11-02 10:19:28 I have storage leaking out my ears
2021-11-02 10:19:32 but not necessarily bandwidth
2021-11-02 10:20:22 Yes, that seems to be the conundrum
2021-11-02 10:20:35 either storage, or bandwidth, but not both
2021-11-02 10:21:00 rsync.a.o eats quite a bit of bandwidth
2021-11-02 10:21:38 a solution might be caching proxy mirrors
2021-11-02 10:21:55 so those with lots of bandwidth use an LRU cache of packages and fetch from hosts with lots of storage to get fresh packages
2021-11-02 10:21:56 yes, I was thinking about something like that as well
2021-11-02 10:23:49 well, whatever the solution, let me know if you would like to use my high-capacity/low-bandwidth infrastructure
2021-11-02 10:24:00 I would be surprised if alpine could offer up enough data to put a strain on my storage capacity
2021-11-02 10:25:06 Mirror size is about 1.5-2TB atm
2021-11-02 10:30:57 yeah that's easy
2021-11-02 10:31:02 we have almost half a petabyte
2021-11-02 10:37:02 I wonder if backblaze would offer storage+bandwidth if asked nicely
2021-11-02 10:37:19 the problem is rsync is consuming a lot
2021-11-02 10:37:30 around 10TB a day
2021-11-02 10:38:19 and there are no real caching options for rsyncd
2021-11-02 10:38:57 fastly already does http caching for us, so thats currently not an issue
2021-11-02 10:39:56 mirror size is over 1.6TB i think atm, and it's growing exponentially; just killing a few older releases doesn't really improve things.
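The caching-proxy idea above — bandwidth-rich mirrors keep an LRU cache of packages and fetch misses from storage-rich hosts — can be sketched in a few lines. This is a toy illustration only, not how any Alpine mirror actually works; the origin fetcher and capacity are hypothetical:

```python
from collections import OrderedDict
from typing import Callable

class LRUPackageCache:
    """Toy LRU cache in front of a storage-rich origin (sketch, not production)."""

    def __init__(self, fetch_origin: Callable[[str], bytes], capacity: int):
        self.fetch_origin = fetch_origin
        self.capacity = capacity
        self.cache: OrderedDict[str, bytes] = OrderedDict()

    def get(self, pkg: str) -> bytes:
        if pkg in self.cache:
            self.cache.move_to_end(pkg)      # hit: mark as most recently used
            return self.cache[pkg]
        data = self.fetch_origin(pkg)        # miss: fetch from the origin host
        self.cache[pkg] = data
        if len(self.cache) > self.capacity:  # evict the least recently used
            self.cache.popitem(last=False)
        return data
```

Hot packages would then be served from the bandwidth-rich node, while the storage-rich origin only pays bandwidth for misses.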
2021-11-02 10:42:02 a decent long-term solution for mirror storage is ~ 4TB
2021-11-02 10:43:21 and geo based rsync servers could lower the cost of traffic per node
2021-11-02 10:44:58 one solution is to establish a tiered mirror architecture
2021-11-02 10:45:15 so there's a small number of tier-1 mirrors which fetch from the authoritative source, then tier-2 mirrors fetch from tier-1 mirrors
2021-11-02 10:45:38 thats what we already have
2021-11-02 10:45:46 kind of
2021-11-02 10:45:52 would adjusting the number of nodes in each tier help?
2021-11-02 10:46:00 or reorganizing the graph
2021-11-02 10:46:13 the thing is 10TB a day, thats difficult to scale
2021-11-02 10:46:20 The root needs both the storage and the bandwidth
2021-11-02 10:46:25 how much of that is redundant though
2021-11-02 10:46:57 we were looking at doing geo based rsync.a.o
2021-11-02 10:47:02 putting it on linode
2021-11-02 10:47:22 they do 5TB a month...
2021-11-02 10:47:56 what's the size of the daily mirror diff?
2021-11-02 10:48:07 that really depends
2021-11-02 10:48:14 what's the range of probable sizes
2021-11-02 10:48:16 on release month thats much higher
2021-11-02 10:48:27 i dont have exact stats
2021-11-02 10:48:32 let me show you a graph
2021-11-02 10:48:41 quantifying these figures makes planning much much easier
2021-11-02 10:48:47 https://dl-t1-2.alpinelinux.org/.stats/
2021-11-02 10:49:30 i am wondering if somebody is abusing rsync.a.o for something
2021-11-02 10:49:41 if we know that the mirror changes at N bytes per X time, then root B(andwidth) is B = N * M / X, where M is the number of tier 1 mirrors
2021-11-02 10:49:49 so we can solve this equation for any desired B by tuning the other figures
2021-11-02 10:51:02 it not only depends on what changes
2021-11-02 10:51:22 we can also provide different answers depending on demand, like reducing the number of tier 1 mirrors during a release (in exchange for slower roll-out of the mirror updates)
2021-11-02 10:51:24 new mirrors pop up and they will fetch everything
2021-11-02 10:51:36 just once per new mirror, though, nbd really
2021-11-02 10:52:03 and you can just rate-limit the initial rollout to keep the bandwidth at the desired utilization
2021-11-02 10:52:08 im monitoring the rsync mirror now
2021-11-02 10:52:23 and i see some host utilizing a lot of bw for a long long time
2021-11-02 10:52:34 thats why im thinking something is wrong
2021-11-02 10:52:38 also note that the default alpine installer currently defaults to putting everyone on the same mirror
2021-11-02 10:52:48 it's likely that bandwidth is not evenly distributed among the mirrors
2021-11-02 10:52:55 thats our cdn
2021-11-02 10:52:57 thats fine
2021-11-02 10:52:58 many distros forbid public access to their root mirrors and whitelist the tier 1 mirrors
2021-11-02 10:53:03 to control for the bandwidth
2021-11-02 10:53:12 the installer is http based
2021-11-02 10:53:20 fastly handles that for us
2021-11-02 10:53:28 aight
2021-11-02 10:53:50 and the amount of data it does is also insane
2021-11-02 10:54:02 per mirror or as a whole?
2021-11-02 10:54:09 the fastly stats
2021-11-02 10:54:23 they have a dashboard to monitor usage
2021-11-02 10:54:34 ah
2021-11-02 10:54:35 but for now, all is good
2021-11-02 10:54:45 the worry kind of is rsync
2021-11-02 10:55:21 if any amount of http traffic is not a concern
2021-11-02 10:55:31 why not rework the mirrors to use http instead of rsync to keep up to date?
2021-11-02 10:55:59 equinix is currently hosting rsync.a.o
2021-11-02 10:56:18 because mirror strategy is always based on rsync?
2021-11-02 10:56:55 is the stragecy immutable?
2021-11-02 10:57:02 strategy*
2021-11-02 10:58:00 if you want to support community mirrors, thats the only way (i think)
2021-11-02 10:58:13 how so?
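The root-bandwidth relation quoted in the discussion above, B = N * M / X (N bytes of mirror churn per period X, fanned out to M tier-1 mirrors), is easy to explore numerically. The churn and mirror-count figures below are made up purely for illustration:

```python
def root_bandwidth_bps(churn_bytes: float, period_s: float, tier1_mirrors: int) -> float:
    """Sustained bytes/second the root must serve: B = N * M / X."""
    return churn_bytes * tier1_mirrors / period_s

DAY = 86_400  # seconds in a day

# hypothetical: 20 GB of daily mirror churn, 8 tier-1 mirrors
b = root_bandwidth_bps(20e9, DAY, 8)
print(f"{b * 8 / 1e6:.1f} Mbit/s")  # 14.8 Mbit/s sustained at the root
```

As the chat notes, any of the three knobs can be tuned for a desired B: halving the tier-1 count halves root bandwidth, at the cost of slower fan-out of updates.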
2021-11-02 10:59:17 for instance http://mirror.nl.leaseweb.net/
2021-11-02 10:59:23 they want to sync from rsync
2021-11-02 10:59:47 brb
2021-11-02 11:00:01 it might be worth asking them if they're willing to consider another solution
2021-11-02 11:19:20 Not sure if syncing over http is efficient..
2021-11-02 12:26:29 what is the backend for dl-cdn? could it be that it is the dl-cdn cache misses that is the problem?
2021-11-02 12:27:58 there are like 5-10 cache misses per second
2021-11-02 12:29:20 ok. its not dl-cdn causing it
2021-11-02 12:30:08 i think we should have round-robin for the rsync.a.o backend
2021-11-02 13:42:55 ncopa: geo dns is probably more suitable
2021-11-02 13:43:07 it worked in our linode setup
2021-11-02 13:55:23 ncopa: http://172.16.8.10:8080/hosts look at the hostname proxy, thats all http traffic, like the source for dl-cdn
2021-11-02 21:43:26 clandmeter: oof, I'm seeing a huge drop in traffic on nld3 :D
2021-11-02 21:45:51 311.88 Mbps
2021-11-02 21:57:59 Haha
2021-11-02 21:58:12 Nice
2021-11-02 22:02:47 hmm, it's at 1gbps again now
2021-11-02 22:21:21 lesigh
2021-11-02 22:21:27 i just cleaned up >50G
2021-11-02 22:22:10 where did that 15G just go?
2021-11-02 22:23:51 apparently fx is 11G
2021-11-02 22:23:55 in source
2021-11-03 09:23:57 building ceph 6 times on the same host is a world of pain
2021-11-03 09:26:27 6x?
2021-11-03 09:26:35 for what reason?
2021-11-03 09:26:43 multiple arch?
2021-11-03 09:26:53 3 arches, 2 releases (3.15, edge)
2021-11-03 09:27:07 arm i guess?
2021-11-03 09:27:11 yup
2021-11-03 09:27:27 i still have 2 arm servers at the office
2021-11-03 09:27:32 actually 3
2021-11-03 09:29:36 but one is altra, dont think i can use it
2021-11-03 09:29:47 others are our older arm builders
2021-11-03 09:29:56 i can check if we can use them for CI
2021-11-03 09:30:14 ikke: would that benefit us?
2021-11-03 09:30:30 if not, i will put them somewhere the light never comes
2021-11-03 09:31:21 We do still have space left on usa9, so I suppose we should increase the volume
2021-11-03 09:31:46 check with lvs -a
2021-11-03 09:31:54 Yes, I did before
2021-11-03 09:32:02 if its raid it can fail
2021-11-03 09:32:56 I'm not sure if it's a proper raid. We did set something up, but I'm not sure if it was done properly
2021-11-03 09:34:18 i mean lvm raid
2021-11-03 09:34:33 what i noticed before is
2021-11-03 09:34:57 the first disk has some space reserved for boot/os/swap or similar
2021-11-03 09:35:30 when you want to extend the volume but the first disk is full due to that setup, you cannot extend it.
2021-11-03 09:36:22 im not sure thats the case on that box, but it is on one of them.
2021-11-03 10:03:55 https://gitlab.alpinelinux.org/alpine/aports/-/jobs/529620
2021-11-03 10:04:03 the aarch64 job is stuck
2021-11-03 10:35:40 fcolista: you can cancel the build yourself
2021-11-03 11:24:41 I published my alpine development cluster setup for my work on k0s: https://github.com/ncopa/alpine-vm-cluster
2021-11-03 11:25:16 it takes me 2-3 mins to spin up a kubernetes cluster using N alpine hosts in libvirt
2021-11-03 11:59:55 thx for the link, will look into it later.
2021-11-03 16:48:34 clandmeter: fyi, I added 100G to usa9 root_lv
2021-11-03 17:03:08 7gbps peak on nld3 😲
2021-11-03 20:57:19 clandmeter: I think we should look into binding the CI hosts to a single numa domain
2021-11-03 20:57:25 arm CI hosts
2021-11-03 21:48:16 ok?
2021-11-03 21:48:31 for performance?
2021-11-04 05:27:57 clandmeter: yes
2021-11-04 05:28:36 yesterday there was a backlog of CI jobs for arm*
2021-11-04 05:28:42 and aarch64
2021-11-05 09:42:36 ...
2021-11-05 09:56:14 ikke: is the queue also related to disk space?
2021-11-05 09:56:32 what queue?
2021-11-05 09:59:06 <@ikke> yesterday there was a backlog of CI jobs for arm*
2021-11-05 09:59:30 No, that's just slow jobs
2021-11-05 10:00:05 do we have some metrics?
2021-11-05 10:02:06 limited
2021-11-05 10:02:18 what kind of metrics are you interested in?
2021-11-05 10:02:56 We do monitor the CI hosts: https://zabbix.alpinelinux.org/zabbix.php?action=latest.view&filter_hostids%5B%5D=10393&filter_set=1
2021-11-05 10:04:46 https://zabbix.alpinelinux.org/history.php?action=showgraph&itemids%5B%5D=32191 steal time up to 50%
2021-11-05 10:29:21 ikke: I'm currently moving my stuff to a new server - do we still need my gitlab runner?
2021-11-05 10:30:09 Cogitri[m]: I don't think it's required
2021-11-05 10:30:34 Alright, could you disable my Gitlab runner for now then? :)
2021-11-05 10:31:31 CI host runs in qemu?
2021-11-05 10:33:34 clandmeter: for arm*/aarch64, yes
2021-11-05 10:33:42 Cogitri[m]: I've paused them
2021-11-05 11:48:23 clandmeter: the lxc containers we limited to single numa domains, we did not do that for the qemu vms yet
2021-11-05 11:48:38 I wonder if it's just a matter of running qemu with numactl
2021-11-05 13:47:37 the x86_64 builder seems to have no disk space left
2021-11-05 13:51:40 yup, both the 3.15 and edge builders are failing on all packages now
2021-11-05 14:03:30 PureTryOut: yes, was already looking at it
2021-11-05 14:03:43 👁️
2021-11-05 16:12:04 ikke: do you know how much RAM the x86 builder has?
2021-11-05 16:12:24 64G
2021-11-05 16:12:38 aha
2021-11-05 16:13:07 I wonder why linux-lts passed on armv7 but not on x86
2021-11-05 16:13:50 but iirc armv7 has a lot more RAM
2021-11-06 12:01:23 clandmeter: do you still plan to move distfiles to somewhere central?
2021-11-06 12:01:49 yup
2021-11-06 12:01:59 got sidetracked by the mirrors issue
2021-11-06 12:02:03 nod
2021-11-06 12:02:10 distfiles host is online
2021-11-06 12:02:15 just need to switch to it
2021-11-06 12:02:19 and set up some sync
2021-11-06 12:06:24 I have gitlab 14.2 running on gitlab-test btw
2021-11-06 12:09:25 nice
2021-11-06 12:09:33 what are we on now?
2021-11-06 12:09:42 14.0?
2021-11-06 12:09:58 yes
2021-11-06 12:35:10 i guess we need to think of an update strategy for distfiles
2021-11-06 12:35:58 technically each arch could have different distfiles
2021-11-06 12:40:12 ikke: do you prefer to sync each day or each week?
2021-11-06 12:47:35 I guess that excludes logfiles? Which will probably continue to work as is?
2021-11-06 12:47:51 yes
2021-11-06 12:48:05 we could think about improving it later
2021-11-06 12:49:04 Ideally it means we could remove the files from the builders
2021-11-06 12:49:17 yes
2021-11-06 12:49:27 we can add a switch to rsync
2021-11-06 12:49:37 to delete on successful transfer
2021-11-06 12:50:00 I don't have a
2021-11-06 12:50:16 strong preference for either
2021-11-06 12:50:48 i guess the only thing to consider is that distfiles would be 1 week behind
2021-11-06 12:51:13 btw, regarding mirrors
2021-11-06 12:51:27 what do you think if we change to MR based additions?
2021-11-06 12:51:56 we could kill the ML and issue creation
2021-11-06 12:54:33 switching back to distfiles: i guess doing it daily makes most sense, and we offset each arch by one hour.
2021-11-06 12:54:42 to prevent race conditions
2021-11-06 13:02:44 sounds like a good approach
2021-11-06 14:23:53 ikke: for both?
2021-11-06 23:04:46 has https://pkgs.alpinelinux.org/packages always taken 10 seconds to load?
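The distfiles plan discussed above ("add a switch to rsync to delete on successful transfer") corresponds to rsync's real `--remove-source-files` flag. The same move-after-verified-copy logic can be sketched in pure Python; the directory layout and function name here are stand-ins for illustration:

```python
import shutil
from pathlib import Path

def push_and_clean(src_dir: Path, dst_dir: Path) -> list[str]:
    """Copy each distfile to the central directory, then delete the local
    copy only after the copy verifiably succeeded (size comparison here;
    rsync --remove-source-files does the equivalent in one step)."""
    moved = []
    dst_dir.mkdir(parents=True, exist_ok=True)
    for f in sorted(src_dir.iterdir()):
        if not f.is_file():
            continue
        dst = dst_dir / f.name
        shutil.copy2(f, dst)
        if dst.stat().st_size == f.stat().st_size:  # transfer succeeded
            f.unlink()                              # free space on the builder
            moved.append(f.name)
    return moved
```

Offsetting each arch's sync by an hour, as suggested in the chat, then avoids two builders racing to upload the same file.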
2021-11-06 23:42:43 loads fast for me
2021-11-07 09:58:59 Hello71: since a week or two it's loading much slower for me, from around 4 seconds to even more
2021-11-07 14:52:31 hmm strange, for me it's also almost instant
2021-11-07 14:53:31 also for me
2021-11-07 15:24:42 now it's fine too, but before I noticed long delays many times
2021-11-07 15:35:06 If I look at our monitoring, I see some spikes, but very rarely and very brief. In general, it seems to load in <500ms
2021-11-08 21:00:40 ikke: would it make sense to add a board to the gitlab infra project?
2021-11-08 21:01:14 clandmeter: probably
2021-11-08 21:01:50 we could add issues as a priority list
2021-11-08 21:02:04 sounds good to me
2021-11-09 11:36:39 im not sure I have a ppc64le dev env?
2021-11-09 11:38:31 I have. it was just stopped
2021-11-09 11:38:49 seems like ppc64le.alpinelinux.org is up again, running ubuntu
2021-11-09 11:40:32 Yes, Rafael said he would install alpine again, but he hasn't yet
2021-11-10 06:52:59 would it be possible to run `apk dot --errors` after a build in CI? and show a warning if it's non-empty
2021-11-10 12:23:54 clandmeter: the opensource.is mirror gets an error from sync: max connections reached
2021-11-10 12:28:00 ref?
2021-11-10 12:29:51 Hmm, it's not shown on the mirror list
2021-11-10 12:30:04 forwarded it
2021-11-10 12:37:23 ok
2021-11-10 12:37:31 they dont mention upstream
2021-11-10 12:37:54 rsync.a.o does not set a limit
2021-11-10 12:37:55 afaiks
2021-11-10 12:38:03 afaics
2021-11-10 12:38:04 Yeah, I didn't see one either
2021-11-10 12:38:23 i still find it weird some servers could sync
2021-11-10 12:38:31 even though rsync was behind
2021-11-10 12:43:06 something is off or they sync from another server
2021-11-10 12:43:20 i think we should ask mirrors to report from where they sync
2021-11-10 12:43:58 I want to go over all mirrors and see if any of the email addresses is secret (not online) and make the yml file public
2021-11-10 12:44:09 let them create an MR to submit a mirror
2021-11-10 12:46:05 ikke: typo alert :)
2021-11-10 12:47:41 Uhoh
2021-11-10 12:48:53 Typo where?
2021-11-10 12:55:19 rsync.a.o in the email
2021-11-10 12:56:03 heh
2021-11-10 14:53:15 clandmeter: so they are using leaseweb as source
2021-11-10 14:53:54 hehe
2021-11-10 14:54:08 and they send us email to do what?
2021-11-10 14:54:16 call leaseweb?
2021-11-10 14:54:18 clandmeter: I sent them an e-mail
2021-11-10 14:54:22 :D
2021-11-10 14:54:25 that their mirror was out of date
2021-11-10 14:54:40 leaseweb?
2021-11-10 14:54:50 opensource.is
2021-11-10 14:55:23 ah like that
2021-11-10 14:55:33 Do we suggest to use rsync.a.o?
2021-11-10 14:56:02 leaseweb should be good
2021-11-10 14:56:10 But they are running against limits
2021-11-10 14:56:10 not sure why they limit
2021-11-10 14:56:20 probably to limit bw
2021-11-10 14:56:26 too low?
2021-11-10 14:57:03 but their mirror is behind because they run into limits all the time?
2021-11-10 14:57:35 maybe they use leaseweb for more distros
2021-11-10 15:02:51 clandmeter: I suspect that's the issue
2021-11-10 23:01:43 hello! am having an issue with a mirror - https://alpine.northrepo.ca/ is using cloudflare, which is breaking fastest mirror selection for me :(
2021-11-11 10:33:17 rails: hi
2021-11-11 15:27:16 hello
2021-11-11 15:27:18 hello everyone
2021-11-11 15:27:25 is anyone here?
2021-11-11 15:27:32 hello
2021-11-11 15:28:05 don't blithely PM everyone in the channel
2021-11-11 15:28:07 what do you want
2021-11-11 15:28:43 where
2021-11-11 15:28:58 if you have a question, ask it and wait patiently for someone to answer
2021-11-11 15:29:22 it occurs to me that this channel ought to be +s
2021-11-11 15:34:14 regarding cloudflare mirrors, i wonder if we should add something to the healthcheck to make sure there isn't any cloudflare etc stuff
2021-11-11 15:34:58 the CF-RAY header should do it
2021-11-11 15:36:06 cf-ray: 6ac897daba2842ee-FRA
2021-11-11 15:36:55 the only CDN we should suggest, i think, is the official one
2021-11-11 15:37:07 any mirror behind cloudflare really is just a CDN
2021-11-11 15:37:46 except that it lacks the benefit of being warmed by the thousands of req/s the official CDN gets
2021-11-11 15:37:55 so, the performance will be sketchy :P
2021-11-11 15:38:22 though there is also https://cloudflaremirrors.com/alpine
2021-11-11 17:59:54 clandmeter: https://gitlab.alpinelinux.org/alpine/infra/docker/alpine-www
2021-11-11 20:32:05 ikke: yes hello
2021-11-11 20:32:35 Ariadne pointed me here from another channel, fwiw
2021-11-11 20:32:45 rails: right
2021-11-11 20:33:12 Is it because it's slower than it appears from the test?
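The healthcheck idea from the discussion above — detecting mirrors fronted by Cloudflare — only needs a look at the response headers, since Cloudflare adds a CF-RAY header to every response (as in the `cf-ray: 6ac897daba2842ee-FRA` example). A sketch; the function name is made up and a real check would inspect a live HTTP response:

```python
def behind_cloudflare(headers: dict[str, str]) -> bool:
    """True if response headers suggest the mirror sits behind Cloudflare.
    Cloudflare adds a CF-RAY header, e.g. 'cf-ray: 6ac897daba2842ee-FRA',
    and typically sets Server: cloudflare."""
    lowered = {k.lower() for k in headers}
    return "cf-ray" in lowered or headers.get("Server", "").lower() == "cloudflare"

print(behind_cloudflare({"cf-ray": "6ac897daba2842ee-FRA"}))  # True
print(behind_cloudflare({"Server": "nginx"}))                 # False
```

Such a check could let the mirror healthcheck warn submitters before a CDN-fronted mirror distorts the fastest-mirror test.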
2021-11-11 20:33:17 yes
2021-11-11 20:33:31 i'm directly peered with aarnet
2021-11-11 20:33:35 he is in australia, so the cloudflare mirror has to go fetch from origin
2021-11-11 20:33:46 which is in canada
2021-11-11 20:34:04 and if it's not warmed to my edge, (MEL and ADL)
2021-11-11 20:34:15 yeah, makes sense
2021-11-11 20:35:14 i'm really starting to love alpine, tbh
2021-11-11 20:38:34 Hmm, it does actually fetch a file from the mirror for the speedtest, but I suppose because it's the index, the chance it's in the cache is higher than for other files
2021-11-11 20:38:43 https://gitlab.alpinelinux.org/alpine/alpine-conf/-/blob/master/setup-apkrepos.in#L55
2021-11-11 20:38:45 it is
2021-11-11 20:39:01 (was just wondering how the time was determined)
2021-11-11 20:39:07 aha
2021-11-11 20:39:44 let me do a quick test
2021-11-11 20:51:47 https://gist.github.com/27202604d98fb2d13fc8d079ccb2a440
2021-11-11 20:52:08 once warmed to the edge it's faster
2021-11-11 20:52:18 but 60x slower if not
2021-11-11 20:52:31 (i dont do math, those numbers are wrong :P)
2021-11-11 20:54:19 But it clearly shows the problem
2021-11-11 20:58:14 aye
2021-11-11 20:58:52 13x longer over 50 or so packages can add up
2021-11-11 21:19:21 rails: would you mind opening up an issue here: https://gitlab.alpinelinux.org/alpine/infra/infra as a reminder?
2021-11-11 21:21:24 no problem at all
2021-11-11 21:26:30 ikke: https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10731 o7
2021-11-11 21:30:19 cheers
2021-11-11 21:32:32 happy to help in any way i can :)
2021-11-12 08:28:17 how do i create a merge request for alpine-mksite? I forked it but it created an MR to my fork
2021-11-12 08:28:24 https://gitlab.alpinelinux.org/ncopa/alpine-mksite/-/merge_requests/1
2021-11-12 08:28:59 i think i figured it out
2021-11-12 08:29:09 ncopa: fyi, you do not need to make a fork to make a merge request
2021-11-12 08:29:17 just push a branch and create an MR from that
2021-11-12 08:29:43 but in this case, it's because your fork is private, so it will by default create MRs against itself
2021-11-12 08:33:39 so, what is the preferred workflow? i create an MR from my fork, or I create a new branch in the upstream repo?
2021-11-12 08:34:06 or i just push to git master...
2021-11-12 08:35:24 If you have push access, there is no need for a fork. Whether to make an MR depends on whether you want CI (not relevant in this case) or other people reviewing things
2021-11-12 08:36:15 And for CI, an MR is not strictly necessary. In many cases, the CI is triggered when you push branches.
2021-11-12 08:48:46 some global services are very slow (as in google related)
2021-11-12 08:53:20 Haven't noticed anything yet
2021-11-12 09:02:57 i have about 10 reports...
2021-11-12 09:23:06 I have no issues connecting to google
2021-11-12 13:27:57 the ppc64le machine is the bottleneck for releases
2021-11-12 13:29:36 what is the status of the "new" ppc64le machine from ibm?
2021-11-12 13:31:30 ncopa: Waiting for Rafael to install Alpine Linux again (last time, after a reboot we suddenly got ubuntu back)
2021-11-12 22:18:48 equinix metal dfw just died, huh
2021-11-12 22:19:09 ouch
2021-11-12 22:19:30 https://status.equinixmetal.com/incidents/hf0530xkjxb7
2021-11-12 22:22:25 yup
2021-11-12 22:22:33 we lost 7-8 machines in there
2021-11-12 22:22:34 haha
2021-11-12 22:22:36 ouch
2021-11-12 22:24:19 We have 2 servers there
2021-11-12 22:27:05 oh crap
2021-11-13 01:16:51 ikke: i upgraded tpaste to 3.14 (had to fix lua-turbo)
2021-11-13 01:17:06 also added a health check
2021-11-13 01:23:55 so: from what i heard: someone unplugged the wrong fibers
2021-11-13 01:24:16 not in EM, but in Zayo
2021-11-13 02:31:27 is there a known/preferred point of contact for leaseweb mirroring?
2021-11-13 08:17:39 clandmeter: alright
2021-11-13 08:17:49 zv: what do you mean with that?
2021-11-13 10:20:10 ikke: looks like gitea likes to modify authorized_keys
2021-11-13 10:20:31 yeah
2021-11-13 12:04:21 clandmeter: thinking about upgrading gitlab today to 14.2
2021-11-13 12:04:48 ok, i need to drill some holes so i dont think i can do a lot today :)
2021-11-13 12:05:08 I think I can manage it myself :)
2021-11-13 12:05:20 should not be a lot of impact
2021-11-13 12:05:21 my electrician makes me do all the dirty work
2021-11-13 12:05:24 :D
2021-11-13 12:06:05 those Americans with their wooden houses have such an easy life :)
2021-11-13 12:06:28 Until a storm comes up :P
2021-11-13 12:16:16 ikke: disregard; there is an email address on one of the mirror pages for "issues, questions, comments" etc.
2021-11-13 19:14:45 clandmeter: https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10731 northrepo uses cloudflare in front of their mirror, which throws off the 'fastest mirror' test that setup-apkrepos does
2021-11-13 19:18:41 also, using cloudflare for package mirroring probably violates their terms
2021-11-14 11:03:11 clandmeter: ping
2021-11-14 11:04:11 Pong
2021-11-14 11:04:31 GitLab issues?
2021-11-14 11:08:58 ahuh
2021-11-14 11:10:51 strange thing is that in our test instance everything seems to work
2021-11-14 11:15:01 The most direct thing I see happening when I push / update a branch: time="2021-11-14T11:11:51Z" level=warning msg="[transport] transport: http2Server.HandleStreams failed to read frame: read unix /tmp/gitaly-internal724668979/internal_1.sock->@: read: connection reset by peer" pid=1 system=system
2021-11-14 13:10:07 So I'm wondering what approach to take
2021-11-14 13:10:15 We could try upgrading to a newer version
2021-11-14 13:28:12 ok, the merge conflicts message also happens on gitlab-test
2021-11-14 14:40:50 what is actually broken?
2021-11-14 14:40:58 Merging merge requests
2021-11-14 14:41:02 via the gitlab interface
2021-11-14 14:41:11 It either says it's blocked, or that there are merge conflicts
2021-11-14 14:41:33 but i guess you can merge via cmdline
2021-11-14 14:41:46 Yes
2021-11-14 14:41:57 using git
2021-11-14 14:42:23 other git operations are ok?
2021-11-14 14:42:27 via webif
2021-11-14 14:43:11 Rebasing is affected as well
2021-11-14 14:43:22 (because the merge/rebase button is blocked)
2021-11-14 14:46:26 3:unknown time zone Etc/UTC. debug_error_string:{"created":"@1636901174.956243236","description":"Error received from peer unix:/home/git/run/gitaly/gitaly.socket","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"unknown time zone Etc/UTC","grpc_status":3}
2021-11-14 14:46:44 When I try to commit something using the WebIDE
2021-11-14 14:53:17 clandmeter: you're a genius :D
2021-11-14 14:54:57 Why do I need to commit something through the webinterface before I get a proper error message 😒
2021-11-14 14:59:55 heh
2021-11-14 15:01:05 i guess this message must be somewhere else too, maybe not properly logged.
2021-11-14 15:01:23 so its reading the system timezone?
2021-11-14 15:01:55 It's doing something with the timezone at least
2021-11-14 15:02:06 we have utc set i guess
2021-11-14 15:03:30 yes
2021-11-14 15:06:01 I tried building 14.3, but some gems are giving issues
2021-11-14 15:08:02 what is the correct compose dir now?
2021-11-14 15:08:16 /srv/compose/gitlab ?
2021-11-14 15:09:06 yes, that's what it has been for quite a long time
2021-11-14 15:09:21 can you try to change the timezone?
2021-11-14 15:09:24 in gitlab.yml
2021-11-14 15:09:32 just uncomment it
2021-11-14 15:09:40 and then?
2021-11-14 15:09:49 see if that makes a difference?
2021-11-14 15:10:00 Note that it's gitaly that has the issues, not gitlab
2021-11-14 15:10:30 does gitaly have its own timezone settings?
2021-11-14 15:11:10 Don't see it: https://gitlab.com/gitlab-org/gitaly/blob/master/config.toml.example
2021-11-14 15:11:58 you are using the webif
2021-11-14 15:12:07 which is gitlab not gitaly
2021-11-14 15:12:28 gitlab defers to gitaly
2021-11-14 15:12:36 it uses gitaly for all git operations
2021-11-14 15:12:46 well, trying wont kill it :)
2021-11-14 15:14:21 ikke: i see your devel msg, did you fix it already?
2021-11-14 15:14:27 yes, apk add timezone
2021-11-14 15:14:27 im confused now :)
2021-11-14 15:14:29 tzdata
2021-11-14 15:14:40 ok you didnt tell me
2021-11-14 15:14:45 sorry :)
2021-11-14 15:15:21 ok i can crawl back into my hangover
2021-11-14 15:17:20 apparently jirutka was anxious to merge things
2021-11-14 15:21:21 https://gitlab.com/gitlab-org/gitaly/-/commit/9924b3263079fb507525d5c164741bb733ebfcde
2021-11-14 15:21:46 location, err := time.LoadLocation(user.GetTimezone())
2021-11-14 19:05:41 I must say I'm glad the gitlab issue was solved 🙂
2021-11-14 19:32:07 ikke: so am I :)
2021-11-14 20:12:17 ikke: why is there a cargo dir on distfiles?
2021-11-14 20:14:54 clandmeter: to preserve cargo crates
2021-11-14 20:15:21 im confused
2021-11-14 20:15:27 i dont see it on the original distfiles
2021-11-14 20:16:15 It was not uploaded to distfiles
2021-11-14 20:16:20 each builder just kept it in its local distfiles
2021-11-14 20:17:03 so why is it not on distfiles?
2021-11-14 20:17:08 but it is on the new one
2021-11-14 20:17:23 I don't know why it's on the new one
2021-11-14 20:17:37 it makes no sense to have it on distfiles
2021-11-14 20:17:59 why not?
2021-11-14 20:18:14 will cargo fetch from distfiles.a.o?
2021-11-14 20:18:43 No, but we could manually restore it
2021-11-14 20:19:21 In any case, ~/.cargo and ~/go take quite some space on each builder
2021-11-14 20:19:24 so where is this logic that copies stuff to distfiles?
2021-11-14 20:19:32 /etc/abuild.conf
2021-11-14 20:19:48 I think it was disabled again
2021-11-14 20:19:53 and why is it not on the original distfiles?
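For reference, the gitaly bug above boiled down to Go's time.LoadLocation requiring tzdata on disk, fixed in the container with `apk add tzdata`. Python's zoneinfo fails the same way on a system without timezone data; this is a hedged sketch of a defensive lookup, not anything from the gitaly code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def load_tz(name: str):
    """Resolve a timezone name, falling back to UTC when tzdata is absent."""
    try:
        # Like Go's time.LoadLocation, this needs timezone data installed
        # (on Alpine: apk add tzdata); otherwise it raises.
        return ZoneInfo(name)
    except ZoneInfoNotFoundError:
        return timezone.utc  # degrade gracefully instead of erroring out

now = datetime.now(load_tz("Etc/UTC"))
```

A fallback like this would have turned the opaque grpc "unknown time zone Etc/UTC" failure into a soft degradation, though installing tzdata in the image is the proper fix.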
2021-11-14 20:20:00 ah wait
2021-11-14 20:20:48 nope
2021-11-14 20:20:54 i thought it might be moved to edge
2021-11-14 20:21:00 the edge dir
2021-11-14 20:22:29 Only reason I can imagine is that that env variable was never set in the x86_64 abuild.conf
2021-11-14 20:23:29 something on the arm builder is still copying stuff to the root of distfiles
2021-11-14 20:24:22 yeah, I noticed
2021-11-14 20:26:35 must be edge or 3.15
2021-11-14 20:26:54 or somebody's private container
2021-11-14 20:27:53 hmm, I thought I fixed them all to not use /var/cache/distfiles
2021-11-14 20:28:07 i bet its one of mps' containers
2021-11-14 20:28:29 it looks like mostly kernel stuff
2021-11-14 20:29:47 nope
2021-11-14 22:50:32 ikke: i think we need to sync the lxc configs for all builders on the arm host
2021-11-14 22:50:43 seems aarch64 and armv7 are not the same
2021-11-14 22:50:53 in what sense?
2021-11-14 22:51:01 Some of it is deliberate
2021-11-14 22:51:03 caps
2021-11-14 22:51:09 ie, cpu assignment
2021-11-14 22:51:33 im talking about the alpine.common.conf
2021-11-14 22:51:35 ok
2021-11-14 22:51:54 aarch64 is using the one in etc
2021-11-14 22:52:05 but armv7 is using the regular one
2021-11-14 22:52:13 so its kind of confusing
2021-11-14 22:52:16 yes
2021-11-14 22:52:31 im also not sure which caps need to be dropped
2021-11-14 22:53:27 maybe it makes sense to put the default config in git
2021-11-14 22:54:04 are you setting cpu settings in specific configs?
2021-11-14 22:54:47 yes
2021-11-14 22:55:08 But I was just thinking it could make sense to move it to some generic config
2021-11-14 22:55:14 one per numa domain
2021-11-14 22:55:21 and then include the appropriate one
2021-11-14 22:55:25 how do you split them?
2021-11-14 22:55:37 per arch
2021-11-14 22:56:00 which builders did you set it on?
2021-11-14 22:56:01 But it's not done consistently for all containers yet (developer containers)
2021-11-14 22:56:04 i didnt see one yet
2021-11-14 22:56:29 aarch64
2021-11-14 22:56:49 the aarch64 builder?
2021-11-14 22:56:52 yes 2021-11-14 22:56:55 edge? 2021-11-14 22:57:14 yes 2021-11-14 22:57:26 ok i see it now 2021-11-14 22:58:35 I wonder where the: lxc.cap.drop = sys_admin sys_module mac_admin mac_override sys_time came from 2021-11-14 22:59:13 I have no idea to be honest 2021-11-14 22:59:26 and its just for aarch64 2021-11-14 22:59:54 and the armhf is also completely different 2021-11-15 09:24:42 hm, can someone peek at the mail server logs? 2021-11-15 09:24:48 I don't think emails are getting through to listserv 2021-11-15 09:25:22 clandmeter: would you happen to have time? 2021-11-15 09:27:05 also might be good to blackhole messagelabs.com, I suspect it's spam and their postmaster address bounced when I wrote to complain about malformed emails 2021-11-15 09:34:31 i could have broken smtp.a.o last time 2021-11-15 09:34:47 i was playing with something and i bet i shot myself :) 2021-11-15 09:37:52 hm, nevermind, mails are being delivered 2021-11-15 09:38:00 unless you just fixed something and it caught up 2021-11-15 09:41:48 now it's behind again. Maybe it's just delivering with a delay 2021-11-15 10:00:09 ddevault: i had no time to check, i just opened the logs. 2021-11-15 10:00:45 can it be related to greylisting? 2021-11-15 10:00:53 maybe? 2021-11-15 10:00:59 probably 2021-11-15 10:01:01 that makes sense 2021-11-15 10:01:19 there would be a delay if the host is unknown 2021-11-15 10:01:42 I would be surprised to discover that my host is unknown 2021-11-15 10:01:54 maybe the remembering-hosts part is misconfigured?
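The per-builder LXC drift discussed above (one-off cap.drop lines, cpuset settings scattered per container) could be collapsed into shared includes, one per NUMA domain. A sketch; the cap list is the one quoted in the log, but the file paths, cpuset values, and cgroup-v2 key names are assumptions, not the real builder config:

```conf
# /etc/lxc/alpine.common.conf -- shared across all builder containers
lxc.cap.drop = sys_admin sys_module mac_admin mac_override sys_time

# /etc/lxc/numa-node0.conf -- one file per NUMA domain
lxc.cgroup2.cpuset.cpus = 0-79
lxc.cgroup2.cpuset.mems = 0
```

Each container's config would then just `lxc.include` the common file plus the file for its domain, so a caps or pinning change is a one-file edit instead of touching every container.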
2021-11-15 10:02:12 as i understand its mostly default config 2021-11-15 10:02:24 but with some whitelisting 2021-11-15 10:02:30 gotcha 2021-11-15 10:02:36 rspamd 2021-11-15 10:02:38 well, it's not disruptive, just annoying while trying to debug list issues 2021-11-15 10:02:47 understand 2021-11-15 10:02:48 and with the issues debugged, not a big deal to solve this right now 2021-11-15 10:03:02 if its working ill keep away from it 2021-11-15 10:03:05 cool cool 2021-11-15 10:03:12 i was playing before with transports 2021-11-15 10:03:20 thought maybe i broke something 2021-11-15 10:03:30 and virtual addresses 2021-11-15 10:03:48 seems to be difficult to mix virtual addresses and transports for the same domain 2021-11-15 10:04:06 but im not a postfix master 2021-11-15 10:04:19 I doubt there are any postfix masters 2021-11-15 10:04:26 :) 2021-11-15 10:05:16 once you set a virtual domain, you cannot use transport to move a specific address to another server. 2021-11-15 11:06:41 clandmeter: has tpaste repo been moved ? ^^ikke: i upgraded tpaste to 3.14 2021-11-15 11:09:35 vkrishn: what do you mean with the tpaste repo? 2021-11-15 11:09:39 https://gitlab.alpinelinux.org/alpine/infra/turbo-paste 2021-11-15 11:10:19 https://gitlab.alpinelinux.org/alpine/infra/docker/turbo-paste 2021-11-15 11:10:31 just got confused, with "upgraded tpaste to 3.14" 2021-11-15 11:10:53 vkrishn: Rebuilt the docker image, which is based on latest 2021-11-15 11:10:54 i have v1.0.1 2021-11-15 11:10:59 ok 2021-11-15 11:11:02 thanks 2021-11-15 11:11:03 He means Alpine 3.14 2021-11-15 11:11:10 ah 2021-11-15 12:39:05 yes alpine upgraded not tpaste itself 2021-11-15 12:39:19 lua-turbo has some issues on more recent alpine versions 2021-11-15 12:39:49 and its no longer properly maintained, so its time to switch to something else.
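The virtual-domain/transport clash described above (once a domain is virtual, a per-address transport entry for it no longer takes effect) can be sketched with a minimal, entirely hypothetical config:

```conf
# main.cf -- sketch of the conflicting setup (domain/addresses invented)
virtual_alias_domains = lists.example.org
virtual_alias_maps   = hash:/etc/postfix/virtual
transport_maps       = hash:/etc/postfix/transport

# /etc/postfix/transport -- intended per-address override
special@lists.example.org   smtp:[other-host.example.org]
```

Presumably the problem is ordering: virtual alias rewriting happens before the transport lookup, so the address is rewritten away (or rejected if it has no virtual mapping) before the transport entry ever gets a chance to match.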
2021-11-15 13:40:29 did not notice, still using in 3.9(armv7) 2021-11-15 13:40:50 on a mobile phone :-)) 2021-11-15 23:03:30 clandmeter: you lost your bet about my usage :) 2021-11-16 12:08:32 mps: why? 2021-11-16 12:29:46 clandmeter: you bet a few days ago that my arm lxc have added big GBs to distfiles.a.o 2021-11-16 12:30:13 big number of GBs* 2021-11-16 12:30:29 i dont think thats correct 2021-11-16 12:30:43 there are files added to distfiles, it was not that much. 2021-11-16 12:30:46 mostly kernel related 2021-11-16 12:31:13 thats why its probably you or ncopa, dont know which container it was. 2021-11-16 12:31:31 but now that we moved to generic configs, your distfiles is not shared anymore. 2021-11-16 12:31:44 if they are kernels then yes 2021-11-16 12:31:59 i also saw rpi stuff 2021-11-16 12:32:07 not sure you do rpi builds locally 2021-11-16 12:32:25 thats probably from ncopa 2021-11-16 12:33:05 right, lets blame him ;-) 2021-11-16 12:33:09 I have just one old rpi zero and when I test it I build mainline kernel 2021-11-16 12:33:26 anyways its my fault for not setting it up correctly 2021-11-16 12:34:21 We had to migrate all the containers there somewhat in a hurry 2021-11-16 12:43:18 ikke: ill start to run the cron job to transfer distfiles to new distfiles 2021-11-16 12:44:06 👍 2021-11-16 12:57:04 ikke: for x86* we only transfer old distfiles.a.o? 2021-11-16 12:57:31 x86_64 == old distfiles 2021-11-16 12:57:42 x86 is separate 2021-11-16 12:57:46 thats why im asking 2021-11-16 17:37:56 main/sntpc was hosted on https://git.alpinelinux.org/cgit/hosted/sntpc/ in the past. this is the homepage for the project in package. but it seems it was not included in the gitlab migration. does anybody know where it went? 2021-11-16 17:38:33 https://git-old.alpinelinux.org/hosted/sntpc/ 2021-11-16 17:38:55 thank you sir!
2021-11-16 17:39:47 I guess we need to decide what to do with git-old 2021-11-16 18:26:27 hmm, someone restarted lxcs on arm box 2021-11-16 18:29:48 guess who 2021-11-16 18:31:31 well, I know two suspects ;) 2021-11-16 21:34:49 ikke: ping 2021-11-16 21:35:17 pong 2021-11-16 21:35:28 how can i connect to the qemu instances? 2021-11-16 21:35:36 console i mean 2021-11-16 21:36:12 /root/vms 2021-11-16 21:36:20 there is a script for each 2021-11-16 21:36:43 im looking at adding numa support to them 2021-11-16 21:36:53 ok 2021-11-16 21:37:23 but thats not like switching a boolean :) 2021-11-16 21:37:52 yeah, I figured 2021-11-16 21:38:09 that's why I wondered if using numactl to start qemu would work 2021-11-16 21:38:21 i dont think you need that 2021-11-16 21:38:26 qemu has -numa 2021-11-16 21:38:28 yeah 2021-11-16 21:38:42 but wondered if that just emulates it for the guest 2021-11-16 21:40:43 hmm, would that make sense? 2021-11-16 21:40:56 I don't know 2021-11-16 21:41:08 but I find it strange you have to specify the complete numa layout 2021-11-16 21:41:47 with numactl, and lxc, you just specify where to bind to 2021-11-16 21:42:47 it should be as simple as: numa node,cpus=0-31,nodeid=0 2021-11-16 21:42:58 ok 2021-11-16 21:43:13 but thats the most simple config 2021-11-16 21:43:19 there are like a gazillion other options 2021-11-16 21:43:31 yes 2021-11-16 21:43:33 they also talk about enabling hmat 2021-11-16 21:44:11 Heterogeneous memory attribute table 2021-11-16 21:44:19 never heard of it 2021-11-16 21:45:11 acpi thingy 2021-11-16 22:08:04 i think you are right 2021-11-16 22:18:19 So numactl? 2021-11-16 22:22:14 I've created the infra board on the group level now: https://gitlab.alpinelinux.org/groups/alpine/infra/-/boards 2021-11-16 22:39:27 no i think it can be done with qemu 2021-11-16 22:39:47 but like anything with qemu, its a long read.
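The numactl route discussed above boils down to prepending one command to the VM launch, which is roughly what the helper scripts later dropped into /usr/local/bin would do. A sketch; the node number, VM name, and qemu flags here are assumptions, not the actual host configuration:

```shell
# Pin a QEMU VM's CPU threads and memory allocations to one NUMA node
# using numactl(8). Everything below the function is a dry-run demo.
pin_to_node() {
    node="$1"; shift
    # --cpunodebind keeps the vcpu threads on that node's cores,
    # --membind allocates guest RAM from the same node's memory.
    cmd="numactl --cpunodebind=$node --membind=$node $*"
    if [ -n "$DRY_RUN" ]; then
        echo "$cmd"     # print the command instead of executing it
    else
        exec $cmd
    fi
}

# Dry-run example: how an aarch64 builder VM might be launched on node 1.
DRY_RUN=1 pin_to_node 1 qemu-system-aarch64 -enable-kvm -smp 32 -m 64G
# prints: numactl --cpunodebind=1 --membind=1 qemu-system-aarch64 -enable-kvm -smp 32 -m 64G
```

This only pins the whole VM to a node; it does not give the 1-vcpu-to-1-physical-cpu affinity the qemu-pinning patch mentioned later aims for.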
2021-11-16 22:40:11 heh 2021-11-16 22:40:14 -object memory-backend-ram,id=id,merge=on|off,dump=on|off,share=on|off,prealloc=on|off,size=size,host-nodes=host-nodes,policy=default|preferred|bind|interleave 2021-11-16 22:40:19 i think this is what we need 2021-11-16 22:40:33 here you can define host-nodes 2021-11-16 22:41:19 but i guess we also need to lock the cpu 2021-11-16 23:23:40 :) 2021-11-16 23:54:29 ikke: both aarch64 and armv7 should run on their own domain 2021-11-16 23:54:39 its kind of hackish 2021-11-16 23:54:58 need to use numactl cause it seems qemu cant do cpu pinning 2021-11-16 23:58:29 and we are using openrc service, its a bit complicated to implement numactl into the rc script so i made some helper scripts in usr/local/bin 2021-11-16 23:58:41 time for bed 2021-11-16 23:59:03 oh and rsync did its job, distfiles on usa9 is gone and on new distfiles.a.o 2021-11-17 01:44:14 22:41 <@clandmeter> but i guess we also need to lock the cpu 2021-11-17 01:44:33 there is a patch for that 2021-11-17 01:45:03 https://github.com/saveriomiroddi/qemu-pinning 2021-11-17 01:45:09 apparently discontinued though :( 2021-11-17 07:32:17 Hello71: thx for the ref 2021-11-17 07:32:40 bit weird qemu cannot do it ootb 2021-11-17 08:38:28 I see at least one example of someone starting qemu with numactl 2021-11-17 08:47:19 https://bugzilla.redhat.com/show_bug.cgi?id=1165098 2021-11-17 08:49:25 set(something?) can pin qemu to particular cpu 2021-11-17 08:50:19 We want to bind a qemu vm with both CPU and memory to a single numa domain 2021-11-17 08:50:42 with numactl you have control over both 2021-11-17 09:03:15 ikke: thats what i did now 2021-11-17 09:03:20 just not for armhf 2021-11-17 09:04:03 but its a hack, it would be nice to have some tooling in our current setup. 2021-11-17 09:07:37 clandmeter: We could integrate it into qemu-openrc? 2021-11-17 09:07:49 Why not for armhf?
2021-11-17 09:08:46 cause i was sleepy 2021-11-17 09:08:51 ok 2021-11-17 09:08:58 That's a valid reason :P 2021-11-17 09:09:02 i tried to hack the init script 2021-11-17 09:09:38 but its not that simple to just prepend $command 2021-11-17 09:09:45 due to start-stop-daemon 2021-11-17 09:10:02 i think we could ask jirutka to look into it 2021-11-17 09:10:15 maybe there is an easy way to do it, which i didnt see. 2021-11-17 09:22:08 FYI, i deleted 3.15.0_rc1 and _rc2 2021-11-17 09:26:02 ncopa: ok, thanks 2021-11-17 09:29:25 ikke: asking again, what do you want to do with x86* builders and distfiles? 2021-11-17 09:29:51 Do you mean for the migration or in the future? 2021-11-17 09:31:09 migration, as we will have new distfiles 2021-11-17 09:31:15 i guess old distfiles will be obsolete 2021-11-17 09:31:20 yes 2021-11-17 09:31:36 I don't think it matters much, just treat each as a normal host with distfiles 2021-11-17 09:33:12 yes that would make most sense 2021-11-17 09:33:29 im going to setup s390x 2021-11-17 09:34:04 what is the status for ppc? 2021-11-17 09:37:34 The host is still ubuntu 2021-11-17 09:37:38 waiting for rafael 2021-11-17 10:11:44 ikke: that means builder is offline? 2021-11-17 10:11:50 or is that another host? 2021-11-17 10:12:34 Another host 2021-11-17 10:12:52 We are still using the 3 ubuntu vms for the builder 2021-11-17 10:13:01 ah right 2021-11-17 10:13:16 Rafael installed alpine once on the physical host. I tried to install dmvpn, but ran into issues 2021-11-17 10:13:23 I rebooted it, and then it was back to ubuntu 2021-11-17 10:15:46 so the last email is from the 20st? 2021-11-17 10:15:48 oct 2021-11-17 10:16:05 21st 2021-11-17 10:45:23 s390 will be briefly offline today for a hypervisor update 2021-11-17 10:45:45 clandmeter: correct 2021-11-17 12:56:26 ikke: it is not quite the same thing because the individual vcpus can migrate.
it works mostly ok if you just want one numa node though 2021-11-17 12:59:00 Understood, but that's the gool 2021-11-17 12:59:02 goal* 2021-11-17 13:44:27 mmhmm 2021-11-19 15:53:06 arm machines are unreachable 2021-11-19 15:55:03 network congestion? 2021-11-19 16:10:08 mps: I can login, probably just very busy 2021-11-19 16:10:21 load average: 264.78, 261.11, 252.88 2021-11-19 16:30:53 ikke: must be something with network, very slow interactions in shell 2021-11-19 16:56:52 ikke: it is something with WG running in qemu VM on my side 2021-11-19 16:57:13 just tested from bare metal and it works fine 2021-11-19 16:57:18 ok 2021-11-19 21:13:25 have you guys considered looking into pritunl for remote access and access management? 2021-11-19 21:14:11 first time I've heard of it 2021-11-19 21:51:16 ugh 2021-11-19 22:06:04 all 3 vms crashed 2021-11-19 22:06:16 oom killer 2021-11-21 10:58:05 ikke: did you run the rsync script on the builders? 2021-11-21 10:59:30 No, forgot, will do it in a bit 2021-11-21 12:02:10 clandmeter: I should just run the script once and then setup the cron? 2021-11-21 12:02:28 yes, maybe best in tmux or similar 2021-11-21 12:02:32 i guess it will take some time 2021-11-21 12:03:07 and please remember, it will delete all local files it transfers 2021-11-21 12:03:18 yea 2021-11-21 12:04:15 It's running now on x86_64 2021-11-21 12:05:21 i guess x86* will have a lot more historie compared to arm 2021-11-21 12:06:18 history zelfs :) 2021-11-21 12:10:18 :) 2021-11-21 12:18:17 clandmeter: did you notice the mips builder is back online? 2021-11-21 12:18:32 i didnt 2021-11-21 12:18:46 it is 2021-11-21 12:18:48 but we dont support it anymore i guess?
2021-11-21 12:19:11 12:19:03 up 354 days, 21:40, load average: 1.74, 0.48, 1.04 2021-11-21 12:20:15 somebody plugged back the ethernet cable i suppose :) 2021-11-21 12:20:24 yup 2021-11-21 12:20:51 I also moved the rv64 builder 2021-11-21 12:20:53 not sure its online 2021-11-21 12:21:01 build-edge-riscv64: failed to build gdal 2021-11-21 12:21:07 it's online 2021-11-21 12:21:07 err 2021-11-21 12:21:12 i mean the real box 2021-11-21 12:21:16 oh 2021-11-21 12:21:19 right 2021-11-21 12:22:15 So funny when building gitaly. It can build git + gitaly-go faster than installing all the ruby gems 2021-11-21 12:23:19 gitaly: 219 seconds, git: 138 seconds, ruby: 360 seconds and counting 2021-11-21 15:16:13 clandmeter: x86_64 just finished 2021-11-21 15:16:24 what times should I use for x86* 2021-11-21 15:45:25 1 am 2021-11-21 15:46:26 for both? 2021-11-21 15:50:57 Split them by 1 2021-11-21 15:57:31 by 1? 2021-11-21 16:21:20 Hour 2021-11-21 16:22:07 ok 2021-11-21 16:22:10 so 1am and 2am 2021-11-21 16:22:39 or 00:30 and 1:30 2021-11-21 16:23:17 Arm is on 0:00 2021-11-21 16:24:24 so I guess sync them on the hour 2021-11-21 16:26:14 ok 2021-11-21 16:41:46 clandmeter: did you add it to /etc/crontabs/root? 2021-11-21 16:45:56 yes 2021-11-21 19:31:03 Yup 2021-11-21 19:31:37 We need to do it also on the other arch builders 2021-11-22 21:02:52 clandmeter: did you have an opinion on https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10731? 2021-11-22 21:03:08 i did look at it 2021-11-22 21:03:27 so why is this different compared to dl-cdn? 2021-11-22 21:03:41 or does it not query it? 2021-11-22 21:03:52 It's less popular, so caches are cold 2021-11-22 21:04:16 hmm 2021-11-22 21:04:22 so the index is cached 2021-11-22 21:04:25 yes 2021-11-22 21:04:29 but not the packages 2021-11-22 21:04:48 but if it always ends up as first, the cache should be loaded?
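The staggered sync schedule agreed on above (arm at midnight, x86_64 and x86 offset by an hour each) would end up in /etc/crontabs/root as something like the following; the script name and arguments are hypothetical, only the times come from the log:

```conf
# min  hour  day  month  weekday  command
0      1     *    *      *        /usr/local/bin/sync-distfiles x86_64
0      2     *    *      *        /usr/local/bin/sync-distfiles x86
```

Spacing the builders an hour apart keeps them from rsyncing to the new distfiles host simultaneously.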
:) 2021-11-22 21:05:05 it looks at the apkidex 2021-11-22 21:05:07 apkindex 2021-11-22 21:05:09 that loads fast 2021-11-22 21:05:15 yes i got that 2021-11-22 21:05:25 But then when you fetch packages, its slow 2021-11-22 21:05:36 maybe the cache ttl is low 2021-11-22 21:05:46 possibly 2021-11-22 21:06:03 else if it gets on top of the list, it must be used more often 2021-11-22 21:10:10 ikke: does the mirror admin have an acc on gitlab? 2021-11-22 21:10:29 I don't think so 2021-11-22 21:10:43 that would also be a nice addition if we start using MR's for mirrors 2021-11-22 21:11:48 Would need to get the existing mirror admins to create an account 2021-11-22 21:12:04 s/get/get so far/ 2021-11-22 21:12:04 ikke meant to say: Would need to get so far the existing mirror admins to create an account 2021-11-22 21:12:12 :fail: 2021-11-22 21:12:42 we could try, but at least in the future they would have one. 2021-11-22 21:12:52 they could also add an issue if they dont want to create an MR 2021-11-22 21:14:58 ikke: do you want me to contact the owner? 2021-11-22 21:15:10 we could use the outstanding ticket 2021-11-22 21:15:19 well its closed but still 2021-11-22 21:15:48 Yeah, would be nice 2021-11-22 21:16:07 does gitlab allow mail replies when its closed? 2021-11-22 21:16:08 I've removed all service desk tickets marked with the spam label btw 2021-11-22 21:16:19 clandmeter: No idea, I would expect so 2021-11-22 21:16:26 lets see :) 2021-11-22 21:16:41 did you reply via email or webif? 2021-11-22 21:17:16 looks like an email 2021-11-22 21:17:59 I think webif, I've not sent an e-mail to them 2021-11-22 21:18:29 https://gitlab.alpinelinux.org/alpine/infra/mirrors/-/issues/39 2021-11-22 21:21:33 ok i referred to the issue 2021-11-22 21:21:37 lets see if he replies 2021-11-23 09:34:31 clandmeter: he did, and resolved! 2021-11-23 09:34:44 :) 2021-11-23 09:34:46 i saw it 2021-11-23 09:34:57 should i close the issue? 2021-11-23 09:35:03 nod 2021-11-23 09:35:10 done!
2021-11-23 09:35:18 nice! 2021-11-23 09:35:34 i love gitlab 2021-11-23 09:35:36 only 1000 issues left :) 2021-11-23 09:35:47 sheesh 2021-11-23 09:35:55 can i help somehow? 2021-11-23 09:36:56 that would be appreciated 2021-11-23 09:37:50 we could really use help with our infra 2021-11-23 09:39:26 i need to add alpine to all of aarch64's hypervisors 2021-11-23 09:41:27 clandmeter: can you reboot the ppc64le vms? 2021-11-23 09:41:32 ro-filesystem :/ 2021-11-23 09:41:39 uhoh 2021-11-23 09:44:29 im rebooting them now 2021-11-23 09:44:37 im checking 2021-11-23 09:44:42 but seems ncopa is as well 2021-11-23 09:44:43 gbr3-vm1.alpinelinux.org is rebooting 2021-11-23 09:45:07 i'm rebooting vm2 too 2021-11-23 09:45:33 and vm3 2021-11-23 09:45:41 they all had readonly filesystems 2021-11-23 09:46:12 I think that if they do not come up we simply skip ppc64le this release 2021-11-23 09:46:17 im kinda tired 2021-11-23 09:46:57 maybe disk was full? 2021-11-23 09:47:15 It happened more often 2021-11-23 09:47:23 some hiccup I suspect 2021-11-23 09:47:41 reboot fixed it 2021-11-23 09:47:49 (can take a while) 2021-11-23 09:47:54 gbr3-vm1 is still down 2021-11-23 09:48:23 they also didn't respond to my email 2021-11-23 09:49:39 interesting, maybe dissect messages.log? 2021-11-23 09:49:49 (or lump it at me if you dont want to? :P) 2021-11-23 09:54:03 could be they are doing maintenance without letting us know...
2021-11-23 09:54:41 maybe we should send them a friendly email explaining what happened 2021-11-23 09:55:11 i sent them a friendly reminder but that doesnt seem to help either :| 2021-11-23 09:59:57 i was thinking a friendly email explaining why there is no alpine 3.15 ppc64le 2021-11-23 10:00:13 im gonna stop build-3-13-ppc64le for now 2021-11-23 10:01:11 but this really sucks tbh 2021-11-23 10:01:27 it will become messy to make the 3.15 release without ppc64le too 2021-11-23 10:01:36 :( 2021-11-23 10:01:57 there are checks that will fail unless we completely remove ppc64le 2021-11-23 10:03:27 sometimes you need to stop being friendly to get things done (i know its sad). 2021-11-23 10:04:57 i guess if the ppc64le builders are not done within an hour or two we'll just go ahead and remove it from 3.15.0 2021-11-23 10:08:04 i guess skipping a release could wake some ppl up, but i guess it puts all the trouble on your plate. 2021-11-23 11:45:56 i think someone on twitter noticed 2021-11-23 11:46:00 and pinged ibm 2021-11-23 11:47:08 oh? 2021-11-23 11:47:26 https://twitter.com/thaJeztah/status/1463087914389327872 2021-11-23 11:47:45 https://twitter.com/powerpcspace 2021-11-23 11:48:58 lets hope that helps 2021-11-23 15:15:59 ibm responded 2021-11-23 15:29:10 yeah. finally 2021-11-23 16:03:55 where are distfiles nowadays? 2021-11-23 16:04:14 I think we need to delete the yt-dlp https://build.alpinelinux.org/buildlogs/build-edge-ppc64le/community/yt-dlp/yt-dlp-2021.11.10.1-r1.log 2021-11-23 16:04:35 I don't think we moved the DNS record yet 2021-11-23 16:05:46 i cannot find it on distfiles.alpinelinux.org/var/cache/distfiles/edge/ 2021-11-23 16:05:55 me neither 2021-11-23 16:07:55 curl -I https://distfiles.alpinelinux.org/distfiles/edge//yt-dlp-2021.11.10.1.tar.gz on the ppc64le builder -> 404 2021-11-23 17:52:11 Load average: 37.52 38.76 37.89 2021-11-23 18:52:47 ikke: that load average was the arm machine? 2021-11-23 18:52:56 do we have any cpu graphs or similar?
2021-11-23 18:54:16 ncopa: We should've, but the agent apparently gets oom killed 2021-11-23 18:55:05 ncopa: There are still stats from the past 2021-11-23 18:55:08 https://zabbix.alpinelinux.org/history.php?action=showgraph&itemids%5B%5D=32214 2021-11-23 18:55:28 https://zabbix.alpinelinux.org/zabbix.php?action=dashboard.view&dashboardid=5 2021-11-23 19:00:26 :D 2021-11-23 19:01:48 armhf peak load 198 2021-11-23 19:02:27 i wonder if we should reduce the parallel build option on each builder 2021-11-23 19:02:37 or reduce number of CPUs for each lxc 2021-11-23 19:03:14 I'm not sure how much it matters, but we were working on binding the qemu vms to a single numa domain 2021-11-23 19:03:26 ok 2021-11-23 19:03:39 how many vcpus does each vm get? 2021-11-23 19:03:43 32 2021-11-23 19:03:51 isnt too bad 2021-11-23 19:04:01 the lxc builders get more I guess? 2021-11-23 19:04:05 80 2021-11-23 19:04:28 (all cores of a single numa domain) 2021-11-23 19:27:11 clandmeter: what was the status on setting the numa domain for the CI hosts? 2021-11-23 19:53:51 from equinix metal ppl: https://twitter.com/w8emv/status/1463231904237379586 2021-11-23 19:54:40 Yeah, we already talked about it before with him 2021-11-23 19:55:38 https://twitter.com/w8emv/status/1376561430951034887 2021-11-23 19:58:35 We adjusted the lxc containers, but not the qemu vms yet 2021-11-23 19:58:56 i guess we need to run qemu under numactl or something? 2021-11-23 19:59:07 yes I would think so as awell 2021-11-23 19:59:09 not sure how to pin the vcpus to a numa domain 2021-11-23 19:59:10 as well 2021-11-23 19:59:38 clandmeter looked at dedicated qemu options, but those are more for emulating numa 2021-11-23 20:19:01 well that's if you want to use numa inside the vm 2021-11-23 20:19:28 if you are pinning to a single node then i think you don't need any options, just numactl qemu 2021-11-23 20:19:59 Yes, that's what I figured 2021-11-23 20:27:38 similar with the affinity patches i mentioned.
it only matters if you want 1 vcpu <-> 1 physcpu. if you trust linux scheduler (and ignore smt) then you don't need affinity 2021-11-23 21:02:50 ikke: 2 of them are already on a domain 2021-11-23 21:02:55 just not armhf 2021-11-23 21:03:04 any reason armhf is not yet? 2021-11-23 21:03:16 you asked me already ;-) 2021-11-23 21:03:19 haha :D 2021-11-23 21:03:26 i was sleepy 2021-11-23 21:03:53 and we need to choose which numa 2021-11-23 21:04:30 anyways the hack is simple 2021-11-23 21:04:32 just not pretty 2021-11-23 21:04:52 first make it work, then make it pretty :) 2021-11-23 21:05:33 you would technically need to make a specific initd for numactl and define the options like qemu 2021-11-23 21:10:34 usa9-dev1 [~]# numastat 2021-11-23 21:10:35 node0 node1 2021-11-23 21:10:37 numa_hit 729905201554 827373452723 2021-11-23 21:10:39 numa_miss 298690782 45423191 2021-11-23 21:10:51 Something to monitor :) 2021-11-23 21:12:54 is it much work to get a graph of that in zabbix? 2021-11-23 21:13:34 what is the use case to monitor this? 2021-11-23 21:14:20 if we bind each container or vm into its own domain, will that make much of a difference? 2021-11-23 21:14:31 We should see fewer misses 2021-11-23 21:14:40 each to a "specific" domain 2021-11-23 21:14:57 uhm 2021-11-23 21:15:00 A miss means that process was running on a different domain than the memory it needs 2021-11-23 21:15:11 misses only function when you explicitly set it i guess? 2021-11-23 21:15:19 No 2021-11-23 21:15:54 If you don't bind a process to a numadomain, it can be scheduled on the other domain 2021-11-23 21:16:12 Then the memory lookup crosses the numa domain, which is expensive 2021-11-23 21:16:14 numa_miss A process wanted to allocate memory from another node, but ended up with memory from this node. 2021-11-23 21:16:47 this tells something else?
2021-11-23 21:17:00 you are talking about numa_foreign 2021-11-23 21:17:01 Ok, about allocations, not access 2021-11-23 21:17:21 clandmeter: a matter of perspective 2021-11-23 21:17:38 planned for this node but ended up in the other one, or planned for the other node, but ended up in this one 2021-11-23 21:18:00 That's how I read it 2021-11-23 21:18:09 https://www.kernel.org/doc/html/latest/admin-guide/numastat.html 2021-11-23 21:18:18 do we have any docs about this issue? 2021-11-23 21:18:48 ed mentioned it, but do we have any references? 2021-11-23 21:19:29 https://www.cc.gatech.edu/~echow/ipcc/hpc-course/HPC-numa.pdf 2021-11-23 21:19:41 https://en.wikipedia.org/wiki/Non-uniform_memory_access 2021-11-23 21:19:41 [WIKIPEDIA] Non-uniform memory access | "Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory..." 2021-11-23 21:20:36 i mean directly related to this CPU architecture 2021-11-23 21:21:30 we are not doing this on x86 2021-11-23 21:21:55 afaik they are single socket 2021-11-23 21:22:13 maybe the interconnection between the arm cpus is non-optimal 2021-11-23 21:22:52 i also read that its actually not needed as the kernel should kind of take care of it.
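For the zabbix-graphing idea floated above, the numastat counters pasted earlier can be reduced to a per-node miss percentage. A sketch using the sample numbers from the log; on the real host you would pipe live `numastat` output into the awk filter instead of the embedded sample:

```shell
# Emit a fake two-node numastat so the sketch runs anywhere.
numastat_sample() {
    cat <<'EOF'
                           node0           node1
numa_hit            729905201554    827373452723
numa_miss              298690782        45423191
EOF
}

# Compute miss percentage per node: 100 * miss / (hit + miss).
miss_pct() {
    awk '
        /^numa_hit/  { hit0 = $2;  hit1 = $3 }
        /^numa_miss/ { miss0 = $2; miss1 = $3 }
        END {
            printf "node0 %.4f%%\n", 100 * miss0 / (hit0 + miss0)
            printf "node1 %.4f%%\n", 100 * miss1 / (hit1 + miss1)
        }'
}

numastat_sample | miss_pct
# prints:
# node0 0.0409%
# node1 0.0055%
```

Even before any pinning, the sample counters show miss rates well under 0.1%, which fits the later remark that the kernel mostly takes care of placement on its own.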
2021-11-23 21:23:55 ah 2021-11-23 21:24:00 ed mentioned the article 2021-11-23 21:24:01 https://www.anandtech.com/show/16315/the-ampere-altra-review/3 2021-11-23 21:24:43 clandmeter: hmm, x86_64 is dual socket 2021-11-23 21:25:13 each architecture does it differently 2021-11-23 21:25:22 right 2021-11-23 21:27:00 we should just get a dremel and make two boards out of it :) 2021-11-23 21:27:11 :D 2021-11-23 21:46:37 https://www.scylladb.com/2021/09/28/hunting-a-numa-performance-bug/ 2021-11-23 22:22:33 ikke: its alive :) 2021-11-24 02:55:44 "we should just get a dremel and make two boards out of it :)" 2021-11-24 02:55:51 i actually spit my coffee out 2021-11-24 07:16:49 good morning! so the brazilian ppc64le.a.o is back! Do we want to move the 3.15 builder to it before or after the release? 2021-11-24 07:28:45 yes i added our keys 2021-11-24 07:28:52 i think we can disable password auth 2021-11-24 07:29:51 does it make sense to move it before release? 2021-11-24 07:30:17 i guess we also need to test reboot it :) 2021-11-24 07:31:39 probably does not make sense to move it before release 2021-11-24 07:31:51 im just gonna have some breakfast and then I'll tag the release 2021-11-24 07:32:06 👊 2021-11-24 08:08:47 yeah, I agree, let's do the release first before moving 2021-11-24 11:49:37 How do we add v3.15 to https://pkgs.alpinelinux.org/packages? 2021-11-24 11:50:19 Need to add it to the config and then run the cron script 2021-11-24 11:50:31 There is an open issue for it to start using releases.json 2021-11-24 11:55:54 can you follow that up ikke?
2021-11-24 11:56:02 ncopa: will do 2021-11-24 11:58:45 ncopa: import is running, will take some time 2021-11-24 12:00:09 I changed commit stats to commit contributions 2021-11-24 12:00:19 https://wwwdev.alpinelinux.org/posts/Alpine-3.15.0-released.html 2021-11-24 14:39:38 looks like build-edge-ppc64le is having issues again 2021-11-24 14:40:16 /dev/sdu2 on / type ext4 (ro,relatime,errors=remount-ro,stripe=8) 2021-11-24 14:40:18 great 2021-11-24 14:40:34 maybe we should move to the new machine 2021-11-24 14:42:39 ikke: anything in messages or dmesg alluding to an IO error? could be a bad disk 2021-11-24 14:42:54 These are VMs 2021-11-24 14:43:43 dmesg is flooded with journald not being able to write to disk 2021-11-24 14:44:04 So nothing useful in there 2021-11-24 15:15:55 i think possibly journalctl might have something in this case 2021-11-24 15:17:07 rejecting I/O to offline device 2021-11-24 15:19:25 they mention having issues themselves 2021-11-24 15:19:32 yes 2021-11-24 15:19:41 so we want to quickly migrate to our old builder again 2021-11-24 15:19:41 maybe some san or whatever they are using 2021-11-24 15:19:48 yeah 2021-11-24 15:20:10 should we first try to reboot the new one? 2021-11-24 15:20:15 and disable password auth 2021-11-24 15:20:18 yes 2021-11-24 15:20:24 and check if we can get kernel modules 2021-11-24 20:54:58 I've setup dmvpn on ppc64le, which now succeeded without issues 2021-11-25 08:24:02 on my ncopa-edge-ppc64le: ERROR: Unable to lock database: Read-only file system 2021-11-25 08:25:16 im rebooting gbr3-vm1.a.o 2021-11-25 08:42:25 wow 2021-11-25 08:43:26 ikke: did you try rebooting the ppc box? 2021-11-25 08:45:10 clandmeter: yes 2021-11-25 08:48:30 ikke: and i guess it came back? :) 2021-11-25 08:48:48 Well.. 2021-11-25 08:50:39 I cannot reach it anymore. I think it's network related 2021-11-25 10:05:03 sigh...
2021-11-25 10:07:45 :( 2021-11-25 10:26:29 v3.15 does not show up yet https://pkgs.alpinelinux.org/packages 2021-11-25 10:28:41 now it does 2021-11-25 10:29:34 Needed to restart the application after everything was imported 2021-11-25 10:39:13 awesome! thanks! 2021-11-25 10:42:54 ikke: do we need to send another reminder to ppc ppl? 2021-11-25 11:02:57 ncopa: what about mentioning supported architectures on https://alpinelinux.org/releases/ ? 2021-11-25 11:03:33 ah 3.15 is not yet mentioned 2021-11-25 11:04:40 3.15 is there 2021-11-25 11:05:08 heh, needed to refresh 2021-11-25 11:06:24 how/where do we list the architectures there without making it a visual mess? 2021-11-25 11:06:42 also what do we do with mips64? 2021-11-25 11:22:03 supported architectures: x86, ...., ... 2021-11-25 11:22:11 doesnt have to be in the table 2021-11-25 11:22:33 so if we exclude mips, it will not be supported anymore 2021-11-25 11:23:30 i would use hint.js, like packages 2021-11-25 11:24:36 packages does not use js :) 2021-11-25 11:24:53 if you are referring to pkgs.a.o 2021-11-25 11:25:07 .. css 2021-11-25 11:25:09 my bad 2021-11-25 11:25:10 xD 2021-11-25 11:25:27 im not good at webux/i stuff, aha 2021-11-25 11:25:35 join the club 2021-11-25 11:25:50 thus my struggles with js atm 2021-11-25 11:25:56 alpine-infra-not-good-at-ui-stuff 2021-11-25 11:26:37 i tried to stay away as much as possible from js when creating pkgs. 2021-11-25 11:26:50 i'm not even doing ui, tbh, this is IaC. DNSControl 2021-11-25 11:26:51 <3 2021-11-25 11:28:27 i miss having my own gitlab+runners for my own infra 2021-11-25 15:05:27 why didn't the x86_64 builder build lua-schema while the rest did? 2021-11-25 16:06:58 sounds like something is broken 2021-11-25 16:08:44 looks like builder is still working on php8?
2021-11-25 16:34:00 for missing builds, improving mqtt msgs and keeping the output as searchable tree could be one solution 2021-11-25 16:34:28 grep'ing the logs would be a pain 2021-11-25 16:34:45 vkrishn: sorry context? 2021-11-25 16:35:05 why didn't the x86_64 builder build lua-schema while the rest did? ^^ 2021-11-25 16:35:09 sorry 2021-11-25 16:39:27 is this channel for other alpine users ? 2021-11-25 16:40:00 It's a public channel 2021-11-25 16:40:24 pheeu.. ! 2021-11-25 16:43:44 was referring to msg.alpinelinux.org, kinda nice thing in al 2021-11-25 16:44:39 The builders ingest messages from msg.a.o, so we cannot just change the structure 2021-11-25 16:45:38 and messages are also announced on #alpine-commits 2021-11-25 16:45:48 yes, true, and adding new topic? 2021-11-25 16:47:42 its not current consumers that need changes but adding a new topic 2021-11-25 16:47:59 And what would this new topic provide? 2021-11-25 16:49:52 currently the info is there, when builder (arch a) says uploading to... 2021-11-25 16:50:11 the info just needs to be collected and stored 2021-11-25 16:51:56 and then simple tree grep to the stored info to trigger an alarm 2021-11-25 16:57:47 is there any survey/info of arch wise installations? 2021-11-25 16:58:09 like which arch is mostly used? x86_64?
2021-11-25 16:59:06 We don't spy on our users 2021-11-25 16:59:13 ;) 2021-11-25 17:01:36 well a mere count() of APKINDEX.tgz pulls could do something 2021-11-25 17:02:09 with ip address saving of course :) 2021-11-25 17:02:20 with NO^ 2021-11-25 17:03:40 we don't track our users 2021-11-25 17:05:19 ok 2021-11-25 21:35:34 mips64 should probably be removed from p.a.o 2021-11-25 21:35:47 or at least for the edge release 2021-11-26 09:28:51 how does alpine deploy gitlab, anyway 2021-11-26 09:28:55 i'm sure i've asked before 2021-11-26 09:29:53 We have our own docker images 2021-11-26 09:30:06 https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab 2021-11-26 09:30:56 ah 2021-11-26 10:31:57 i think its possible to run in a docker container 2021-11-26 10:32:32 What? 2021-11-26 10:32:53 for github actions I mean 2021-11-26 10:32:59 ah 2021-11-26 10:33:17 I assume you refer to CI for python musl 2021-11-26 10:33:52 yeah, i didnt notice i changed channel and assumed the gitlab question was about python CI. sorry 2021-11-26 11:25:59 ooh 2021-11-26 11:26:15 It's back 2021-11-26 11:26:49 yes 2021-11-26 11:26:52 there is an email 2021-11-26 11:27:01 some boot config was missing 2021-11-26 11:27:08 yes, read it 2021-11-26 11:29:35 clandmeter: do you want me to continue, or do you want to handle the setup? 2021-11-26 11:29:57 uhm, to move the builders? 2021-11-26 11:30:22 to setup everything first 2021-11-26 11:30:55 i think you started it, so i guess it makes sense you finish that part. 2021-11-26 11:31:03 i can move the builders when you are done if you like 2021-11-26 11:31:32 ok 2021-11-26 11:31:36 but today is fully booked with social activities....
2021-11-26 11:31:37 I see dmvpn is working now as well 2021-11-26 11:32:21 good 2021-11-26 11:32:33 i need to finish the distfiles setup too 2021-11-26 11:32:37 nod 2021-11-26 11:32:45 I'll finish setting up the builder 2021-11-26 11:33:06 so we can update dns 2021-11-26 11:33:54 i guess we need to remove or update the ssh config on all builders 2021-11-26 11:34:13 currently they scp over static defined address 2021-11-26 11:34:16 ssh was / is used for logs 2021-11-26 11:34:39 Which is served under build.a.o 2021-11-26 11:35:10 do we still want to scp over dmvpn? 2021-11-26 11:35:23 for distfiles you mean? 2021-11-26 11:35:27 yes 2021-11-26 11:35:29 the logs 2021-11-26 11:35:43 I think dmvpn was used so that it could sync directly to a container 2021-11-26 11:35:48 nod 2021-11-26 11:35:57 which now is not needed anymore 2021-11-26 11:36:08 but now we cannot control the endpoint by dns 2021-11-26 11:36:09 Right, so I don't have an issue with syncing it to a public IP 2021-11-26 11:36:10 due to config 2021-11-26 11:36:48 but, how do we deal with build logs being hosted under build.a.o? 2021-11-26 11:37:11 what do you mean? 2021-11-26 11:37:21 https://build.alpinelinux.org/buildlogs/build-edge-x86_64/main/abuild/abuild-3.9.0-r0.log 2021-11-26 11:37:38 which is what pkgs.a.o links to, as well as algitbot 2021-11-26 11:37:54 but thats the same file as distfiles correct? 2021-11-26 11:38:28 build.alpinelinux.org == distfiles.alpinelinux.org? 2021-11-26 11:40:19 yeah, they point to the same container 2021-11-26 11:40:37 we could move build.a.o 2021-11-26 11:40:54 its a simple static page with some files 2021-11-26 11:41:12 Fine with me 2021-11-26 11:52:42 clandmeter: I think we want to setup a similar lxc config as we did with usa9?
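[editor's note: the "statically defined address" problem above could be softened by giving each builder an ssh_config alias, so the real endpoint is controlled by DNS rather than hardcoded in the scp invocation. A sketch; the host alias, user, and key path are assumptions, not the actual builder config.]

```
# ~/.ssh/config on each builder (hypothetical names)
Host distfiles
    HostName distfiles.alpinelinux.org
    User buildlog
    IdentityFile ~/.ssh/build_upload_key
```

The upload then becomes `scp somepkg.tar.gz distfiles:/path/`, and repointing all builders only requires a DNS change instead of editing every builder's script.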
2021-11-26 11:52:51 the structure I mean 2021-11-26 11:52:51 nod 2021-11-26 11:53:10 i just took the default template and added some includes 2021-11-26 11:53:39 dev and builder configs 2021-11-26 11:53:50 will be easier to mass update something 2021-11-26 11:57:08 ikke: we could use the same script after we rsynced everything over 2021-11-26 11:57:31 yea 2021-11-26 11:58:35 Ok, I have a test container running on bra1 now 2021-11-26 12:00:11 clandmeter: I did not copy the config from usa9 over yet, but I think we can think about starting to sync the containers over 2021-11-26 12:01:05 +1 2021-11-26 12:07:57 Ariadne: do you think it's worth it to run gitlab on k8s? 2021-11-26 15:34:57 vim on x86_64 edge is having BAD SIGNATURE issues 2021-11-26 15:43:57 PureTryOut: yes, will look at that in a bit 2021-11-26 15:44:19 Thanks 2021-11-26 16:54:37 ikke: no, i want to run my own gitlab 2021-11-26 16:54:50 Ariadne: ok 2021-11-27 01:10:32 Ariadne: SAME. i have no hardware nice enough to house one :( 2021-11-27 01:10:59 hmm? 2021-11-27 01:11:08 selfhosting gitlab 2021-11-27 01:11:20 i have a moonframe 2021-11-27 01:11:35 wait 2021-11-27 01:11:42 gitlab works on your z1? 2021-11-27 01:11:53 (i think its a z1?) 2021-11-27 01:57:33 z13 2021-11-27 10:13:16 hey Ariadne, would it be possible to create a subproject for https://gitlab.alpinelinux.org/team/alpine-desktop called something like "Issues"? It'd be a good place to discuss desktop related (but non DE-specific) issues that cover more than just a single package 2021-11-27 10:52:35 wouldnt a subgroup called issue be confusing? 2021-11-27 11:28:56 Seems like ruby build deadlocked on 3-12-arm 2021-11-27 11:40:04 PureTryOut: not my department, but yes, some sort of issue tracking would be welcome 2021-11-27 11:41:55 "wouldnt a subgroup called..." <- I'm thinking of doing it like KDE does it, e.g. https://invent.kde.org/teams/plasma-mobile/issues.
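[editor's note: the "default template plus includes" layout for the lxc container configs described above might look like the fragment below. All paths and file names here are assumptions for illustration; lxc.include itself is the standard mechanism.]

```
# /var/lib/lxc/build-edge-x86_64/config (sketch; paths are hypothetical)
lxc.include = /usr/share/lxc/config/alpine.common.conf
lxc.include = /etc/lxc/common.conf    # site-wide settings, easy to mass-update
lxc.include = /etc/lxc/builder.conf   # builder-specific settings
lxc.uts.name = build-edge-x86_64
```

Keeping shared settings in the included files means a change to e.g. /etc/lxc/builder.conf reaches every builder container at its next start, which is the "easier to mass update" point made above.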
It isn't really a repo, and is just used to track issues which apply to more than 1 repo at a time 2021-11-27 11:42:12 oh sorry I shouldn't have done a Matrix reply, force of habit 2021-11-29 10:14:02 ncopa: releases.json still includes mips? 2021-11-29 10:16:18 clandmeter: not for 3.15 2021-11-29 10:16:44 top level does 2021-11-29 10:16:51 not sure what that actually does 2021-11-29 10:16:55 and also for other branches 2021-11-29 10:17:12 not sure we want to keep it if its not supported anymore 2021-11-29 10:17:39 im looking at pkgs.a.o 2021-11-29 10:18:12 i think we need a license from martijn 2021-11-29 10:18:40 PureTryOut: can you ask Martijn if he can define a license? 2021-11-29 10:18:56 I didn't set a license? 2021-11-29 10:18:59 Oops 2021-11-29 10:19:05 ah you are here :) 2021-11-29 10:19:26 martijnbraam: hi 2021-11-29 10:19:31 im looking at your implementation 2021-11-29 10:19:39 adding some missing things 2021-11-29 10:20:38 clandmeter: afaik the top-level is just a combination of all arches 2021-11-29 10:21:19 what i would like to add is to make the importer fetch the releases.json and keep a local copy which the app can use to select repo arch and similar 2021-11-29 10:21:19 clandmeter: implementation of...? pkgs.a.o? 2021-11-29 10:24:18 ikke: ok, but we are not supporting any mips version anymore, so i guess we need to exclude them? 2021-11-29 10:24:40 PureTryOut: moving to the python one 2021-11-29 10:24:50 nice, that'd be awesome! 2021-11-29 10:25:56 PureTryOut: https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10736 2021-11-29 10:27:51 clandmeter: we don't remove branches that are no longer supported 2021-11-29 10:28:31 dont remove the branch 2021-11-29 10:28:33 remove the arch 2021-11-29 10:28:50 if we dont have the hardware, we cannot support it. 2021-11-29 10:29:10 listing it in releases.json would technically mean we support it 2021-11-29 10:29:27 I mean, if we don't remove the branches, why should we remove the arches?
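[editor's note: the importer idea above — fetch releases.json, keep a local copy, and use it to pick branches and arches — could be sketched as below. The JSON field names and the EOL arch list are assumptions based on this discussion, not a confirmed schema.]

```python
import json

# Inline stand-in for a cached copy of releases.json
# (field names are assumptions for illustration).
CACHED = json.loads("""
{
  "release_branches": [
    {"release_branch": "v3.15", "arches": ["x86_64", "aarch64", "mips64"]},
    {"release_branch": "edge",  "arches": ["x86_64", "aarch64", "mips64"]}
  ]
}
""")

# Arches we no longer build for, per the discussion above.
EOL_ARCHES = {"mips64"}

def branch_arches(data, branch):
    """Arches the importer should handle for a branch, minus EOL arches."""
    for b in data["release_branches"]:
        if b["release_branch"] == branch:
            return [a for a in b["arches"] if a not in EOL_ARCHES]
    return []

print(branch_arches(CACHED, "edge"))
```

Filtering on the importer side sidesteps the problem raised above: releases.json itself has no way to mark an arch as EOL.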
2021-11-29 10:29:53 im not following 2021-11-29 10:29:54 What we do not have is a way to indicate that an arch is EOL 2021-11-29 10:30:42 if you remove the arch from the repo, why keep it in releases.json? 2021-11-29 10:31:03 like you said there is no interface to make it eol 2021-11-29 10:42:41 I've added the license 2021-11-29 10:49:03 thanks 2021-11-29 11:18:51 clandmeter: once we remove mips from edge, we can remove it from that list for edge 2021-11-29 11:23:21 ikke: why is it not included for 3.14? 2021-11-29 11:24:46 apparently forgotten in the commit: https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite/-/commit/ba70a6dd9be6f9ad0be60d76c202a779141c9a60 2021-11-29 11:25:19 Or, I do recall there being issues with the builder already at the time 2021-11-29 11:25:36 so we skipped mips64 at the release, but it came back later 2021-11-29 11:35:43 clandmeter: btw: https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite/-/merge_requests/34 2021-11-30 08:04:32 morning 2021-11-30 08:17:17 o/ 2021-11-30 09:24:48 how goes? 2021-11-30 09:37:22 Busy with work 2021-11-30 11:43:51 hi 2021-11-30 11:43:55 hey 2021-11-30 11:44:07 I have been sick for a few days. still have fever 2021-11-30 11:44:18 anything urgent you need me to handle? 2021-11-30 11:45:10 No, not atm 2021-11-30 11:45:17 Hope you feel well soon! 2021-11-30 11:45:50 thanks 2021-11-30 11:50:19 ncopa: get well soon 2021-11-30 11:59:52 clandmeter: I've setup awall as well on bra1 now 2021-11-30 12:01:06 Should I disable permitrootlogin now? 2021-11-30 12:01:13 (default to prohibit-password) 2021-11-30 12:03:37 ncopa: take more red wine and you will be ok soon :) 2021-11-30 12:04:01 anyway, I hope you feel fine asap 2021-11-30 12:04:22 yes, get well soon!
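[editor's note: the PermitRootLogin change discussed above corresponds to this sshd setting; prohibit-password is also the upstream OpenSSH default, and it allows root login with keys while refusing root password logins.]

```
# /etc/ssh/sshd_config
# Root may log in with a key; password authentication for root is refused.
PermitRootLogin prohibit-password
```

After editing, the sshd service needs a reload (e.g. `rc-service sshd reload` on Alpine) for the setting to take effect.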
2021-11-30 12:34:56 ikke: yes pls 2021-11-30 16:30:17 clandmeter: done 2021-11-30 16:41:55 ncopa: get well soon 2021-11-30 16:42:12 note-to-all: try switching to simple cooked foods, prefer simple fortified foods during these times 2021-11-30 16:53:19 vkrishn: you forgot wine and brandy :) 2021-11-30 16:53:38 vodka also 2021-11-30 17:00:47 nooo, I read alcohol depletes or interferes with vitamins/mineral absorption, thereby reducing your immunity 2021-11-30 17:01:50 Alcohol also kills bacteria / viruses :) 2021-11-30 17:02:10 if you need to sanitize things around you, try spraying 'sodium hypochlorite' in your room, keep area ventilated after spray 2021-11-30 17:02:42 weak solution of it, ask local chemist about it 2021-11-30 17:03:31 even water extract of calcium hypochlorite would work 2021-11-30 17:04:54 alcohol is in a lot of medicine 2021-11-30 17:05:02 keep your body full with daily requirement of vitamins/minerals, read on Wikipedia about minimum requirement, its published by almost all developed countries 2021-11-30 17:05:16 vkrishn: maybe take this to #alpine-offtopic 2021-11-30 17:05:33 yes, only if taken in prescribed limits 2021-11-30 17:05:40 oookii 2021-11-30 17:05:40 don't trust everything you read, especially nonsense 2021-11-30 17:08:28 clandmeter: I'm syncing build-3-10-ppc64le to the new builders now 2021-11-30 17:21:51 mps: :) 2021-11-30 17:22:06 clandmeter: okidoki 2021-11-30 17:22:13 huh 2021-11-30 17:22:31 ikke: ^ 2021-11-30 17:22:41 :) 2021-11-30 17:22:46 talking to myself again 2021-11-30 17:22:52 Do you have that more often? :P 2021-11-30 17:23:35 I try not to 2021-11-30 17:54:06 ok, build-3-10-ppc64le is working now on the new builder 2021-11-30 17:55:03 how many releases do we want on new pkgs? 2021-11-30 17:55:11 just the supported ones?
2021-11-30 17:57:38 Good question 2021-11-30 17:57:57 i think we can just start with supported 2021-11-30 17:58:15 at least we dont have to wait 10 days to let it finish syncing :) 2021-11-30 18:02:48 nod 2021-11-30 18:08:29 ikke: its running 2021-11-30 18:08:38 Cool 2021-11-30 18:08:50 i guess we could have reused the current dbs 2021-11-30 18:09:03 but im not 100% sure its the same 2021-11-30 18:09:10 im too lazy to check 2021-11-30 18:09:10 Ok 2021-11-30 18:09:27 and its good to test run it :D 2021-11-30 22:01:09 hmm, that powerpc machine is not bad to be honest (Just looking at specs) 2021-11-30 22:01:26 72 cores, 128G mem 2021-11-30 22:52:56 the arm one is also not bad, but we still manage to make it crawl :)
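[editor's note: the "just start with supported" decision for the pkgs.a.o import could be expressed as a simple EOL-date filter. The branch names and EOL dates below are made up for illustration and do not reflect the real support schedule.]

```python
from datetime import date

# Hypothetical (branch, eol_date) pairs; real EOL dates differ.
BRANCHES = [
    ("v3.10", date(2021, 5, 1)),
    ("v3.14", date(2023, 5, 1)),
    ("v3.15", date(2023, 11, 1)),
]

def supported(branches, today):
    """Branches whose EOL date has not yet passed."""
    return [name for name, eol in branches if eol >= today]

print(supported(BRANCHES, date(2021, 11, 30)))
```

Seeding the importer with only this list keeps the initial sync small; older branches can still be backfilled later without a schema change.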