2021-01-01 15:55:13 i have deleted release candidates from dl-master. there are 128G free now 2021-01-01 15:57:23 👍 2021-01-03 17:19:11 ikke: Do you happen to know if I can generate custom tarballs and set those as release in Gitlab CI? 2021-01-03 17:19:33 For apk-polkit-rs I need to make custom tarballs for each tag to include the cargo dependencies so it can be built w/o network 2021-01-03 17:19:49 Would be neat if gitlab-ci automatically generated that tarball on a tag 2021-01-03 17:20:03 Cogitri: afaik we need a newer version of gitlab 2021-01-03 17:20:14 I found https://docs.gitlab.com/ee/ci/yaml/README.html#release but AFAICS it can only set description? 2021-01-03 17:20:59 gitlab 13.5 2021-01-03 17:21:08 https://about.gitlab.com/releases/2020/10/22/gitlab-13-5-released/#attach-binary-assets-to-releases 2021-01-03 17:23:18 Ah thanks 2021-01-03 17:24:20 Not clear from the documentation how it works, though 2021-01-03 17:25:19 https://gitlab.com/gitlab-org/release-cli/-/tree/master/docs/examples/release-assets-as-generic-package/ I guess 2021-01-03 17:26:20 Yes, reading that 2021-01-03 17:33:19 I'm not sure where to store these assets 2021-01-03 17:33:42 I guess you'd need to use artifacts that are stored forever 2021-01-03 18:50:40 Seems like there's some package registry thingie I can upload to with Gitlab 2021-01-03 18:51:06 Ohhhh, seems like I can even get download paths without that sha512 sum in it so it's as easy as changing $pkgver in APKBUILDs 2021-01-03 18:57:24 Tested it on GNOME's gitlab since they have a more recent Gitlab version and uploading seems to work but the download on https://gitlab.gnome.org/Cogitri/test-gtk4/-/releases/0.12.0 404's, great :D 2021-01-03 22:01:31 ikke: fwiw works now, seems like the admins had to toggle some thingie to enable the package registry. 
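[editor's note: the vendoring step Cogitri describes can be sketched roughly as below. `cargo vendor` is the real cargo subcommand that downloads all crate sources into `vendor/`; the package name, version, and the `.cargo/config` contents written here are illustrative stand-ins so the sketch runs without a real Rust checkout.]

```shell
# Sketch: build a self-contained release tarball that includes cargo
# dependencies, so the APKBUILD can build without network access.
pkg=apk-polkit-rs ver=0.12.0          # hypothetical name/version
src=$(mktemp -d)/$pkg-$ver
mkdir -p "$src"

# In a real checkout this step would be:
#   (cd "$src" && cargo vendor)
# which fills vendor/ and prints a [source] snippet for .cargo/config.
# We fake its output here so the sketch is self-contained:
mkdir -p "$src/vendor" "$src/.cargo"
printf '[source.crates-io]\nreplace-with = "vendored-sources"\n\n[source.vendored-sources]\ndirectory = "vendor"\n' \
  > "$src/.cargo/config"

# Pack everything, vendor/ and .cargo/config included, into the tag tarball.
tar -C "$(dirname "$src")" -czf "$pkg-$ver.tar.gz" "$pkg-$ver"
tar -tzf "$pkg-$ver.tar.gz" | grep .cargo/config
```

The resulting tarball is what the CI job would then attach to the tag via the package registry.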
Here's the registry: https://gitlab.gnome.org/Cogitri/test-gtk4/-/packages and here's the .gitlab-ci.yml: https://gitlab.gnome.org/Cogitri/test-gtk4/-/blob/master/.gitlab-ci.yml in case you're curious :) 2021-01-03 22:05:16 ok, I think we would need to do that as well 2021-01-03 22:06:08 Don't they have support for the `release:` section yet? 2021-01-03 23:07:04 https://gitlab.com/groups/gitlab-org/-/epics/2510 doesn't seem like they do 2021-01-03 23:07:53 Huh, seems like I can't download from a package registry as unauthorised user? 2021-01-03 23:10:42 https://docs.gitlab.com/ee/user/packages/generic_packages/#download-package-file : Prerequisites: You need to authenticate with the API. 2021-01-03 23:10:43 Yikes 2021-01-03 23:12:05 https://gitlab.com/gitlab-org/gitlab/-/issues/271534 2021-01-04 00:04:50 https://gitlab.gnome.org/Cogitri/gnome-health/-/commit/03073ec2bf7a2f23d3a62021f9586d264ecc1952 with this "neat" workaround it works, hooray 2021-01-04 05:29:29 it is an ugly hack indeed 2021-01-04 07:38:38 looks like we now have 3.0T total disk space on dl-master. 2.1T available 2021-01-04 08:23:32 Oh, nice, the storage has increased? 2021-01-04 08:56:58 apparently it has 2021-01-04 09:38:41 the emails im seeing are weird 2021-01-04 09:38:54 ncopa: you also receive them, but i guess you dont notice them. 2021-01-04 09:39:05 What e-mails are you receiving? 2021-01-04 09:39:14 freespace ones 2021-01-04 09:39:57 Disk size: 2021-01-04 09:39:57 1500000.0 GiB 2021-01-04 09:40:07 what is that? 
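[editor's note: the upload side of the linked `.gitlab-ci.yml` boils down to one authenticated PUT against GitLab's documented generic packages API. `CI_JOB_TOKEN`, `CI_API_V4_URL`, `CI_PROJECT_ID`, and `CI_COMMIT_TAG` are real predefined GitLab CI variables; `$PKG` and the stage/image are illustrative. A hedged sketch:]

```yaml
# Sketch of a tag-only job that uploads the vendored tarball to the
# project's generic package registry (names/paths are illustrative).
upload-release-tarball:
  stage: release
  image: alpine:latest
  rules:
    - if: $CI_COMMIT_TAG
  variables:
    PKG: apk-polkit-rs            # hypothetical package name
  script:
    - apk add --no-cache curl
    - >
      curl --fail --header "JOB-TOKEN: $CI_JOB_TOKEN"
      --upload-file "$PKG-$CI_COMMIT_TAG.tar.gz"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/packages/generic/$PKG/$CI_COMMIT_TAG/$PKG-$CI_COMMIT_TAG.tar.gz"
```

The download URL then follows the same `packages/generic/<name>/<version>/<file>` pattern, which is why only `$pkgver` needs changing in the APKBUILD.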
2021-01-04 09:40:25 i think they gave us more diskspace 2021-01-04 09:40:30 we now have 3TB 2021-01-04 09:40:46 i suspect its thanks to jirutka 2021-01-04 09:40:48 Free space: 2021-01-04 09:40:48 2131.01 GiB 2021-01-04 09:40:54 thats whats in the email 2021-01-04 09:40:56 from 11h ago 2021-01-04 09:41:19 subject: VPS #8185: 0.14 % of disk space left 2021-01-04 09:41:46 i think someone increased the storage 2021-01-04 09:42:09 i think jirutka also receives them, could be he asked for more space. 2021-01-04 09:43:42 i suspect that is what happened 2021-01-04 09:43:50 im checking the panel 2021-01-04 09:43:54 but it does not show more space 2021-01-04 09:44:21 i logged in to the machine and it shows 3TB disk space 2021-01-04 09:44:34 as a single disk? 2021-01-04 09:44:39 yes 2021-01-04 09:45:09 i believe you, i just dont understand this panel :) 2021-01-04 09:45:23 looks like the os gets 920GB 2021-01-04 09:45:33 and there is another 2.1 dataset 2021-01-04 09:45:44 https://tpaste.us/W74M 2021-01-04 09:46:01 simfs? 2021-01-04 09:46:19 it could also be that they temporarily gave us more diskspace so we dont get locked out, i dunno 2021-01-04 09:46:30 maybe we should ask jirutka if he knows anything about it 2021-01-04 09:46:57 if i have some free time i can prepare uk.a.o 2021-01-04 09:47:03 the other servers are ready 2021-01-04 09:47:35 https://kb.vpsfree.org/manuals/vps/vpsadminos 2021-01-04 09:47:46 The VPS is running on OpenVZ Legacy. This virtualization platform is old and deprecated. We recommend an upgrade to vpsAdminOS, our new virtualization platform. Please see the knowledge base English / Czech for more information. 2021-01-05 12:10:20 can anyone check what this is about? https://github.com/alpinelinux/docker-alpine/issues/122 2021-01-05 12:12:29 I'm already working with tynor88 here https://gitlab.alpinelinux.org/alpine/aports/-/issues/12268 2021-01-05 12:25:02 ncopa: guy just using some weird unofficial mirror? 
"fetch https://alpine.pkgs.org/edge/alpine-main-x86_64/x86_64/APKINDEX.tar.gz" 2021-01-05 12:26:22 and APKINDEX.tar.gz "Error 404: Not Found" 2021-01-05 12:54:03 yeah, i wonder what that is 2021-01-05 13:31:57 Any special reason for https://github.com/alpinelinux/docker-alpine not being hosted on gl.a.o? 2021-01-05 13:57:26 nobody cared enough to move it 2021-01-05 14:42:55 k :( Would be nice to have it under the same roof so issues could be delegated and moved to the relevant project. 2021-01-05 14:42:55 A "soft migration" could be done by disabling GH issues and add a couple of lines to the readme. 2021-01-06 14:57:36 build-3-13-mips64 is missing on build.a.o 2021-01-06 14:57:56 yea 2021-01-06 14:58:02 the host seems to be awol 2021-01-06 16:10:04 Ariadne: seems like mips64 builder is gone again 2021-01-06 16:15:59 ok 2021-01-06 16:16:05 i'll check on it 2021-01-06 17:25:21 I'd like to tag rc3 2021-01-06 23:50:55 ikke: if you are around could I get you to check where the aarch64 ci runner has wandered of to? 2021-01-07 05:33:48 TBK[m]: seem to be all accounted for 2021-01-07 05:38:52 ACTION must learn to read 2021-01-07 05:39:06 booted the ci runner vm again 2021-01-07 14:35:44 Ariadne: what happened to the mips64 builder? 2021-01-07 14:35:57 should we drop mips64? 2021-01-07 14:36:01 or should we drop 3.13 release? 2021-01-07 15:07:14 ugh. i should fix the setup-disk to not install syslinux on non x86* 2021-01-07 18:16:44 tried following this install buide for raspi, https://wiki.alpinelinux.org/wiki/Raspberry_Pi_4_-_Persistent_system_acting_as_a_NAS_and_Time_Machine ... 
getting "filesystems couldnt be fixed" after last rebooot 2021-01-07 18:17:46 doesnt mount the vfat during boot, to /media/mmcblk0p1 which has "boot" whic /boot is symlinked to 2021-01-07 18:18:15 also after every reboot i have to re-run setup-alpine to get network and hostname etc going 2021-01-07 18:21:35 blawiz: we have #alpine-linux channel for user help 2021-01-07 18:26:31 blawiz: do you install in sys mode or 'run from ram', i.e. default 2021-01-07 18:31:56 not sure tbh 2021-01-07 18:35:14 blawiz: but please join #alpine-linux channel and ask there 2021-01-07 18:35:40 okok 2021-01-07 18:36:23 weird the channel is not listed in https://kiwiirc.com/search?q=alpine&network=* 2021-01-08 05:40:15 ncopa: its internet connection is down, we are working to fix it asap 2021-01-08 05:40:25 i think the modem needs to be rebooted again 2021-01-08 05:40:53 i'm planning to get another builder and put it in my colo to replace the current one :) 2021-01-08 08:10:56 would be good 2021-01-08 12:47:19 connectivity should be restored 2021-01-08 12:48:11 ack 2021-01-08 12:48:20 we kinda got banned 2021-01-08 12:48:22 lol 2021-01-08 12:48:36 oof 2021-01-08 12:48:46 traffic? 2021-01-08 12:49:03 yeah tripped some alarm 2021-01-08 14:49:49 ugh 2021-01-09 01:53:56 No space left on device. -> build-edge-aarch64: failed to build ceph: https://build.alpinelinux.org/buildlogs/build-edge-aarch64/community/ceph/ceph-15.2.8-r1.log 2021-01-09 08:06:55 yes, ARM builders seem out of disk 2021-01-09 08:18:46 ok, the cron that cleans up docker stuff is missing there 2021-01-09 08:19:04 oh 2021-01-09 08:19:08 not docker :P 2021-01-09 08:34:48 builder is '/dev/mapper/vg0-lv_root 455795016 432572156 0 100% /' 2021-01-09 08:36:30 dedupping reclaimed 16G 2021-01-09 08:42:27 NOW you mention it? 
:P 2021-01-09 13:17:36 Seems like armv7 is full again, Rust is too huge :D 2021-01-09 13:18:05 yes, trying to figure out how to get more space 2021-01-09 13:18:10 or rather, clean things up 2021-01-09 13:19:22 Cleaning up my aarch64 container if that helps 2021-01-09 13:19:41 if it's on usa4, it would help 2021-01-09 13:21:06 I think it is 2021-01-09 13:21:13 Cleaned it up, not a bad idea either way :) 2021-01-09 13:22:17 <500M :P 2021-01-09 13:49:37 huh, I'm using 6-7GB 2021-01-09 13:54:27 now about 3GB 2021-01-09 13:58:32 now 2GB, can't go down more 2021-01-09 13:58:58 mps: thanks 2021-01-09 13:59:18 6G free atm 2021-01-09 13:59:43 There are a lot of older distfiles for edge that I suppose can be removed, but I'd need something to identify them 2021-01-09 14:35:08 hi, can I have my gitlab account updated to allow me to create public repos under my personal namespace 2021-01-09 14:35:18 I want to experiment with some API integrations 2021-01-09 14:47:21 ddevault: this is a global setting, we cannot enable it per user (we can make repos public per user / namespace, but not give them permissions to make public repos) 2021-01-09 14:50:33 okay, thanks ikke 2021-01-09 14:50:38 I think I can get what I need with internal repos 2021-01-09 14:51:03 I'll ask again if not and we can look for something else to do, thanks 2021-01-09 21:11:47 okay, I have completed an experimental version of this concept 2021-01-09 21:12:13 I have some code which can take patches emailed to the mailing list and open gitlab MRs for them, then bidirectionally forward feedback as appropriate 2021-01-09 21:12:38 I will need someone with admin access to aports on gitlab.a.o to help get it up and running 2021-01-09 21:12:50 2021-01-09 21:13:03 I was thinking about something like that 2021-01-09 21:13:08 example here: https://gitlab.alpinelinux.org/ddevault/scdoc/-/merge_requests/6 2021-01-09 21:31:24 ddevault: that sounds amazing 2021-01-09 21:33:11 I'm not interested in spending 5 minutes to do 
what could take 30 seconds because no one wants to review the mailing list anymore 2021-01-09 22:42:59 Oo 2021-01-09 23:59:26 I'm going to take lists.a.o briefly offline to do that side of the deployment work, presuming no one objects 2021-01-10 00:09:31 back online 2021-01-10 00:22:28 summary of the necessary changes: https://paste.sr.ht/~sircmpwn/16d4cfa18614a75f762771a701c8544c8c7c9f7e 2021-01-10 04:34:11 nice work 2021-01-10 12:09:56 clandmeter: any opinion on ^^ 2021-01-10 20:04:13 ikke: about sr.ht? 2021-01-10 22:03:37 is the mips64 box AWOL again? 2021-01-10 22:45:56 no 2021-01-11 08:01:49 ikke: ? 2021-01-11 08:02:00 the mirror? 2021-01-11 08:02:17 clandmeter: the proposal from ddevault 2021-01-11 09:28:38 ddevault: hi, what is the current status about your cooperation with our infra team? 2021-01-11 12:47:46 clandmeter: I will commit to as much work as is necessary to maintain my interests in alpine 2021-01-11 12:48:01 namely keeping the list running and now, it seems, some secondary software to forward patches to gitlab 2021-01-11 12:48:11 so I can keep my packages up to date 2021-01-11 12:53:30 ddevault: understand, will you keep lists.a.o updated? 2021-01-11 12:53:51 sure 2021-01-11 12:54:08 it takes care of itself for the most part, hasn't crashed since I first deployed it 2021-01-11 12:54:30 i think its still running an older version? 2021-01-11 12:54:35 I updated it yesterday 2021-01-11 12:54:41 nod 2021-01-11 12:54:45 good 2021-01-11 12:54:53 thanks for that 2021-01-11 12:55:09 i have a concern about your request 2021-01-11 12:55:42 i guess it needs admin perms to impersonate users? 
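[editor's note: earlier in the log ikke mentioned old edge distfiles that "can be removed, but I'd need something to identify them". One common heuristic is access time: a distfile nothing has read in months is a removal candidate. A hedged sketch, self-contained with a throwaway directory standing in for the real distfiles cache (the 90-day threshold and paths are assumptions, and GNU `touch -a -d` is used to simulate an old access time):]

```shell
# Sketch: flag distfiles not read in 90+ days as cleanup candidates.
DISTFILES=$(mktemp -d)                         # stand-in for /var/cache/distfiles
touch -a -d '2020-01-01' "$DISTFILES/foo-1.0.tar.gz"   # stale: old atime
touch "$DISTFILES/bar-2.0.tar.gz"                      # fresh: just created

# List only files whose last access is older than 90 days; piping this
# into `xargs rm` (after review) would do the actual cleanup.
find "$DISTFILES" -type f -atime +90
```

On builders mounted with `noatime` this heuristic breaks down, so checking mount options first would be prudent.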
2021-01-11 12:55:58 no, that's not how it works 2021-01-11 12:56:13 it would just use one account to submit MRs on behalf of others, and clarifies the original author in the MR description 2021-01-11 12:56:34 it needs admin perms to set up webhooks, and normal perms to use the API to open MRs, leave comments, etc 2021-01-11 12:57:01 ok 2021-01-11 13:01:11 maybe im not getting it, but the MRs will be created against branches on the main aports tree? not like we do now when the user uses their own branch? 2021-01-11 13:01:56 it pushes a branch called list-patch-%d or something similar, to the main aports repository 2021-01-11 13:02:01 and sets the flag which deletes the branch after merge 2021-01-11 13:02:17 We could make a dedicated fork for this, I guess? 2021-01-11 13:02:29 it would complicate things 2021-01-11 13:02:32 is there any reason to? 2021-01-11 13:02:52 gitlab already makes references in the upstream repo for every MR opened 2021-01-11 13:02:54 similarly to github 2021-01-11 13:04:06 https://tpaste.us/vZWO 2021-01-11 13:04:21 refs/heads namespace is clean now 2021-01-11 13:05:15 git ls-remote | wc -c gives 4.5MiB from where I'm standing 2021-01-11 13:05:33 and in any case, it cleans up after itself, the branches are removed after merge 2021-01-11 13:07:34 Would mean all users fetch those temporary branches though 2021-01-11 13:07:42 Would be a lot nicer to have it in a fork imho 2021-01-11 13:15:01 putting it in a fork makes it a lot less useful as a re-usable appliance 2021-01-11 13:15:11 part of the appeal of writing this is that it might be useful to other sr.ht users elsewhere 2021-01-11 13:15:22 and I don't want to maintain a patched codebase for alpine's sake 2021-01-11 13:16:13 I can imagine there are more projects that work with a fork + MR workflow 2021-01-11 13:16:44 and many who also commit to branches and MR from those 2021-01-11 13:16:52 sure 2021-01-11 13:16:52 it's not easy to tell which is being used just by observing MRs 2021-01-11 
13:17:02 enough projects who push directly to master as well 2021-01-11 13:17:11 in any case, using a fork requires asking the user to make some error-prone judgement calls 2021-01-11 13:17:24 who is the user in this case? 2021-01-11 13:17:25 where does the fork go? who owns it? what is it named? The fork name would normally clash with the upstream repo 2021-01-11 13:17:39 many people might not like to have two versions of the repo in their profile, upstream and upstream.for-ml-patches 2021-01-11 13:18:07 the user -> whoever is in charge of the integration 2021-01-11 13:18:30 right now the integration is two steps: choose gitlab repo, choose mailing list 2021-01-11 13:19:06 I imagine what we ask would add just one more parameter? What project to make the MR against? 2021-01-11 13:19:21 btw, what about patches against stable branches? 2021-01-11 13:19:51 the implicit step would be setting up a fork beforehand 2021-01-11 13:20:07 sure, but that can be optional 2021-01-11 13:20:19 patches against stable branches wouldn't work correctly with this approach 2021-01-11 13:20:33 I'll have to come up with something later 2021-01-11 13:20:50 [PATCH v3.12] or something would probably work fine 2021-01-11 13:21:06 anyway, a few temporary branches seems like a reasonable price to me 2021-01-11 13:21:28 despite maxice8's best efforts, several people are still submitting patches via the mailing list, and I would rather not see them get shafted 2021-01-11 13:21:50 7 people in the last month 2021-01-11 13:22:09 yes, I see all these patches 2021-01-11 16:41:16 clandmeter: ncopa suggested trying to upgrade musl in gitlab from edge to see if gitaly-ssh still hangs. Anything against that? Reverting that should be easy enough. 
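[editor's note: the list-to-MR bridge discussed above reduces to two operations against the target repo: push a topic branch, then open an MR via GitLab's documented REST API with `remove_source_branch=true` so the branch is deleted on merge. The endpoint and field names are real GitLab API; the project path, branch name, and token are hypothetical. The sketch is a dry run, printing the commands instead of executing them:]

```shell
# Dry-run sketch of what the ML-to-MR bot needs from GitLab.
API=https://gitlab.alpinelinux.org/api/v4
PROJECT=alpine%2Faports            # URL-encoded project path
BRANCH=mailed-patch-42             # hypothetical topic branch name

run() { echo "$@"; }               # dry run: print instead of executing

# 1. push the patch as a topic branch on the main repo
run git push origin "HEAD:refs/heads/$BRANCH"

# 2. open an MR that cleans up its own branch after merge
run curl -X POST "$API/projects/$PROJECT/merge_requests" \
  -H "PRIVATE-TOKEN: \$BOT_TOKEN" \
  -d "source_branch=$BRANCH" \
  -d "target_branch=master" \
  -d "title=Patch from the mailing list" \
  -d "remove_source_branch=true"
```

Dropping the `run` prefix (and supplying a real token) turns this into the live calls; admin permissions are only needed for the webhook that triggers the bot, as ddevault notes.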
2021-01-11 16:41:52 Np 2021-01-11 16:42:08 I would do it from the current container 2021-01-11 16:42:43 done 2021-01-11 16:42:46 yes 2021-01-11 16:42:54 Easy to drop it 2021-01-11 16:43:15 apk add -X https://dl-cdn.alpinelinux.org/alpine/edge/main -u musl 2021-01-11 16:44:35 No missing symbols? 2021-01-11 16:44:36 Will keep an eye to see if we still get hanging gitaly-ssh processes 2021-01-11 16:45:02 I think for x86_64 it was ABI compatible? 2021-01-11 16:45:37 https://abi-laboratory.pro/index.php?view=timeline&l=musl 2021-01-11 16:45:40 yes 2021-01-11 16:45:55 No removed symbols 2021-01-11 16:47:03 rebasing still works 2021-01-11 17:18:44 upgrading musl should always work, downgrading may not 2021-01-11 17:19:06 ok 2021-01-11 17:22:17 so are we at an impasse with the gitlab bot here 2021-01-11 17:26:57 ncopa: any opinion on this? ^ (summary: the integration that ddevault made requires topic branches to be created directly on alpine/aports) 2021-01-11 17:30:55 ikke: tbh, i don't want to get involved in that. my head is more than busy enough with other things. I trust you and clandmeter on this. 2021-01-11 17:31:24 you know gitlab much better than i do 2021-01-11 17:31:36 This is not a gitlab specific thing 2021-01-11 17:31:52 just whether you mind topic branches created directly on alpine/aports 2021-01-11 17:31:56 oh 2021-01-11 17:32:00 right 2021-01-11 17:32:22 so every time i do git pull i'll get those branches locally i suppose? 2021-01-11 17:32:37 will they get deleted once they are merged? 2021-01-11 17:32:38 Yes, for as long as they are still there 2021-01-11 17:32:40 yes 2021-01-11 17:33:06 you need to set fetch.prune to true to automatically remove remote tracking branches 2021-01-11 17:33:48 will I have to clean up references to the deleted branch locally too? 
2021-01-11 17:34:01 not if you use that config setting 2021-01-11 17:34:15 then after git fetch (or git pull) from alpine/aports, they are pruned 2021-01-11 17:34:59 and fetch.prune is false by default? 2021-01-11 17:35:02 yes 2021-01-11 17:35:18 so i need to reconfig every single place i have checked out 2021-01-11 17:35:31 and on the build servers too i suppose 2021-01-11 17:36:13 yes 2021-01-11 17:36:57 git fetch --prune 2021-01-11 17:37:04 or git pull --prune 2021-01-11 17:37:07 i guess 2021-01-11 17:37:44 i guess i can fix the build script to deal with it 2021-01-11 17:37:56 how will it work for stable branches? 2021-01-11 17:38:20 fetch --prune? 2021-01-11 17:38:54 yeah, instead of setting the config i can change the script to do `git pull --prune` 2021-01-11 17:38:59 yes 2021-01-11 17:39:10 so i dont need to remember to set that every time i set up a new build server 2021-01-11 17:39:15 it's a safe option if you don't care about remote tracking branches for removed branches 2021-01-11 17:39:40 what happens if an MR never gets merged? 2021-01-11 17:39:47 then the feature branch will stay there forever? 2021-01-11 17:39:49 Then we need to clean it up at some point 2021-01-11 17:39:52 yes 2021-01-11 17:40:20 given that we currently have 174 open MRs 2021-01-11 17:40:41 we can expect to have ~150 -> 200 feature branches at all times 2021-01-11 17:40:51 i dont like it tbh 2021-01-11 17:40:54 This is only for patches sent to the ML 2021-01-11 17:41:28 i suppose we can live with that 2021-01-11 17:41:33 test and see how it goes 2021-01-11 17:41:39 how are those features branches named? 2021-01-11 17:41:48 who/what creates the feature branch? 2021-01-11 17:42:09 list-patch- 2021-01-11 17:42:20 mailed-patch-, actually 2021-01-11 17:42:36 and what happens if there already exists a branch with that name? 
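[editor's note: the `fetch.prune` behaviour discussed above can be demonstrated end to end with a throwaway local repo pair; the branch name mirrors the bot's convention but everything else is synthetic:]

```shell
# Demonstrate that fetch.prune removes stale remote-tracking branches.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git -C "$tmp" clone -q "$tmp/origin.git" work 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name tester
git commit -q --allow-empty -m init
git push -q origin HEAD:master
git push -q origin HEAD:refs/heads/mailed-patch-1   # simulate a bot topic branch

git fetch -q origin
git branch -r | grep -q mailed-patch-1              # tracking ref exists locally

git push -q origin --delete mailed-patch-1          # branch merged & removed upstream
git -c fetch.prune=true fetch -q origin             # prune stale tracking refs
git branch -r | grep -q mailed-patch-1 && echo stale || echo pruned
```

Setting it once per clone with `git config fetch.prune true` (or globally with `--global`) makes every subsequent `git fetch`/`git pull` behave like `--prune`, which is what the build-script change amounts to.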
2021-01-11 17:42:43 and they are created by a sr.ht service which has hitherto not been in use for alpine linux, and is designed to facilitate linkage between sr.ht services and third-party software 2021-01-11 17:43:01 it will fail, but this is an unlikely situation 2021-01-11 17:43:13 someone would probably have to arrange for that deliberately with the intention of breaking it 2021-01-11 17:43:16 ddevault: I guess it does not handle rerolls for patch series either? 2021-01-11 17:43:16 i just want to make sure that someone can't send a crafted email and delete 3.12-stable for example 2021-01-11 17:43:27 that's not possible 2021-01-11 17:43:44 it's mailed-patch-, where the integer is filled in from the database and increments upwards from one 2021-01-11 17:44:00 unless we name an important branch mailed-patch- where n is a future ID, it won't be an issue 2021-01-11 17:44:07 ddevault: just a suggestion, maybe use a branch namespace? 2021-01-11 17:44:12 ddevault: that is my concern. that someone intentionally overwrites an already existing branch 2021-01-11 17:44:31 refs/heads/ml/* 2021-01-11 17:44:34 ikke: presently yes, but some plans upstream will determine the relationship between versions in a patch series, and this will make it possible to expand on this feature in the gitlab integration 2021-01-11 17:44:34 or whatever 2021-01-11 17:44:44 as for namespaces, mailed-patch- is a namespace, it just doesn't use slashes 2021-01-11 17:44:59 ddevault: right, so you cannot use gits built-in support for namespaces 2021-01-11 17:45:15 git has built-in support for namespaces? 
2021-01-11 17:45:54 / is a native namespace divider, but there is also some (rarely used) namespace support 2021-01-11 17:46:08 I don't think that's true 2021-01-11 17:46:17 I have never heard of such a thing and a cursory review of some relevant man pages turns up nothing 2021-01-11 17:46:24 namespaces in branch names are by convention only 2021-01-11 17:47:01 slashes seem more brittle to me, because git actually does use them for refs/heads/etc 2021-01-11 17:47:01 in either case, right now i want to get 3.13 out. anything that slows that down or delays the release is unpopular 2021-01-11 17:47:15 I was told that from 3.13, patches sent to the mailing list will not be reviewed 2021-01-11 17:47:21 so I consider this a blocker unless those plans change 2021-01-11 17:47:36 I have not heard such a thing 2021-01-11 17:48:05 maxice8 said as much on the mailing list 2021-01-11 17:48:10 and has been telling everyone who sends in a patch to go to gitlab instead 2021-01-11 17:48:22 and if ikke and clandmeter want me to make a decision, the answer is going to be: "is it needed for the 3.13 release? will it help us get the 3.13 out? can it wait til after 3.13?" 2021-01-11 17:48:41 it can wait if we can agree not to ignore the list after 3.13 comes out 2021-01-11 17:48:56 ddevault: well, he said he won't be reviewing it anymore 2021-01-11 17:49:02 is anyone else? 2021-01-11 17:49:04 right now everything that does not help getting 3.13 out gets a "no" from me :) 2021-01-11 17:49:14 ddevault: I am, but I don't have a lot of bandwidth 2021-01-11 17:49:31 abandonment through reviewer attrition is abandonment all the same 2021-01-11 17:49:38 sure 2021-01-11 17:49:52 can we not make some kind of compromise here, commit to reviewing from the list until the release is behind us and we can get this tool in place? 
2021-01-11 17:50:12 ddevault: We cannot make commitments on behalf of others 2021-01-11 17:50:21 sure, but can anyone commit on behalf of themselves 2021-01-11 17:50:30 I'll try to still apply patches on a best-effort basis 2021-01-11 17:50:56 I will review until 3.13.0 is out, if you want I can also keep reviewing until the tool is implemented. 2021-01-11 17:51:05 that would be agreeable, thank you 2021-01-11 17:51:20 in that case, we can defer the tool discussion until post-release 2021-01-11 17:57:18 ddevault: i currently ignore any MR or patch from mailinglist that does not help 3.13 get out 2021-01-11 17:57:45 yeah, no complaints here 2021-01-11 17:57:51 just concerned about the post-release 2021-01-12 12:32:36 Not that important, but is it just me or is the DKIM signature missing for every message on the ~devel list? 2021-01-12 12:32:52 Just curious if that's an oddity of the ML or if I didn't set up my mail server correctly 2021-01-12 12:35:12 Cogitri: I see it in your last mail 2021-01-12 12:37:11 Hm, clicking on "Details" in my message in https://lists.alpinelinux.org/~alpine/devel/%3CCAGP1gyPexhACLxkTfmqVYX%2BDg9awd0LqwjnnSvHaTWc%3Dvp1XUg%40mail.gmail.com%3E says that the DKIM signature is missing, but seems like it works for other things so I guess that's just the ML 2021-01-12 12:37:15 Thanks mps :) 2021-01-12 12:39:28 hmm, yes. interesting 2021-01-12 14:41:42 ikke: any difference now with new musl? 2021-01-12 14:42:28 7 gitaly-ssh processes hanging around 2021-01-12 14:44:40 clandmeter: s390x builder suffers from long dns delays (causing timeout issues) 2021-01-12 14:46:22 clandmeter: queries to unknown hosts seem to consistently take 5 seconds 2021-01-12 15:10:30 ikke: this is all related? 
2021-01-12 15:10:44 2 separate issues 2021-01-12 15:10:50 nod 2021-01-12 15:11:05 But builds are failing on s390x atm due to it 2021-01-12 15:11:16 so both new gitlab and musl do not solve the hanging processes 2021-01-12 15:11:28 lookup proxy.golang.org on 172.16.10.1:53: read udp 172.16.10.4:53948->172.16.10.1:53: i/o timeout 2021-01-12 15:11:31 clandmeter: seems like it 2021-01-12 15:51:39 proxy.golang.org got address 172.217.16.17, maybe somebody filter it out as bogons ip (by setting up wrong netmask or something) :D 2021-01-12 15:51:57 MY-R: it's not a specific hostname 2021-01-12 15:52:12 one time it's proxy.golang.org, next time it's a different name 2021-01-12 15:52:29 oh, ok then ignore me :) 2021-01-12 22:40:33 usa4 disk is nearly full 2021-01-13 14:32:52 still dns issues on the s390x builder :-/ 2021-01-13 16:10:54 no dns cache? :< 2021-01-13 16:11:38 dnsmasq does caching 2021-01-13 16:12:02 But not that long apparently 2021-01-13 16:12:07 but it doesnt serve expired caches? 2021-01-13 16:12:39 maybe we should set up unbound for the build servers 2021-01-13 16:12:39 unbound and kresd/knot cache resolver got that option 2021-01-13 16:13:03 ncopa: But I think that still does not solve the underlying issue 2021-01-13 16:13:23 at least what I saw, for some reason dns queries take 5 seconds 2021-01-13 16:13:28 packetloss? 2021-01-13 16:13:49 Looks like it, but hard to verify 2021-01-13 16:14:16 i worked around it last time by doing ping 2021-01-13 16:14:22 so the name got cached 2021-01-13 16:14:23 I did try a packet dump. 
but for some reason I never saw the response from the external dns server, even though the query went through 2021-01-13 16:14:34 even for me unbound without enabled expired cache got some long delays with resolving randomly some names which was rly annoying because I have stable enough network 2021-01-13 16:14:49 might be that the response packet got lost 2021-01-13 16:15:10 and knot cache by default dump caches on disk and can serve expired too 2021-01-13 16:16:06 hmm, might it be that dns uses a response from cache when there is no response? (like MY-R was alluding to I guess) 2021-01-13 16:17:16 it still will try resolve name after timeout and if got some learning function with predicting and refresh caches 2021-01-13 16:17:25 it got* 2021-01-13 16:17:59 but ye, better got response than dont get it at all 2021-01-13 16:18:04 right, but that might explain why I still got a result on the CLI, even though I never saw a packet coming back 2021-01-13 16:18:51 use a few dns hosts like 1.1.1.1 and 8.8.8.8 and 9.9.9.9 and it will pick up the fastest one 2021-01-13 16:19:02 I tried 2021-01-13 16:19:07 without resolve :P 2021-01-13 16:20:00 or use recursive mode so wont care about any google/cloudflare stuff which tend to block some requests from time to time or got some rate limits 2021-01-13 16:20:37 but ye that with unbound or knot cache at least 2021-01-13 16:25:22 ikke: did you put that option in dnsmasq.conf? all-servers 2021-01-13 16:25:42 without it it will resolve one by one, with that option it will try all servers at once 2021-01-13 16:31:00 MY-R: I haven't verified, will check 2021-01-13 16:31:50 because ye is weird that it couldnt resolve anything from multiple servers, so 'all-servers' would help for sure 2021-01-13 16:40:00 ikke: ahh didnt see your message, so you always got response but with some big delay? 
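[editor's note: the `all-servers` fix MY-R suggests is a one-line dnsmasq option (documented in dnsmasq's man page); without it dnsmasq queries upstreams one at a time, so a lost UDP packet costs a full timeout before the next server is tried. The upstream addresses below are the public resolvers mentioned in the chat, as an illustration:]

```ini
# /etc/dnsmasq.conf — send each query to every upstream in parallel and
# take the first answer, instead of walking the list with a multi-second
# timeout per unresponsive server
all-servers
server=1.1.1.1
server=8.8.8.8
server=9.9.9.9
```

This trades a little extra upstream traffic for resilience against exactly the kind of intermittent UDP loss the s390x builder was seeing.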
2021-01-13 16:43:08 MY-R: for uncached results, it seemed to always take 5 seconds, which looks suspicious to me 2021-01-13 16:43:19 but it can be a default timeout in dnsmasq 2021-01-13 16:47:11 ikke: so ye all-servers is first option to try 2021-01-13 17:09:44 MY-R: I guess all-servers helps 2021-01-13 17:10:08 shotgun approach :P 2021-01-13 17:17:07 ikke: ye with single forward server everything can happen, it is UDP :P 2021-01-13 17:17:38 can lost somewhere between, then dnsmasq was waiting 5 seconds to time out and pickup another, and now not waiting at all 2021-01-13 17:17:46 sure, normally it's not an issue, but apparently our s390x builder does not have a reliable connection 2021-01-13 17:19:57 since few years I have to use at least two forwarders or just recursive because those weird random delays 2021-01-13 17:20:27 so wouldnt call it shotgun but necessary evil :P 2021-01-13 17:51:55 MY-R: if you remember back a few years ago when Alpine first started to be used as a Docker base image by many people, there were complaints about Alpine's then behaviour being "broken" as people were relying on DNS lookups to go sequentially through the list of servers (i.e. 1st would point to Consul, 2nd to Internet) in /etc/resolv.conf whereas Alpine did lookups in parallel (as per the DNS RFCs) if the upstream server responded 2021-01-13 17:51:55 quicker than Consul then it would cause lookup failures for .consul domain 2021-01-13 18:00:27 minimal: ye I think I read that but for me those people were broken ;) many tools already ignoring order in resolv.conf file 2021-01-13 18:02:08 yup, that's why I pointed out Alpine was following RFCs 2021-01-14 07:52:48 Built new Docker image for aports-qa-bot with autolabeling support and the new mentors group 2021-01-14 07:59:19 Cogitri: alright 2021-01-14 07:59:32 so only making sure it uses the latest image, right? 
2021-01-14 08:00:29 Yup, pulling new image and restarting should do 2021-01-14 08:01:03 ✔️ 2021-01-14 08:04:18 Thanks :) 2021-01-14 10:04:17 Pushed another version which doesn't do an API call per commit to add/remove commits, could you pull and restart again? 😅 2021-01-14 10:05:34 yes 2021-01-14 10:05:48 done 2021-01-14 10:08:54 Thanks 2021-01-14 10:27:48 the https://dl-master.alpinelinux.org/ has a cert failure due to hostname is cz.a.o. can we update the cert to allow both dl-master and cz.a.o? 2021-01-14 10:29:58 it's also referred to by master.a.o 2021-01-14 10:30:12 I don't have access, so I cannot arrange it 2021-01-14 10:33:27 Not sure if we want our regular wildcard cert there? 2021-01-14 11:51:47 what is the reason to properly support it? 2021-01-14 11:51:54 we do not advertise it do we? 2021-01-14 11:54:49 dl-master? 2021-01-14 12:02:49 yes 2021-01-14 15:04:12 the alpine-mksite downloads things from there 2021-01-14 15:04:17 currently using http 2021-01-14 15:04:22 but it would be nice to use https 2021-01-14 15:43:41 reason why we want https for dl-master: https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite/-/issues/3 2021-01-15 10:40:59 ikke: pkgs is not smart enough to handle updates like that 2021-01-15 10:41:29 like what? 2021-01-15 10:41:46 like what you did 2021-01-15 10:42:01 i guess you added 3.13? 
2021-01-15 10:42:38 yes, I did, indeed 2021-01-15 10:42:47 I tried to recall what I did last time 2021-01-15 10:42:54 (And I ran the import command 2021-01-15 10:42:59 which seems to import everything) 2021-01-15 10:43:28 it runs the update every 15 min 2021-01-15 10:43:34 yes 2021-01-15 10:43:34 which it does now a lot of times 2021-01-15 10:43:46 grinding to a halt i guess 2021-01-15 10:44:33 https://tpaste.us/zx6a 2021-01-15 10:53:16 ikke: i turned off the update container and ran the update in tmux 2021-01-15 10:55:16 need to add some flock logic to the update script 2021-01-15 10:58:29 ikke: looks like it finished 2021-01-15 10:58:37 seems most of 3.13 was already added 2021-01-15 20:52:39 is there a way that I can trigger the website to be rebuilt and deployed? 2021-01-15 20:56:15 good question 2021-01-15 20:59:28 It's updated either by commits from the mksite repo, or from aports 2021-01-15 21:07:56 it used to be commits to the private mksite 2021-01-15 21:08:42 it's now on gitlab 2021-01-15 21:09:00 figured as much, hence the 'it used to be' :) 2021-01-15 21:09:10 it might as well use webhooks and other fancy stuff now 2021-01-15 21:09:16 mcrute already contributed to it 2021-01-15 21:33:08 armhf has the wrong distfile for libtorrent-rasterbar 2021-01-16 00:56:49 ikke: sorry had to step away... yeah I knew that contributing would re-deploy it but was hoping to not have to make a dummy commit 2021-01-16 00:57:02 maybe there's a crank somewhere I can turn to do a deploy? 2021-01-16 00:57:16 we just pushed an updated YAML to the cloud images repo and need the page re-generated 2021-01-16 14:09:31 clandmeter: ping 2021-01-16 14:10:30 pong 2021-01-16 14:10:39 I did a thing 2021-01-16 14:11:12 latest musl adds a new syscall that gives issues with docker/seccomp 2021-01-16 14:11:25 so I thought, lets try to upgrade usa7 to alpine 3.13 :P 2021-01-16 14:11:43 ok, and it breaks?
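[editor's note: the "flock logic" clandmeter mentions for the pkgs update script is typically the util-linux `flock` wrapper: cron can keep firing every 15 minutes, but a new run bails out immediately if the previous one still holds the lock. The lock path is illustrative; the nested call at the end just demonstrates the skip behaviour:]

```shell
# Sketch: guard the periodic update so overlapping runs can't pile up.
LOCK=/tmp/pkgs-update.lock    # stand-in for the real lock path

# Normal cron invocation: take the lock (non-blocking) and run the update,
# or skip this cycle if another run is still in progress.
flock -n "$LOCK" sh -c 'echo "update running"' || echo "previous run still active, skipping"

# Demonstrate the skip: while one holder is alive, a second -n attempt fails.
flock "$LOCK" sh -c "flock -n $LOCK true || echo skipped"
```

In the crontab this becomes a one-liner like `*/15 * * * * flock -n /tmp/pkgs-update.lock /path/to/update-script`.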
2021-01-16 14:11:53 well, in the oob console, it now asks for a "user password" 2021-01-16 14:12:06 and mentions after failed tries "hdd is locked" 2021-01-16 14:12:15 huh 2021-01-16 14:12:25 yes 2021-01-16 14:12:38 that does not sound like an os issue? 2021-01-16 14:12:41 no 2021-01-16 14:12:51 It happens before it boots the OS 2021-01-16 14:13:12 whats on usa7? 2021-01-16 14:13:17 x86_64 ci host 2021-01-16 14:13:23 ok 2021-01-16 14:15:42 but it does not ring a bell for you either then 2021-01-16 14:16:22 vaguely 2021-01-16 14:16:42 the interface has changed a bit 2021-01-16 14:19:15 ikke: did you make a screenshot? 2021-01-16 14:19:19 or similar 2021-01-16 14:19:45 no 2021-01-16 14:19:58 rebooting it 2021-01-16 14:20:08 i just made it run into rescue os 2021-01-16 14:20:43 i want to see if we get the same issue 2021-01-16 14:20:52 https://ibb.co/KqPHHNZ 2021-01-16 14:20:55 from my console history 2021-01-16 14:21:32 lets keep that around 2021-01-16 14:21:38 in case we need to email packet 2021-01-16 14:21:57 sorry, i will keep using the old name until i get my head around the new one :) 2021-01-16 14:22:16 ok so i get the same issue 2021-01-16 14:22:30 we need to report it 2021-01-16 14:22:52 yea 2021-01-16 14:23:03 That was my plan, but want to verify with you that it's not something we did 2021-01-16 14:23:17 Not that I found that very likely 2021-01-16 14:23:20 ok, will you report it? 2021-01-16 14:23:33 I'll try 2021-01-16 14:23:48 I can also, but i need to run in 30 min 2021-01-16 14:23:56 np, I'll do it 2021-01-16 14:24:02 i normally first start a chat 2021-01-16 14:24:15 but im not sure if thats still available.
2021-01-16 14:24:39 There is a community slack 2021-01-16 14:24:51 i used the chat on the website 2021-01-16 14:24:55 but i dont see it anymore 2021-01-16 14:25:13 There is a 'contact us' button 2021-01-16 14:26:01 Yes 2021-01-16 14:26:09 Use that 2021-01-16 14:26:23 I think they prefer email 2021-01-16 14:30:12 ok, submitted something 2021-01-16 14:30:36 ok cross fingers :) 2021-01-16 14:30:47 at least it has no important data right? 2021-01-16 14:31:01 not that I'm aware of 2021-01-16 14:31:52 It's a very bare metal server :P 2021-01-16 14:32:31 But my plan is to get our servers upgraded to alpine 13 2021-01-16 14:32:32 another good reason to reboot servers from time to time :P 2021-01-16 14:32:35 MY-R: nod 2021-01-16 14:32:56 our CI servers have little state, so I'm not that afraid to start with those 2021-01-16 14:33:24 its still a weird issue 2021-01-16 14:33:29 i wonder how this happened 2021-01-16 14:33:43 remember old debian times when it was always a lottery if the server started or not because everyone was messing something around and didnt make notes :P 2021-01-16 14:34:05 to alpine 3.13* 2021-01-16 14:34:12 alpine 13 will take a while 2021-01-16 14:35:57 apparently those are sata ssds 2021-01-16 14:36:02 https://www.micron.com/products/ssd/product-lines/5200 2021-01-16 14:36:36 https://media-www.micron.com/-/media/client/global/documents/products/technical-marketing-brief/5200_sed_tcg-e_tech_brief.pdf?la=zh-tw&rev=14cf6a54b6d54139a075ca8823df984c 2021-01-16 14:36:42 self encrypting drives 2021-01-16 14:36:59 yes its a sata 2021-01-16 14:37:05 https://tweakers.net/pricewatch/1183355/micron-5200-max-480gb/specificaties/ 2021-01-16 14:37:28 whats funny is that this config was always weird 2021-01-16 14:37:33 "self encrypting" 2021-01-16 14:37:46 sounds like that happened 2021-01-16 14:37:55 the cpu is an engineering sample 2021-01-16 14:38:02 right, the 128 core one 2021-01-16 14:38:27 DXE--ACPI Initialization..Welcome to GRUB!
2021-01-16 14:38:27 error: failure reading sector 0x0 from `hd2'. 2021-01-16 14:38:32 yes, had that too 2021-01-16 14:38:56 probably because the disks are still locked? 2021-01-16 14:39:03 i reported it once, but never got a reply 2021-01-16 14:39:15 yes, but i wonder what the boot drive is 2021-01-16 14:39:24 it should not be the 480 drives 2021-01-16 14:39:48 the 240G nvme/ 2021-01-16 14:39:51 ? 2021-01-16 14:39:52 but maybe grub is unhappy it cant read it 2021-01-16 14:40:13 i think the nvme should be boot yes 2021-01-16 14:40:22 but its been a long time since we installed it 2021-01-16 14:40:53 could be packet has some master password to unlock the drive 2021-01-16 14:43:39 the vague remembrance was about when i wanted to do a secure erase on another packet server before 2021-01-16 14:43:50 aha, ok 2021-01-16 14:43:57 when decommissioning 2021-01-16 14:44:04 so in the end i did it sw based 2021-01-16 14:45:48 But we need some kind of solution for our ci hosts and faccesat2 2021-01-16 14:46:23 faccewhat? 2021-01-16 14:46:28 new syscall 2021-01-16 14:46:36 added in the latest musl 2021-01-16 14:46:38 google yields no results 2021-01-16 14:46:50 maybe stat? 2021-01-16 14:46:54 https://lwn.net/Articles/820410/ 2021-01-16 14:47:11 https://github.com/moby/moby/pull/41353 2021-01-16 14:47:14 double s :) 2021-01-16 14:47:42 but apparently an up-to-date seccomp profile is not enough 2021-01-16 14:47:59 libseccomp needs to be 2.4.8 or newer 2021-01-16 14:48:08 (according to Hello71) 2021-01-16 14:48:33 I tried running docker with the latest seccomp profile, but it was still denied 2021-01-16 14:48:40 i guess we first wait for packet to respond? 2021-01-16 14:48:50 I guess so 2021-01-16 14:49:24 ok i will check back later, need to run now. or ping me tomorrow morning 2021-01-16 14:49:46 o. 2021-01-16 14:50:03 I can setup a temporary (smaller) host somewhere to handle x86_64 ci?
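[Editor's note: for context on the faccessat2/seccomp issue above — moby's seccomp profile lists allowed syscalls as entries with `names` and `action` fields, so the fix the linked PR applies amounts to adding the new syscall to the allow list, roughly like this fragment (shape taken from moby's default profile; field values here are illustrative). As the chat observes, the profile alone is not enough when the host libseccomp is too old to know the syscall by name.]

```json
{
  "names": ["faccessat2"],
  "action": "SCMP_ACT_ALLOW"
}
```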
2021-01-16 14:50:58 sure 2021-01-16 14:51:04 packet should be ok 2021-01-16 14:51:16 maybe remove the temp servers 2021-01-16 14:51:24 err 2021-01-16 14:51:31 linode? 2021-01-16 14:51:40 linode yes :) 2021-01-16 14:52:41 mps: do we still need the mailman server? 2021-01-16 14:53:56 ikke: lets keep them 2021-01-16 14:54:03 we can run the ci for x time 2021-01-16 14:54:10 decide later what to kill 2021-01-16 14:54:19 its charged per hour iirc 2021-01-16 14:54:37 i have some other stuff also 2021-01-16 14:54:41 i will clean it up 2021-01-16 14:54:44 was some test 2021-01-16 14:58:18 ikke: could you wait half an hour, I'm not at home now but would like to make a backup. then it could be removed 2021-01-16 14:59:23 mps: sure 2021-01-16 15:29:39 ikke: ready. you can remove it 2021-01-16 15:29:43 ok 2021-01-16 15:56:34 chatting with equinix support atm 2021-01-16 16:06:06 clandmeter: so something caused the disks to be encrypted, but no clue what. We are most likely going to have to reinstall that server 2021-01-16 16:09:27 clandmeter: will reinstall the server 2021-01-16 16:15:17 Wow 2021-01-16 16:15:25 That's crazy 2021-01-16 16:15:27 yes 2021-01-16 16:15:48 Can it be us? 2021-01-16 16:16:02 I have no clue 2021-01-16 16:16:44 What could have set the encryption password? 2021-01-16 16:38:23 clandmeter: ok, the locked disks cause issues, even for reinstalling, they recommend that we provision a new instance 2021-01-16 16:42:25 maybe it isnt an encryption password but the drive is locked, something must have set a password/lock flag directly on the disk or in uefi/bios 2021-01-16 16:45:25 Sounds like it 2021-01-16 17:23:59 clandmeter: equinix rescue os is now alpine 3.12 :D 2021-01-16 17:39:52 yeah, just rub it in 2021-01-16 17:56:34 Anyone experience with LACP bonding?
Previously we didn't have any special bonding setup, but now when I setup bonding, I have packetloss 2021-01-16 17:59:47 ok, bond-mode 802.3ad was enough 2021-01-16 19:03:55 clandmeter: I've re-installed usa7, but somehow cannot seem to get it to boot from disks 2021-01-16 19:04:58 It keeps going to ipxe, which seems to fail, but it keeps overwriting its own output 2021-01-16 19:23:07 ok, progress 2021-01-16 20:07:49 I'm seeing "Welcome to grub" now, but nothing much else :( 2021-01-16 20:35:39 sigh 2021-01-16 20:36:07 takes ages to reboot every time 2021-01-16 20:37:50 sorry i can't help 2021-01-16 20:37:58 apprecies the gesture 2021-01-16 20:38:03 trying extlinux 2021-01-16 20:38:17 appreciate* 2021-01-16 20:38:36 is it uefi or old bios 2021-01-16 20:38:50 Seems right no legacy bios 2021-01-16 20:38:53 right now* 2021-01-16 20:39:08 I guess, not sure 2021-01-16 20:40:15 few days ago I made a script to install syslinux on uefi boxes, tested it on a macbook and an older lenovo notebook 2021-01-16 20:40:54 if you think it could help I can paste it 2021-01-16 20:42:38 ikke: maybe you missed something in /etc/default/grub or you didnt add 'nvme' to mkinitfs dunno :\ 2021-01-16 20:43:02 MY-R: probably something like that 2021-01-16 20:43:16 But hard to see the forest through the trees 2021-01-16 20:43:41 if the module is missing it should at least try to start the kernel and show the first few messages 2021-01-16 20:43:44 MY-R: That _is_ something I missed from the old setup 2021-01-16 20:43:53 unless serial is broken as well 2021-01-16 20:44:29 if you got the boot loader menu it is probably not that 2021-01-16 20:44:42 no, I didn't see the menu 2021-01-16 20:44:44 just the header 2021-01-16 20:44:51 ah 2021-01-16 20:45:55 but even "Welcome to grub" says that grub sent this over serial 2021-01-16 20:46:09 hm 2021-01-16 20:47:00 no documentation how things are working on packet.net?
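[Editor's note: the LACP fix above (`bond-mode 802.3ad`) in an ifupdown-ng /etc/network/interfaces stanza would look roughly like this sketch; the member interface names and addresses are made up.]

```
auto bond0
iface bond0
    use bond
    bond-members eth0 eth1
    # 802.3ad = LACP; without this the default balance-rr mode
    # against an LACP-configured switch caused the packet loss
    bond-mode 802.3ad
    address 192.0.2.10/24
    gateway 192.0.2.1
```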
2021-01-16 20:47:24 https://metal.equinix.com/developers/docs/guides/ 2021-01-16 20:48:02 sadly they don't have alpine images, so I'm using an alpine rescue OS to install alpine 2021-01-16 20:48:58 There is another server there that I installed in a similar way, so maybe copy things from there 2021-01-16 20:49:08 Was the CI fixed ? 2021-01-16 20:49:37 no, guess what I'm working on 2021-01-16 20:50:04 (obviously yak shaving) 2021-01-16 20:50:25 huh 2021-01-16 20:50:49 I decided to be courageous and tried to upgrade our x86_64 CI host to alpine 3.13 2021-01-16 20:50:59 that kinda went south 2021-01-16 20:51:25 oof 2021-01-16 20:51:44 Doesn't that mean we are shipping a potentially broken docker setup on 3.13 or is it just related to our CI ? 2021-01-16 20:51:58 it's a seccomp issue 2021-01-16 20:52:12 faccessat2 is blocked, but shouldn't be 2021-01-16 20:52:39 the CI hosts were not on 3.13 2021-01-16 20:52:46 and most still are not 2021-01-16 20:52:52 well, none of them is even atm 2021-01-16 21:01:01 the other server is running in efi mode 2021-01-16 21:01:07 so let me check the bios 2021-01-16 21:05:44 switching to uefi now 2021-01-16 21:35:17 hi 2021-01-16 21:35:20 hey 2021-01-16 21:35:29 ikke: i think you need uefi 2021-01-16 21:35:34 clandmeter: I tried uefi 2021-01-16 21:35:40 but then it does not find any boot devices 2021-01-16 21:35:47 and rescue OS is not working either 2021-01-16 21:36:12 rescue is 3.12 you mentioned? 2021-01-16 21:36:15 yes 2021-01-16 21:36:32 so they recently upgraded 2021-01-16 21:36:35 yup 2021-01-16 21:37:20 how do you switch to uefi and back? 2021-01-16 21:37:22 I'm now switching back to bios and trying extlinux 2021-01-16 21:37:27 enter the bios? 2021-01-16 21:37:28 yes 2021-01-16 21:37:56 i think uefi should work in rescue os, if not its a bug. 2021-01-16 21:38:16 It's not even managing to get there 2021-01-16 21:39:19 clandmeter: It's now booting from the hard drive, but stopping at "Welcome to GRUB!" 2021-01-16 21:39:27 any idea?
2021-01-16 21:39:44 are you using the correct serial? 2021-01-16 21:39:54 I copied the settings from the previous installation 2021-01-16 21:40:00 (I still had the output from update-conf) 2021-01-16 21:40:18 but this is another server? 2021-01-16 21:40:22 yes 2021-01-16 21:40:58 there was a page mentioning the right settings 2021-01-16 21:41:08 but the rebranding does not help searching 2021-01-16 21:41:54 Rebooting into rescue os now 2021-01-16 21:42:27 at least its not an arm 2021-01-16 21:42:42 is that even worse? 2021-01-16 21:42:55 (hard to imagine) 2021-01-16 21:43:07 yes it takes minutes to boot 2021-01-16 21:43:19 and fails every x times 2021-01-16 21:43:21 well, this also takes minutes to boot 2021-01-16 21:43:41 also in bios mode? 2021-01-16 21:43:50 yes 2021-01-16 21:43:57 in uefi mode it fails quite fast 2021-01-16 21:44:31 so we moved from 128 cores to 24? 2021-01-16 21:44:45 I guess 48? 2021-01-16 21:45:00 1 x AMD EPYC 7402P 24-Core Processor @ 2.8GHz 2021-01-16 21:45:40 if it boots into rescue, you can check which serial its using 2021-01-16 21:45:49 through dmesg? 2021-01-16 21:46:02 was thinking about how to find out 2021-01-16 21:46:09 /proc/cmdline is not mentioning it 2021-01-16 21:46:26 the module is not loaded i think? 2021-01-16 21:46:39 what module? 2021-01-16 21:46:47 oh cmdline 2021-01-16 21:46:53 sorry i was mixing up config.tz 2021-01-16 21:46:55 gz 2021-01-16 21:46:56 he 2021-01-16 21:47:20 ok, booted 2021-01-16 21:47:24 does it boot in rescue? 2021-01-16 21:47:38 cmdline should show afaik 2021-01-16 21:47:39 yes 2021-01-16 21:47:58 oh, it was truncated 2021-01-16 21:48:35 console=tty0 console=ttyS1,115200] 2021-01-16 21:48:58 nproc -> 48 2021-01-16 21:49:44 SMT 2021-01-16 21:49:57 yes 2021-01-16 21:50:06 but wasn't 128 smt as well?
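[Editor's note: the serial settings being hunted down here correspond to something like the following /etc/default/grub sketch, using the values visible in the chat (serial unit 1, i.e. ttyS1, at 115200 baud); regenerated afterwards with `grub-mkconfig -o /boot/grub/grub.cfg` as done later in the log.]

```
GRUB_TERMINAL="serial console"
# --unit=1 is ttyS1, matching console=ttyS1 on the kernel cmdline;
# pointing grub at the wrong unit gives exactly the observed symptom:
# "Welcome to GRUB!" and then silence, because the menu goes elsewhere
GRUB_SERIAL_COMMAND="serial --unit=1 --speed=115200 --word=8 --parity=no --stop=1"
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS1,115200"
```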
2021-01-16 21:50:13 yes 2021-01-16 21:50:21 i think it was also single socket 2021-01-16 21:50:22 so 64 -> 48 cores 2021-01-16 21:50:28 sorry 2021-01-16 21:50:31 64 -> 24 2021-01-16 21:50:36 yup 2021-01-16 21:50:46 but hey, whos complaining :) 2021-01-16 21:50:49 :) 2021-01-16 21:51:06 the chat support was very nice btw 2021-01-16 21:51:06 the serial settings are different than yours? 2021-01-16 21:52:05 console=ttyS1 2021-01-16 21:53:13 https://tpaste.us/E1a5 this is what I had 2021-01-16 21:53:19 But I guess it should be S1 2021-01-16 21:53:49 yes -unit=1 2021-01-16 21:54:16 mps: ah, thanks 2021-01-16 21:54:50 also for kernel cmdline console=ttyS1 2021-01-16 21:54:52 yes 2021-01-16 21:54:57 we should report that the rescue os does not boot in efi mode. 2021-01-16 21:55:22 clandmeter: it does not even get to booting via ipxe 2021-01-16 21:57:03 is the server still supermicro? 2021-01-16 21:58:33 product: PowerEdge R6515 (SKU=NotProvided;ModelName=PowerEdge R6515) 2021-01-16 21:58:50 ok 2021-01-16 21:59:22 Ok, adjusted the serial device 2021-01-16 21:59:24 will try to reboot now 2021-01-16 21:59:32 (ran grub-mkconfig -o ...) 2021-01-16 21:59:50 grub is so much fun 2021-01-16 22:00:27 it even has its own script lang :) 2021-01-16 22:00:28 heh 2021-01-16 22:00:35 clandmeter: we could switch to extlinux? 2021-01-16 22:01:08 as long as it boots i dont care that much 2021-01-16 22:01:12 at least it supports efi 2021-01-16 22:01:21 i'll reboot first 2021-01-16 22:01:35 i know for our servers we mostly use efi now, or things like gpu's have issues. 2021-01-16 22:01:43 clandmeter: syslinux also supports efi 2021-01-16 22:02:02 only on x86 i think 2021-01-16 22:02:46 haven't tried on arm 2021-01-16 22:03:07 https://wiki.syslinux.org/wiki/index.php?title=Install#UEFI 2021-01-16 22:05:15 Booting from Hard drive C: 2021-01-16 22:05:18 GRUB loading. 2021-01-16 22:05:20 Welcome to GRUB!
2021-01-16 22:05:24 and then a lot of silence 2021-01-16 22:05:57 ok, extlinux it is 2021-01-16 22:06:03 hoping that goes better 2021-01-16 22:06:41 i had many of such fights with those servers. 2021-01-16 22:06:45 Attempt 2021-01-16 22:06:49 for sure the aarch64 2021-01-16 22:07:07 and tianocore takes forever to load. 2021-01-16 22:07:14 like the matrix on repeat 2021-01-16 22:09:02 Maybe to do with ifupdown-ng, but I had to set bond-mode now as well 2021-01-16 22:09:09 802.3ad 2021-01-16 22:09:31 Otherwise quite a bit of packetloss 2021-01-16 22:09:57 i think i always set bonding mode 2021-01-16 22:10:07 ah ok 2021-01-16 22:10:11 I never noticed it 2021-01-16 22:10:11 when loading the module 2021-01-16 22:10:15 ah 2021-01-16 22:10:25 iirc 2021-01-16 22:10:31 I noticed it during setup-alpine in the rescue os 2021-01-16 22:11:50 This is the last attempt for today 2021-01-16 22:12:42 huh 2021-01-16 22:12:54 now it suddenly switched to sdb instead of sda 2021-01-16 22:13:11 (and no sda) 2021-01-16 22:13:34 ok i will also call it a day, i can check it tomorrow if you want. 2021-01-16 22:14:08 thanks for spending the time already. 2021-01-16 22:14:21 its frustrating... 2021-01-16 22:14:39 from experience :) 2021-01-16 22:14:43 hehe 2021-01-16 22:17:10 Ok, I think i've installed extlinux now 2021-01-16 22:18:23 added console=ttyS1 to the cmdline 2021-01-16 22:21:43 hmm, grub is still installed :-/ 2021-01-16 22:35:51 Progress 2021-01-16 22:35:58 mount: mounting UUID=adbfa266-4535-44a8-b5a5-92a2539ba8d4 on /sysroot failed: No such file or directory 2021-01-16 22:37:12 so I guess now it's a matter of getting the modules in 2021-01-16 22:37:18 but that's for tomorrow 2021-01-17 07:50:26 clandmeter: it's now booted in an emergency shell because it cannot find the root partition. I can manually activate the lvm volume group though 2021-01-17 08:28:41 Morning 2021-01-17 08:28:52 hi 2021-01-17 08:29:02 Did you fix it?
2021-01-17 08:29:04 no 2021-01-17 08:29:14 Not sure what I'm missing 2021-01-17 08:29:19 but lvm does not get activated somehow 2021-01-17 08:29:44 Nvme is enabled? 2021-01-17 08:33:29 I've added it to the mkinitfs features 2021-01-17 08:34:08 in the emergency shell, I can do lvm vgscan; lvm vgchange --sysinit --activate y, and then the volumes are available 2021-01-17 08:36:11 Can you boot with debug? 2021-01-17 08:42:28 I have time again later 2021-01-17 08:45:56 I have good experience with lvm, i.e. never use it :) 2021-01-17 11:03:34 clandmeter: booted with quiet removed and debug added 2021-01-17 11:05:01 and I have to repeat that 'quiet' should be removed by default from alpine kernel/bootloader cmdline 2021-01-17 11:06:17 https://tpaste.us/0Evz 2021-01-17 11:07:50 dev mapper not started in initramfs? 2021-01-17 11:08:26 sounds like it, but what should start it? 2021-01-17 11:08:52 I really have no idea 2021-01-17 11:09:31 ask on -devel or -linux maybe someone there knows 2021-01-17 11:09:57 I did 2021-01-17 11:10:01 no responses yet 2021-01-17 11:10:32 btw, why is lvm on these machines 2021-01-17 11:11:05 modules=sd-mod,usb-storage,ext4 2021-01-17 11:11:13 We use lvm by default everywhere 2021-01-17 11:11:52 Makes it easier to deal with partitions 2021-01-17 11:12:02 hmm 2021-01-17 11:12:38 and it works normally just fine 2021-01-17 11:12:52 hi 2021-01-17 11:12:55 hey 2021-01-17 11:13:00 i think i know what should start it 2021-01-17 11:13:12 please enlighten me 2021-01-17 11:13:28 we have a small c based tool, i dont remember the name 2021-01-17 11:13:33 ifplugd? 2021-01-17 11:13:42 or something like that 2021-01-17 11:14:03 nlplug-findfs 2021-01-17 11:14:08 yes 2021-01-17 11:14:12 that should be it 2021-01-17 11:15:14 if you set debug, it will spit out a lot of data 2021-01-17 11:16:09 did not see anything 2021-01-17 11:16:44 I'll try to run it manuall 2021-01-17 11:16:47 manually* 2021-01-17 11:18:00 did you set debug_init?
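[Editor's note: the manual recovery described above, typed at the initramfs emergency shell, is essentially the following. This is a sketch of interactive commands (not runnable outside the initramfs); the volume names come from the chat.]

```sh
lvm vgscan                              # find volume groups on the now-visible disks
lvm vgchange --sysinit --activate y     # activate them without udev
mount /dev/mapper/vg0-lv_root /sysroot  # mount root manually, then exit to continue boot
```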
2021-01-17 11:19:34 ah, no, just debug 2021-01-17 11:19:54 prepare for lots of debug info 2021-01-17 11:20:11 basically dmesg output directly to the console 2021-01-17 11:22:49 running nlplug-findfs -p /sbin/mdev -d now 2021-01-17 11:23:10 taking a bit of time 2021-01-17 11:25:56 exit due to timeout 2021-01-17 11:26:11 how did you install alpine? 2021-01-17 11:26:16 via rescue os? 2021-01-17 11:26:31 yes 2021-01-17 11:26:53 But I also upgraded to 3.13 2021-01-17 11:27:32 so you boot 3.12 and upgrade then reboot 2021-01-17 11:27:40 no 2021-01-17 11:27:48 boot 3.12 install, upgrade, reboot? 2021-01-17 11:28:11 boot rescue os; setup-alpine, skip disk 2021-01-17 11:28:14 upgrade 2021-01-17 11:28:26 setup-disk <..> 2021-01-17 11:29:23 did you run update-conf? 2021-01-17 11:29:44 hmm, no 2021-01-17 11:30:11 maybe there is something that packet has changed in its rescue os that is biting you 2021-01-17 11:31:19 whats in mkinitfs.conf? 2021-01-17 11:31:34 which options are enabled? 2021-01-17 11:31:58 rescue os is network based, so its missing disk based modules (probably) 2021-01-17 11:56:46 /mnt/etc/mkinitfs # cat mkinitfs.conf 2021-01-17 11:56:46 features="ata base ide scsi usb virtio ext4 lvm nvme" 2021-01-17 12:01:02 looks good 2021-01-17 12:01:46 is the bootable flag set?, not sure that makes a difference. 2021-01-17 12:01:57 Otherwise it would not even boot iirc 2021-01-17 12:02:05 it already gets to initramfs 2021-01-17 12:02:55 did you get the debug log? 2021-01-17 12:03:05 from nlplug?
2021-01-17 12:03:46 I want to write the output to a file 2021-01-17 12:03:54 looks like outputting to a console delays it too much 2021-01-17 12:07:09 hmm, seems it does not output to stdout / stderr 2021-01-17 12:10:01 https://gitlab.alpinelinux.org/alpine/mkinitfs/-/blob/master/initramfs-init.in 2021-01-17 12:10:36 yes, I'm looking at that 2021-01-17 12:10:49 https://gitlab.alpinelinux.org/alpine/mkinitfs/-/blob/master/initramfs-init.in#L507 2021-01-17 12:11:05 oh, I did not pass a root 2021-01-17 12:11:35 hmmmm 2021-01-17 12:11:52 let me reboot again 2021-01-17 12:19:14 clandmeter: If I manually run it, it seems to work :-. 2021-01-17 12:19:16 :-/ 2021-01-17 12:20:02 clandmeter: https://tpaste.us/axqE 2021-01-17 12:20:05 does it boot without lvm? 2021-01-17 12:20:21 I have not tried 2021-01-17 12:23:01 hmm, I found a type 2021-01-17 12:23:03 typo 2021-01-17 12:23:09 /dev/mapper/vg0-lv_root on /sysroot failed: No such file or directory 2021-01-17 12:23:16 nlplug-findfs -p /sbin/mdev /dev/mapper/vg0_lvroot 2021-01-17 12:32:25 sight 2021-01-17 12:32:30 sigh* 2021-01-17 12:33:07 sounds almost like a timing issue or something like that 2021-01-17 12:37:40 Ok, i've now added debug_init to the cmdline 2021-01-17 12:43:22 + nlplug-findfs -p /sbin/mdev -d /dev/mapper/vg0_lvroot 2021-01-17 12:47:26 clandmeter: https://tpaste.us/qgJ5 2021-01-17 12:52:38 send time I execute the exact same command, and it works :( 2021-01-17 12:52:43 send/second 2021-01-17 12:54:49 lesigh 2021-01-17 12:55:37 clandmeter: can you please take a look? 
2021-01-17 13:35:22 I'm trying a hack now 2021-01-17 13:35:42 add an additional sleep before nlplug-finfs 2021-01-17 13:35:45 findfs 2021-01-17 14:49:01 equinix console is now unavailable :/ 2021-01-17 16:31:44 I decided to do a reinstall 2021-01-17 16:31:47 more cleanly 2021-01-17 16:31:56 And I also noticed that I selected the 480G drives 2021-01-17 16:32:02 before 2021-01-17 16:45:50 w00t 2021-01-17 16:45:53 w00t 2021-01-17 16:45:54 w00t 2021-01-17 17:59:48 clandmeter: so confirmed, it seems to be an issue with alpine 3.13 2021-01-17 17:59:52 installing 3.12 -> works 2021-01-17 17:59:58 upgrading to 3.13 -> broken 2021-01-17 18:00:02 downgrading to 3.12 -> works 2021-01-17 18:17:43 Hmm 2021-01-17 18:18:06 So there is a change in mkinitfs repo? 2021-01-17 18:18:11 not sure 2021-01-17 18:18:13 maybe kernel? 2021-01-17 18:18:36 it's nlplug-findfs that somehow does not find the fs 2021-01-17 18:18:41 Try edge kernel and reboot :) 2021-01-17 18:18:51 so I don't think it's mkinitfs-init itself 2021-01-17 18:18:56 Or just 3.13 kernel 2021-01-17 18:19:08 Yeah, can try that 2021-01-17 18:28:38 clandmeter: 3.13 kernel does not work 2021-01-17 18:32:55 Ah ok 2021-01-17 18:33:17 Lvm related? 2021-01-17 18:33:41 yes 2021-01-17 18:33:50 it worked when it didn't use lvm 2021-01-17 18:34:00 Interesting 2021-01-17 18:34:08 but I guess that's because it does not rely on nlplug-findfs 2021-01-17 18:34:20 Good work, at least we know where to look now 2021-01-17 18:34:53 I guess more setups are broken 2021-01-17 19:21:38 ikke: i wonder what is causing this.
2021-01-17 19:21:48 if i use qemu it works just fine 2021-01-17 19:22:26 clandmeter: If I compare the output of nlplug-findfs, for some reason there are no events for the block devices 2021-01-17 19:22:30 /dev/sd* 2021-01-17 19:22:44 but if I run it manually afterwards, it's present 2021-01-17 19:23:12 yes i also noticed missing block devices in your paste 2021-01-17 19:23:48 nlplug-findfs: uevent: action='add' subsystem='block' devname='sdc2' devpath='/devices/pci0000:00/0000:00:03.2/0000:01:00.0/host12/port-12:0/end_device-12:0/target12:0:0/12:0:0:0/block/sdc/sdc2' 2021-01-17 19:23:51 this one 2021-01-17 19:40:09 clandmeter: I'm reinstalling x86 as well 2021-01-17 19:40:23 reinstalling? 2021-01-17 19:40:34 Wanted to upgrade it to 3.12 2021-01-17 19:41:01 https://tpaste.us/K5aP 2021-01-17 19:41:18 I messed something up when I installed it previously with raid 2021-01-17 19:41:37 (and maybe we did some lvm raid-like setup as well) 2021-01-17 20:32:01 clandmeter: testing 3.13 kernel on nld7 now (with efi, testing if it has the same issue) 2021-01-17 20:34:17 seems like the same issue 2021-01-17 20:34:21 nope 2021-01-17 20:34:25 (was impatient) 2021-01-17 20:42:28 so nld7 is now 3.13 2021-01-17 20:42:40 so seems to only affect legacy boot 2021-01-17 20:55:51 mps: lets continue here 2021-01-17 20:56:43 I don't have anything more to add and don't want to annoy you when you are working on servers 2021-01-17 20:56:53 np 2021-01-17 21:01:41 https://gitlab.alpinelinux.org/alpine/aports/-/issues/12325 2021-01-17 21:03:05 just pushed linux-edge-5.10.8 :) 2021-01-17 21:03:50 heh 2021-01-17 21:04:40 but I don't know if it could fix this problem 2021-01-17 21:05:25 Do you think it would be worth reporting this upstream? 2021-01-17 21:05:47 Not sure what information to provide exactly 2021-01-17 21:06:00 what? kernel or nl....
2021-01-17 21:06:06 find 2021-01-17 21:06:46 I don't have a box with nvme where I could test this 2021-01-17 21:06:55 not sure if this is related to nvme 2021-01-17 21:07:15 also I'm not 2021-01-17 21:07:29 though 5.10.8 has one nvme fix 2021-01-17 21:07:47 let me look again 2021-01-17 21:08:23 https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.10.8 2021-01-17 21:08:36 nvme-tcp: Fix possible race of io_work and direct send 2021-01-17 21:09:06 yes, no clue if it is related at all 2021-01-17 21:09:07 'tcp' for nvme? don't understand this 2021-01-17 21:09:23 nvme over network? 2021-01-17 21:10:27 who knows, maybe nvme uses tcp to communicate with devices. never looked at this 2021-01-17 21:11:56 https://blogs.oracle.com/linux/nvme-over-tcp 2021-01-17 21:12:07 interesting 2021-01-17 21:47:54 yup its been around for a little while, basically an alternative to iSCSI over FibreChannel/Ethernet/whatever 2021-01-18 08:15:08 mps: its still rather new, i had a talk with them last year in the beginning of the pandemic. 2021-01-18 08:15:43 ikke: im bumping into a few lua pkg checksum errors 2021-01-18 08:33:21 5.11-rc4 has this in its changelog: 'nvme: don't intialize hwmon for discovery controllers' 2021-01-18 08:33:55 maybe (probably?) it will be backported to 5.10.9 2021-01-18 10:21:37 clandmeter: on what url? 2021-01-18 11:16:18 clandmeter: I could try to upgrade usa7 to 3.13 but pin the kernel 2021-01-18 14:57:15 ikke: ok 2021-01-18 14:57:25 do we know what the actual issue is? 2021-01-18 14:58:16 i already cleared the pkgs that have checksum errors 2021-01-18 16:15:11 clandmeter: No, I have no idea what the actual issue is, or how to track it down 2021-01-18 16:15:32 it looks like some kind of race condition, but adding a sleep before nlplug-findfs does not appear to help 2021-01-19 12:55:55 can someone from infra team help set up PureTryOut https://gitlab.alpinelinux.org/PureTryOut to have git push access to testing and community? Thanks!
2021-01-19 13:11:03 I'll do it later today 2021-01-19 13:18:11 s390x buildozer (edge/3.13/3.12) do not appear to work at all and the same applies to ppc64le edge. 2021-01-19 17:13:14 i'm confuzzled on something involving our mailing lists 2021-01-19 17:13:43 i am sending messages to our mailing list, with what appear to be valid dkim-signatures, but the archives do not show a dkim-signature under 'details' 2021-01-19 17:14:09 I think Cogitri mentioned something similar 2021-01-19 17:14:11 i am, however, operating my mailer on an alpine s390x VM, which is big-endian, so it may be that i am generating broken signatures 2021-01-19 17:14:57 yes, he mentioned the same 2021-01-19 17:16:23 Yes, my DKIM signature apparently goes to /dev/null on the ML 2021-01-19 17:16:35 But seems to work just fine on other things apparently 2021-01-19 17:16:57 ddevault: ^ 2021-01-19 17:16:58 i have no idea if my signatures are working or not :D 2021-01-19 17:17:03 And some random dkim validators on the net say it's fine so yeah 🤷‍♂️ 2021-01-19 17:17:24 it relies on the mail server to add an authentication header 2021-01-19 17:18:06 How would we do that with postfix 2021-01-19 17:18:30 run it through a DKIM verification milter 2021-01-19 17:18:37 in sr.ht production we use go-msgauth 2021-01-19 17:18:42 https://github.com/emersion/go-msgauth 2021-01-19 17:19:02 i presently use opendkim, but opendkim seems not so great. how does go-msgauth compare?
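[Editor's note: hooking a DKIM verification milter into postfix, as suggested here, comes down to a few main.cf lines. This sketch assumes go-msgauth's milter listening on a local unix socket; the socket path is an assumption, not sr.ht's actual configuration.]

```
# /etc/postfix/main.cf
# route SMTP-received mail through the DKIM milter so it adds the
# Authentication-Results header the archive's 'details' view needs
smtpd_milters     = unix:/run/dkim-milter.sock
non_smtpd_milters = $smtpd_milters
# keep accepting mail if the milter is down
milter_default_action = accept
```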
2021-01-19 17:19:15 we wrote go-msgauth for the express purpose of replacing opendkim 2021-01-19 17:19:20 haha 2021-01-19 17:19:31 here's our alpine package https://git.sr.ht/~sircmpwn/sr.ht-apkbuilds/tree/master/item/sr.ht/go-msgauth 2021-01-19 17:19:39 never cleaned it up for upstreaming into aports, though 2021-01-19 17:19:47 see the confd file, which is where all the configuration occurs 2021-01-19 17:20:18 note, we use this both for verifying signatures for incoming emails and signing outgoing emails, if you don't want to do the latter it may require some tweaking 2021-01-19 17:20:38 query: if dkim_sign_domains= is left blank, will it sign for any domain? 2021-01-19 17:20:44 I think so 2021-01-19 17:20:46 err, no 2021-01-19 17:20:52 it won't sign anything, but it will verify signatures 2021-01-19 17:21:07 I think - this package hasn't been tested outside of our use-case 2021-01-19 17:21:09 would be nice to have it sign anything outbound 2021-01-19 17:21:51 i've been moving my email usecases onto my own infrastructure now that i can afford to get a /24 toasted if it goes to shit 2021-01-19 17:22:08 IPv4 reputation issues with email are overstated 2021-01-19 17:22:17 hmm, true :) 2021-01-19 17:23:26 looks like nice software 2021-01-19 17:26:01 ddevault: would you have any objection to my cleaning up your APKBUILD and including it in alpine testing? 2021-01-19 17:26:10 no objection 2021-01-19 17:26:39 if you make breaking changes, please let me know so that I can migrate if it eventually makes its way into community 2021-01-19 17:26:49 cool, i'll work on it this weekend 2021-01-19 17:27:17 i don't envision making any breaking changes :) 2021-01-22 00:48:03 just in case I will leave this here: 2021-01-22 00:48:10 "Scheduled - While performing a scheduled maintenance to replace a faulty switch, we ran into an issue that caused a brief outage in the EU-Central (Frankfurt) data center. At this time all services are restored.
We have scheduled an emergency maintenance on Thursday, January 21, 2021 from 23:00 UTC until January 22, 2021 02:00 UTC to complete the switch replacement. While we do not expect 2021-01-22 00:48:13 any downtime, there may be a period of brief packet loss or latency." 2021-01-23 15:02:39 We need to clean up the s390x builder, root is full 2021-01-23 15:02:56 /var I mean 2021-01-23 15:15:11 I've deleted edge distfiles older than 30 days for now 2021-01-23 15:15:14 10G free 2021-01-24 14:17:23 clandmeter: fyi, usa7 (x86_64 CI host) is now running alpine 3.13 with an older kernel 2021-01-24 14:27:48 gitlab-runner-aarch64 is running 3.13 as well now 2021-01-24 14:34:41 and gitlab-runner-armv7 2021-01-24 14:38:06 those are VMs, so upgrading is fairly trivial 2021-01-24 14:38:29 I guess ppc64le and s390x are more tricky 2021-01-24 21:14:35 ikke: ok nice. 2021-01-24 21:15:24 clandmeter: any idea how we should do ppc64le / s390x? We do not really have an oob console to fall back to 2021-01-24 21:15:45 In the past I guess we just did it 2021-01-24 21:15:51 good question 2021-01-24 21:16:07 we should probably first check if we have contact 2021-01-24 21:16:19 nod 2021-01-24 21:16:27 and mention the upgrade plan 2021-01-24 21:16:46 ive not talked to anyone much 2021-01-24 21:17:16 i guess we can ask tmhoang? 2021-01-24 21:18:48 I guess 2021-01-25 14:28:10 ikke: Could you pull appstream-generator (both the image and the compose) and restart it? 2021-01-25 16:20:03 Cogitri: done 2021-01-25 16:21:38 Thanks 2021-01-25 16:31:28 hey ikke is there a way I can manually rebuild and re-deploy the website without pushing a fake bump commit? the AMI page really needs a rebuild 2021-01-25 16:32:11 mcrute: not right now 2021-01-25 16:32:24 I can see if I can trigger the build script 2021-01-25 16:32:53 thanks...
I'm working on a better way to do this but it's not going to be ready for a little bit 2021-01-25 16:33:15 I can send you SSH keys if you can give me access to the right thing so I don't have to bother you in the meantime 2021-01-25 16:34:55 I think we'd rather make sure it's not necessary to do it manually 2021-01-25 16:35:41 that would certainly be ideal 2021-01-25 16:36:03 okay, well if there's anything I can do to help give me a shout 2021-01-25 16:37:54 mcrute: shouldn't there be a commit against https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite? 2021-01-25 16:39:36 no the builder just loads the yaml we publish in the AMI repo 2021-01-25 16:39:47 similar to how the releases page works 2021-01-25 16:40:24 mcrute: it uses make to build the site, but atm it does not see any changes 2021-01-25 16:41:14 the make target for the cloud pages wgets the yaml we publish so the build would need to run but no new commits would exist 2021-01-25 16:43:40 there is an update-release target that removes the releases.yaml file and runs make again 2021-01-25 16:43:47 that is missing for the cloud images 2021-01-25 16:43:53 I can do it manually for now 2021-01-25 16:44:12 okay I'll fix that in the mksite scripts 2021-01-25 16:44:23 can you check it's up-to-date now? 2021-01-25 16:45:10 it is not yet 2021-01-25 16:46:03 hmm, it did not do the curl part 2021-01-25 16:46:56 ok, fixed 2021-01-25 16:47:46 mcrute: we have a webhooks-to-mqtt service, I guess we could use that to trigger rebuilding the cloud images part 2021-01-25 16:48:11 looks good now 2021-01-25 16:48:20 ok, good 2021-01-25 16:48:26 thanks 2021-01-25 16:48:44 and, I don't mind being bothered about this, just had to figure out how to properly trigger this 2021-01-25 16:48:48 how do I trigger that webhook? 
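The missing counterpart to the update-release target described above could look like the sketch below. The file name cloud/releases.yaml and the step itself are assumptions about the alpine-mksite layout, not the project's real Makefile.

```shell
#!/bin/sh
# Hypothetical "update-cloud" step: remove the cached cloud-image YAML so
# a subsequent `make` re-fetches it, mirroring what update-release does
# for releases.yaml. The path below is a placeholder.
set -e
CLOUD_YAML="${CLOUD_YAML:-cloud/releases.yaml}"
rm -f "$CLOUD_YAML"
# a real make target would now rerun the site build, which re-downloads
# the YAML the cloud pages depend on
echo "removed $CLOUD_YAML; rerun make to regenerate the cloud pages"
```

The design mirrors how update-release works: make only rebuilds when an input changed, so deleting the fetched file is the simplest way to force a re-download.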
i can add it to our images repo 2021-01-25 16:48:58 mcrute: let me get the details 2021-01-25 16:49:27 I just don't want to bother you with trivial things we could script :-) 2021-01-25 16:49:36 nod 2021-01-25 16:52:25 mcrute: so we first need a proper make target for this 2021-01-25 16:52:38 like update-release 2021-01-25 16:54:44 ok 2021-01-25 18:59:40 https://status.linode.com/incidents/dm9mc14kv2t4 2021-01-25 20:10:12 zabbix has been upgraded to 5.2 2021-01-25 20:20:36 \o/ 2021-01-25 21:07:00 also I had an outage at linode 2021-01-25 21:07:51 interesting thing is that one linode VM in Frankfurt worked fine but an important one was cut off the net 2021-01-26 05:28:28 mps: yes, it was a network outage, the servers themselves were still running 2021-01-26 12:52:35 ikke: hi 2021-01-26 12:52:40 sorry i am swamped 2021-01-26 12:53:18 could not respond to your msgs in time :( 2021-01-26 12:53:31 and i dont think it will improve in the coming weeks 2021-01-26 12:54:23 No problem, just trying to keep you updated 2021-01-26 12:55:45 Could we extend webhooks so that mcrute can post to it and trigger the clouds page to be updated? 2021-01-26 12:56:09 It's currently geared towards gitlab events 2021-01-26 13:06:42 ikke: i think its trivial to add something 2021-01-26 13:07:21 what is the cloud page? 2021-01-26 13:07:25 the one of www? 2021-01-26 13:09:41 ikke: how does mcrute want to talk to it, via gitlab? 2021-01-26 13:11:54 The process is similar to how the releases are updated 2021-01-26 13:12:22 Which is triggered via mqtt 2021-01-26 13:13:29 currently everything is triggered via mqtt iirc 2021-01-26 13:13:53 yes 2021-01-26 13:14:07 so webhooks.a.o is just an interface to mqtt 2021-01-26 13:14:10 yup 2021-01-26 13:14:23 So my idea was to provide a webhook that mcrute could trigger 2021-01-26 13:14:32 from his side, when the images are updated 2021-01-26 13:15:01 right 2021-01-26 13:15:08 but it can also be done from the other side (i think) 2021-01-26 13:15:23 his changes are in gitlab? 
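Since webhooks.a.o is just an interface to MQTT, a manual trigger could in principle be published directly to the broker. The dry-run sketch below only composes the command: the broker hostname and the topic are assumptions, not the real infrastructure names (the log only mentions rsync/* topics).

```shell
#!/bin/sh
# Hypothetical dry-run: compose a mosquitto_pub invocation that would
# publish a "cloud images updated" event. BROKER and TOPIC are
# placeholders, not confirmed Alpine infra names.
BROKER="msg.alpinelinux.org"    # assumed broker hostname
TOPIC="cloud/images-updated"    # placeholder topic
CMD="mosquitto_pub -h $BROKER -t $TOPIC -m rebuild"
echo "$CMD"    # print only; run the composed command to actually publish
```

A subscriber on the mksite host (e.g. via mqtt-exec) would then react to the topic by running the site rebuild.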
2021-01-26 13:15:39 not at the moment 2021-01-26 13:15:46 ah ok 2021-01-26 13:15:50 then it would be easy :) 2021-01-26 13:16:17 its never easy, only seems so ;-) 2021-01-26 13:16:41 if it's from e.g. github we could add another entrypoint 2021-01-26 13:17:34 so you would have to analyze the payload and run some script 2021-01-26 13:21:16 ikke: if you are feeling adventurous, you could try to build www via a pipeline 2021-01-26 13:22:17 which could be triggered via a webhook 2021-01-26 13:31:04 clandmeter: Yeah, I was thinking about that as well 2021-01-26 13:31:12 but the trigger sounded like an easier interim solution 2021-01-26 13:31:26 yes maybe 2021-01-26 13:32:05 we could also start doing MR's on www changes 2021-01-26 13:32:20 push them to wwwtest 2021-01-26 13:32:22 I already created an MR for the changelog 2021-01-26 13:32:43 https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite/-/merge_requests?scope=all&utf8=%E2%9C%93&state=merged 2021-01-26 13:33:09 the harder part is the changes outside of source control 2021-01-26 13:33:18 git commits, releases, cloud images 2021-01-26 13:37:47 changes outside of source control? 2021-01-26 13:37:56 like aports git commits? 2021-01-26 13:41:38 ah we have /releases now :) 2021-01-26 13:44:10 All of those 2021-01-26 13:56:05 ikke: i think we can trigger ci from another project? 2021-01-26 13:57:41 mcrute: maybe an idea to combine arches on a single line at https://alpinelinux.org/cloud/ ? 2021-01-26 13:58:22 and have a launch button per arch 2021-01-26 14:01:34 what generates the latest-releases.yaml on the mirrors? 2021-01-26 14:49:27 i think ncopas release scripts 2021-01-26 14:58:46 ah ok 2021-01-26 14:59:28 it gets updated on rsync/* events from mqtt 2021-01-26 15:59:16 what gets updated? 2021-01-26 16:02:07 https://www.alpinelinux.org/releases/ 2021-01-26 16:10:29 clandmeter: it would be nice to be able to trigger the webhook from GitHub for now... 
we'll move to GitLab soon but we're still publishing image updates in the interim 2021-01-26 16:11:05 clandmeter: do you mean combine arches in a single table? 2021-01-26 16:11:29 with more cloud providers the page size becomes nProviders * nRegions * nArchs 2021-01-26 16:11:54 hi 2021-01-26 16:12:01 was thinking about making it searchable/filterable like the Ubuntu one https://cloud-images.ubuntu.com/locator/ 2021-01-26 16:12:24 hi :-) 2021-01-26 16:12:47 i have tried to limit the use of js, but i know its not always possible. 2021-01-26 16:13:56 I'm not a huge fan of JS but don't have a great idea to make such a big list manageable otherwise, do you have a better idea? 2021-01-26 16:22:49 i guess provider+region will have multiple arches, these could be combined in the launch column? 2021-01-26 16:23:12 that would cut a single table in half 2021-01-26 16:24:23 there is a lot of duplicated data atm in the table. 2021-01-26 17:34:23 I like that... I'll bake it into V2 (which is coming soon(TM)) 2021-01-27 19:29:12 looks like the CI for secdb is broken? https://gitlab.alpinelinux.org/alpine/infra/docker/secdb/-/merge_requests/3 2021-01-27 19:29:40 jobs are getting stuck 2021-01-27 20:13:13 ncopa: shared runners were not enabled 2021-01-27 20:13:16 on the project 2021-01-27 20:15:34 hmm, no, that's not enough 2021-01-27 20:15:55 ah, right 2021-01-27 20:16:09 that's something quite annoying with gitlab 2021-01-27 20:16:29 We deliberately limited the docker image runners to not be public shared runners 2021-01-27 20:16:43 because they give direct access to the docker socket on the host 2021-01-27 20:42:16 but that also prevents them from being used by forks :-/ 2021-01-27 20:44:31 sounds like a good reason to minimize how often MRs are required 2021-01-27 20:45:24 What do you mean? 
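Building www via a pipeline that outside parties can kick off, as floated above, could use GitLab's pipeline-trigger API. The endpoint shape is GitLab's real one; the project ID and token below are placeholders, and this is only a sketch of the idea, not Alpine's actual setup.

```shell
#!/bin/sh
# Hypothetical sketch: fire a pipeline in another project through
# GitLab's trigger API. PROJECT_ID is a placeholder; a real trigger
# token is created under the project's CI/CD settings.
PROJECT_ID="123"
URL="https://gitlab.alpinelinux.org/api/v4/projects/$PROJECT_ID/trigger/pipeline"
# With a real token, the trigger itself would be:
#   curl -X POST -F "token=$TRIGGER_TOKEN" -F "ref=master" "$URL"
echo "$URL"    # dry-run: show the endpoint instead of calling it
```

A trigger token scopes the caller to starting pipelines only, so handing one to an external image-publishing job is less risky than SSH access.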
2021-01-27 20:46:00 I guess a better solution would be to look into one of the daemonless alternatives 2021-01-27 20:47:26 But it's kind of annoying that you cannot easily share runners to forks 2021-01-27 20:47:37 except for creating globally shared runners 2021-01-27 20:49:24 i mean try to move as much code as possible to separate repos 2021-01-27 20:50:04 Isn't https://gitlab.alpinelinux.org/alpine/infra/docker/secdb/ a separate repo? 2021-01-27 20:50:14 and how would that avoid MRs being made? 2021-01-27 20:50:22 eh... never mind 2021-01-27 20:50:35 sorry, I'm not following you :) 2021-01-27 21:09:29 I guess podman would work for building images in CI 2021-01-28 22:26:14 how do i trigger a rebuild of the https://alpinelinux.org/ site so the 3.13.1 gets published? 2021-01-28 22:31:55 pushing anything to aports git master was the answer 2021-01-29 08:29:00 ncopa, ikke: we have news from arm. 2021-01-29 08:31:02 we can swap out the current boxes for single or dual socket ampere machines 2021-01-29 08:33:28 morning clandmeter. sounds good. what is the disk size of those? 2021-01-29 08:33:41 check your email ;-) 2021-01-29 08:33:53 Mt Snow: 128GB (8x 16GB), 1TB SSD U.2, 1x 25GbE NIC 2021-01-29 08:33:53 Mt Jade: 16x 16GB per socket, 2x SSD U.2 NVMe, 1x 25GbE NIC 2021-01-29 08:37:14 jade is 512G with 160 cores 2021-01-29 09:16:13 so we could have 2 jade machines? 2021-01-29 09:17:31 :) 2021-01-29 09:17:51 doesn't say how big disks are on the jade machines 2021-01-29 09:18:15 i have no clue why somebody would choose snow... but maybe its a gigabyte machine :) 2021-01-29 09:23:16 maybe we just go ahead with our usa1-dev machine? its the aarch64 builder 2021-01-29 09:23:45 then we can verify that it can actually run 32 bit containers before we move the armv7 builder 2021-01-29 09:24:46 I could ask upfront about it 2021-01-29 09:24:58 i also wonder how it will perform in qemu 2021-01-29 09:26:30 LBlaboon: hey! I pushed alpine 3.13.1 yesterday. 
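The daemonless alternative suggested above could look like this in a CI job: podman builds images without a Docker daemon, so the runner never needs the host's docker socket. This is only a sketch; the image name is a placeholder, not the secdb project's actual configuration.

```shell
#!/bin/sh
# Hypothetical CI step: build an image with podman instead of the host's
# docker socket. IMAGE is a placeholder; the guard keeps the sketch
# runnable on machines where podman or a Containerfile is absent.
IMAGE="registry.example.org/infra/secdb:latest"
if command -v podman >/dev/null 2>&1 && [ -f Containerfile ]; then
    podman build -t "$IMAGE" .     # daemonless, rootless-capable build
else
    echo "podman or Containerfile missing; would build $IMAGE"
fi
```

Because no privileged socket is exposed, such runners could be shared with forks with far less risk than the current docker-socket runners.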
maybe update the linode images? 2021-01-29 10:00:01 ncopa: i found a presentation which mentions it can run 32bit applications 2021-01-29 10:01:18 great 2021-01-29 11:12:17 ncopa: if we do the builder first, it could mean we run into trouble 2021-01-29 11:14:56 maybe better to first try with CI? 2021-01-29 11:22:53 I did not receive that e-mail 2021-01-29 11:22:56 but nice 2021-01-29 11:26:01 what do we have for the CI? 2021-01-29 11:27:01 Currently our arm CI runs as a VM one the builders 2021-01-29 11:27:16 on the* 2021-01-29 11:28:28 and armhf is missing 2021-01-29 11:31:32 its usa4, isnt it? 2021-01-29 11:31:44 for armv7 and armhf 2021-01-29 11:31:47 and armv7 ci 2021-01-29 11:32:02 the aarch64 ci? 2021-01-29 11:32:13 aarch64 ci is on usa1 2021-01-29 11:32:44 ok, so it shouldn't matter which we do first in other words 2021-01-29 11:33:03 usa1 is the weakest so it is the one that will benefit most i suppose 2021-01-29 11:33:31 but I don't mind if you prefer to do usa4, dont let me stand in the way :) 2021-01-29 11:33:54 for me it does not matter 2021-01-29 11:45:36 ok good 2021-01-29 11:45:40 for me it also doesnt matter 2021-01-29 11:45:52 we need to find out how much time we have to migrate 2021-01-29 11:46:46 let me reply to the email and cc you 2021-01-29 12:02:27 done 2021-01-29 15:35:06 ncopa: yep, we're already working on it. taking us a little longer because we're doing rebuilds of everything to patch the sudo CVE from earlier this week 2021-01-29 15:49:54 That one is fun 2021-01-29 15:50:09 Are you building it yourself? 2021-01-29 15:52:08 yep 2021-01-29 15:52:47 Aha ook 2021-01-29 15:52:52 Ok* 2021-01-30 00:31:13 forgot to mention this earlier, but 3.13.1 is out on linode 2021-01-30 18:53:12 aarch64, armv7 and s390x edge builders all appear according to build.a.o to be stuck on community/telepathy-glib 0.24.2-r0