2022-05-01 20:44:52 ikke: i enabled the new webhook container
2022-05-03 04:33:52 gitlab security notification
2022-05-05 07:45:25 re alpine-installer-testsuite gitlab action. could we use a small equinix metal instance for it? ie. have a runner on a bare metal machine, so we don't need nested virtualization?
2022-05-05 07:45:53 or maybe we could even spin it up and tear it down when running it
2022-05-05 07:46:46 there are no small equinix instances anymore
2022-05-05 07:47:00 the one we had is not available anymore
2022-05-05 07:47:58 ncopa: you can just use a shell runner
2022-05-05 07:47:58 s/small/smallest available/
2022-05-05 07:47:58 ncopa meant to say: re alpine-installer-testsuite gitlab action. could we use a smallest available equinix metal instance for it? ie. have a runner on a bare metal machine, so we don't need nested virtualization?
2022-05-05 07:48:01 it's already available
2022-05-05 07:48:10 i use it to use qemu
2022-05-05 07:48:20 i set them up recently
2022-05-05 07:48:43 both for x86 and aarch64
2022-05-05 07:50:17 ncopa: https://gitlab.alpinelinux.org/clandmeter/alpine-disk-image/-/blob/master/.gitlab-ci.yml
2022-05-05 07:51:01 the idea is: when tagging a release, run a job that will: 1) spin up a bare metal machine (using terraform), 2) install git, qemu etc, 3) check out alpine-installer-testsuite, 4) download the release artifacts to the bare metal machine, 5) run the testsuite, 6) report back to the gitlab runner
2022-05-05 07:52:40 the gitlab runner (shell) already has qemu and git installed
2022-05-05 07:53:24 oh, cool
2022-05-05 07:53:43 i think what i do is similar to what you do
2022-05-05 07:54:02 except i use packer and you use expect
2022-05-05 07:54:47 shell executor does not use any virtualization
2022-05-05 07:55:59 i do use docker, but it's just to deploy things
2022-05-05 08:09:15 My idea is to use space we have on azure for that
2022-05-05 08:09:55 azure is baremetal?
2022-05-05 08:09:58 no
2022-05-05 08:10:02 that it's not
2022-05-05 08:10:10 hence the nested virtualization
2022-05-05 08:10:29 There are instance types which support it
2022-05-05 08:11:06 if we expose /dev/kvm in docker, what would be the downside to that?
2022-05-05 08:11:15 on real hw
2022-05-05 08:12:17 If we just run the suite on tags, which we control, I see little issue
2022-05-05 08:12:50 the disadvantage of using a shell executor is that it's not a clean env on each run
2022-05-05 08:12:54 correct
2022-05-05 08:13:13 but if you do your magic in qemu, it does not really matter that much.
2022-05-05 08:13:23 also correct
2022-05-05 08:14:15 spinning up a real machine on equinix on each run seems like a lot of wasted time, i guess it will take some time to provision the server.
2022-05-05 08:19:05 We could check what we can do with spot instances?
2022-05-05 08:21:41 how long does it take to spin up an alpine instance on equinix?
2022-05-05 08:27:14 i have no clue, but just loading the efi firmware takes a lot of time
2022-05-05 08:27:32 is it 30 seconds or 5 mins?
2022-05-05 08:27:52 depends on hw i guess
2022-05-05 08:28:12 im using a server for work atm that takes minutes to boot
2022-05-05 08:29:45 i mean why would you want to spin up a bare metal server to afterwards run qemu in it?
2022-05-05 08:30:17 to run the alpine-installer-testsuite
2022-05-05 08:30:21 what you are testing runs in qemu right? not on metal?
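For context, the tag-triggered flow ncopa describes above would boil down to something like the following sketch. The terraform config, the "server_ip" output name, the testsuite URL and the run-tests.sh entry point are all placeholders, not the actual pipeline.

```sh
#!/bin/sh -e
# Hedged sketch of the "spin up metal, test, tear down" job described in the log.
terraform -chdir=infra init
terraform -chdir=infra apply -auto-approve                    # 1) spin up the bare metal machine
host=$(terraform -chdir=infra output -raw server_ip)          # hypothetical terraform output
ssh "root@$host" apk add git qemu-system-x86_64 expect        # 2) install git, qemu etc
ssh "root@$host" git clone https://gitlab.alpinelinux.org/alpine/alpine-installer-testsuite.git  # 3) check out the testsuite (URL assumed)
scp alpine-*-x86_64.iso "root@$host:alpine-installer-testsuite/"   # 4) copy the release artifacts over
ssh "root@$host" 'cd alpine-installer-testsuite && ./run-tests.sh' # 5+6) run it; the exit code reports back to the job
terraform -chdir=infra destroy -auto-approve                  # tear the machine down again
```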
2022-05-05 08:30:28 yes
2022-05-05 08:30:57 so then it's kind of useless to need to spin up a new server and wait for boot, install and kexec
2022-05-05 08:31:02 i set up a matrix of different emulated hardware combinations (IDE disk, virtio disk, nvme) and run the installer on it
2022-05-05 08:31:25 i test with one or two disks
2022-05-05 08:31:33 test boot with bios or uefi
2022-05-05 08:31:45 how many arches are you testing?
2022-05-05 08:31:57 you want to run emulated or kvm?
2022-05-05 08:32:00 right now only x86 and x86_64
2022-05-05 08:32:08 preferably not
2022-05-05 08:32:27 i'd also like to test on aarch64 hw
2022-05-05 08:32:31 some arches will be more difficult to test i guess?
2022-05-05 08:32:34 test armv7 and aarch64
2022-05-05 08:32:37 yup
2022-05-05 08:32:39 like s390x
2022-05-05 08:33:00 i think for ppc64le and s390x we might need to do some simple testing with emulated qemu
2022-05-05 08:33:11 running with kvm is always nice
2022-05-05 08:33:15 yup
2022-05-05 08:33:28 that's why i wonder, why not just expose it in docker?
2022-05-05 08:33:49 you mean run qemu in docker?
2022-05-05 08:33:56 what would you miss out on?
2022-05-05 08:34:00 yeah
2022-05-05 08:34:06 i can do that
2022-05-05 08:34:14 just need to pass in /dev/kvm i guess
2022-05-05 08:34:18 i was thinking to do the same
2022-05-05 08:34:23 and move away from the shell executor
2022-05-05 08:34:51 but docker itself runs on bare-metal?
2022-05-05 08:34:56 we need to do that on the runner, but we can set up specific ones for just kvm usage
2022-05-05 08:35:39 docker runs on baremetal?
2022-05-05 08:35:46 don't understand that question :)
2022-05-05 08:36:24 i think "set up specific ones for just kvm usage" answered my question
2022-05-05 08:36:37 ok :)
2022-05-05 08:36:59 that way you don't need to wait for boot, installer, kexec
2022-05-05 08:37:08 just have a docker image fit for your needs
2022-05-05 08:37:51 right
2022-05-05 08:38:17 i was just thinking that we don't need to have the bare-metal machine constantly running
2022-05-05 08:38:45 since those tests only need to run when we tag release candidates or releases
2022-05-05 08:39:33 well that could be possible, but with the cost mentioned above.
2022-05-05 08:39:40 and another thing
2022-05-05 08:39:45 we can't spin up arm servers
2022-05-05 08:40:13 ok
2022-05-05 08:40:20 i still need to send another email to the arm ppl
2022-05-05 08:40:53 to show them how important alpine is to the arm community
2022-05-05 08:41:05 and 1 server is not enough for such tasks
2022-05-05 08:43:37 clandmeter: and riscv
2022-05-05 08:43:51 ?
2022-05-05 08:44:11 alpine is important for riscv, imo
2022-05-05 12:26:05 We do get hardware from riscvm, but there is no server-grade hw yet
2022-05-05 12:48:08 ikke: yes, i know, but I have the right to dream
2022-05-05 18:43:50 ikke: for some reason someone emailed me saying they can't even fork aports as public
2022-05-05 18:44:23 Well, public is disabled for users. They should be able to set it to internal
2022-05-05 18:44:44 yes, but you said it needs to be public for anything to work later
2022-05-05 18:45:18 and that things automatically go from internal to public
2022-05-05 18:45:23 yes
2022-05-05 18:45:24 so.. what is the point of disabling it
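The hardware matrix discussed above (disk bus, disk count, BIOS vs UEFI, KVM) and the "pass in /dev/kvm" idea roughly translate into invocations like these. Image names, the OVMF path and the docker image are placeholders, not the actual testsuite commands.

```sh
# virtio disk, BIOS boot
qemu-system-x86_64 -enable-kvm -m 512 -cdrom alpine.iso \
    -drive file=disk0.img,if=virtio

# IDE bus, two disks
qemu-system-x86_64 -enable-kvm -m 512 -cdrom alpine.iso \
    -drive file=disk0.img,if=ide -drive file=disk1.img,if=ide

# NVMe disk, UEFI boot (OVMF firmware path varies per system)
qemu-system-x86_64 -enable-kvm -m 512 -cdrom alpine.iso \
    -drive file=disk0.img,if=none,id=nvme0 \
    -device nvme,drive=nvme0,serial=test0 \
    -bios /usr/share/OVMF/OVMF.fd

# and "expose /dev/kvm in docker" amounts to running the same thing as e.g.
docker run --rm --device /dev/kvm some-qemu-image \
    qemu-system-x86_64 -enable-kvm ...
```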
2022-05-05 18:45:30 We only do it for aports
2022-05-05 18:45:47 this is so frustrating that i wish we just allowed public-only and called it a day
2022-05-05 18:45:50 i'm so tired of this
2022-05-05 18:46:01 The reason we set it up like that is that we want to discourage users from using this instance for random projects
2022-05-05 18:46:51 The issue that prevents contributors from rebasing when the project is internal has been fixed, so just a little bit of patience left before that lands
2022-05-05 18:47:26 yes, but if they fork aports as private then the same issue remains
2022-05-05 18:47:43 nothing rewrites private into anything
2022-05-05 18:51:37 re random projects; is there no way to forbid new non-fork repo creation entirely in CE?
2022-05-05 18:53:46 I suppose setting the projects limit to 0
2022-05-05 19:42:34 So it turned out I immediately upgraded to gitlab 14.10 instead of first to 14.9
2022-05-05 19:42:47 https://docs.gitlab.com/ee/update/#1490
2022-05-05 19:45:49 ah
2022-05-05 19:46:16 neat
2022-05-05 19:46:28 The mentioned background jobs are running
2022-05-05 19:47:41 they should perhaps complete..
2022-05-05 19:48:00 yes, I see one already making progress
2022-05-05 19:48:06 MigratePersonalNamespaceProjectMaintainerToOwner: members 99.00%
2022-05-05 19:48:23 at last i can be a true owner /s
2022-05-05 19:48:30 :D
2022-05-05 19:52:55 How about NullifyOrphanRunnerIdOnCiBuilds: ci_builds 7.00%
2022-05-05 19:54:50 a whole null of my own, too 🥺
2022-05-05 20:56:41 rebasing !34009 also fails with the new hook
2022-05-05 21:05:26 I see what's going on, will look at it tomorrow
2022-05-06 09:51:18 we don't have rust on s390x and riscv64
2022-05-06 09:53:01 new clamav needs rust
2022-05-06 12:13:57 psykose: one thing I did change is that by default projects are set to internal instead of private, maybe that will reduce the amount of forks that start as private
2022-05-06 12:15:52 hope so
2022-05-06 12:23:03 thanks :3
2022-05-06 16:24:01 14.10.2 does not solve the errors on the issues page
2022-05-06 16:24:04 sadly
2022-05-06 17:54:50 Oof, the irony: a 500 response from gitlab when trying to report an issue :/
2022-05-08 15:44:35 any plans to fix that issues in Gitlab are no longer viewable if not logged in? Or is this a new Gitlab "feature" that cannot be reverted?
2022-05-08 15:46:42 https://gitlab.com/gitlab-org/gitlab/-/issues/361699
2022-05-08 15:46:54 Want to fix it, but trying to figure out how
2022-05-08 15:53:28 ok, thanks
2022-05-08 15:55:15 Something graphql related
2022-05-08 15:58:58 weird that the MR page works though when not logged in
2022-05-09 12:30:14 I received confirmation that the rv64 board is being shipped
2022-05-09 12:32:05 someone could've walked it across the atlantic in this time /s
2022-05-09 12:40:42 ikke: also
2022-05-09 12:40:48 few days ago
2022-05-09 12:41:36 I downloaded some docs about the JH7100 and started to read them when I have some time
2022-05-09 12:42:04 and I started to skim over the ISA
2022-05-09 12:42:44 btw, last night I managed to destroy my macbook m1
2022-05-09 12:42:59 ouch, how did you manage to do that?
2022-05-09 12:43:46 simple, spilled water over it :\
2022-05-09 12:44:59 bummer
2022-05-09 12:47:12 will try to call service to see if they can 'save' it
2022-05-09 13:35:45 "Your package is delayed due to local reasons."
2022-05-09 13:35:50 More delays
2022-05-09 13:57:39 giga delay
2022-05-09 14:03:16 ikke: can't you add another worker machine so the package will build quicker? ;-)
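The two instance-wide tweaks mentioned earlier in this stretch of the log (forbidding new non-fork projects by setting the projects limit to 0, and defaulting new projects to internal visibility) map onto GitLab's application settings API roughly as below. This is only an illustration of the knobs involved, not the commands that were actually run; it assumes an admin token in $ADMIN_TOKEN.

```sh
# "I suppose setting the projects limit to 0"
curl --request PUT --header "PRIVATE-TOKEN: $ADMIN_TOKEN" \
  "https://gitlab.alpinelinux.org/api/v4/application/settings?default_projects_limit=0"

# "by default projects are set to internal instead of private"
curl --request PUT --header "PRIVATE-TOKEN: $ADMIN_TOKEN" \
  "https://gitlab.alpinelinux.org/api/v4/application/settings?default_project_visibility=internal"
```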
2022-05-09 14:16:54 Same here
2022-05-10 04:31:23 apparently we are not the only ones having the problem with the error on the issue list when not being logged in
2022-05-10 18:00:32 https://git.pleroma.social/pleroma/pleroma/-/issues
2022-05-10 18:00:38 same issue
2022-05-10 18:00:57 minimal: ^
2022-05-10 18:05:44 ikke: on Gitlab's own issues page you can't search for an issue (such as this problem) without being logged in....doh!
2022-05-10 18:06:52 Yeah, that's annoying
2022-05-10 18:07:04 But at least you can still see the issues
2022-05-10 18:24:31 yeah but short of trawling through all their recent issues there's no easy way to know (i.e. search) if it's already on their radar
2022-05-10 18:24:48 I did search for it
2022-05-10 18:25:27 and someone else managed to find at least this one
2022-05-11 10:26:38 So apparently the error has to do with 'public' being restricted on our instance
2022-05-11 14:42:04 Someone from gitlab is working on a fix for the issue
2022-05-11 14:45:43 \o/
2022-05-11 17:53:31 Ariadne told me she cannot push to main anymore
2022-05-11 17:54:02 can we just delete this stupid hook
2022-05-11 17:54:10 we either trust people to push to alpine or we don't
2022-05-11 17:54:38 i guess that 1. makes sense 2. would simplify things
2022-05-11 17:56:53 what is enforcing that limitation?
2022-05-11 18:00:29 I also have that. "You are not allowed to push to the main repository. Try again."
2022-05-11 18:01:33 but it seems like i can press merge there when the pipeline finishes
2022-05-11 18:05:28 ikke: can we just delete this? i don't think it really gets us anything
2022-05-11 18:06:10 like i understand the theory behind it -- graduate people up to having main access -- but realistically, it just blocks people from getting work done
2022-05-11 18:06:39 and now it is blocking anyone from getting *anything* done
2022-05-11 18:06:51 does it? it seems to work for me now?
2022-05-11 18:06:54 not sure what is happening
2022-05-11 18:06:59 i saw the error message
2022-05-11 18:07:52 but after a refresh or something it disappeared?
2022-05-11 18:08:52 merge something, then :p
2022-05-11 18:09:10 i vote for !33751
2022-05-11 18:10:26 nope it does not work for me
2022-05-11 18:12:11 weird. I wonder how it is supposed to work. and what enforces it
2022-05-11 18:12:29 shell script with some greps and a magic file
2022-05-11 18:14:44 i think if we want to have this, we should do it as separate projects in gitlab
2022-05-11 18:14:55 instead of something ported from gitolite days
2022-05-11 18:15:06 but i am not convinced we really need it
2022-05-11 18:15:12 we should just get rid of it
2022-05-11 18:15:20 it's a point of failure that serves only a political purpose
2022-05-11 18:15:47 psykose: do you know how/where they are executed?
2022-05-11 18:15:49 from where
2022-05-11 18:16:00 gitlab runs it as a pre-commit hook
2022-05-11 18:16:02 some hook container on something i don't have access to
2022-05-11 18:19:48 /srv/docker/repositories/alpine-aports/custom_hooks/update
2022-05-11 18:20:31 Sorry, /srv/docker/gitlab/...
2022-05-11 18:21:54 can we just delete this damn thing
2022-05-11 18:22:51 it is political theatre and provides no actual security benefit to alpine -- if somebody wants to fuck up alpine with commit rights, they will just jump through the additional hoops to get main access anyway
2022-05-11 18:23:24 ikke: so it is not to be found from the webui?
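For readers unfamiliar with GitLab custom server hooks: the "shell script with some greps and a magic file" described above would be an update hook along these lines. This is a minimal sketch, not the real custom_hooks/update script; the ACL file name, its format and the branch patterns are invented for illustration, and it assumes GitLab's usual GL_USERNAME environment variable for the pushing user.

```sh
#!/bin/sh
# update hook: git passes the ref name plus the old and new revisions.
refname="$1"
oldrev="$2"     # unused in this sketch
newrev="$3"     # unused in this sketch
acl_file=/etc/aports-push-acl        # the hypothetical "magic file" of allowed users

case "$refname" in
refs/heads/master|refs/heads/*-stable)
	if ! grep -qx "$GL_USERNAME" "$acl_file"; then
		echo "You are not allowed to push to the main repository." >&2
		exit 1
	fi
	;;
esac
exit 0
```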
2022-05-11 18:23:29 in fact, right now, it is causing security problems for alpine
2022-05-11 18:23:35 because i cannot mitigate CVEs
2022-05-11 18:25:35 ncopa: correct
2022-05-11 18:26:05 where does this machine run?
2022-05-11 18:26:19 deu2-dev1
2022-05-11 18:26:26 Linode
2022-05-11 18:29:51 ok. i don't have login there
2022-05-11 18:33:10 Oof
2022-05-11 21:15:00 ikke: was it the hook that failed?
2022-05-12 04:48:55 more like 100%
2022-05-12 04:50:05 100% is more than 95% :P
2022-05-12 04:51:27 indeed so
2022-05-12 04:51:28 :3
2022-05-12 04:51:49 afaik most of the space is trimmed at the end of the run once the old go pkgs are trimmed
2022-05-12 04:51:58 just, well, 6 worlds at once is a big growth
2022-05-12 04:52:35 Yeah, at least in the current raid config, that box does not have enough space
2022-05-12 04:52:40 (lvm raid)
2022-05-12 04:57:55 also very fun, zabbix removed the unsupported tags from github, so 5.4 is gone for 3.15
2022-05-12 05:02:12 they have 'official' tarballs on the website, https://www.zabbix.com/download_sources#unsupported but they have dbschema missing or something
2022-05-12 05:04:15 they can be generated
2022-05-12 05:04:52 make dbschema
2022-05-12 05:06:32 that's what i mean :) it's undefined
2022-05-12 05:06:51 looking at the configure stuff, everything is gated behind #if DBSCHEMA
2022-05-12 05:06:58 and DBSCHEMA is set from a `test -d create`
2022-05-12 05:07:20 if you force make the directory, you get failures with `No rule to make target '../../../create/src/schema.tmpl', needed by 'dbschema.c'. Stop.`
2022-05-12 05:07:32 so, simply put, the create folder with the base things is not part of the tarball
2022-05-12 05:07:53 did you run autoreconf?
2022-05-12 05:08:00 https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/community/zabbix/APKBUILD#L98
2022-05-12 05:08:27 i mean.. the apkbuild is running it right there
2022-05-12 05:08:29 not doing this by hand
2022-05-12 05:08:33 ah ok
2022-05-12 05:09:23 idk why the 'official' tarballs are missing this
2022-05-12 05:10:25 oh, hm
2022-05-12 05:10:49 they are already built, that's why
2022-05-12 05:11:03 no need to run it at all, unless it's supposed to be updated
2022-05-12 05:12:36 Right, if you download it from github (source), you do need to do it, but the official tarballs already have it included
2022-05-12 05:13:00 mhm
2022-05-12 05:13:04 guess it can just be removed then
2022-05-12 05:13:07 But strange that it fails
2022-05-12 05:13:43 well, the rule is not generated without the base files being present
2022-05-12 05:13:51 they purposefully just remove them
2022-05-12 05:14:53 ah ok
2022-05-12 05:17:38 We do at least have the source we built with on distfiles
2022-05-12 05:18:55 are you doing a go upgrade?
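To summarize the zabbix discussion above in runnable form: regenerating the schema files is only needed when building from the git/GitHub source, because the official tarballs already ship the generated files and strip the create/ directory entirely. A sketch based on the conversation, not the exact APKBUILD steps:

```sh
# from a zabbix git/GitHub source checkout, where ./create still exists
autoreconf -fvi     # regenerate configure, as the APKBUILD does
./configure         # the dbschema rules are only emitted when `test -d create` succeeds
make dbschema       # generates dbschema.c and friends from create/src/schema.tmpl
```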
2022-05-12 05:18:57 it's fine to swap i think
2022-05-12 05:19:00 yeah, this was the last thing
2022-05-12 05:19:13 just double-checking a diff on the edge one
2022-05-12 05:19:23 5.4 is eol, will not receive any updates anymore
2022-05-12 05:19:24 removing make dbschema removes some files, now seeing if new source + remove is identical
2022-05-12 05:19:30 yeah, but it's a 3.15 rebuild
2022-05-12 05:19:36 ahuh
2022-05-12 05:19:58 I mean that there will be no other versions anymore anyway
2022-05-12 05:20:02 mhm
2022-05-12 05:22:29 it does have one more side effect
2022-05-12 05:22:51 some useless .gitignore folders are removed from usr/share/webapps/zabbix in zabbix-webif
2022-05-12 05:23:09 aha
2022-05-12 05:23:11 and all of usr/share/webapps/zabbix/tests/ is dropped, which leads to a 63->60 mb change
2022-05-12 05:23:21 you should change to the official tarball next upgrade
2022-05-12 05:23:42 The main reason I used source is that it's easier to test rc's
2022-05-12 05:23:48 yeah, it is
2022-05-12 05:24:03 in the past that was useful specifically for go and 32-bits
2022-05-12 05:24:22 and with that, both go's are now sent
2022-05-12 05:24:26 rip to diskspace
2022-05-12 05:25:56 i think there's some $HOME/go / .cargo cache to trim by now
2022-05-12 05:26:31 yeah
2022-05-12 05:32:39 150G should be enough for a while
2022-05-12 05:32:58 yeah
2022-05-12 05:33:00 for builder in build-3-16-* build-3-15-* build-edge-*; do rm -rf ./$builder/rootfs/home/buildozer/.cargo ./$builder/rootfs/home/buildozer/.cache/go-build ./$builder/rootfs/home/buildozer/.cache/yarn ./$builder/roo
2022-05-12 05:33:02 tfs/home/buildozer/go; done
2022-05-12 05:33:28 it's not that it wasn't enough, but there is that middle period where old packages aren't deleted yet, so unfortunate
2022-05-12 05:34:16 But we need to think about the next release
2022-05-12 05:34:32 i don't think it's possible without another machine
2022-05-12 05:34:49 At least this one still has unassigned space
2022-05-12 05:34:52 ah
2022-05-12 05:34:55 then sure
2022-05-12 05:35:03 there will be space freed up on a proper release
2022-05-12 05:35:10 I'm just conservative in assigning it
2022-05-12 05:35:10 er, no
2022-05-12 05:35:43 The rc's can be removed
2022-05-12 05:36:09 those are pretty small anyway
2022-05-12 05:36:12 yup
2022-05-12 07:05:43 ikke: ?
2022-05-12 07:11:14 re update hook?
2022-05-12 07:14:14 yup
2022-05-12 07:24:17 the hook was working, the acl list was outdated
2022-05-12 09:13:04 ah ok, thx for checking.
2022-05-12 09:16:44 rebooting deu1/deu7 for the usual kernel and things :)
2022-05-12 09:20:05 algitbot: ping
2022-05-12 09:21:19 ikke: we need to think about a plan to get those new t1 servers going
2022-05-12 09:21:29 or at least the new NL one
2022-05-12 09:31:47 You wanted to use zfs, right?
2022-05-12 09:47:01 we can, the disk setup is kind of optimal for zfs usage
2022-05-12 09:47:08 but it's not a must for me
2022-05-12 09:50:34 I don't mind
2022-05-12 09:50:47 I have no experience with it myself though
2022-05-12 09:51:50 let's change that :)
2022-05-12 09:52:12 i have played with it, just not that much.
2022-05-12 09:52:41 do we want to run docker on it?
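For readability, here is the builder cache cleanup loop pasted above, reassembled from the two wrapped IRC lines into one runnable command (paths exactly as given in the log; run from the directory holding the build-* containers):

```sh
for builder in build-3-16-* build-3-15-* build-edge-*; do
    rm -rf ./$builder/rootfs/home/buildozer/.cargo \
           ./$builder/rootfs/home/buildozer/.cache/go-build \
           ./$builder/rootfs/home/buildozer/.cache/yarn \
           ./$builder/rootfs/home/buildozer/go
done
```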
2022-05-12 09:57:07 good question, not sure
2022-05-12 10:32:32 ugh, from 150G free to 36G free in a couple of hours
2022-05-12 10:34:31 i'm curious where the space went, considering no actual new packages were added
2022-05-12 10:35:34 ~/go apparently
2022-05-12 10:36:05 hah
2022-05-12 10:36:09 well, that makes sense
2022-05-12 11:42:49 i might need some handholding to set up a CI job for alpine-conf. I'd like to install kyua and run `make test` for every MR
2022-05-12 11:43:19 afaik that is editing the .gitlab-ci.yml in the alpine-conf repo
2022-05-12 11:43:54 +- perhaps hitting a toggle somewhere
2022-05-12 11:44:26 maybe I need to create a docker image as well?
2022-05-12 11:44:31 then the rest is the gitlab ci pipeline thing, for which the apk-tools ones might be of use https://gitlab.alpinelinux.org/alpine/apk-tools/-/blob/master/.gitlab-ci.yml
2022-05-12 11:44:36 ehh, i don't think so
2022-05-12 11:44:47 apk just uses standard alpine:latest
2022-05-12 11:45:04 oh... that's nice
2022-05-12 11:45:23 even if you want to 'add stuff' into the container, unless it's building custom stuff upfront, it's not slow to apk add / run a bit of shell
2022-05-12 11:45:39 if you want to build a whole thing then yeah, a custom image would be useful to make it faster
2022-05-12 11:46:42 but from the sound of it, this is just running make test on some shell scripts, so just alpine should be fine
2022-05-12 11:48:40 If it's just a few deps, it's maybe easier to do it directly in .gitlab-ci.yaml. If you need a more elaborate setup, we can make an image
2022-05-12 11:48:48 yeah
2022-05-12 11:48:58 i guess I'll create an MR with it
2022-05-12 11:49:55 Don't forget to add tags to the jobs to select the runners
2022-05-12 11:50:16 you can probably copy the apk one and leave only the test stage and go from there
2022-05-12 11:57:38 Something like this?
2022-05-12 11:57:38 https://tpaste.us/qaNa
2022-05-12 11:57:38 $ tpaste < .gitlab-ci.yaml
2022-05-12 11:58:33 Note that each job starts with a clean tree by default
2022-05-12 11:59:59 so if the goal is to just run tests (that depend on built files), it's probably enough to have a single job that does both
2022-05-12 12:00:06 yeah
2022-05-12 12:01:03 Multiple stages can make sense if you want to avoid expensive builds by checking things in advance
2022-05-12 12:04:50 do I need to add runners for it? https://gitlab.alpinelinux.org/alpine/alpine-conf/-/merge_requests/65
2022-05-12 12:05:14 ncopa: you need to enable shared runners here: https://gitlab.alpinelinux.org/alpine/alpine-conf/-/settings/ci_cd
2022-05-12 12:05:46 under the runners section
2022-05-12 12:06:16 And then restart the job
2022-05-12 12:06:54 yes!
2022-05-12 12:07:57 :3
2022-05-12 12:09:29 sweet!
2022-05-12 12:09:38 something is broken in the kyua setup though
2022-05-12 12:10:05 i assume the test-env it sources doesn't exist or something
2022-05-12 12:10:08 is something supposed to make it?
2022-05-12 12:26:10 ah, the ole strace in the ci job :)
2022-05-12 12:26:47 :D
2022-05-12 13:01:37 i cannot figure out what it does and why it fails
2022-05-12 13:05:18 Can you reproduce it locally in a docker container?
2022-05-12 13:12:20 nope
2022-05-12 13:12:24 but I figured it out now
2022-05-12 13:12:49 i have a *.sh in .gitignore, so the tests/test_env.sh was never included in git
2022-05-12 13:13:04 aha
2022-05-12 13:13:06 hey i guessed that :)
2022-05-12 13:13:26 oh.. sorry i missed that while on lunch...
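The alpine-conf CI job sketched in the tpaste above essentially just needs an alpine:latest container and a script section that does the build and the kyua-based tests in one job, since each job starts from a clean tree. Roughly (package names are an assumption based on the discussion, not the exact pasted file):

```sh
# commands run inside the alpine:latest CI container (the job's script: section)
apk add --no-cache make kyua atf    # kyua test runner and its atf helpers
make                                # build the generated files the tests depend on
make test                           # run the testsuite, as requested for every MR
```

The job also needs the usual `tags:` entries so a shared docker runner picks it up, which is the "don't forget to add tags" remark above.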
2022-05-12 13:13:33 hehe
2022-05-12 13:13:37 hope it was as tasty as mine
2022-05-12 13:13:47 psykose: i definitely need to be better at listening
2022-05-12 13:19:24 \o/ it's now in prod!
2022-05-12 13:19:48 \/o
2022-05-12 13:19:58 and with that i'm off to bed
2022-05-12 13:20:21 o/
2022-05-12 13:34:24 good night!
2022-05-12 15:05:04 any idea why there are no runners for this MR? https://gitlab.alpinelinux.org/alpine/alpine-conf/-/merge_requests/64
2022-05-12 15:11:56 ncopa: did you enable shared runners after rebasing / starting the pipeline?
2022-05-12 15:12:12 no
2022-05-12 15:12:23 do i need to enable shared runners for the user's fork?
2022-05-12 15:12:36 Normally not, we don't need to do it for aports either
2022-05-12 15:12:45 And they are enabled
2022-05-12 16:35:22 clandmeter: What do you think about lifting the restriction on public repos on our instance? More than once it has created issues
2022-05-12 16:35:52 i think we have to be careful with it
2022-05-12 16:36:51 what would be the reason to do it?
2022-05-12 16:37:29 1) get rid of issues caused by it (apparently gitlab is not testing with this enabled a lot)
2022-05-12 16:38:06 2) psykose (and I assume others as well) get frustrated because they cannot rebase some MRs in certain cases
2022-05-12 16:39:48 that's related to permissions?
2022-05-12 16:39:49 it's also annoying because internal/private forks default to making MRs in their own repo (because alpine repos are public and gitlab is like: we need to protect you)
2022-05-12 16:40:59 clandmeter: Yes
2022-05-12 16:43:08 if psykose has issues with it, maybe she can help monitor abuse if we switch?
2022-05-12 16:46:10 abuse is the main concern here right?
2022-05-12 16:47:17 The primary concern was users starting to host their own personal projects, which is a liability for us
2022-05-12 16:47:45 We already have people agreeing to some terms regarding that
2022-05-12 16:48:43 does it impact CI usage?
2022-05-12 16:49:08 No, in the sense that people can already abuse CI with private projects
2022-05-12 16:49:23 nod
2022-05-12 16:49:56 i guess in the end it does not really matter?
2022-05-12 16:50:06 Yeah, I was thinking the same
2022-05-12 16:52:46 So should I proceed and remove it?
2022-05-12 16:53:04 sounds good to me
2022-05-12 16:53:22 and it would be good if we have somebody to keep an eye on it
2022-05-12 16:54:46 minimal: issues are visible again when not logged in
2022-05-12 16:55:04 clandmeter: I think you should be able to get ssh keys again now as well
2022-05-12 16:55:33 :)
2022-05-12 16:55:56 Pandora's box has opened
2022-05-12 17:40:29 ikke: good news :-)
2022-05-12 17:40:43 ikke: was it a bug or a feature?
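On the "enable shared runners for the fork" question above: besides the Settings > CI/CD toggle linked earlier, the same switch is exposed as a project attribute in the GitLab API. A hedged illustration only (the project path is taken from the fork linked later in the log, and $TOKEN is assumed to be a token with api scope):

```sh
curl --request PUT --header "PRIVATE-TOKEN: $TOKEN" \
  "https://gitlab.alpinelinux.org/api/v4/projects/dbradley%2Falpine-conf?shared_runners_enabled=true"
```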
2022-05-12 17:41:00 bug
2022-05-12 17:41:22 but triggers by having public repos restricted
2022-05-12 17:41:27 triggered*
2022-05-12 19:34:45 ncopa: I think for existing forks, you'd need to enable shared runners; new forks will have it enabled
2022-05-12 19:35:03 https://gitlab.alpinelinux.org/dbradley/alpine-conf/-/jobs/719708
2022-05-12 23:52:31 sure, i can watch gitlab if you want me to, though i'm unsure of what lets one do that aside from some admin ui
2022-05-12 23:53:01 it's all fine by me though, i don't mind
2022-05-13 09:02:48 psykose: did you create an MR for clamav 0.104.3?
2022-05-13 09:03:47 no, how come
2022-05-13 09:03:47 I doubt that 0.105.0 will be solved soon, it needs rust but we don't have rust on s390x and riscv64
2022-05-13 09:03:52 yep
2022-05-13 09:03:59 just thought i'd make it .3 since we had it in 3.15 already
2022-05-13 09:04:16 re: rust, imo it's fine to just disable it on those arches i guess
2022-05-13 09:04:25 psykose: ok, I will cherry-pick from 3.15, I upgraded clamav there a few days ago
2022-05-13 09:04:26 and perhaps in the next 6 months we will have rust there, finally :)
2022-05-13 09:04:40 mps: i mean that i already merged 0.104.3 into edge
2022-05-13 09:04:51 ah, good
2022-05-13 09:05:56 'perhaps' sounds like infinite ;)
2022-05-13 09:06:45 well, for s390x it's guaranteed
2022-05-13 09:06:52 or there will not be s390x anymore :p
2022-05-13 09:06:58 for riscv i guess it is more infinite..
2022-05-13 09:07:11 but it already builds, just need to figure out some internal issues, hopefully upstream helps
2022-05-13 09:08:34 (I'm not sure we need s390x (and ppc64le) but ...)
2022-05-13 09:22:43 ppc64le at least usually works :)
2022-05-13 09:22:52 (until binutils or go or whatever else magically stops working..)
2022-05-13 09:54:28 anything that has ring as a dependency: am I a joke to you?
2022-05-13 17:19:12 what do y'all use for arm builders, and where did you get them?
2022-05-13 17:29:45 zv: it is one neoverse-n1 machine, and the builders for armhf, armv7 and aarch64 are 3 lxc containers
2022-05-14 07:44:38 zv: arm sponsors us a machine
2022-05-14 07:45:19 netbox upgraded to 3.2.3
2022-05-14 14:34:20 ikke: do you have an account manager there / would you be in a position to share their contact info?
2022-05-16 06:44:25 zv: do you need arm64 resources for something?
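The "just disable rust on those arches" idea mentioned above would look roughly like the usual per-arch conditional in an APKBUILD. This is a hypothetical fragment, not the actual community/clamav APKBUILD, and the build option name is a placeholder since the real flag is not named in the log:

```sh
# hypothetical APKBUILD fragment: only pull in the rust toolchain on arches
# that have it, and pass an option (placeholder name) to skip the rust parts
# on s390x and riscv64, which lack rust for now.
case "$CARCH" in
s390x|riscv64) _rust_opt="-DENABLE_RUST=OFF" ;;
*)             makedepends="$makedepends cargo rust" ;;
esac
```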