2021-07-01 08:23:12 ikke: looks like bundler is broken
2021-07-01 09:42:03 clandmeter: ^
2021-07-01 09:42:19 nod
2021-07-01 10:35:42 ikke: i think i have a fix
2021-07-01 10:36:18 ah, nice
2021-07-01 10:39:22 the problem is gitaly
2021-07-01 10:40:16 the offending commit is: https://gitlab.com/gitlab-org/gitaly/-/commit/365674695fcbc1ac6332cb8bbc646441e1e0f15a
2021-07-01 10:42:24 deployment is the issue
2021-07-01 10:43:05 need to look how to properly manage it
2021-07-01 10:43:13 but for now i just set it to false
2021-07-01 10:43:17 ill push 13.12
2021-07-01 10:52:07 ok
2021-07-01 10:54:57 ikke: https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/pipelines/86080
2021-07-01 10:55:06 🀞
2021-07-01 10:55:37 btw: https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/issues/7
2021-07-01 10:55:51 That's something we need to change
2021-07-01 10:56:15 yes
2021-07-01 10:56:25 thats what i mean
2021-07-01 10:56:29 we need to look at bundler
2021-07-01 10:56:32 how to manage it
2021-07-01 10:56:37 there are some changes also in docs
2021-07-01 10:58:10 its a bit tricky
2021-07-01 10:58:27 in principle we need to use bundler config whatever
2021-07-01 10:58:40 but it seems that ie gitaly does not honor those settings
2021-07-01 10:58:57 it will only look at those env vars
2021-07-01 10:59:14 right, so they are doing some custom things?
2021-07-01 10:59:59 yes
2021-07-01 11:00:06 we are actually doing a hack
2021-07-01 11:00:13 and they unhack us :)
2021-07-01 11:00:37 in short
2021-07-01 11:00:40 :D
2021-07-01 11:00:41 from what i understand
2021-07-01 11:01:02 in production you need to set deployment = true
2021-07-01 11:01:18 and it will bundle the deps per bundle into its own bundle
2021-07-01 11:02:03 its the normal world nowadays
2021-07-01 11:02:21 copy a gazillion similar deps everywhere
2021-07-01 15:57:44 clandmeter: build passed
2021-07-01 17:49:05 oh
2021-07-01 18:56:15 clandmeter: as you might have noticed, I started restoring an up-to-date version of gitlab to gitlab-test
2021-07-01 18:56:28 yup
2021-07-01 18:57:20 was the update already pushed to github?
2021-07-01 18:57:30 github?
2021-07-01 18:57:53 we still build gitlab with drone, right?
2021-07-01 18:58:06 build to push to hub
2021-07-01 18:58:12 we do?
2021-07-01 18:58:15 yes
2021-07-01 18:58:31 You felt that we needed to be able to build gitlab outside of gitlab
2021-07-01 18:58:40 ah ok
2021-07-01 18:58:43 we also build on gitlab
2021-07-01 18:58:47 yes, just as CI
2021-07-01 18:58:48 https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/jobs/427221
2021-07-01 18:58:52 ah ok
2021-07-01 18:58:53 I've added that
2021-07-01 18:58:59 yeah
2021-07-01 18:59:06 If you want, I can also let it push to hub
2021-07-01 18:59:16 it was pushed
2021-07-01 18:59:20 as it shows on hub
2021-07-01 18:59:23 ok
2021-07-01 18:59:29 but i had no idea it was via github
2021-07-01 18:59:33 too long ago :)
2021-07-01 18:59:37 yeah, I see the 13.12 branch there
2021-07-01 18:59:58 ok, then I'll upgrade gitlab-test as soon as it's ready
2021-07-01 19:00:29 although, wasn't 13.12 the version that Thalheim mentioned?
2021-07-01 19:01:30 13.12.5 definitely works, and 14.x works too if y'all want to upgrade to latest
2021-07-01 19:01:42 ah
2021-07-01 19:01:44 good
2021-07-01 19:28:35 clandmeter: acme-client is failing on git.alpinelinux.org (which updates distfiles.a.o)
2021-07-01 19:28:54 thats the old client?
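For reference, a minimal sketch of the two ways to pin bundler's deployment mode discussed above; per the conversation, gitaly's build tooling only honors the environment variable, not the config file:

```sh
# Bundler maps config keys to BUNDLE_<NAME> environment variables; this is
# the form gitaly's tooling picks up (value here matches the workaround above):
export BUNDLE_DEPLOYMENT=false   # true would be the usual production setting

# The config-file form, which gitaly reportedly ignores:
bundle config set --local deployment false
```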
2021-07-01 19:29:14 i think that one is no longer supported
2021-07-01 19:29:22 ah ok
2021-07-01 19:29:25 you will need something like uacme
2021-07-01 19:29:36 or use our wildcard
2021-07-01 19:29:42 yeah, was wondering about that
2021-07-01 19:29:50 there is already a script which fetches the certs from cert.a.o
2021-07-01 19:30:09 right, so maybe the certs are already on the server?
2021-07-01 19:31:32 yes
2021-07-01 19:32:00 and up-to-date
2021-07-01 19:33:01 simple life :)
2021-07-01 19:33:11 heh
2021-07-01 19:33:13 easy i mean...
2021-07-01 19:33:36 that's better
2021-07-01 19:33:48 {"days_left": 56, "hours_left": 1350, "end_date": "Aug 27 02:00:13 2021 GMT"}
2021-07-01 19:34:32 :)
2021-07-01 19:34:41 weird that it happens this late
2021-07-01 19:34:51 i think i had this error on another one a few months ago
2021-07-01 19:35:36 probably just updated right before support was dropped
2021-07-01 19:38:02 distfiles was the only one left not using the wildcard yet
2021-07-01 19:38:33 good that we monitor it :P
2021-07-01 19:38:44 clandmeter: I have bad news for you :P
2021-07-01 19:38:59 ACTION hides
2021-07-01 19:39:05 GitLab Security Release 14.0.2, 13.12.6, and 13.11.6
2021-07-01 19:40:35 cooking
2021-07-01 19:43:53 rsync is hanging on ppc64le
2021-07-01 19:44:21 should we send that email?
2021-07-01 19:46:02 clandmeter: I suppose
2021-07-01 19:48:46 \o/
2021-07-01 19:52:17 I noticed on the main alpinelinux.org homepage the "Latest Development" that is supposed to show the last few merges to aports hasn't changed since 15th June
2021-07-01 19:52:26 minimal: oh
2021-07-01 19:53:26 I did change something to fix it responding to updates from gitlab, maybe I broke this part
2021-07-01 19:55:42 ikke: the Microsoft approach of a new release bringing new features and also breaking existing stuff? ;-) What he gives with one hand he takes away with the other lol
2021-07-01 19:55:58 right, learn from the best, they say, right?
2021-07-01 19:56:25 so Alpine's going to require a TPM in the next release? lol
2021-07-01 19:57:17 And a CPU that is 14 weeks old
2021-07-01 19:57:20 2 weeks*
2021-07-01 19:57:59 14 weeks old? so like 5 Intel microcode updates behind for security issues? :-)
2021-07-01 19:58:03 :D
2021-07-01 20:01:54 I think it should be fixed now
2021-07-01 20:02:01 waiting for the next commit
2021-07-01 20:05:32 ikke: did you break the internet?
2021-07-01 20:05:44 as usual
2021-07-01 20:05:48 https://build.alpinelinux.org/
2021-07-01 20:05:57 ssssht
2021-07-01 20:06:16 it means you are not active enough
2021-07-01 20:06:51 thanks for the notice on the 14.0.2 and 13.12.6 releases; we've upgraded to 14.0.2 now.
2021-07-01 20:07:23 Thalheim: You can subscribe to security e-mails from gitlab
2021-07-01 20:07:28 That's how I knew
2021-07-01 20:07:47 i told you, give me the credit :p
2021-07-01 20:07:50 :D
2021-07-01 20:07:56 ACTION applauds both of you
2021-07-01 20:08:32 Thalheim: who is we if i may ask?
2021-07-01 20:09:25 AdΓ©lie
2021-07-01 20:09:28 ah
2021-07-01 20:09:34 aha :)
2021-07-01 20:09:58 do you use omnibus?
2021-07-01 20:10:22 they did before
2021-07-01 20:10:38 I underwent the excruciating process of migrating from a source-built omnibus situation to the dockerized version, needing to hit all minor releases along the way :)
2021-07-01 20:10:56 no more gem incompatibility and LD_PRELOAD hacks
2021-07-01 20:11:02 heh
2021-07-01 20:11:11 you build your own docker images?
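A hedged sketch of the kind of check behind the JSON status quoted above, reading the certificate's notAfter date with openssl (the host name and output format are assumptions):

```sh
# Read the cert's end date from the live endpoint and compute days left;
# note that parsing this date format with 'date -d' is GNU date behaviour.
host=distfiles.alpinelinux.org
end_date=$(echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
    | openssl x509 -noout -enddate | cut -d= -f2)
now=$(date +%s)
end=$(date -d "$end_date" +%s)
echo "{\"days_left\": $(( (end - now) / 86400 )), \"end_date\": \"$end_date\"}"
```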
2021-07-01 20:11:23 no, using gitlab/gitlab-ce:tag ones
2021-07-01 20:11:36 yeah, I expected that, since you updated so quickly
2021-07-01 20:11:48 those are still ubuntu based i guess?
2021-07-01 20:12:01 minimal: fixed, thanks for reporting
2021-07-01 20:12:03 I think so. tbh don't remember. I can share my notes on the transition and all the steps later this evening if that would be helpful.
2021-07-01 20:12:12 We have our own docker images
2021-07-01 20:12:15 alpine based
2021-07-01 20:12:21 we eat our own dogfood :)
2021-07-01 20:12:36 i kill our headaches with paracetamol :)
2021-07-01 20:12:41 lol
2021-07-01 20:12:54 its not that bad
2021-07-01 20:13:11 just some minor breakage from time to time
2021-07-01 20:13:27 the upgrade itself is really simple, but we test it before we deploy it to production
2021-07-01 20:13:44 breaking from our previous policy, dogfood is the goal but not the requirement if it's impeding progress.
2021-07-01 20:14:11 true
2021-07-01 20:14:20 but the nice thing is, you know gitlab a bit better.
2021-07-01 20:14:43 so if shit hits the fan, you know how to take it apart :)
2021-07-01 20:14:46 I like to work on antique cars but I don't want to drive them to work
2021-07-01 20:14:49 and, except for security issues, having the latest and greatest gitlab version is not that important
2021-07-01 20:15:31 i remember adelie was using an old version for a longer time
2021-07-01 20:16:06 but they dont support sec updates for older than a few versions
2021-07-01 20:16:18 3 versions
2021-07-01 20:21:55 ikke: gitlab is uploaded
2021-07-01 20:22:01 clandmeter: thanks
2021-07-01 20:22:18 sadly, the backup restore takes quite a long time for some reason
2021-07-01 20:22:26 its still running?
2021-07-01 20:22:29 yes
2021-07-01 20:22:33 hmm
2021-07-01 20:22:48 I understood they restored things on file level
2021-07-01 20:23:02 maybe too many files?
2021-07-01 20:23:05 yeah
2021-07-01 20:23:13 I already pruned a lot of empty dirs before
2021-07-01 20:24:51 find . -type d -depth -empty | wc -l
2021-07-01 20:24:53 104224
2021-07-01 20:24:55 oof
2021-07-01 20:26:57 I guess we should prune that again
2021-07-01 20:55:51 how far is that backup restore?
2021-07-01 20:56:50 8%? say what?
2021-07-01 20:57:29 blaboon: is it common for the restore to be slow like this?
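The prune mentioned above can reuse the same find invocation, roughly like this:

```sh
# Count the empty directories first, then remove them; -depth processes
# children before parents, so directories that only contained empty
# directories also get removed in the same pass.
find . -type d -depth -empty | wc -l
find . -type d -depth -empty -delete
```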
2021-07-01 22:17:09 that backup job does appear to be going unusually slow, although it is making progress
2021-07-01 22:18:19 sometimes we see this happen with backups that have a very large number of files in them, like on the order of millions
2021-07-02 04:32:45 df -i -> 3.855.578
2021-07-02 07:03:50 ikke: i pushed some changes to gitlab
2021-07-02 07:03:56 specifically the ones you mentioned
2021-07-02 10:57:18 clandmeter: "and display steps": \o/
2021-07-02 11:20:40 clandmeter: https://gitlab-test.alpinelinux.org is now 13.12.6
2021-07-02 13:04:26 500 https://gitlab-test.alpinelinux.org/alpine/infra/alpine-mksite
2021-07-02 13:04:57 most or all repos are 500
2021-07-02 13:06:57 as well as issues and MRs https://gitlab-test.alpinelinux.org/groups/alpine/-/merge_requests
2021-07-02 13:10:14 maybe because it is just some test bed and not an exact copy of the orig repo, dunno
2021-07-02 13:58:47 It is
2021-07-02 13:59:23 ^ Thalheim
2021-07-02 13:59:44 I mean, it is a copy
2021-07-02 13:59:50 ah
2021-07-02 14:00:15 then I see 500 too :P
2021-07-02 14:00:20 We restore a snapshot of the gitlab server
2021-07-02 14:00:53 I already saw test failures in our acceptance tests
2021-07-02 15:16:54 seems like grpc issues
2021-07-02 17:21:18 time="2021-07-02T17:21:03Z" level=fatal msg="unsupported Git version: \"2.30.2\""
2021-07-02 17:21:21 sigh
2021-07-02 17:21:33 gitaly is failing to start
2021-07-02 17:27:03 using git from v3.14 fixes the issue
2021-07-02 17:27:07 clandmeter: ^
2021-07-02 17:30:26 Huh
2021-07-02 17:30:36 gitaly requires git >2.31.0
2021-07-02 17:30:43 Yes
2021-07-02 17:30:51 We build it ourselves?
2021-07-02 17:30:58 https://gitlab.com/gitlab-org/gitaly/-/tree/13-12-stable
2021-07-02 17:31:06 build what ourselves?
2021-07-02 17:31:11 Git
2021-07-02 17:31:12 no
2021-07-02 17:31:18 the image is using alpine 3.13
2021-07-02 17:31:26 which has git 2.30.2
2021-07-02 17:31:47 I thought we already build it locally
2021-07-02 17:32:04 apk add .. git
2021-07-02 17:32:13 Nod
2021-07-02 17:32:26 https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/blob/13.7-stable/overlay/usr/local/bin/setup.sh#L39
2021-07-02 17:32:42 Right
2021-07-02 17:33:31 So do we want to build git?
2021-07-02 17:33:37 But the docs say to build it manually
2021-07-02 17:33:46 I think I mentioned it before
2021-07-02 17:34:03 Probably, we talked about it the previous time it came up
2021-07-02 17:34:17 I can see if I can add it in
2021-07-02 17:34:52 clandmeter: do you know which docs?
2021-07-02 17:35:17 Install from source docs
2021-07-02 17:36:17 https://docs.gitlab.com/ee/install/installation.html#installation-from-source
2021-07-02 17:36:44 https://docs.gitlab.com/ee/install/installation.html#git
2021-07-02 17:36:57 they have their own fork apparently
2021-07-02 17:37:52 the difficulty is that we need git to fetch the source of a lot of things as well, so we need to make sure we build git before that
2021-07-02 17:40:11 I don't think it matters
2021-07-02 17:40:30 what?
2021-07-02 17:40:31 Just uninstall git after
2021-07-02 17:41:48 System git I mean
2021-07-02 17:43:00 We need to change gitlab to point to the built git version
2021-07-02 17:43:17 Yup
2021-07-02 17:43:33 This has changed recently
2021-07-02 17:43:59 ?
2021-07-02 17:44:21 Their own version
2021-07-02 17:44:39 It used to just need a very recent one
2021-07-02 17:45:21 13.11 changed it to 2.31
2021-07-02 17:46:26 so I move git from runtime to buildtime
2021-07-02 17:47:43 Nod
2021-07-02 17:47:55 Maybe you need some deps to build it
2021-07-02 17:48:23 yes
2021-07-02 17:48:51 I just wonder how git actually gets compiled, I cannot really find it in the gitaly Makefile
2021-07-02 17:50:35 I thought I saw something like that in the logs. Is that possible?
2021-07-02 17:50:57 `sudo make git GIT_PREFIX=/usr/local`
2021-07-02 17:50:59 really
2021-07-02 17:51:23 clandmeter: like what?
2021-07-02 17:51:41 Libgit
2021-07-02 17:51:48 Could be done gem
2021-07-02 17:51:54 Some
2021-07-02 17:52:01 libgit2 is not git
2021-07-02 17:52:47 I'm on mobile so can't check
2021-07-02 17:53:01 nod
2021-07-02 17:59:47 We need to change the bin_path in gitlab.yml
2021-07-02 19:00:39 Hmm, gitaly is not installing git
2021-07-02 19:01:09 oh, my bad
2021-07-02 19:40:04 got it working?
2021-07-02 19:42:44 not yet
2021-07-02 19:42:50 but it's building git now
2021-07-02 20:35:25 clandmeter: btw, the /var/lib/docker directory contains 3m+ files/directories
2021-07-02 20:35:31 wondering why so many
2021-07-02 20:36:00 uhm
2021-07-02 20:36:05 docker volume?
2021-07-02 20:36:39 only 10k
2021-07-02 20:39:48 overlay2
2021-07-02 20:39:58 3652924
2021-07-02 21:06:48 ikke: so the files are in docker but not in the container?
2021-07-02 21:07:04 I don't know
2021-07-02 21:08:24 Currently fixing git in the container
2021-07-02 21:08:29 was missing runtime dependencies
2021-07-02 21:09:37 the script didnt find them?
2021-07-02 21:11:21 I added the build-time dependencies, but they got removed after build
2021-07-02 21:11:30 I'm used to abuild tracing the runtime dependencies :P
2021-07-02 21:13:15 but there is a script included
2021-07-02 21:13:17 it should do that
2021-07-02 21:13:25 ah
2021-07-02 21:13:28 i know
2021-07-02 21:13:39 it only scans the ruby path
2021-07-02 21:13:44 :)
2021-07-02 21:13:57 gemdeps
2021-07-02 21:14:11 yes
2021-07-02 21:14:21 well we could rename it and add paths as options
2021-07-02 21:15:15 there are just 3 dependencies
2021-07-02 21:38:44 clandmeter: now it's happy
2021-07-02 21:38:57 Do you think it's worth it to change the script?
2021-07-02 21:44:33 pushed it
2021-07-03 09:57:08 clandmeter: gitlab-test is now running the latest image (with the compiled git version)
2021-07-03 10:19:31 \o/
2021-07-03 10:22:32 It's so nice to have a test environment to verify these things
2021-07-03 10:39:14 clandmeter: so I'm thinking about upgrading gitlab this evening
2021-07-03 10:39:38 fine by me, but im probably not around :)
2021-07-03 10:41:00 ok
2021-07-03 14:01:40 clandmeter: ^
2021-07-03 14:01:44 we lost usa5 it seems
2021-07-03 15:56:16 hmm, the host is responding
2021-07-03 15:56:55 seems like memory issues, I do not get a prompt after login
2021-07-03 16:13:35 clandmeter: I had to force reboot usa5.
2021-07-03 16:53:41 sigh
2021-07-03 22:26:58 blaboon: a message from linode about a trusted device was marked as 'could not be verified it came from the sender' in gmail for me (and ended up in the spam folder)
2021-07-03 22:28:20 "Gmail could not verify that it actually came from linode.com. Avoid clicking links, downloading attachments, or replying with personal information."
2021-07-03 22:28:31 With a large orange banner
2021-07-03 23:22:15 hmm, i just got one of those emails myself a few days ago that passed SPF, although i'm not using gmail.
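A rough sketch of moving git to build time as discussed above, using gitaly's make target; the package lists and paths here are assumptions, and the runtime libraries have to be re-added explicitly, since (unlike abuild) nothing traces them automatically in this setup:

```sh
# Build gitaly's bundled git fork and install it under /usr/local.
apk add --no-cache --virtual .git-build \
    build-base curl-dev expat-dev openssl-dev pcre2-dev zlib-dev
make -C /home/git/gitaly git GIT_PREFIX=/usr/local

# Dropping the build deps also drops shared libs git needs at runtime,
# so re-add those explicitly (this is the step that was initially missed).
apk del .git-build
apk add --no-cache libcurl expat pcre2 zlib
```

After this, gitlab.yml's bin_path has to point at the built /usr/local/bin/git, as noted in the conversation.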
and i don't think we've made any recent changes to our policy
2021-07-03 23:22:38 would you mind forwarding that along to support@linode.com? they'll be able to check it and escalate if needed
2021-07-04 03:21:37 hey y'all! quick question: how exactly is https://hub.docker.com/_/alpine?tab=tags&page=1&ordering=last_updated maintained? Specifically the edge tag. What's the schedule of it being pushed?
2021-07-04 06:40:17 rhatr: there is no regular schedule
2021-07-04 06:41:18 rhatr: ncopa needs to create a snapshot tag, and then make a pull request to get the image updated
2021-07-04 17:13:32 the aarch64 CI seems to have run out of disk space https://gitlab.alpinelinux.org/alpine/aports/-/jobs/429603
2021-07-04 17:13:34 > mkdir: can't create directory 'keys/': No space left on device
2021-07-04 17:13:44 ah, algitbot already knows!
2021-07-04 17:14:37 ceph...
2021-07-04 17:16:02 right now 37G is available
2021-07-04 17:16:51 CI has little to no persisted data
2021-07-04 17:17:51 for some reason, ceph likes to use a lot of space to build
2021-07-04 17:22:55 ah!
2021-07-04 18:58:14 ikke: got it! so... what would it take to start publishing riscv64? convincing ncopa: ?
2021-07-04 19:00:02 rhatr: you cannot
2021-07-04 19:00:13 docker does not support rv64
2021-07-04 19:00:22 in their infra
2021-07-04 19:00:31 afaik
2021-07-04 19:01:31 well it very much does everywhere else
2021-07-04 19:02:14 docker needs hardware to support the architecture.
2021-07-04 19:02:17 Mac OS, Windows, Linux -- any docker older than 19.x supports it pretty well (on Linux -- obviously -- it always requires qemu-static -- but that's for everything)
2021-07-04 19:02:29 there is an issue on github about it
2021-07-04 19:02:32 rhatr: docker needs to have hardware before they can publish official images in the library
2021-07-04 19:02:36 you can push images yourself
2021-07-04 19:02:43 but for official images, they need to build it
2021-07-04 19:03:05 wait... that was my entire question to begin with -- who builds it
2021-07-04 19:03:14 rhatr: https://github.com/alpinelinux/docker-alpine/pull/148
2021-07-04 19:03:25 rhatr: docker builds the official image
2021-07-04 19:04:05 ah! so it ain't ncopa:
2021-07-04 19:04:21 He provides the pull requests, but the actual image is built on docker infra
2021-07-04 19:04:24 (given the answer above I thought he physically is in charge of building and pushing the image)
2021-07-04 19:04:47 got it -- it makes it MUCH clearer now -- let me talk to a friend of mine who happens to be a CTO at Docker Inc. :-)
2021-07-04 19:05:23 :D
2021-07-04 19:07:00 rhatr: https://github.com/docker-library/official-images
2021-07-04 19:07:15 https://github.com/docker-library/official-images/blob/master/library/alpine
2021-07-04 19:11:25 clandmeter: I wonder if we should publish an alpinelinux/alpine image for riscv64
2021-07-04 19:13:58 makes sense
2021-07-04 19:15:05 rhatr: https://github.com/docker-library/official-images/issues/8794
2021-07-04 19:15:05 clandmeter: fyi, forking projects is broken atm
2021-07-04 19:15:12 wut?
2021-07-04 19:15:23 what happened?
2021-07-04 19:15:33 bug?
2021-07-04 19:15:37 appears so
2021-07-04 19:15:41 NoMethodError
2021-07-04 19:15:59 hmm
2021-07-04 19:16:00 https://github.com/docker-library/official-images/blob/master/library/alpine
2021-07-04 19:16:03 https://tpaste.us/Bgeb
2021-07-04 19:16:06 ruby is missing something?
2021-07-04 19:16:11 oh, nilclass
2021-07-04 19:16:42 there is a ServiceCreator that should return a Project
2021-07-04 19:16:49 but apparently it returns nil
2021-07-04 19:25:46 sounds scary
2021-07-04 19:34:03 rhatr: https://twitter.com/clandmeter/status/1411770239637983236
2021-07-04 19:35:23 clandmeter: Could not find anything in the bugtracker
2021-07-04 20:12:54 Not sure what to do with it
2021-07-04 20:13:08 Not sure how to debug that code
2021-07-04 20:41:42 ikke: i can take a look tomorrow
2021-07-04 20:42:33 if i cant find it maybe ping somebody from gitlab
2021-07-04 20:43:13 nod
2021-07-05 07:18:44 logs are a bit difficult to read when we get hit that much
2021-07-05 07:19:45 I use the json logs with jq
2021-07-05 07:20:25 And tail
2021-07-05 07:20:40 https://tpaste.us/Qr44
2021-07-05 07:20:44 thats it i guess
2021-07-05 07:22:14 Yes, what I already posted yesterday
2021-07-05 07:22:44 @project somehow is nil
2021-07-05 07:27:47 how did you come to that conclusion?
2021-07-05 07:38:28 The error message, combined with the source code
2021-07-05 07:39:02 It tries to call persisted? on a nil object
2021-07-05 07:39:38 NoMethodError (undefined method `persisted?' for nil:NilClass):
2021-07-05 07:39:40 app/services/projects/fork_service.rb:35:in `fork_new_project'
2021-07-05 07:42:30 ah ok you already did some research
2021-07-05 07:45:25 new_project = CreateService.new(current_user, new_fork_params).execute
2021-07-05 07:45:25 return new_project unless new_project.persisted?
2021-07-05 07:45:44 why would CreateService.new(...).execute return nil?
2021-07-05 07:47:59 That's what I was wondering
2021-07-05 07:49:55 ikke: shot in the dark - try changing visibility of the project
2021-07-05 07:50:02 i've seen a vaguely familiar bug before
2021-07-05 07:51:13 related: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/42607
2021-07-05 07:51:15 It happ
2021-07-05 07:51:49 clandmeter: looks like a different issue considering the error message in that one, but it could be related
2021-07-05 07:52:00 yes just related
2021-07-05 07:52:07 could be visibility related
2021-07-05 07:52:18 i tried to switch, but didnt fix it
2021-07-05 07:52:28 also read something about caching
2021-07-05 07:52:32 i'm firing up emacs to grep around a bit
2021-07-05 07:52:44 maybe we need to clear cache where possible
2021-07-05 07:56:35 danieli: you are still using thelounge?
2021-07-05 07:56:41 clandmeter: yup
2021-07-05 07:57:02 i think the new version will support searching (if i read the commits correctly)
2021-07-05 07:57:12 history search
2021-07-05 07:57:26 that's neat, i've checked third party logs or scrolled up and hit ctrl+f
2021-07-05 07:58:29 ikke: we have some specific restriction settings right?
2021-07-05 08:01:07 it's a shame the traceback says nearly nothing about what's interesting, it's all about the error boiling up through the web stack
2021-07-05 08:02:12 ikke: did you report this upstream?
2021-07-05 08:11:44 Not yet
2021-07-05 09:17:30 ikke: is inbound email still working?
2021-07-05 09:19:21 No idea?
2021-07-05 09:28:55 ikke: https://tpaste.us/ovQn
2021-07-05 09:32:54 does the api allow forking?
2021-07-05 09:46:59 ikke, danieli: https://gitlab.com/gitlab-org/gitlab/-/issues/335187
2021-07-05 09:49:14 πŸ‘
2021-07-05 09:58:36 clandmeter: good
2021-07-05 10:04:08 I hope we get a timely response
2021-07-05 12:03:29 ikke: something is wrong with gitlab pull
2021-07-05 12:04:13 You mean the delay?
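The json-logs-plus-jq approach mentioned above, as a sketch; the log path and field names assume gitlab's standard production_json.log layout:

```sh
# Tail the rails JSON log and show only failing requests.
tail -f /home/git/gitlab/log/production_json.log \
    | jq -r 'select(.status >= 500) | "\(.time) \(.method) \(.path) \(.status)"'
```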
2021-07-05 12:04:17 yes
2021-07-05 12:04:31 there was no delay over https
2021-07-05 12:04:45 but now its taking a lot of time to start fetching
2021-07-05 12:07:03 when i noticed the delay over ssh, i switched my pull logic to https
2021-07-05 15:21:26 ikke, ncopa: did you guys see the ppc emails?
2021-07-05 15:21:32 yes
2021-07-05 15:21:35 wonder if they actually fixed anything?
2021-07-05 15:22:18 ikke: i think you already played with MTU
2021-07-05 15:22:27 yes
2021-07-05 15:31:40 ikke: please test it and let me know. ill try to keep pushing them.
2021-07-05 15:31:58 Need to wait until the builder is idle
2021-07-05 15:32:14 Can check 3-14-ppc64le
2021-07-05 15:32:32 nod
2021-07-05 15:33:27 ikke: maybe we should test the fork issue on our test instance
2021-07-05 15:33:47 yes, that's what I was doing
2021-07-05 15:33:56 It fails there as well
2021-07-05 15:34:04 if we dont get feedback, we could try upgrading to 14
2021-07-05 15:34:08 Just wondering how to get debug output
2021-07-05 15:34:23 hmm, all stable builders are stopped
2021-07-05 15:35:25 same for edge
2021-07-05 15:36:03 same what?
2021-07-05 15:36:18 lxc containers stopped
2021-07-05 15:36:42 oh you mean REALLY stopped
2021-07-05 15:37:03 build-3-14-ppc64le STOPPED
2021-07-05 15:37:09 powered off, if thats a thing for containers :)
2021-07-05 15:37:17 heh
2021-07-05 15:37:35 in my defense, lxc calls it stopped :P
2021-07-05 15:38:13 i guess they dont auto start on boot?
2021-07-05 15:38:18 yeah
2021-07-05 15:38:46 I guess we need to add autostart
2021-07-05 15:39:16 lxc.start.auto = 1
2021-07-05 15:39:20 that's missing
2021-07-05 15:39:31 i would assume so
2021-07-05 15:39:56 It's the case :P
2021-07-05 15:40:17 bad management of lxc containers ;-)
2021-07-05 15:40:36 agreed
2021-07-05 15:41:44 added it to all containers now
2021-07-05 15:42:16 prefer bad config instead of bad networking...
2021-07-05 15:42:41 bad config we can fix
2021-07-05 15:44:20 'pulling git' takes a long time
2021-07-05 15:46:03 that was the same for me
2021-07-05 15:46:28 for me it's finished within 10 secs
2021-07-05 15:47:28 now its also fast for me
2021-07-05 15:47:29 weird
2021-07-05 15:57:26 still showing that
2021-07-05 15:57:43 hmm, even though the process is gone
2021-07-05 18:19:31 clandmeter: ok, doing what rafael said actually helps
2021-07-05 18:19:37 mtu 1450, and git continues
2021-07-05 19:01:29 clandmeter: I've synced 3.14 up now (to get rid of the files in .~tmp~), so we'll see if that comes back
2021-07-05 19:47:12 Nice
2021-07-05 19:47:42 waiting for edge to finish as wel
2021-07-05 19:47:44 well
2021-07-05 20:18:37 not much activity from gitlab
2021-07-06 06:07:49 I freed up space on usa2
2021-07-06 10:19:39 clandmeter: syncing ppc64le community now, and it seems it still hangs randomly
2021-07-06 10:27:05 clandmeter: but tcpdump also does not show a lot of packets going over
2021-07-06 10:42:07 ok
2021-07-06 10:42:40 and now broken pipe
2021-07-06 10:42:59 can you add this info to the email?
2021-07-06 10:43:01 as if it randomly just stops
2021-07-06 10:43:08 is it a network issue?
2021-07-06 10:43:32 what else could it be?
2021-07-06 10:43:36 let me try tcpdump from both sides
2021-07-06 10:44:01 what happens if you sync to another site?
2021-07-06 10:46:46 ikke: i asked ncopa regarding ap and different repos. it seems its a bug and should be fixed.
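The autostart fix applied above ("added it to all containers now"), sketched for every container on a host; the stock /var/lib/lxc layout is an assumption:

```sh
# Append lxc.start.auto to every container config that does not set it yet,
# so the containers come back up after a host reboot.
for cfg in /var/lib/lxc/*/config; do
    grep -q '^lxc.start.auto' "$cfg" || echo 'lxc.start.auto = 1' >> "$cfg"
done
```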
2021-07-06 10:47:22 ah, ok
2021-07-06 10:47:30 which means currently its much harder to find recursive revdeps for testing
2021-07-06 10:47:47 I did see Ariadne commit some bundled changes
2021-07-06 10:47:52 clandmeter: ok, I see that at some point, dl-master just does not receive the packets anymore
2021-07-06 10:47:52 i guess that was a grep fix
2021-07-06 10:48:22 ikke: should we try to setup gbr?
2021-07-06 10:48:28 see what happens if we use that
2021-07-06 10:48:49 the old uk.a.i
2021-07-06 10:48:52 the old uk.a.o
2021-07-06 10:51:19 its weird as the old boxes were in brazil iirc, and the new ones are in europe.
2021-07-06 10:51:28 correct
2021-07-06 11:07:39 ikke: ?
2021-07-06 11:08:33 ikke: do we want to test syncing to another host?
2021-07-06 11:08:40 just to rule out issues with cz.a.o
2021-07-06 11:08:58 yeah, we can test that
2021-07-06 12:42:20 where is the proper place to report apk install returning BAD signature (in this case its riscv64)?
2021-07-06 12:42:42 i think its the rsync problem we have thats causing it, but i dont know
2021-07-06 12:43:32 is https://gitlab.alpinelinux.org/alpine/infra/infra the proper place?
2021-07-06 12:48:49 ncopa: which pkg
2021-07-06 12:57:24 some pkg i had to rebuild locally
2021-07-06 12:57:24 i think i know what the issue is
2021-07-06 12:57:24 let me do a check on all pkgs
2021-07-06 13:15:35 hmm this is harder than i expected
2021-07-06 13:15:35 how do you verify a pkg against the index but dont want to install it
2021-07-06 13:37:54 Fetch?
2021-07-06 13:46:34 does fetch verify the index?
2021-07-06 13:52:18 question was from here: https://github.com/tonistiigi/xx/pull/21#issuecomment-870145130
2021-07-06 13:58:09 infra/infra is a catch-all for infra issues
2021-07-06 14:58:09 ncopa: i already fixed that
2021-07-06 14:58:15 2 days ago iirc
2021-07-06 15:11:02 ncopa: btw, we found out a little bit more about the rsync issue. Apparently it's a known problem when the .~tmp~ files are left behind, others have reported similar rsync issues
2021-07-06 15:18:57 ok. good
2021-07-06 15:22:25 But network issues cause these files to be left behind
2021-07-07 11:13:57 ugh
2021-07-07 20:19:42 If one wants to be a mirror for alpine, what does one need to do?
2021-07-07 20:19:48 We currently run a debian mirror at work.
2021-07-07 20:20:02 (AS42708)
2021-07-07 20:22:46 send an e-mail to alpine-mirrors@alpinelinux.org with details about the mirror
2021-07-07 20:22:59 ok, will do.
2021-07-07 20:23:19 thanks!
2021-07-08 18:04:34 clandmeter: We still have the 2 block storage volumes in equinix for the old arm gitlab runners, I think I can remove them, right?
2021-07-08 18:12:57 clandmeter: ooh, seems like 13.12.8 fixed the forking issue
2021-07-08 18:13:07 running on gitlab-test now
2021-07-08 18:22:55 Huh?
2021-07-08 18:23:29 clandmeter: can you try to fork a project on gitlab-test?
2021-07-08 18:23:37 But no reply on the issue?
2021-07-08 18:23:42 No
2021-07-08 18:23:53 I'm not behind a pc
2021-07-08 18:23:56 ok
2021-07-08 18:24:05 I can later
2021-07-08 18:24:36 ooh
2021-07-08 18:24:38 n/m :(
2021-07-08 18:25:27 I already had that project forked
2021-07-08 18:29:40 Removed the restrictions on public repos on -test, and that does not fix it either
2021-07-08 19:58:59 clandmeter: 2021-07-08T19:58:45.611Z: Unable to save project. Error: could not find any valid magic files!
2021-07-08 20:00:11 ?
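One possible answer to the verify-without-install question above, hedged: fetch the package and check it with apk's own verify applet. Note this checks the package's embedded signature rather than strictly comparing it against the index; the package name is just an example:

```sh
# Download without installing, then verify the package's signature.
apk fetch -o /tmp/check some-package
apk verify /tmp/check/some-package-*.apk
```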
2021-07-08 20:00:52 That appears in application.log when I try to fork a project
2021-07-08 20:01:38 Seems to have to do with libmagic
2021-07-08 20:08:17 clandmeter: can reproduce it with the ruby-magic gem
2021-07-08 20:10:45 but the `file` command is working without issue
2021-07-08 20:11:35 Aha
2021-07-08 20:12:15 But I checked that log, or is it another one?
2021-07-08 20:12:49 I don't know what you checked :)
2021-07-08 20:12:58 It just stood out now
2021-07-08 20:13:14 and i retried and the message appeared again
2021-07-08 20:13:42 https://tpaste.us/XK6e
2021-07-08 20:15:35 Got some visitors, can check after
2021-07-08 20:22:00 https://tpaste.us/Lm6W
2021-07-08 20:22:46 It tries to look for those files in a non-existing location
2021-07-08 20:37:58 Magic.do_not_auto_load = true
2021-07-08 20:38:14 m = Magic.new
2021-07-08 20:38:24 m.paths
2021-07-08 20:38:32 ["/usr/local/bundle/gems/ruby-magic-0.4.0/ports/x86_64-pc-linux-musl/libmagic/5.39/share/misc/magic"]
2021-07-09 07:58:36 ikke: forking works
2021-07-09 07:58:46 :)
2021-07-09 07:59:01 ‼️
2021-07-09 07:59:20 What was it?
2021-07-09 07:59:45 i dont tell
2021-07-09 07:59:56 πŸ˜₯
2021-07-09 08:00:07 too much cleanup :)
2021-07-09 08:00:29 Heh
2021-07-09 08:01:08 im pulling on stable
2021-07-09 08:01:18 on production i mean
2021-07-09 08:02:43 ikke: i guess you dont announce patch releases via gitlab?
2021-07-09 08:03:18 I did yesterday for the security upgrade
2021-07-09 08:03:33 ah its already on the latest?
2021-07-09 08:03:37 so this only has the fix?
2021-07-09 08:04:03 Yes
2021-07-09 08:04:12 ok restarting...
2021-07-09 08:27:02 But it was related to ruby-magic?
2021-07-09 08:28:40 it was related to us rm -rf'ing the gems/*/ports directories
2021-07-09 08:28:52 so ruby-magic was affected
2021-07-09 08:29:04 Right
2021-07-09 08:29:15 im not sure why ports was added
2021-07-09 08:29:25 its a bit undocumented what we delete
2021-07-09 08:29:34 some are obvious, some are not.
2021-07-09 08:30:14 not sure if there is some spec about the gem filesystem structure
2021-07-09 08:30:51 danieli: maybe you have more experience with this?
2021-07-09 08:35:35 ikke: i closed the issue on gitlab.com
2021-07-09 08:35:53 Ack
2021-07-09 08:36:26 note to ourselves, utilize the logs more carefully next time :D
2021-07-09 08:36:33 ikke: and thx for finding it out
2021-07-09 08:37:17 excuse from my side, damn this thing has a lot of log files...
2021-07-09 08:37:27 Nod
2021-07-09 08:40:39 clandmeter: I also started on 14.0
2021-07-09 08:40:50 :)
2021-07-09 08:40:58 escape route
2021-07-09 09:46:34 congrats with fixing the gitlab fork issue
2021-07-09 09:58:00 algitbot: kick master
2021-07-09 09:58:26 nmeum: ^ :)
2021-07-09 10:27:30 glad it's solved
2021-07-09 12:23:11 clandmeter: about what specifically, ruby gems?
2021-07-09 13:35:55 Euhm..
2021-07-09 13:41:32 clandmeter: ty, good to know :)
2021-07-09 19:38:04 ikke: how is the riscv64 runner doing?
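A sketch of the cleanup pitfall found above: pruning gems/*/ports wholesale also removes data some gems load at runtime, ruby-magic's compiled magic database being the case that broke forking. An exclusion list avoids that (the gem path is an assumption):

```sh
# Delete build leftovers under each gem's ports/ dir, but skip gems that
# keep runtime data there (ruby-magic loads share/misc/magic from ports/).
for dir in /usr/local/bundle/gems/*/ports; do
    case "$dir" in
        */ruby-magic-*/ports) continue ;;
    esac
    rm -rf "$dir"
done
```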
2021-07-09 19:38:27 https://build.alpinelinux.org/buildlogs/build-edge-riscv64/community/gitlab-runner/gitlab-runner-14.0.1-r0.log :P
2021-07-09 19:38:48 heh
2021-07-09 19:38:51 im pushing a fix
2021-07-09 19:38:57 ah, I was looking at it as well
2021-07-09 19:39:10 im building it now
2021-07-09 19:39:27 I'm just wondering why go mod edit -replace + go mod download does not work
2021-07-09 19:39:46 wfm
2021-07-09 19:39:47 https://tpaste.us/aZMm vs https://tpaste.us/qPoO
2021-07-09 19:40:58 https://tpaste.us/NM7l
2021-07-09 19:41:32 https://tpaste.us/ypke
2021-07-09 19:42:08 https://tpaste.us/YEvQ
2021-07-09 19:42:56 huh
2021-07-09 19:45:27 your patch fails to build for me :/
2021-07-09 19:45:35 rm -rf src
2021-07-09 19:45:45 I run clean each time
2021-07-09 19:45:58 hmm
2021-07-09 19:46:02 now it also fails for me
2021-07-09 19:46:25 If you look at the 2 diffs, after go mod download, there is one line less added
2021-07-09 19:46:45 github.com/creack/pty v1.1.13 h1:rTPnd/xocYRjutMfqide2zle1u96upp1gm6eUHKi7us=
2021-07-09 19:46:48 this line is missing
2021-07-09 19:47:31 If I change the patch from v1.1.1 to v1.1.5, the patch works
2021-07-09 19:48:42 it builds if you keep the patch :)
2021-07-09 19:48:48 the old one
2021-07-09 19:49:19 right, I guess because that line is still present, then
2021-07-09 19:57:17 clandmeter: go mod tidy afterwards seems to fix it
2021-07-09 19:58:05 though, it does mean the packages are already downloaded during prepare()
2021-07-09 20:01:43 sounds ok
2021-07-09 20:01:51 https://tpaste.us/5nwj
2021-07-09 20:01:58 this works
2021-07-09 20:02:51 i think it also works without the download?
2021-07-09 20:02:57 probably
2021-07-09 20:03:02 it does for me :)
2021-07-09 20:03:08 ko
2021-07-09 20:03:10 ok
2021-07-09 20:05:07 lets just push it and move on :)
2021-07-09 20:05:24 ahuh
2021-07-09 20:05:27 Who will push it?
2021-07-09 20:05:36 you ofc
2021-07-09 20:05:41 you fix it
2021-07-09 20:05:43 :)
2021-07-09 20:10:06 here it goes
2021-07-09 20:14:31 regarding the runner, what do we do with testing?
2021-07-09 20:14:51 I suppose currently all packages with dependencies in testing would fail
2021-07-09 20:35:44 clandmeter: This is a good reference: https://golang.org/ref/mod
2021-07-09 20:43:50 clandmeter: any tips on dealing with the gitlab upgrade? Fixing things and rebuilding the image from scratch each time is going to take a long tim
2021-07-09 20:43:52 time
2021-07-09 20:58:40 It's different than other upgrades?
2021-07-09 21:00:07 No
2021-07-09 21:00:12 though, more patches to rebase
2021-07-09 21:03:54 You mean v14?
2021-07-09 21:26:18 yeah
2021-07-09 21:26:28 I don't know how many
2021-07-09 21:27:03 But I gather you don't have any specific approach
2021-07-09 22:21:19 i did not look at it yet, but if you want i can take care of it.
2021-07-09 22:21:39 its not that bad to build for me here and ill do it in steps.
2021-07-10 11:27:38 ikke: please comment on what to do with v14. if you commit to it, ill not reserve the time for it, but ofc i can help.
2021-07-10 11:28:19 I can continue, was just wondering if you had some tips
2021-07-10 11:29:14 i would hack the dockerfile
2021-07-10 11:31:06 have you checked how many patches currently fail?
2021-07-10 11:31:22 and did you look at the changes in the install from source documentation
2021-07-10 11:31:50 The only way to find out is to try to apply each patch
2021-07-10 11:44:42 ikke: if you modify the dockerfile and RUN each setup step individually, it should cache layers and it should be much faster to debug.
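The go mod fix that ended up working above, as a sketch for prepare(); the module and versions are only illustrative. The point is that `go mod tidy` regenerates the go.sum entries (like the creack/pty hash) that a bare `go mod download` left missing:

```sh
# Pin the replacement module, then let tidy rewrite go.mod and go.sum; tidy
# downloads what it needs, which is why the modules end up fetched during
# prepare() rather than build().
go mod edit -replace github.com/creack/pty=github.com/creack/pty@v1.1.13
go mod tidy
```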
2021-07-10 11:45:23 yes, we would need to split up setup.sh
2021-07-10 11:46:01 right
2021-07-10 11:47:06 as its already functions you could add some logic to it like what we do with abuild
2021-07-10 11:47:22 right
2021-07-10 11:48:16 i also keep getting ideas about splitting all of it into separate containers. but in the end thinking of it, it will probably provide more headaches.
2021-07-10 11:49:50 we would have to superglue the spaghetti monster together again.
2021-07-10 11:51:10 ikke: oh the setup is not functions
2021-07-10 11:51:16 no, it's not
2021-07-10 11:51:25 entrypoint is :)
2021-07-10 11:51:44 but it should be simple to wrap them up
2021-07-10 11:52:09 I do like the idea of splitting gitlab up :)
2021-07-10 11:54:30 We could do it step by step
2021-07-10 11:56:33 gitaly seems doable
2021-07-10 11:57:04 I guess
2021-07-10 11:58:38 Many things that you have patched seem to have been upstreamed
2021-07-10 12:01:53 oh, I guess they already have been applied
2021-07-10 12:29:41 One challenge with splitting it up is how to get the correct versions of all components
2021-07-10 15:13:46 I was thinking of using some makefile or similar
2021-07-10 16:29:36 I don't think we want to manually keep track of all the versions
2021-07-10 16:29:58 I guess we could just fetch the files from gitlab to get the correct version
2021-07-10 17:28:52 Ofc
2021-07-10 18:59:52 clandmeter: https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/jobs/433762
2021-07-10 19:04:13 clandmeter: [ACTION REQUIRED]: Block Storage Depreciation [FINAL NOTICE]: Block Storage Depreciation
2021-07-10 19:14:52 ikke: where did you get that notice from?
2021-07-10 19:15:09 e-mail
2021-07-10 19:15:29 You received it as well
2021-07-10 19:15:39 strange i dont see it
2021-07-10 19:15:42 when?
2021-07-10 19:15:52 3 days ago
2021-07-10 19:15:55 July 7th
2021-07-10 19:16:09 It's the old arm ci volumes, they still exist
2021-07-10 19:16:24 I think we can just remove them, right?
2021-07-10 19:17:55 ah its tagged as spam
2021-07-10 19:23:31 yes lets kill them
2021-07-10 19:23:35 if you dont use them anymore
2021-07-10 19:24:08 weird that im missing equinix emails. seems like DMARC fails.
2021-07-10 19:25:32 so you are now building the gitlab container from gitlab
2021-07-10 19:25:49 ?
2021-07-10 19:26:06 curl -s https://gitlab.com/api/v4/projects/278964/repository/files/GITALY_SERVER_VERSION\?ref\=v14.0.2-ee | jq -r .content | base64 -d
2021-07-10 19:26:09 ah wait you already did that
2021-07-10 19:26:16 just not pushing it
2021-07-10 19:26:32 I'm not following :)
2021-07-10 19:26:49 you pasted a job here
2021-07-10 19:26:54 right
2021-07-10 19:27:13 so what should i look at?
2021-07-10 19:27:18 It's 14.0.2
2021-07-10 19:27:31 nice
2021-07-10 19:27:47 did you need to fix a lot of patches?
2021-07-10 19:27:51 no
2021-07-10 19:27:55 just one in the end
2021-07-10 19:28:06 i guess it would not be that much
2021-07-10 19:28:21 i think the __va or whatever it was got upstreamed
2021-07-10 19:28:36 but im not sure we already pull that in
2021-07-10 19:28:55 bundle config set --global build.google-protobuf --with-cflags=-D__va_copy=va_copy
2021-07-10 19:28:57 that I guess
2021-07-10 19:29:09 yes, but there is also a patch
2021-07-10 19:29:51 and the link in it mentions something about it being fixed upstream
2021-07-10 19:30:12 not sure what the relationship is between that patch and this config set.
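A sketch of the "wrap them up" idea from the split-up discussion above: turn the setup steps into functions with a tiny dispatcher, so a Dockerfile can RUN each step as its own cacheable layer. The function names and bodies here are made up for illustration:

```sh
#!/bin/sh -e
# setup.sh: each build step is a function; the dispatcher runs the steps
# named on the command line, so 'RUN ./setup.sh install_deps' caches alone.
install_deps() { apk add --no-cache git ruby; }
build_gitaly() { make -C /home/git/gitaly install; }
cleanup()      { rm -rf /var/cache/apk/*; }

for step in "$@"; do
    "$step"
done
```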
2021-07-10 19:30:22 ruby-fix-cflags.patch
2021-07-10 19:31:35 https://github.com/protocolbuffers/protobuf/commit/9abf6e2ab0d8847059edfc1720e6697c7282f527
2021-07-10 19:31:54 https://github.com/protocolbuffers/protobuf/commit/9abf6e2ab0d8847059edfc1720e6697c7282f527#diff-814c9eb0903e070d523bb5ca2fc2caba7cade74087d6b13d9a883b6f46813924
2021-07-10 19:34:17 btw, why 14.0.2 when 0.5 is out?
2021-07-10 19:34:42 hmmm, good question :)
2021-07-10 19:34:56 I guess I did not verify the latest version
2021-07-10 19:35:21 https://about.gitlab.com/releases/2021/07/08/gitlab-14-0-5-released/
2021-07-11 09:55:10 ikke: did you test your build?
2021-07-11 09:58:42 not yet
2021-07-11 17:28:49 I have a multi-stage docker file for gitaly
2021-07-11 17:28:55 Still need to finish it
2021-07-11 17:43:27 Hmm, need gitlab-shell as well
2021-07-12 10:44:16 clandmeter: afaik, we do not do that yet, but I noticed we can save a lot of space if we strip ruby gems
2021-07-12 10:44:23 grpc is a big one
2021-07-12 10:44:39 for gitaly, it went from 500M to 200M
2021-07-12 11:00:08 I'm quite annoyed that many ruby projects (and also python projects) require you to compile each dependency all the time
2021-07-12 11:00:17 well, go as well, but at least that's fast
2021-07-12 11:01:03 they usually don't, given that prebuilt binaries are available
2021-07-12 11:01:34 If you are on the blessed platform
2021-07-12 11:01:39 yes, bingo
2021-07-12 11:01:46 if you aren't, you need to compile stuff allllll the damn time
2021-07-12 11:01:58 We do have most libraries pre-compiled
2021-07-12 11:02:11 but then they require version >1.2.3 <1.2.4
2021-07-12 11:02:22 (exaggerated)
2021-07-12 11:02:31 yeah.. that's why they're often stored upstream in e.g. pypi
2021-07-12 11:03:01 For blessed platforms
2021-07-12 11:44:26 did I miss messages?
2021-07-12 11:44:26 ikke: do not do what yet?
2021-07-12 11:44:48 Sorry, strip binaries
2021-07-12 11:45:25 i think you can apply configure options to use external libraries
2021-07-12 11:45:54 for some
2021-07-12 11:46:43 but i wonder how much we need to tweak to not shoot ourselves in the foot all the time.
2021-07-12 11:46:56 Nod
2021-07-12 11:47:28 stripping should be ok
2021-07-12 11:47:45 I would assume so
2021-07-12 11:58:14 danieli: https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/blob/13.12-stable/overlay/usr/local/bin/setup.sh#L233
2021-07-12 11:58:27 this is what i was referring to
2021-07-12 11:58:52 clandmeter: uhm, what is the context/
2021-07-12 11:58:53 ?
2021-07-12 11:59:05 my previous question
2021-07-12 11:59:46 if there is some logic in ruby's directory structure
2021-07-12 11:59:48 some spec
2021-07-12 16:49:26 clandmeter: Am I missing something, or can gemdeps.sh be mostly replaced with: `scanelf -BRF '%n#p' -E ET_DYN,ET_EXEC . | tr , '\n' | sort -u | awk '!/libruby/ { print "so:" $1 }'`
2021-07-12 16:55:39 7s vs 0.01s
2021-07-12 18:07:45 https://tpaste.us/7Vvo
2021-07-12 19:11:29 ikke: dunno
2021-07-12 19:11:38 if you say so :)
2021-07-12 19:11:47 Output is the same
2021-07-12 19:12:24 fun fact: for stripping binaries, I use scanelf -BARF :)
2021-07-12 19:19:28 clandmeter: https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/compare/14.0-stable...split-gitaly?from_project_id=356
2021-07-12 19:21:47 It's not complete yet, but this is what I have so far
2021-07-12 19:29:22 why do you use the api to fetch versions?
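Following up the scanelf one-liner above, a hedged sketch of feeding its so: names straight to apk, which can resolve them as provides (the scan root is an assumption):

```sh
# Trace the shared-library deps of all ELF files under the tree and install
# the packages that provide them, excluding libruby as in the original.
scanelf -BRF '%n#p' -E ET_DYN,ET_EXEC /home/git/gitaly \
    | tr , '\n' | sort -u \
    | awk '!/libruby/ { print "so:" $1 }' \
    | xargs -r apk add --no-cache
```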
2021-07-12 19:30:04 Because that's the only thing I need from that repo
2021-07-12 19:30:19 outside of the docker containers
2021-07-12 19:30:47 you cant just curl https://gitlab.com/gitlab-org/gitlab/-/raw/v14.0.5-ee/GITALY_SERVER_VERSION ?
2021-07-12 19:31:00 Yeah, somehow did not think of that :D
2021-07-12 19:31:21 like not using -R with scanelf? :p
2021-07-12 19:31:26 :D
2021-07-12 19:31:57 tbf, i think i copied that from some example which was very slow and i added some logic to make it faster.
2021-07-12 19:32:08 and i didnt see the -R switch i guess :)
2021-07-12 19:32:15 and -E
2021-07-12 19:32:32 but its been a long time
2021-07-12 19:32:40 it's forgiven :)
2021-07-12 19:32:58 thx
2021-07-12 19:37:21 I now use that also to trace the git deps
2021-07-12 19:37:53 yes i saw
2021-07-12 19:38:28 But I think I need to add git-shell to there as well (maybe copy it from another image)
2021-07-12 19:40:40 you might want to move all dockerfiles into their own subdirs
2021-07-12 19:43:31 I did for gitaly
2021-07-12 19:45:04 its kind of best practice as docker build will add all deeper located files to its recipe or however you call it.
2021-07-12 19:46:14 build context
2021-07-12 20:18:35 clandmeter: better so? https://gitlab.alpinelinux.org/alpine/infra/docker/gitlab/-/compare/14.0-stable...split-gitaly?from_project_id=356#014d9d2e9ed07256aa29a172be69002b25fc644d_0_12
2021-07-12 20:23:54 much better! :)
2021-07-12 20:24:10 why not add sodeps to setup?
2021-07-12 20:29:10 no particular reason
2021-07-12 21:07:32 disadvantage of using a single setup file is that all stages have to be rebuilt if you change it
2021-07-12 21:28:11 ikke: this has to do with sodeps?
2021-07-12 21:29:04 no
2021-07-12 21:29:22 this has to do with having to rebuild everything just because you change a small step at the end
2021-07-12 21:29:56 even though things are separated in stages that are independent (except for the last and the first)
2021-07-12 21:30:47 ok but in general you dont modify setup
2021-07-12 21:31:16 i added a sep file so it could exec itself
2021-07-12 21:31:31 which is not needed anymore iiuc
2021-07-12 21:31:39 correct
2021-07-12 21:31:55 that's what I gathered
2021-07-12 21:32:38 but you can also split the whole file into sep files if you move everything into its own container
2021-07-12 21:32:51 and let it source some shared functions
2021-07-12 21:33:16 yes, but I'm now talking about the setup of a single container (gitaly), with multiple stages
2021-07-12 21:36:06 I'm wondering how we can share functionality with separate build contexts
2021-07-12 21:36:18 symlinks do not work
2021-07-12 21:36:23 symlinks?
2021-07-12 21:36:31 :P
2021-07-12 21:36:47 hardlinks dont work?
2021-07-12 21:37:01 hardlinks get broken
2021-07-12 21:37:54 you will need to copy it from a toplevel makefile
2021-07-12 21:38:05 or whatever script you will use
2021-07-12 21:46:13 ikke: are you going to use sockets or tcp?
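Since symlinks and hardlinks don't survive between build contexts, as established above, a sketch of the toplevel-copy approach: a driver script copies the shared functions into each context before building (directory and image names are assumptions):

```sh
# Copy the shared shell functions into every per-image build context,
# then build each image from its own subdirectory.
for img in gitaly gitlab-shell gitlab; do
    cp lib/functions.sh "$img/functions.sh"
    docker build -t "alpine-$img" "$img/"
done
```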
2021-07-12 21:46:52 first attempt will be sockets
2021-07-13 15:12:36 https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/configuration/README.md#gitlab-shell2
2021-07-13 15:12:38 https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/configuration/README.md#gitlab-shell
2021-07-13 15:12:53 So it seems it only needs the code
2021-07-14 07:18:33 i deleted some files in distfiles and cleaned up my container(s) on usa9-dev1
2021-07-14 07:20:58 thanks, I've also cleaned up nld9 yesterday
2021-07-14 07:21:12 fyi: ~/.cache/yarn tends to grow big
2021-07-14 07:21:22 and ~/.cache/go-build as well
2021-07-14 12:48:27 heya, is lists.alpinelinux.org down?
2021-07-14 12:48:35 it keeps 404ing for me :(
2021-07-14 12:48:39 not for me
2021-07-14 12:48:42 eletrotupi: works for me
2021-07-14 12:49:07 wtf, https://lists.alpinelinux.org?
2021-07-14 12:49:15 that's weird
2021-07-14 12:49:29 we have spam problems https://lists.alpinelinux.org/~alpine/aports/%3Cd08475fea13b918021b1bca14e2beb97%4010.57.86.120%3E
2021-07-14 12:49:52 can't open that link either
2021-07-14 12:50:21 was my IP blackholed?
2021-07-14 12:50:22 What IP does it resolve to for you?
2021-07-14 12:50:46 147.75.101.119
2021-07-14 12:51:32 that's correct
2021-07-14 12:52:57 eletrotupi: would you mind sharing your IP address with me in private?
2021-07-14 12:55:54 yeah
2021-07-14 16:10:05 i get emails about storage on equinix metal
2021-07-14 16:10:13 Per our previous notifications, Equinix Metal began deprecating our Elastic Block Storage service on June 1st, 2021. I have noticed you still have unattached volumes in the following project(s):
2021-07-15 04:53:59 oh boy
2021-07-15 05:15:57 host seems down
2021-07-15 06:54:03 πŸ˜•
2021-07-15 06:54:15 so we effectively lost s390x
2021-07-15 06:56:15 ah, finally good news ;)
2021-07-15 10:01:51 mps: too bad ^ :P
2021-07-15 10:02:55 ikke: you understand that I'm just kidding
2021-07-15 10:03:16 yes, so I continue your joke :)
2021-07-15 10:03:26 I see
2021-07-17 18:08:09 clandmeter: I split the build files for each component of gitaly now. Now changing one thing does not cause everything to be rebuilt :)
2021-07-17 18:14:20 but it will introduce multiple layers?
2021-07-17 18:14:38 Not in the final image
2021-07-17 18:14:55 ok you are using copy from
2021-07-17 18:14:58 yes
2021-07-17 18:14:59 i guess
2021-07-17 18:15:14 be sure to copy everything :)
2021-07-17 18:15:18 ahuh :)
2021-07-17 18:15:33 I'm looking at setup.sh and the existing gitlab instance for reference
2021-07-17 18:15:42 testing running gitaly now
2021-07-17 18:15:47 ran into __va_copy now
2021-07-17 18:18:35 ok, gitaly is running now :)
2021-07-17 18:18:48 I'm just wondering what we need to do with ssh
2021-07-17 18:19:05 ssh <-> gitlab-shell <-> gitaly
2021-07-17 18:30:13 clandmeter: https://github.com/protocolbuffers/protobuf/pull/7773
2021-07-17 19:01:39 clandmeter: with DOCKER_BUILDKIT, it means it can build several stages in parallel
2021-07-17 21:31:33 clandmeter: do you happen to know what triggers .ssh/authorized_keys to be written / generated?
2021-07-17 21:43:56 Yes
2021-07-17 21:44:27 It's in the entrypoint
2021-07-17 21:47:28 rake gitlab:shell:setup
2021-07-18 08:51:27 Keep having rsync issues with ppc64le. Sent another e-mail about it
2021-07-19 07:13:34 ikke: did you see if syncing builders to gbr1 makes any difference?
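For reference on the authorized_keys question above: the rake task named there rebuilds the file, so an entrypoint could invoke it roughly like this (paths and environment are assumptions; the task normally asks for confirmation before overwriting):

```sh
# Rebuild ~git/.ssh/authorized_keys from the keys gitlab knows about.
cd /home/git/gitlab
su git -c 'bundle exec rake gitlab:shell:setup RAILS_ENV=production'
```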
2021-07-19 10:15:36 clandmeter: no, not yet
2021-07-19 10:27:34 clandmeter: No issues atm, but it seems that syncing to dl-master.a.o is not having issues either
2021-07-19 10:27:35 atm
2021-07-19 12:26:22 but you are removing --delayed-updates?
2021-07-19 12:27:54 yes
2021-07-19 12:28:10 That's a separate issue
2021-07-19 12:28:35 without --delayed-updates, it does try to update files, but for larger files, it hangs
2021-07-19 12:57:14 why is that a sep issue?
2021-07-19 12:57:38 "for large files it hangs" sounds like a network issue
2021-07-19 12:57:52 Because we still have this issue without --delayed-updates
2021-07-19 12:58:09 But that network issue does come into play with --delayed-updates
2021-07-19 14:29:04 can we reproduce this issue and see if it differs on these two mirrors?
2021-07-19 14:29:51 The rsync --delayed-updates issue we can reproduce
2021-07-19 14:30:12 https://github.com/WayneD/rsync/issues/192
2021-07-19 14:30:15 right, but thats not something we can really fix right now.
2021-07-19 14:30:19 No
2021-07-19 14:30:28 The network issue is intermittent
2021-07-19 14:30:36 and so far, I only encountered it on ppc64le
2021-07-19 14:30:48 i was wondering if we could try rsyncing large files
2021-07-19 14:30:55 lets say 10GB or something
2021-07-19 14:31:17 maybe that can reproduce it
2021-07-19 14:32:25 One thing that I did notice is that it was usually the -dbg packages that hung
2021-07-19 14:32:37 mesa-dbg for example
2021-07-19 14:33:30 And it would hang after 1 or 2 seconds
2021-07-19 14:33:44 different amounts of data transferred
2021-07-19 14:48:54 do we have zabbix installed?
2021-07-19 14:49:18 on the builders?
2021-07-19 14:50:25 Not on the ppc64le hosts
2021-07-19 14:50:41 and master
2021-07-19 14:50:56 i wonder if we could graph network quality between them?
2021-07-19 14:51:09 No, I asked before and at least at the time, you and ncopa didn't think it was a good idea
2021-07-19 14:51:42 i guess we can run it in a container?
2021-07-19 14:52:51 an agent in a container does not make a lot of sense
2021-07-19 14:53:09 at least, if you want to monitor the host
2021-07-19 14:53:16 if you want to monitor the container, then, yes
2021-07-19 14:56:19 btw, I've noticed that gitlab-shell has 4 identical binaries of ~20M each
2021-07-19 14:56:32 so we can save ~60M
2021-07-19 14:57:19 ok, kill them
2021-07-19 14:57:30 I've added rdfind in the build process
2021-07-19 14:57:54 which automatically turns them into symlinks (which are preserved across build stages / copies)
2021-07-19 15:01:17 So I have a gitaly container now, and I'm working on a gitlab-shell container which will probably run sshd
2021-07-20 10:47:28 docs was updated in git this weekend (OFTC, link to git repository) but the page is not yet generated. what needs to be done to regenerate the docs?
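The rdfind step mentioned above, sketched; it keeps the first copy of each duplicate and replaces the rest with symlinks, which survive COPY between build stages:

```sh
# Deduplicate the identical ~20M gitlab-shell binaries into symlinks
# (the scanned directory is an assumption).
rdfind -makesymlinks true /home/git/gitlab-shell/bin
```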
https://docs.alpinelinux.org/user-handbook/0.1a/index.html
2021-07-20 10:48:54 It should respond to mqtt messages
2021-07-20 10:49:52 I guess we need to update the topic it listens to
2021-07-20 10:55:03 updated
2021-07-20 10:55:12 still need to update the topic
2021-07-20 16:15:26 regarding foss video conferencing software for the tsc: I have used both jitsi and bigbluebutton at work, they both work fine; bigbluebutton is maybe better suited for our purpose though
2021-07-20 16:15:43 would probably be nice to have a video conferencing tool running on alpine infra, would also allow other teams to use it for coordination
2021-07-20 16:16:21 Would be nice if at least the host OS could be alpine, but not sure if that is feasible
2021-07-20 16:17:33 we probably need to experiment with jitsi/bigbluebutton then and see which one is easier to host I guess?
2021-07-20 16:18:27 might be a bit ambitious to do that in two weeks, would it be possible to use Ariadne's bbb instance again as a fallback for the tsc meeting in two weeks or will it be gone by then?
2021-07-20 16:18:36 yes
2021-07-20 16:18:51 BBB server is at equinix, so i have more time for it
2021-07-20 16:19:02 the larger issue is my s390x machine :(
2021-07-20 16:19:06 need to find a new home for it
2021-07-20 16:20:26 The basic install of jitsi is not too bad, got it running in one morning
2021-07-20 16:20:29 in docker
2021-07-20 16:20:35 same with BBB
2021-07-20 16:20:38 you just install ubuntu
2021-07-20 16:20:42 and run an install script
2021-07-20 16:20:48 and then it installs docker and does a bunch of stuff
2021-07-20 16:20:48 lol
2021-07-20 16:20:55 hehe
2021-07-20 16:21:01 how much does it tinker on the host OS?
2021-07-20 16:21:02 I thought the goal was hosting it on alpine ;)
2021-07-20 16:21:13 but we could also just use ubuntu for now and migrate to alpine later
2021-07-20 16:21:22 i dont think it does much tinkering
2021-07-20 16:21:47 I personally wouldn't mind hosting the video conferencing system on ubuntu tbh
2021-07-20 16:27:40 i think there is still some salt about the numerous times canonical has tried to come after us
2021-07-20 16:27:58 with FUD
2021-07-20 16:28:21 i try to avoid ubuntu for that reason, anyway
2021-07-20 16:30:28 fair
2021-07-20 16:30:57 ikke: if you have prior experience with jitsi, would you be able/interested in trying to get it running on alpine infra?
2021-07-20 16:31:11 i think either solution is fine
2021-07-20 19:59:49 ping
2021-07-20 19:59:54 weird
2021-07-20 20:00:13 i was under the impression jitsi didnt need a server side?
2021-07-20 20:00:55 not if you use the cloud service, but you can run your own instance
2021-07-21 04:47:14 ikke: ping
2021-07-21 04:48:25 pong
2021-07-21 04:49:42 we have gitlab issues
2021-07-21 04:51:07 it appears so. Anything more specific?
2021-07-21 04:51:16 disk
2021-07-21 04:51:22 looks like pg is not happy
2021-07-21 04:51:46 /dev/sda 315G 298G 715M 100% /
2021-07-21 04:52:21 shared is 154G
2021-07-21 04:52:40 artifacts
2021-07-21 05:07:17 there is 60GB worth of docker images
2021-07-21 05:07:25 `docker system df`
2021-07-21 05:18:47 are you doing something atm?
2021-07-21 05:25:51 38G available now
2021-07-21 05:54:10 clandmeter: I've fixed the agent config on deu2 as well, so now we should get proper disk usage alerts
2021-07-21 05:55:00 no
2021-07-21 05:55:05 im not doing anything
2021-07-21 05:55:09 showering :)
2021-07-21 05:55:13 :)
2021-07-21 05:56:49 can we do something to keep artifacts down a bit?
2021-07-21 05:58:41 I thought artifacts should only be kept for 24h
2021-07-21 05:59:44 /dev/sda 315G 243G 56G 82% /
2021-07-21 05:59:53 current situation
2021-07-21 06:00:14 its 153GB
2021-07-21 06:00:29 that does not sound like 24h :)
2021-07-21 06:01:18 there are 2 very large dirs
2021-07-21 06:03:22 There are circumstances where GL will keep artifacts
2021-07-21 06:03:35 The latest artifacts for a branch
2021-07-21 06:13:41 ikke: in any case its too much
2021-07-21 06:13:49 we will bump into issues if that grows
2021-07-21 06:13:53 https://tpaste.us/Qrnz
2021-07-21 06:14:03 yes, I agree, need to look into it
2021-07-21 06:14:19 are thise also buildlogs?
2021-07-21 06:14:23 these*
2021-07-21 06:15:26 not sure, just some simple du on some larger dirs
2021-07-21 06:15:47 https://tpaste.us/KMyn
2021-07-21 06:15:52 thats the root of artifacts
2021-07-21 06:16:05 those 2 dirs look suspicious
2021-07-21 06:16:16 nod
2021-07-21 06:16:24 not sure what the logic is behind the naming
2021-07-21 06:17:16 does it follow the hashed repo structure?
2021-07-21 06:17:59 yes
2021-07-21 06:18:13 shared/artifacts/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b/
2021-07-21 06:18:18 @hashed/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b.git
2021-07-21 06:18:21 that's aports
2021-07-21 07:01:13 ikke: did you check the docker compose logs?
2021-07-21 07:02:37 for postgres?
2021-07-21 07:30:21 chromium is building for two architectures, aarch64 and x86_64. not sure if that is related
2021-07-21 08:51:00 ikke: yes
2021-07-21 08:51:11 or are those errors still related to those outstanding issues?
2021-07-21 08:53:09 I'm not sure
2021-07-21 09:23:30 does spf softfail result in dmarc fail?
2021-07-21 09:25:00 looks like it :{
2021-07-21 10:42:28 clandmeter: It's strange, shared/artifacts/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b/2021_06_16/416494/854755/MR23042_x86_64.zip points to an MR that was created one week ago
2021-07-21 10:43:19 But the directory mentions 2021_06_16
2021-07-21 10:49:48 ikke: which mr do you think this is?
2021-07-21 10:51:17 https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/23042
2021-07-21 10:51:34 but the artifacts are mariadb
2021-07-21 10:51:54 right
2021-07-21 10:54:18 so that's strange
2021-07-21 10:55:00 so the mr id is not from aports
2021-07-21 10:55:08 or not from official aports
2021-07-21 10:56:53 https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/.gitlab-ci.yml#L46
2021-07-21 11:01:08 I, [2021-07-21T11:00:59.174851 #28558] INFO -- : [DRY RUN] Processed 0 job artifact(s) to find and cleaned 0 orphan(s).
2021-07-21 11:01:22 yes i know that part, but something does not compute
2021-07-21 11:02:32 I think we both agree on that
2021-07-21 11:02:48 thats nice for a change ;-)
2021-07-21 11:03:03 πŸ˜†
2021-07-21 11:05:21 i found 2 related MRs
2021-07-21 11:05:41 https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/21244
2021-07-21 11:05:49 https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/21243
2021-07-21 11:05:58 these match the artifacts
2021-07-21 11:14:52 yup
2021-07-21 11:15:00 the 21244 resolves to that file
2021-07-21 11:15:31 "These artifacts are the latest. They will not be deleted (even if expired) until newer artifacts are available."
2021-07-21 11:16:19 This used to be different, but at some point, gitlab changed that
2021-07-21 11:16:39 where do you read that text?
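The disk triage from the incident above, collected into one sketch (the artifacts path is an assumption based on the container layout referenced elsewhere in the log):

```sh
# Overall usage, docker's share of it, and the biggest artifact subtrees.
df -h /
docker system df
du -sh /home/git/gitlab/shared/artifacts/*/ | sort -rh | head
```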
2021-07-21 11:17:21 https://gitlab.alpinelinux.org/alpine/aports/-/jobs/416494 2021-07-21 11:19:51 ok 2021-07-21 11:23:37 ikke: none of the IDs match 2021-07-21 11:23:40 it seems 2021-07-21 11:25:31 ah 2021-07-21 11:25:49 you need CI_MERGE_REQUEST_IID 2021-07-21 11:26:10 CI_MERGE_REQUEST_ID is the gitlab global ID 2021-07-21 11:26:21 IID is per project 2021-07-21 11:26:37 Ah, that explains it 2021-07-21 11:27:08 ok one issue solved :) 2021-07-21 11:28:16 https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html#keep-artifacts-from-most-recent-successful-jobs 2021-07-21 11:28:19 And those ids in the path are also global instead of project specific? 2021-07-21 11:28:20 i guess we need to change this 2021-07-21 11:28:46 im not sure what they are 2021-07-21 11:28:51 ok 2021-07-21 11:29:02 I guess we cannot toggle that per project 2021-07-21 11:29:27 ah https://gitlab.com/gitlab-org/gitlab/-/issues/241026 2021-07-21 11:30:01 https://i.imgur.com/iKt7QfH.png 2021-07-21 11:30:04 So we need to disable that 2021-07-21 11:30:15 But not sure if that needs to be done for all forks as well 2021-07-21 11:30:31 i think global ids are no problem per se 2021-07-21 11:30:37 No 2021-07-21 11:30:50 maybe even nicer 2021-07-21 11:30:53 But it makes it easier to find the corresponding MR 2021-07-21 11:30:54 in some cases 2021-07-21 11:31:04 And can cause confusion 2021-07-21 11:31:13 but im not sure how to search with global ids 2021-07-21 11:31:15 It's also the name of the file that you get as a download 2021-07-21 11:31:26 probably through gitlab shell 2021-07-21 11:35:27 did you read https://docs.gitlab.com/ee/user/admin_area/settings/continuous_integration.html#keep-the-latest-artifacts-for-all-jobs-in-the-latest-successful-pipelines 2021-07-21 11:35:34 its a global setting 2021-07-21 11:35:55 It is, but they later also added a per-project setting 2021-07-21 11:36:04 https://gitlab.alpinelinux.org/alpine/aports/-/settings/ci_cd 2021-07-21 12:52:08 i bisected rsync and found the commit that introduces the issue 2021-07-21 12:55:32 nice! 2021-07-21 13:13:03 https://github.com/WayneD/rsync/issues/192#issuecomment-884163578 2021-07-21 13:13:08 im writing a test for the testsuite now 2021-07-21 13:13:36 What version was that introduced in? 2021-07-21 13:13:50 ah, 3.2.3 2021-07-21 13:14:11 no, 3.2.0 was the first 2021-07-21 14:39:44 ncopa: nice! 2021-07-21 14:46:59 so everything done except the fix :) 2021-07-21 14:50:47 awesome 2021-07-21 15:42:35 clandmeter: I disabled 'keep latest artifacts' for aports 2021-07-21 16:07:23 ncopa: CI failed, no route to host 😑 2021-07-21 17:25:10 ikke: 👍 2021-07-21 17:27:42 I don't see a decrease in disk usage though (the task should run every 7 minutes) 2021-07-21 18:48:25 clandmeter: I have gitlab running now with gitaly / sshd separated 2021-07-21 18:48:27 locally 2021-07-21 18:48:36 Still need to verify if everything is working 2021-07-21 18:51:30 Nice 2021-07-21 18:51:39 If it works 2021-07-21 18:51:57 I did have to copy gitlab-shell into both gitlab and gitaly 2021-07-21 18:52:08 as they both still have dependencies on it 2021-07-21 18:55:29 I do see this warning: s6-svwait: fatal: unable to subscribe to events for /run/s6/web: No such file or directory 2021-07-21 18:55:40 but I do not see /etc/s6/web being defined 2021-07-21 18:56:20 I see that nginx waits for it 2021-07-21 19:20:41 "Unexpected error" "Internal API unreachable" ..
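To make the ID/IID distinction above concrete, a sketch of a merge-request pipeline job script; both are standard GitLab CI predefined variables, and the zip naming mirrors the MR23042 artifact discussed earlier, though the actual aports .gitlab-ci.yml may differ:

  echo "CI_MERGE_REQUEST_ID:  $CI_MERGE_REQUEST_ID"    # instance-global, e.g. 23042
  echo "CI_MERGE_REQUEST_IID: $CI_MERGE_REQUEST_IID"   # per-project, the !NNNN in the MR URL
  # an artifact meant to be found by its aports MR number needs the IID:
  ZIP="MR${CI_MERGE_REQUEST_IID}_$(apk --print-arch).zip"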
2021-07-21 19:24:26 apparently gitaly also needs the workhorse socket 2021-07-21 20:17:22 Just the gitlab socket, forgot to add the volume 2021-07-21 20:31:07 Now I get "not authorized" 2021-07-21 21:25:41 ikke: there is another socket for auth iirc 2021-07-21 21:26:08 I have 2 sockets, one for gitaly, one for gitlab 2021-07-21 21:26:15 is there another one? 2021-07-21 21:26:26 i dont know anymore 2021-07-21 21:26:30 but i remember something like that 2021-07-21 21:27:44 It's too late now, will look tomorrow 2021-07-22 07:43:33 regarding video conferencing: should I create an issue in the infra gitlab group to further discuss this or how do we want to proceed with this? :) 2021-07-22 07:46:13 sounds good 2021-07-22 08:07:44 https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10726 2021-07-22 09:32:54 nmeum: TPC? 2021-07-22 09:33:15 i guess you mean TSC? 2021-07-22 09:33:19 oops yes 2021-07-22 09:33:20 typo 2021-07-22 09:33:28 double typo :D 2021-07-22 09:33:40 fixed :p 2021-07-22 09:35:21 i looked briefly at both solutions 2021-07-22 09:35:35 but it seems not that easy to port them to alpine 2021-07-22 09:35:39 yeah, i thought so 2021-07-22 09:35:59 i think we (infra) should spend our time in different places. 2021-07-22 09:36:30 closer to the core of our product, but thats my opinion of course 2021-07-22 09:36:53 i personally think it would be fine to just use ubuntu where both bbb and jitsi are probably a lot easier to install 2021-07-22 09:36:58 if somebody wants to take on this task to build it on top of alpine thats wonderful :) 2021-07-22 09:37:42 jitsi is mostly java if i understand correctly 2021-07-22 09:38:20 yep 2021-07-22 10:26:55 clandmeter: i agree about maintaining focus on the project itself 2021-07-22 10:39:57 clandmeter: [DRY RUN] Processed 156807 job artifact(s) to find and cleaned 56823 orphan(s). 2021-07-22 10:40:22 bb ionice does not accept -c best-effort, so it would never find orphaned files 2021-07-22 10:40:52 on gitlab-test, I installed util-linux, and now it finds a lot of files 2021-07-22 10:59:31 ok interesting 2021-07-22 10:59:49 ionice makes a difference? 2021-07-22 10:59:53 in what context? 2021-07-22 11:01:23 They run the find command with ionice 2021-07-22 11:01:32 [DRY RUN] find command: '/bin/ionice -c best-effort find -L /home/git/gitlab/shared/artifacts -mindepth 6 -maxdepth 6 -type d 2021-07-22 11:01:34 ionice: invalid number 'best-effort' 2021-07-22 11:02:05 ah it just fails to start 2021-07-22 11:02:51 Fixing that should result in quite a bit of diskspace recovery 2021-07-22 11:03:04 looks like bb does support it 2021-07-22 11:03:12 but you need to provide the number 2021-07-22 11:03:16 And if I look at the disk usage graph, I suppose that keep artifact toggle only affects new jobs 2021-07-22 11:03:49 yes, it's specifically 'best-effort' that is not supported 2021-07-22 11:04:09 2:best-effort 2021-07-22 11:04:12 so -c 2 2021-07-22 11:05:09 so we could patch it instead of adding another dep.
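For reference, the equivalence worked out above, with the find invocation taken from the dry-run log; busybox ionice only takes the numeric scheduling class, while the name form needs util-linux:

  # util-linux only:
  ionice -c best-effort find -L /home/git/gitlab/shared/artifacts -mindepth 6 -maxdepth 6 -type d
  # works with both busybox and util-linux (class 2 == best-effort):
  ionice -c 2 find -L /home/git/gitlab/shared/artifacts -mindepth 6 -maxdepth 6 -type d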
2021-07-22 11:05:13 maybe even upstream it 2021-07-22 11:05:51 nod 2021-07-22 11:06:18 nice catch :) 2021-07-22 11:06:30 I decided to actually read the error message for once :P 2021-07-22 11:06:54 makes sense 2021-07-22 11:33:44 i think i have a fix for rsync --delay-updates 2021-07-22 11:34:04 That would be nice 2021-07-22 11:34:24 I regularly manually sync the repos for ppc64le 2021-07-22 11:34:29 https://github.com/WayneD/rsync/pull/204 2021-07-22 11:34:36 lets see what upstream says 2021-07-22 16:02:31 clandmeter: for now, should I just install util-linux in the gitlab container and run that cleanup job? 2021-07-22 16:02:40 nod 2021-07-22 16:08:54 hmm, not a lot of space gained 2021-07-22 16:09:10 4GB :/ 2021-07-23 15:24:57 clandmeter: ping 2021-07-23 15:25:05 pong 2021-07-23 15:25:18 trying to fix that unauthorized issue with gitlab 2021-07-23 15:25:27 ok 2021-07-23 15:25:35 I've found this page: https://docs.gitlab.com/ee/administration/gitaly/troubleshooting.html 2021-07-23 15:25:46 It mentions this specific issue 2021-07-23 15:25:58 "You need to sync your gitlab-secrets.json file with your GitLab application nodes" 2021-07-23 15:26:18 But as far as I understand, that specific file does not exist anymore 2021-07-23 15:27:30 there is a secrets.yml 2021-07-23 15:28:07 tbh, i dont know that file 2021-07-23 15:28:17 only secrets.yml 2021-07-23 15:28:22 right 2021-07-23 15:28:37 But as far as I can tell, gitaly does not refer to that specific file 2021-07-23 15:28:55 at least, I did not find any configuration or documentation 2021-07-23 15:29:13 ah 2021-07-23 15:29:17 thats omnibus 2021-07-23 15:29:21 nod 2021-07-23 15:29:23 we do from source 2021-07-23 15:29:28 For Omnibus: 2021-07-23 15:29:28 For installation from source: 2021-07-23 15:29:32 oops 2021-07-23 15:29:46 https://docs.gitlab.com/ee/raketasks/backup_restore.html#storing-configuration-files 2021-07-23 15:30:09 that json is similar to the yml 2021-07-23 15:30:22 omnibus used json 2021-07-23 15:30:29 But I understand that it's only the web front-end that uses it? 2021-07-23 15:30:29 and will probably generate the yml files 2021-07-23 15:31:10 i dont know that many details 2021-07-23 15:31:12 too long ago 2021-07-23 15:31:39 So gitaly _does_ refer to a secrets file 2021-07-23 15:32:37 from what i know 2021-07-23 15:32:54 most of that data is encrypted with keys in secrets.yml 2021-07-23 15:33:08 secret_file = "/home/git/gitlab-shell/.gitlab_shell_secret" 2021-07-23 15:33:15 that's in the gitaly config 2021-07-23 15:33:37 isnt that auto generated? 2021-07-23 15:34:21 I don't know? What should generate it? 2021-07-23 15:34:27 For now I generated the secret myself 2021-07-23 15:36:49 ah 2021-07-23 15:36:52 i remember now 2021-07-23 15:37:04 https://tpaste.us/LmYp 2021-07-23 15:37:05 i think that file is just a file that needs to be available on both sides 2021-07-23 15:37:10 right 2021-07-23 15:37:17 so I shared it with 3 containers 2021-07-23 15:37:22 gitlab, gitlab-shell and gitaly 2021-07-23 15:37:26 whatever the secret is doesnt matter 2021-07-23 15:37:42 that's what I assumed 2021-07-23 18:47:23 clandmeter: hmm, disk usage on deu2 seems to be slowly decreasing 2021-07-23 18:49:57 https://imgur.com/a/QM28Slr 2021-07-23 18:51:09 https://pasteboard.co/Kcxz9s8.png 2021-07-24 18:02:38 clandmeter: oh, apparently the .gitlab directory has a .gitlab_shell_secret as well.. 2021-07-24 18:02:44 /home/git/gitlab 2021-07-24 18:21:08 So apparently the secret is generated during build.. 2021-07-24 19:21:33 YES!
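A minimal sketch of the secret sharing that resolved this, using plain `docker run` with illustrative image names rather than the actual compose setup; the only requirement is that all three containers see the same file (and, as noted above, that it overrides any copy generated at image build time):

  # the value itself does not matter, it just has to be identical everywhere:
  openssl rand -hex 32 > /srv/gitlab/gitlab_shell_secret

  for svc in gitlab gitlab-shell gitaly; do
      docker run -d --name "$svc" \
          -v /srv/gitlab/gitlab_shell_secret:/home/git/gitlab-shell/.gitlab_shell_secret:ro \
          "example/$svc"    # hypothetical image names
  done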
2021-07-24 19:40:34 So committing via the webif succeeds now 2021-07-25 13:09:13 pushing over ssh does not work yet 2021-07-25 14:08:19 what broke it? 2021-07-25 14:08:44 Oh, I'm working on separating parts from the monolithic docker image 2021-07-25 14:09:02 This is not about production :) 2021-07-25 14:09:13 ah I see, how's that going in general? 2021-07-25 14:09:45 Quite good, I have gitaly and gitlab-shell separated 2021-07-25 14:09:52 gitlab is running 2021-07-25 14:10:14 i suppose you had to mess about with mounting a lot of directories in various places? 2021-07-25 14:10:38 Not that much 2021-07-25 14:10:54 except gitlab-shell is used all over the place 2021-07-25 14:11:13 I embedded the binaries in each image 2021-07-25 14:11:38 But the rest is just 2 sockets 2021-07-25 14:11:53 I see, it makes sense that it would be everywhere given that it manages SSH keys and git over ssh sessions 2021-07-25 14:12:09 They are working on limiting it to just gitaly 2021-07-26 10:51:12 clandmeter: https://gitlab.alpinelinux.org/alpine/infra/linode-tf/-/merge_requests/7 2021-07-26 10:52:00 I've added a redirect on nld3 for dl-2 to dl-cdn 2021-07-26 11:01:14 it would be nice if pkgs.a.o indexed the riscv64 repos 2021-07-26 11:01:28 i looked around to see if i could figure out how to implement that 2021-07-26 11:04:45 Need to add it to the configuration, and manually run the import once 2021-07-26 11:51:14 is testing complete now? 2021-07-26 11:51:23 not yet 2021-07-26 11:52:19 clandmeter: do you agree with that MR ^? 2021-07-26 11:54:16 yup 2021-07-26 11:54:25 but 2021-07-26 11:54:33 what about rsync? 2021-07-26 11:55:35 is dl-2 an rsync source? 2021-07-26 11:55:59 from what i remember most dl-x are rsync backed 2021-07-26 11:56:09 except cdn ofc 2021-07-26 12:00:14 they get the data from rsync, but do they also provide data to other mirrors? 2021-07-26 13:40:02 they shouldn't but they probably do in reality 2021-07-26 13:41:05 At least I don't think any of the other mirrors on mirrors.a.o do 2021-07-26 13:41:40 Hmmm, maybe linorg.usp.br? 2021-07-26 13:42:16 Oof: https://mirrors.alpinelinux.org/#mirror39 2021-07-26 13:42:24 we should remove that mirror 2021-07-26 13:50:30 ouch, that looks pretty dead 2021-07-26 13:51:53 foobar.turbo.net.id looks a bit stale too 2021-07-26 13:54:30 According to Zabbix that was just fixed 2021-07-26 13:54:45 the last update seems fine 2021-07-26 13:54:54 Time since last mirror update 2021-07-26 13:54:56 2021-07-26 15:42:40 3h 42m 40s 2021-07-26 13:55:15 It used to be 3 months 2021-07-26 13:55:29 5 months even 2021-07-26 13:56:52 ah i missed the message here 2021-07-26 13:56:57 the solved one from algitbot 2021-07-27 04:22:14 -- │ ChanServ has unset topic for #alpine-infra 2021-07-27 04:22:16 huh? 2021-07-27 06:14:38 wut 2021-07-27 06:14:48 did some oper accidentally run a bad command? 2021-07-27 07:15:29 hmm, chanserv changed topics in a few other channels i'm in around midnight 2021-07-27 16:10:44 i think services does that when they reboot it 2021-07-27 16:11:29 But not on all channels? 2021-07-27 16:11:51 idk 2021-07-28 15:27:43 clandmeter: What do you think we should do with dl-2? The mirror owner says they will no longer provide a public mirror 2021-07-28 15:28:28 and btw, sync issues might be related to dl-master, ppc64le seems to sync fine with gbr1 2021-07-28 15:29:05 It gets stuck on a file to dl-master, but it syncs entire repos to gbr1 2021-07-28 15:44:43 can we remove dl-2? i guess we no longer need it?
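A quick way to spot stale mirrors like the ones above, assuming each mirror serves the usual last-updated timestamp file at the root of the alpine tree (the second hostname here is a placeholder):

  now=$(date +%s)
  for m in dl-cdn.alpinelinux.org mirror.example.org; do
      ts=$(curl -fsS --max-time 10 "http://$m/alpine/last-updated") || { echo "$m: unreachable"; continue; }
      echo "$m: $(( (now - ts) / 3600 ))h since last update"
  done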
2021-07-28 15:45:56 Yes, the question is if we are going to redirect it somewhere else 2021-07-28 15:46:03 A cname I guess? 2021-07-28 15:46:46 do we have some other dl-*? 2021-07-28 15:46:48 dl-4? 2021-07-28 15:52:14 dl-1, dl-6, dl-7, dl-8 all resolve to something but do not work 2021-07-28 15:52:23 i think we should remove those dns names 2021-07-28 15:52:57 dl-4 and dl-5 both point to nl3.alpinelinux.org 2021-07-28 15:53:37 https://gitlab.alpinelinux.org/alpine/infra/linode-tf/-/merge_requests/1 2021-07-28 15:53:58 dl-3 points to nld3-dev1.alpinelinux.org which does a 301 redirect to dl-cdn 2021-07-28 15:55:42 nld3-dev1 probably runs nginx with a vhost set up for it, so adding another one there makes sense 2021-07-28 15:56:28 curl -v --resolve dl-2.alpinelinux.org:443:147.75.101.119 https://dl-2.alpinelinux.org/alpine/ :-) 2021-07-28 16:05:30 looks like there is movement in upstream rsync finally! 2021-07-28 16:07:43 https://github.com/WayneD/rsync/pull/204/commits/15ec7de5503c57860fb73ea6e4a349f1e70b72db 2021-07-28 16:25:41 nice, merged 2021-07-28 18:30:57 https://www.linode.com/docs/guides/how-to-use-nftables 2021-07-28 18:56:07 nftables is awesome 2021-07-28 18:56:23 i've been experimenting a ton with it lately 2021-07-28 18:57:45 Need to still look into it 2021-07-28 18:58:40 yes, nftables is a lot better than iptables 2021-07-28 19:00:02 but I need to 'invent' some higher-level framework for it to use in production 2021-07-28 19:01:09 not necessarily, but a higher level abstraction is nice 2021-07-28 19:01:14 like iptables-nft or firewall-cmd 2021-07-28 19:02:36 s/firewall-cmd/firewalld/ 2021-07-28 19:02:36 danieli meant to say: like iptables-nft or firewalld 2021-07-28 19:02:45 yes, I'm thinking to make a simple shell script to revert changes in case I make a mess on remote systems 2021-07-29 06:35:30 good morning 2021-07-29 06:35:52 morning 2021-07-29 06:36:05 ikke: lets clean those up 2021-07-29 06:36:29 and yes, i think making uk the new master sounds good to me 2021-07-29 06:36:55 it has enough space iirc 2021-07-29 06:37:45 we can also ask them to solve it, but that sounds like another can of worms. 2021-07-29 06:37:58 danieli: hi 2021-07-29 06:38:00 hows life? 2021-07-29 06:38:20 could be better, had a rough couple of days, other than that it's pretty decent 2021-07-29 06:39:06 me too, but just tired. 2021-07-29 06:39:11 that will solve itself 2021-07-29 06:39:42 for me it's related to some pretty severe issues in my family 2021-07-29 06:40:05 oh, sorry to hear that. 2021-07-29 06:40:42 it is what it is, it'll be alright eventually 2021-07-29 06:40:44 at least this time :) 2021-07-29 07:13:28 danieli: sorry to hear 2021-07-29 07:13:51 issues with my mother, and my grandfather had a heart attack 2021-07-29 07:14:00 :-( 2021-07-29 07:14:00 but they're both alive and doing 'okay' so it's not a major concern 2021-07-29 07:14:32 ouch :( 2021-07-29 07:15:00 clandmeter: what is your suggestion regarding dl-2? Do we keep a cname for the time being? 2021-07-29 07:15:05 There are people using it apparently 2021-07-29 07:28:50 Sounds good 2021-07-29 08:55:51 i pushed the rsync fix to 3.14-stable and 3.13-stable. we can re-enable --delay-updates 2021-07-29 08:56:05 did we disable it?
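For the dl-2 retirement discussed above, following the nld3-dev1 pattern would look roughly like this; a sketch only, since the vhost directory is Alpine's nginx default and the certificate paths are assumptions:

  cat > /etc/nginx/http.d/dl-2.conf <<'EOF'
  server {
      listen 443 ssl;
      server_name dl-2.alpinelinux.org;
      # assumed location of the *.alpinelinux.org wildcard cert:
      ssl_certificate     /etc/ssl/alpinelinux.org/fullchain.pem;
      ssl_certificate_key /etc/ssl/alpinelinux.org/privkey.pem;
      return 301 https://dl-cdn.alpinelinux.org$request_uri;
  }
  EOF
  nginx -t && nginx -s reload
  # verify before touching DNS, as done above:
  curl -v --resolve dl-2.alpinelinux.org:443:147.75.101.119 https://dl-2.alpinelinux.org/alpine/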
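And for the revert script mentioned at the end of the nftables exchange, the usual pattern is snapshot, apply, then confirm; paths are illustrative, `read -t` assumes bash or busybox ash, and the timeout is what saves you after locking yourself out:

  nft list ruleset > /root/nft.rollback      # snapshot the running ruleset
  nft -f /etc/nftables.d/proposed.nft        # apply the new rules (hypothetical path)
  printf 'type yes within 60s to keep the new ruleset: '
  if ! read -t 60 ok || [ "$ok" != yes ]; then
      nft flush ruleset                      # no confirmation (or no access left): roll back
      nft -f /root/nft.rollback
  fi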
2021-07-29 09:00:07 dunno 2021-07-29 09:01:28 I did not at least, and given that I regularly had to rsync and noticed that .~tmp~ files were removed, I assume we did not 2021-07-29 09:01:41 ok 2021-07-29 09:01:43 good 2021-07-29 09:03:08 hopefully you will no longer need to regularly rsync 2021-07-30 12:03:22 ncopa: strange, still having issues on ppc64le 2021-07-30 12:03:47 I suppose we need to update rsync on dl-master? 2021-07-30 12:26:01 Updated rsync on dl-master to -r4 2021-07-30 13:28:21 yeah, problem was at the receiver side 2021-07-30 13:29:40 the network issues are still there, though 2021-07-30 19:37:26 ^^^^^^^^ 2021-07-30 19:37:40 ncopa: clandmeter Ariadne 2021-07-30 20:21:27 :) 2021-07-31 01:33:18 lol very good uptime 2021-07-31 01:35:16 ohhh 2021-07-31 01:35:18 that’s funny 2021-07-31 01:35:45 they gave us our box back and then scheduled deletion of the VMs 2021-07-31 01:35:52 but i guess it deleted immediately 2021-07-31 01:43:07 i pinged rafael 2021-07-31 16:18:12 can I post mail from alpinelinux.org somehow? I posted a bug report to the kernel but one of the recipients' servers refuses mail from my smtp server 2021-07-31 16:18:52 isn't it easier to get a throwaway gmail or something for that? 2021-07-31 16:19:11 i don't see why it has to be a specific domain or self-hosted mail server :) 2021-07-31 16:19:38 For bug reports it does not matter, but gmail mangles patches 2021-07-31 16:19:50 well, this is a solution I know, but a solution which I dislike 2021-07-31 16:20:29 gmail, yahoo .... no thanks 2021-07-31 16:24:33 you don't *have* to go for any of the conglomerates, it was just an example 2021-07-31 16:25:01 i use protonmail myself, but the issue there is that i have to run an application locally to get SMTP and IMAP 2021-07-31 16:26:58 I thought about protonmail but decided not to use it 2021-07-31 16:28:22 and similar problems do not happen often, very rarely actually, and I ignore such recipients/servers 2021-07-31 16:28:58 in this case the bug is related to alpine kernels and because of that I asked 2021-07-31 16:34:03 just curious, how come it refuses mail from your server? 2021-07-31 16:35:12 554-kundenserver.de (mxeue010) Nemesis ESMTP Service not available 554-No SMTP service 554-Bad DNS PTR resource record. 2021-07-31 16:36:00 Ah, no PTR record? 2021-07-31 16:36:09 or at least not a matching one 2021-07-31 16:36:19 you should be able to adjust PTR / rDNS through your server provider 2021-07-31 16:37:06 it is in my local macine 2021-07-31 16:37:15 machine* 2021-07-31 16:37:25 aha, then it's most likely going to refuse it if you're hosting it from home and can't control PTR 2021-07-31 16:37:58 right that
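On the 554 above: the receiving MTA rejects connections from IPs whose reverse DNS is missing or generic, which is the norm for residential connections. A quick check (the IP-discovery service is just one illustrative option):

  IP=$(curl -fsS https://icanhazip.com)
  drill -x "$IP"    # or: dig -x "$IP"
  # a generic ISP PTR (or none at all) will not match the HELO name or sending
  # domain, so strict receivers like kundenserver.de refuse direct delivery;
  # relaying through a smarthost with proper rDNS avoids this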