2020-06-01 07:27:30 didstopia should resolve soon 2020-06-01 07:40:21 ikke: morning 2020-06-01 07:40:59 morning 2020-06-01 07:41:04 ikke: im looking into gitlab issue 2020-06-01 07:41:07 Ok 2020-06-01 07:41:18 i think i found it 2020-06-01 07:41:20 I wonder if it's some DB index issue 2020-06-01 07:41:21 oh, ok 2020-06-01 07:41:22 but i am not sure 2020-06-01 07:41:30 try running this on both hosts 2020-06-01 07:41:36 sync && time sh -c "dd if=/dev/zero of=testfile bs=100k count=10k && sync" 2020-06-01 07:41:48 you will be surprised 2020-06-01 07:41:59 so IO difference 2020-06-01 07:42:13 yes and not by a little diff 2020-06-01 07:42:32 1s vs 4s 2020-06-01 07:42:50 now 1s vs 1s 2020-06-01 07:43:28 https://tpaste.us/DDl0 2020-06-01 07:43:46 but its a stupid test 2020-06-01 07:43:56 time dd if=/dev/zero of=/mnt/testf bs=1M count=1024 oflag=direct 2020-06-01 07:44:13 i dont think dd in bb supports it 2020-06-01 07:44:18 echo 1 >/proc/sys/vm/drop_caches 2020-06-01 07:44:34 i.e. I found oflag=direct useful for such tests 2020-06-01 07:45:27 ah, true bb dd doesnt't support it 2020-06-01 07:45:34 only skip_bytes 2020-06-01 07:45:40 seek_bytes* 2020-06-01 07:45:49 i think we could try some other io test 2020-06-01 07:45:57 ikke: check compose log 2020-06-01 07:46:06 on test? 2020-06-01 07:46:07 it also complains about op 2020-06-01 07:46:10 io 2020-06-01 07:46:26 redis_1 | 1:M 31 May 2020 14:44:19.098 * Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis. 2020-06-01 07:46:42 ah, redis 2020-06-01 07:47:06 the *real* instance is running on dedicated hw 2020-06-01 07:47:12 nod 2020-06-01 07:47:17 kind of :) 2020-06-01 07:47:28 still vm ofc 2020-06-01 07:47:30 But never noticed this before 2020-06-01 07:47:37 At least, did not recall 2020-06-01 07:47:43 maybe its also on the real one this notice 2020-06-01 07:47:56 but it triggered my senses to try io 2020-06-01 07:48:12 is there some tool we can try to run some tests? 2020-06-01 07:48:17 yes, also on prod 2020-06-01 07:48:26 mps: good morning to you too ;-) 2020-06-01 07:48:29 redis_1 | 1:M 31 May 2020 03:18:42.040 * Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis. 2020-06-01 07:48:38 right 2020-06-01 07:49:03 clandmeter: what do you want to test? 2020-06-01 07:49:07 clandmeter: yes, goeden morgen (uh) 2020-06-01 07:49:26 performance of the disk between the two 2020-06-01 07:49:31 clandmeter: it's mostly postgres that is taking longer according to gl stats 2020-06-01 07:49:47 well that could also be io bound? 2020-06-01 07:49:50 sure 2020-06-01 07:50:01 hm, 'goede' 2020-06-01 07:50:08 mps: correct :) 2020-06-01 07:50:16 yes, its dutch 2020-06-01 07:50:22 you will make mistakes all the time. 2020-06-01 07:50:43 isn't today holiday in NL? 2020-06-01 07:50:52 yes it is 2020-06-01 07:51:00 i wear sunglasses 2020-06-01 07:51:06 :D 2020-06-01 07:51:32 i want to copy those two disks/io 2020-06-01 07:51:33 so, go out somewhere you both, and enjoy holiday :) 2020-06-01 07:51:40 if those simple stats really add up 2020-06-01 07:52:05 lol 2020-06-01 07:52:11 clandmeter: why not apk add cmd:dd and test with io direct? 2020-06-01 07:52:12 s/copy/compare 2020-06-01 07:52:12 clandmeter meant to say: i want to compare those two disks/io 2020-06-01 07:52:52 cause dd is not the best io test for such cases. 
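The buffered dd run above mostly measures page cache and fsync behaviour rather than the disk itself. A minimal sketch of the direct-I/O variant discussed here, assuming /mnt sits on the disk being compared and roughly 1 GiB is free there (busybox dd has no oflag=, hence the cmd:dd install suggested in the chat):

    # pull in a full-featured dd (cmd:dd resolves to a package providing it)
    apk add cmd:dd
    # 1 GiB of writes with O_DIRECT, bypassing the page cache entirely
    dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 oflag=direct
    # before re-running the buffered variant, empty the page cache first
    echo 1 > /proc/sys/vm/drop_caches
    rm -f /mnt/testfile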
2020-06-01 07:53:45 true, but is helpful as 'first aid' 2020-06-01 07:57:05 clandmeter: we could also try 13.0 like you suggested 2020-06-01 07:59:09 ikke: yes but i would prefer to first rule out vm issues 2020-06-01 07:59:17 ikke: is this a new vm? 2020-06-01 07:59:21 yes 2020-06-01 07:59:32 and you delted the old one? 2020-06-01 07:59:35 yes 2020-06-01 07:59:38 ok 2020-06-01 07:59:43 I also did some cleaning 2020-06-01 07:59:52 i got an invoice and it was getting high 2020-06-01 07:59:58 ok 2020-06-01 07:59:59 close to our limit 2020-06-01 08:00:03 nod 2020-06-01 08:06:29 we could use someting like fio to run some tests? 2020-06-01 08:07:50 ehm, we don't bonnie+ in repo 2020-06-01 08:08:37 but fio is also nice, knowing who is author 2020-06-01 08:08:45 clandmeter: we could check and see if 12.9 is also slower 2020-06-01 08:08:56 heh, I was looking at fio as well :( 2020-06-01 08:08:57 :) 2020-06-01 08:09:17 there is also ioping 2020-06-01 08:11:04 ikke: yeah we could try that 2020-06-01 08:11:15 12.9 2020-06-01 08:13:51 ikke: when you restore you could afterwards make a tarball of srv/docker/gitlab, that would speedup doing some tests between versions. 2020-06-01 08:15:30 ikke: did you compare results from gitlab performance bar with both versions? 2020-06-01 08:18:13 i even get an exclamation mark on the bar 2020-06-01 08:33:10 I compared prod vs testing 2020-06-01 08:36:41 clandmeter: So recreate the test vm? 2020-06-01 08:36:55 yes please 2020-06-01 08:37:21 lets compare versions on the same host 2020-06-01 08:37:26 makes more sense 2020-06-01 08:37:38 nod 2020-06-01 08:37:46 the bar is weird 2020-06-01 08:37:52 it shows gitlab.a.o on both 2020-06-01 08:38:27 test is gone :) 2020-06-01 08:38:29 the sql queries are also very different 2020-06-01 08:39:02 ikke: maybe select the same host spec? 2020-06-01 08:39:11 dedi hw 2020-06-01 08:39:17 just for the tests 2020-06-01 08:39:17 ok, will do 2020-06-01 08:39:30 so we shouldnt keep it running for too long. 2020-06-01 08:39:59 ikke: did your issue get merged? 2020-06-01 08:40:03 or solved 2020-06-01 08:40:07 in gitlab 2020-06-01 08:40:20 "Additional verification is required to add this service. Please open a support ticket. 2020-06-01 08:40:23 " 2020-06-01 08:40:35 clandmeter: no, did not see any activity on the issue 2020-06-01 08:40:47 ah you cannot create such instance? 2020-06-01 08:40:52 apparently not 2020-06-01 08:41:02 maybe i can try. 2020-06-01 08:43:53 same thing 2020-06-01 08:44:10 ok, just use a normal instance then? 2020-06-01 08:44:10 ok lets just try the other one 2020-06-01 08:44:16 its just to compare 2020-06-01 08:44:18 you or me? 2020-06-01 08:44:23 i can do it 2020-06-01 08:44:25 ok 2020-06-01 08:44:37 fyi: I created a gitlab-test A record 2020-06-01 08:44:46 so we can update that 2020-06-01 08:45:37 cool 2020-06-01 08:45:40 low ttl? 2020-06-01 08:45:43 yes 2020-06-01 08:45:45 5m 2020-06-01 08:45:49 nice 2020-06-01 08:45:54 172.105.64.166 2020-06-01 08:45:56 :) 2020-06-01 08:47:10 updated 2020-06-01 08:48:12 the slowdown feels a lot like we had in the beginning 2020-06-01 08:50:48 Now wait a few hours 2020-06-01 08:54:00 :) 2020-06-01 08:54:29 lets not forget to make that tarball to skip this step. 2020-06-01 09:13:26 heh 2020-06-01 09:13:56 reminds me, didn't you add a step to make a database export before migrating? 
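Since fio and ioping come up here as better tools than dd, a rough comparison recipe for the two hosts follows; this is only a sketch, the file name, size and runtime are arbitrary, and both tools are assumed to be installable with apk:

    apk add fio ioping
    # random 4k writes with an fdatasync after every write, roughly a database-style load
    fio --name=fsync-test --rw=randwrite --bs=4k --size=512m \
        --fdatasync=1 --runtime=60 --time_based --filename=fio-testfile
    # latency of single requests against the current directory's device
    ioping -c 10 .
    rm -f fio-testfile

Running the same pair on prod and on the test VM makes gaps like the "1s vs 4s" dd result easier to pin down.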
2020-06-01 09:20:55 could be 2020-06-01 09:21:06 ikke: also please remember i removed the backup volume 2020-06-01 09:21:14 i dont think it was really used 2020-06-01 09:21:37 I was not even aware of a backup-volume :) 2020-06-01 09:21:53 it was mounted in the backup location 2020-06-01 09:22:05 before we did linode backups 2020-06-01 09:22:12 ok 2020-06-01 09:22:51 btw, are the backups tied to the instance? What if we by accident (I sure hope it does not happen) delete the gitlab prod instance 2020-06-01 09:23:58 good question 2020-06-01 09:24:12 i dont see a backup option 2020-06-01 09:24:17 only from within the instance 2020-06-01 09:24:21 yeah 2020-06-01 09:25:45 the upgrade step does not show a db export 2020-06-01 09:25:52 or backup 2020-06-01 09:26:42 we could add a gitlab backup just before the upgrade 2020-06-01 09:26:51 but im not sure if that makes sense 2020-06-01 09:27:28 I'm not sure we want to restore a full backup to roll-back 2020-06-01 09:27:33 especially if it takes >1h 2020-06-01 09:27:50 looks like there is dump_db 2020-06-01 09:28:06 you could do that manually 2020-06-01 09:28:35 Yes, which is what I provisionally did 2020-06-01 09:28:47 but then you said you were going to add that the script itself 2020-06-01 09:28:58 haha 2020-06-01 09:29:00 i did? 2020-06-01 09:29:02 yes 2020-06-01 09:29:05 you have such good memory? 2020-06-01 09:29:12 or such good backlog? 2020-06-01 09:29:17 memory 2020-06-01 09:29:20 I guess :) 2020-06-01 09:29:54 i guess making a dump doesnt hurt 2020-06-01 09:30:01 its not really a backup 2020-06-01 09:31:23 i think doing a periodic gitlab backup to something like minio could be usefull to work around massive stupidity by clicking delete this instance. 2020-06-01 09:43:23 ikke: what is your advise to make an MR for gitlab? 2020-06-01 09:44:09 We'd first have to find out where the issue is 2020-06-01 09:44:50 hehe 2020-06-01 09:44:56 i mean our project 2020-06-01 09:44:56 And then find out what the policy is regarding MRs 2020-06-01 09:45:05 hmm 2020-06-01 09:45:12 i want to make an MR for our gitlab instance 2020-06-01 09:45:18 should i fork? 2020-06-01 09:45:34 I don't think forking is necessary 2020-06-01 09:45:46 You can just push a branch and then create an MR against master 2020-06-01 09:45:48 so it would introduce a feature branch? 2020-06-01 09:45:52 yes 2020-06-01 09:46:11 forking is only necessary when you don't have permissions to push to the original project, or aports, because we want to keep it clean 2020-06-01 10:02:26 ok i did it from the webif :) 2020-06-01 10:03:47 ah thats cool 2020-06-01 10:03:54 its not a protected branch 2020-06-01 10:03:59 so it does not push to github 2020-06-01 10:04:05 so no trigger 2020-06-01 11:46:35 The instance is available 2020-06-01 11:47:02 It's booting 2020-06-01 12:14:42 nice 2020-06-01 12:29:22 ok, I have a tarbal from /srv/docker/gitlab 2020-06-01 12:29:38 20G atm 2020-06-01 12:52:35 clandmeter: even with 12.9 it's slow, so I guess it has to do with the instance, and not a performance regression 2020-06-01 12:52:52 It's higly variable 2020-06-01 16:47:14 ikke: ok 2020-06-01 16:47:32 you feel safe to do the upgrade? 2020-06-01 16:53:36 If we can roll-back, then yes 2020-06-01 18:33:46 (I just added that) 2020-06-01 18:37:18 funny that uk.a.o is hosted in NLD :) 2020-06-01 18:37:27 Should we still list that as a mirror? 
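For the branch-plus-MR workflow described here (no fork, push straight to the project), the sequence is roughly the following; the branch name is only an example, and the push-option form is a GitLab convenience that opens the MR without visiting the web UI:

    git checkout -b my-feature
    git commit -av
    # plain push; GitLab then offers to open an MR against master
    git push origin my-feature
    # or let the push itself create the MR via GitLab push options
    git push -o merge_request.create -o merge_request.target=master origin my-feature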
2020-06-01 18:37:28 not that far 2020-06-01 18:37:43 The south american and specifically portuguese runescape servers are hosted in New Jersey :D 2020-06-01 18:39:02 It used to be in the uk, but we decommissioned the mirror there 2020-06-01 19:10:02 I'll restrain myself from the comment in my mind 2020-06-01 20:00:36 ikke: i think we should remove uk.a.o from list 2020-06-01 20:01:07 seems mirrors is not updated anymore 2020-06-01 20:01:18 Like more things :) 2020-06-01 20:01:31 We changed mqtt topic, but did not verify everything was updated 2020-06-01 20:04:36 mirrors is periodic i think 2020-06-01 20:05:05 ok 2020-06-01 20:05:19 That explains why it was showing ok 2020-06-01 20:05:29 the status.json was is up-to-date I think 2020-06-02 07:16:49 ikke: i think i know why mirrors didnt update 2020-06-02 07:17:12 the status.json was okish 2020-06-02 07:17:40 my guess is that one of the mirrors has a repo not existing on the master. 2020-06-02 09:38:47 Could you maybe have a look at https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10692 so I can generate appstream data for 3.12? :) 2020-06-02 11:51:06 Hello folks! Hopefully I'm in the right channel. I've started seeing failures when running `apk add` for example `ERROR: https://dl-4.alpinelinux.org/alpine/v3.10/main: temporary error (try again later)` it's been happen since around 8:00 UTC for me. I also sometimes see an error with a `Segmentation fault (core dumped)`. This only happens when I 2020-06-02 11:51:07 try to build a docker image that runs `apk add` and the build is running inside Docker in Docker. Are there any known issues at the moment, this seem to have been working fine yesterday. 2020-06-02 13:36:14 I was disconnected earlier, so apologise if you responded to me. But I've found out that use the `http` mirrors instead of the `https` ones seem to be working: 2020-06-02 13:36:37 For example `http://dl-4.alpinelinux.org/alpine/v3.10/main/aarch64/APKINDEX.tar.gz` 2020-06-02 13:40:21 steveazz: these CDN mirrors use http, not https 2020-06-02 13:41:26 http://dl-cdn.alpinelinux.org/alpine/v3.12/main 2020-06-02 13:53:30 mps thank you for that information, it helps! 2020-06-02 13:57:56 np 2020-06-02 14:00:41 This might be a stupid question, but why would one prefer https over http for a mirror? 2020-06-02 14:02:56 well, privacy. people don't like that their ISP spy on them what packages they install 2020-06-02 14:31:46 And the reason why the CDN's don't have https is because of time constrains or is something that won't be solved? 2020-06-02 14:33:39 You have to use a different URL for https 2020-06-02 14:33:46 See mirrors.alpinelinux.org 2020-06-02 14:36:43 But when I use `https` I get `ERROR: https://uk.alpinelinux.org/alpine/v3.12/main: temporary error (try again later)` 2020-06-02 14:37:05 with bugs like this, you might not even be able to get the fix via a https mirror: https://gitlab.alpinelinux.org/alpine/aports/-/issues/11607 2020-06-02 15:25:40 bind mount doesn't work in out developers lxc containers.? tried to setup armhf chroot but can't mount some needed dirs 2020-06-02 15:25:51 s/out/our/ 2020-06-02 15:25:51 mps meant to say: bind mount doesn't work in our developers lxc containers.? tried to setup armhf chroot but can't mount some needed dirs 2020-06-02 15:46:05 where was ca-certificates supposed to move to? github? 
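To make the http/https mirror distinction above concrete, this is roughly what /etc/apk/repositories looks like in both cases; MIRROR is a placeholder for any https-capable mirror picked from mirrors.alpinelinux.org:

    # the CDN is plain http
    http://dl-cdn.alpinelinux.org/alpine/v3.12/main
    http://dl-cdn.alpinelinux.org/alpine/v3.12/community
    # an individual mirror may serve the same tree over https
    https://MIRROR/alpine/v3.12/main
    https://MIRROR/alpine/v3.12/community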
2020-06-02 15:46:08 Fetching https://git.alpinelinux.org/ca-certificates/snapshot/ca-certificates-20191127.tar.xz 2020-06-02 15:46:14 curl: (22) The requested URL returned error: 404 2020-06-02 16:57:13 Who broke itC? ^ 2020-06-02 16:58:46 uuh 2020-06-02 16:59:16 something is down 2020-06-02 16:59:18 including netbox :-/ 2020-06-02 16:59:25 oh no, netbox is there 2020-06-02 17:04:44 traefik is not accepting any connections again 2020-06-02 17:04:51 disk is not full 2020-06-02 17:08:09 restarting traefik fixed it.. 2020-06-02 17:09:06 strace showed constant operation timeouts, while after the restart, they are not there 2020-06-02 17:09:27 well, they are there, but more other activity as well 2020-06-02 17:10:32 ncopa: I think ca-certificate should be on gitlab as well, not? 2020-06-02 17:17:28 ikke: maybe pull a new image 2020-06-02 17:22:37 done 2020-06-02 17:25:05 Can someone with the necessary bits mention on https://wiki.alpinelinux.org/wiki/Alpine_Linux:Releases that the 2 years of support only apply for main/ ? 2020-06-02 17:55:33 And I think PPC64LE is stuck on OpenJDK, could someone kill it please? 2020-06-02 18:41:34 Cogitri: re releases, done 2020-06-02 18:48:53 Merci 2020-06-02 18:50:36 killed openjdk as well :) 2020-06-02 20:05:50 going to be taking down mips64 builder momentarily to try new build of BSP kernel to see if that fixes go 2020-06-02 20:05:59 \(y) 2020-06-02 20:06:16 i rebased the vendor config based on our linux-octeon package, so maybe with luck it will work 2020-06-02 20:06:42 treefort:~/source/kernel$ grep OCTEON3_ETHERNET .config 2020-06-02 20:06:42 # CONFIG_OCTEON3_ETHERNET is not set 2020-06-02 20:06:43 ffs 2020-06-02 20:34:09 ok 2020-06-02 20:34:12 now doing the thing 2020-06-02 20:36:47 reading vmlinux.64 2020-06-02 20:36:47 ** Unable to read file vmlinux.64 ** 2020-06-02 20:36:48 hmm 2020-06-02 20:36:51 one momento 2020-06-02 20:37:15 well, thankfully i copied backup kernel 2020-06-02 20:37:33 sooooo 2020-06-02 20:42:33 haha 2020-06-02 20:42:39 extracted to /root 2020-06-02 20:42:41 instead of / 2020-06-02 20:42:49 oh lol 2020-06-02 20:43:50 extracted to the wrong root ;) 2020-06-02 20:44:53 omg 2020-06-02 20:44:58 i built it without CONFIG_NF_NAT 2020-06-02 20:45:20 well before i waste more time on this 2020-06-02 20:45:24 lets see if it fixes go 2020-06-02 20:45:54 answer: no 2020-06-02 20:46:41 sadface 2020-06-02 20:47:10 so i am just putting back the original kernel 2020-06-02 20:49:40 conclusion: we just keep go disabled until 5.4 kernel is working on this machine fully imo 2020-06-02 20:51:01 ok, so disable docker and syncthing? 2020-06-02 20:57:01 yeah for right now :/ 2020-06-02 21:00:40 oh, this is frustrating 2020-06-02 21:00:48 we have the driver working for 10G ports 2020-06-02 21:00:51 but not for 1G 2020-06-02 21:03:52 oh well, the reverse engineering continues 2020-06-02 22:40:01 So I had an idea. 2020-06-02 22:40:16 Its pretty cursed but I think it can solve things 2020-06-02 22:40:27 What if we run 5.4 kernel in kvm 2020-06-03 06:11:38 Ugh, OpenJDK14 is now stuck on ppc64le (I think 13 was stuck before) 2020-06-03 08:59:04 script to purge latest-releases urls from cache: https://gitlab.alpinelinux.org/ncopa/purge-dlcdn-cache 2020-06-03 09:24:20 i have ran it from norway, brazil and usa now 2020-06-03 10:14:14 ncopa: can you elaborate about this? 2020-06-03 10:14:29 why not do a wildcard purge? 2020-06-03 10:15:08 a wildcard purge would purge everything, even if not needed? 2020-06-03 10:15:23 why would it not be needed? 
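When traefik stops accepting connections like this, the debugging steps used above (strace on the daemon process, then pulling a fresh image and recreating the container) look roughly like the following; the service/container name "traefik" is an assumption about the compose file:

    docker ps --filter name=traefik --format '{{.ID}} {{.Names}}'
    pid=$(docker inspect --format '{{.State.Pid}}' CONTAINER_ID)
    strace -f -p "$pid" -e trace=network     # Ctrl-C once there is enough output
    docker-compose pull traefik && docker-compose up -d traefik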
2020-06-03 10:15:41 because version changed since last release 2020-06-03 10:16:03 yes so latest-stable would only have pkgs for 3.11? 2020-06-03 10:16:15 but we it links now to 3.12 2020-06-03 10:17:01 if v3.11 had foo-1.0 and v3.12 updated it to foo-1.1, then latest-stable will have foo-1.1 and does not need to be purged from cache 2020-06-03 10:17:26 if both has foo-1.0, it needs to be purged 2020-06-03 10:17:53 i guess i could have used a wildcard purge, but i didnt 2020-06-03 10:25:50 Does it matter from where you purge an entry? 2020-06-03 10:37:33 i dont think it matters 2020-06-03 10:37:45 at least i could not find any ref that it does 2020-06-03 10:43:10 ncopa: on new release you could do something like this https://docs.fastly.com/en/guides/wildcard-purges#via-the-api 2020-06-03 11:01:11 clandmeter: thanks 2020-06-03 12:47:56 ncopa: would it be an option to add a few of the release steps to gitlab CI? 2020-06-03 12:48:02 not sure which ones you currently make 2020-06-03 12:59:48 the release steps are documented on wiki.alpin.pw 2020-06-03 12:59:58 http://wiki.alpin.pw/releng/release-checklist.html 2020-06-03 13:00:16 i guess it should move to somewhere 2020-06-03 13:09:51 ok i think we can make an issue with that list and see what we can automate 2020-06-03 13:17:57 ncopa: you normally tag from the new branch i guess? 2020-06-03 13:18:44 i mean first branch than tag 2020-06-03 13:26:23 no. opposite 2020-06-03 14:55:05 hum.... we new location to store ca-certificates tarballs https://build.alpinelinux.org/buildlogs/build-3-10-aarch64/main/ca-certificates/ca-certificates-20191127-r2.log 2020-06-03 15:16:32 ncopa: Can we move / host it on gitlab? 2020-06-03 15:19:04 clandmeter: I created this based on that list: https://gitlab.alpinelinux.org/alpine/aports/-/issues/11591 2020-06-03 15:23:39 Nice 2020-06-03 17:15:41 ppc64le is stuck again :c 2020-06-04 06:08:59 ikke: do we have another not version copy of: https://gitlab.alpinelinux.org/alpine/aports/-/issues/11591 2020-06-04 06:09:36 It is not complete 2020-06-04 06:09:53 We should add thinks like pkgs.a.o 2020-06-04 08:00:37 under what namespace should have host ca-certificates? alpine/ca-certificates? It is kind of alpine specific. 2020-06-04 08:00:56 I would say so, yes 2020-06-04 08:09:57 moved 2020-06-04 08:10:02 nice 2020-06-04 08:12:45 question about the download tarballs from https://gitlab.alpinelinux.org/alpine/ca-certificates/-/tags 2020-06-04 08:12:52 will they have the same checksum? 2020-06-04 08:13:14 they seem to have same checksum now, but what heppens when gitlab is updated? 2020-06-04 08:16:49 i wonder if we should have a source archive like https://dev.alpinelinux.org/archive/ where we can upload dist tarballs 2020-06-04 08:17:11 or if gitlab's download by tag is enough 2020-06-04 08:32:11 download by tag should be enough 2020-06-04 08:33:41 We've been using that already 2020-06-04 08:34:19 ok 2020-06-04 08:34:36 You can also create releases in gitlab 2020-06-04 09:02:27 ugh.. i ran out of diskspace on armv7 2020-06-04 09:05:41 i cleaned up distfiles 2020-06-04 10:34:50 clandmeter: No, there is not, we should store a template somewhere 2020-06-04 11:36:17 ncopa: I copied distfiles to another location 2020-06-04 11:36:45 Should we move it and add logic for builders to use and update it? 
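The purge script linked above targets individual latest-releases URLs; for reference, a single-URL purge and the whole-service purge from the Fastly docs mentioned look roughly like this, with the service id and API token as placeholders (only the second form needs credentials, as a rule):

    # purge one cached object
    curl -X PURGE http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/latest-releases.yaml
    # purge everything cached for the service via the Fastly API
    curl -X POST "https://api.fastly.com/service/$SERVICE_ID/purge_all" \
         -H "Fastly-Key: $FASTLY_TOKEN"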
2020-06-04 11:58:23 um 2020-06-04 11:58:38 i dont think its a priority 2020-06-04 11:59:07 the problem was on armv7 distfiles 2020-06-04 11:59:56 ncopa: that is exactly what it would solve 2020-06-04 12:01:15 we would still want a local cache? 2020-06-04 12:01:34 a local shared cache 2020-06-04 12:02:06 https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10658 2020-06-04 12:02:36 i dont see the adv of having long lived cached distfiles on builders 2020-06-04 12:03:05 storage is a limitation we are fighting all the time, bw is not. 2020-06-04 12:03:16 right, but we still want a short lived local cache 2020-06-04 12:03:22 yes 2020-06-04 12:03:36 a week or month is fine with me 2020-06-04 12:03:44 its tuneable 2020-06-04 12:04:04 the problem is that we dont clean up current cache 2020-06-04 12:04:38 i rsynced distfiles to a diff location. its around 250GB 2020-06-04 12:04:41 the disk is 1TB 2020-06-04 12:05:29 we'd still need to clean it regularily 2020-06-04 12:05:41 where? builders you mean? 2020-06-04 12:05:46 everywhere 2020-06-04 12:06:05 builders would auto clean 2020-06-04 12:06:12 keep cache for x days 2020-06-04 12:06:20 weeks months whatever 2020-06-04 12:06:41 not sure how fast distfiles would grow 2020-06-04 12:07:02 but its only a single place we need to monitor. 2020-06-04 12:07:07 the immediate problem can be solved with a daily cron job on builders doing: find /var/cache/distfiles -mtime +7 -delete 2020-06-04 12:07:16 builder hosts 2020-06-04 12:07:27 yes but we would lose some src tarballs 2020-06-04 12:07:33 thats the idea of distfiles i guesS? 2020-06-04 12:09:21 brb. lunch 2020-06-04 12:45:20 on the builders it does not matter if they are deleted 2020-06-04 12:45:38 but we need to keep the sources for stable releases longterm 2020-06-04 12:46:01 we dont need to do that on every builder, so thats why we have distfiles.a.o 2020-06-04 12:46:11 but we dont need to keep everytying for edge 2020-06-04 12:46:51 but its more complicated to delete from distfiles.alpinelinux.org 2020-06-04 12:50:47 im more worried about the central release info 2020-06-04 12:50:52 how and where and what 2020-06-04 12:51:07 where do we want it? 2020-06-04 12:51:24 i guess either alpinelinux.org/releases.json or on the mirrors 2020-06-04 12:52:17 or both 2020-06-04 12:54:00 as i understand the major problem we want to solve is: what release branches are currently supported and what architectures do we have for each release branch 2020-06-04 12:56:57 i think www would go good 2020-06-04 12:58:14 regrading distfiles we could generate a list of files from aports and clean whatever is not used in $source anymore. 2020-06-04 12:58:31 do that based on supported versions 2020-06-04 12:58:44 correct 2020-06-04 12:58:48 which we fetch from releases.json ;-) 2020-06-04 13:02:14 we also have mirrors.a.o on which you could host it, but i think i would prefer a.o 2020-06-04 14:10:46 i think a.o makes most sense 2020-06-04 19:06:58 ikke: new patch releases out for gitlab 2020-06-04 19:07:18 https://cloud.drone.io/alpinelinux/alpine-docker-gitlab/ 2020-06-04 19:07:26 clandmeter: ok! 2020-06-05 05:38:35 morning 2020-06-05 05:39:01 I got this in a private message: I was looking into secdb.alpinelinux.org lately - and its seems like it wan't updated for a while since the 7th of May. 
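The "daily cron job on builders" idea above maps directly onto Alpine's periodic directories; a sketch, with the script name chosen arbitrarily (drop it in place and mark it executable):

    #!/bin/sh
    # /etc/periodic/daily/clean-distfiles
    # drop source tarballs that have not been touched for a week
    find /var/cache/distfiles -type f -mtime +7 -delete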
2020-06-05 05:50:31 the mqtt topic needs to be adjusted 2020-06-05 05:55:09 i've adjusted it to gitlab/push/alpine/aports 2020-06-05 05:56:01 hmm, but that's not enough 2020-06-05 05:56:06 it expects the branchname in the topic 2020-06-05 06:49:40 ikke: did you update the topic on secdb? 2020-06-05 07:22:39 i think git.a.o is having issues 2020-06-05 07:22:54 probably overloaded 2020-06-05 07:24:25 aha 2020-06-05 07:24:41 that explains things :) 2020-06-05 07:31:04 I did, but then I noticed the script is looking for the branch name in the topic 2020-06-05 07:31:29 the topic was correct 2020-06-05 07:31:45 we kept that topic for bw compat 2020-06-05 07:32:02 so the builders didnt need an update (and others) 2020-06-05 07:32:15 the problem is it could not fetch from git.a.o 2020-06-05 07:32:25 because cgit runs local... 2020-06-05 07:52:00 Aha, ok 2020-06-05 07:54:43 heh 2020-06-05 07:55:00 ncopa: we really need that releases.json 2020-06-05 07:55:13 seems your secdb scripts also needs manual fixes 2020-06-05 08:16:09 ncopa: you can mention secdb is up2date again after some patching. 2020-06-05 10:00:13 ok 2020-06-05 10:00:26 anyone knows what happened with https://alpinelinux.org/posts/Alpine-3.11.6-released.html in git repo? 2020-06-05 10:00:53 i cannot find any traces of the posts/Alpine-3.11.6-released.md 2020-06-05 10:01:58 i suspect there was a push --force or similar 2020-06-05 10:03:04 oh 2020-06-05 10:03:13 looks like i pushed to git-old 2020-06-05 10:03:16 https://git-old.alpinelinux.org/alpine-mksite/ 2020-06-05 10:04:02 but commits are missing in gitlab: https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite/-/commits/master 2020-06-05 11:29:56 hum 2020-06-05 11:30:03 wwwtest.a.o is not auto updated 2020-06-05 11:30:20 I suppose I need to fix the mqtt topic it listens to? 2020-06-05 11:30:46 I never updated it during 3.12 release 2020-06-05 11:39:40 according to clandmeter the original topics should still be anounced 2020-06-05 11:42:19 wwwtest.alpinelinux.org/releases.json 2020-06-05 11:42:46 this should now be the source of truth 2020-06-05 11:55:20 I can't see it on mobile 2020-06-05 16:34:11 i pushed to alpine/infra/alpine-mksite production branch 2020-06-05 16:34:24 but https://alpinleinux.org does not seem to be updated 2020-06-05 16:34:52 https://alpinelinux.org/downloads/ should have mips64 on minirootfs 2020-06-05 16:35:25 once that is fixed there should also be a https://alpinelinux.org/releases.json 2020-06-05 18:09:14 clandmeter: neostrada is out-of-date for more then 14 days and no response on the e-mail 2020-06-05 18:29:09 ncopa: i see mips but not releases.json 2020-06-05 18:29:23 ikke: ok we can disable it i guess 2020-06-05 18:35:42 ikke: we only have bw compat topic for aports 2020-06-05 18:35:50 all others need to adjust to new format 2020-06-05 18:38:52 clandmeter: aha, ok 2020-06-05 18:40:42 i upated the topic on www to gitlab/push/alpine/infra/alpine-mksite 2020-06-05 18:40:59 i dont think it matters that we dont listen for specific branch 2020-06-05 18:47:35 looks like it was also missing cjson 2020-06-05 18:49:57 looks like its working now 2020-06-05 18:50:29 would be nice if darkhttp would accept json as mimetype 2020-06-05 18:52:35 done 2020-06-05 18:53:22 https://alpinelinux.org/releases.json 2020-06-05 18:54:03 ncopa: Muito obrigado 2020-06-05 20:01:29 so for the online conference, lets say we go with OpenCFP for that, should i prepare a docker image for that or what? 
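To see what actually gets published on those topics (and whether the branch name really is a subtopic), subscribing with a wildcard is the quickest check; BROKER is a placeholder for whatever MQTT host the builders and scripts are pointed at:

    apk add mosquitto-clients
    # -v prints topic and payload; '#' matches any per-branch subtopics
    mosquitto_sub -h "$BROKER" -v -t 'gitlab/push/alpine/aports/#'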
i can host it myself if that is easier 2020-06-05 20:50:09 Hosting it should not be an issue 2020-06-05 20:51:04 I was already thinking about making a docker compose project myself 2020-06-07 10:25:23 clandmeter: ping 2020-06-07 12:13:23 ikke: o/ 2020-06-07 12:35:29 I just pushed the db dump commit to 12.10 2020-06-07 12:37:07 Going to run that on the test server 2020-06-07 12:37:14 If that works, then deploying in prod 2020-06-07 12:51:59 clandmeter: https://gitlab-test.alpinelinux.org/ 2020-06-07 12:55:27 Ok. I'm not home 2020-06-07 12:55:37 ok 2020-06-07 12:55:51 One test is failing, but might be related to an API change 2020-06-07 13:08:24 aha, found a better way 2020-06-07 13:14:52 clandmeter: do you agree upgrading gitlab tonight to 12.10? 2020-06-07 13:28:00 Yes please 2020-06-07 16:06:17 Done, succesfully upgraded 2020-06-07 16:06:24 And response time is good 2020-06-07 16:22:20 Nice :) 2020-06-07 16:23:15 ikke: did you add the announcement? 2020-06-07 16:23:34 yes 2020-06-07 16:23:48 :) 2020-06-07 19:38:31 oh this is the version with fancy fiel/folder icons 2020-06-07 19:39:19 yes 2020-06-07 20:16:42 clandmeter: I've just created a zabbix go agent plugin for openrc services 2020-06-07 20:17:22 what does that mean? 2020-06-07 20:17:43 https://tpaste.us/6Vlr 2020-06-07 20:18:59 clandmeter: It means you can add 'native' items to the agent without having to rely on scripts 2020-06-07 20:19:57 Just wanted to see what it takes to create a plugin 2020-06-07 20:20:26 There already exists a plugin for systemd, but not for openrc 2020-06-07 20:20:41 I just created that (uses rc-status -f ini -a) 2020-06-07 20:27:14 i still dont understand what it actually does? 2020-06-07 20:27:23 what will you monitor? 2020-06-07 20:27:42 the status of a service? 2020-06-07 20:27:48 It can discover what services are present on a host and it can return the sate 2020-06-07 20:27:50 state 2020-06-07 20:27:52 yes 2020-06-07 20:28:15 i didnt know about -f 2020-06-07 20:28:18 https://tpaste.us/V4xJ 2020-06-07 20:28:41 I just ran rc-status -h :) 2020-06-07 20:28:55 me2 but it didnt show -f :) 2020-06-07 20:29:09 looks like a more recent addition 2020-06-07 20:29:38 aha, ok 2020-06-07 20:29:41 what is recent? 2020-06-07 20:29:53 well my local server is 3.8 :) 2020-06-07 20:29:56 too lazy to upgrade 2020-06-07 20:32:22 That is anoying though 2020-06-07 20:39:17 https://github.com/OpenRC/openrc/commit/427a1ce2995b376ed6d112c5c5b422217f815fbb 2020-06-07 20:39:48 0.41 2020-06-07 20:44:33 so >= 3.10 2020-06-08 07:35:46 Did the host to SSH into for dev.a.o change? 2020-06-08 07:35:58 Ah, it's git-old 2020-06-08 08:15:29 ? 2020-06-08 08:44:23 ikke: interesting stats: https://gitlab.alpinelinux.org/admin/dashboard/stats 2020-06-08 08:49:43 clandmeter: I tried to ssh into git.a.o to get a tarball hosted on dev.a.o, but I had to ssh into git-old.a.o 2020-06-08 08:50:03 why not ssh into dev.a.o? 2020-06-08 08:50:06 You can use dev.a.o as host directly 2020-06-08 08:53:56 Oh :D 2020-06-08 08:54:04 clandmeter: it shows 0 for me everywhere. For you as well? 2020-06-08 08:54:15 yup 2020-06-08 08:54:22 it was sarcasm :) 2020-06-08 09:14:48 Aha, my sarcasm detector is broken 2020-06-08 10:18:30 yeah sarcasm is difficult to detect properly. probably need strace or gdb. 
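Since the zabbix plugin above hinges on the rc-status output, the relevant commands are worth spelling out; per the discussion above, the ini output format needs openrc >= 0.41, i.e. Alpine 3.10 or newer:

    # machine-readable state of all services, grouped per runlevel
    rc-status -f ini -a
    # state of a single service, e.g. when probing one discovered item
    rc-service docker status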
:) 2020-06-08 10:23:00 I learned to detect it by looking at nick ;) 2020-06-08 10:24:29 aha, a machine learning model 2020-06-08 10:25:15 mps: :D 2020-06-08 10:25:21 ikke: yes :D 2020-06-08 10:28:21 mps: your methods are so old school 2020-06-08 10:29:01 clandmeter: You don't want him to deploy some kind of adverserial network :P 2020-06-08 10:29:26 hehe, again easy to detect sarcasm looking at nick :P 2020-06-08 10:29:41 clandmeter: ^ 2020-06-08 10:30:13 you just confirmed my method works :) 2020-06-08 10:37:25 ikke: if ppl complain they lost their account, i just removed a lot of spam 2020-06-08 10:38:20 clandmeter: nod 2020-06-08 10:50:39 I've been so exhausted lately my sarcasm-o-meter is out of order 2020-06-08 11:57:35 im adding a version or timestamp to generated secdb 2020-06-08 11:57:52 any opinion on the format? 2020-06-08 11:58:24 to make it simple we could let it be epoc (seconds since 1 jan 1970) 2020-06-08 11:58:27 timestamp, imo 2020-06-08 11:59:06 alternatively it could be "yyyy.mm.dd.ssss" 2020-06-08 11:59:26 ISO 8601 2020-06-08 11:59:38 yyyy-mm-dd 2020-06-08 11:59:53 but '-' could be problem for versioning 2020-06-08 12:00:08 yyyymmddssss 2020-06-08 12:00:09 ? 2020-06-08 12:00:31 just thinking aloud 2020-06-08 12:00:53 if it is for parsing by machine, i guess epoch is the simplest 2020-06-08 12:01:03 but it may be nice to make it somewhat human readable too 2020-06-08 12:01:10 sure 2020-06-08 12:01:35 2020-06-08T09:08:54Z 2020-06-08 12:01:37 so, maybe your idea "yyyy.mm.dd.ssss" is best 2020-06-08 12:01:53 i think using an established standard is not bad idea at all 2020-06-08 12:02:26 this '2020-06-08T09:08:54Z' is parseable with some (most?) tool and libs 2020-06-08 12:02:42 yeah 2020-06-08 12:04:20 actually, I prefer this, but for 'show' it to humans usually add some script/method to display it in something like "yyyy.mm.dd.ssss" 2020-06-08 12:10:13 and to add, unix timestamp is fine for internal use but not as fine for showing to humans 2020-06-08 12:39:46 hmm, '* Updated cty.dat 20200418 (cty-3007) 2020-06-08 12:39:59 http://xlog.nongnu.org/xlog.changelog 2020-06-08 12:40:13 about timestamp for sec db 2020-06-08 13:13:18 oops 2020-06-08 13:13:22 i pushed the wrong button 2020-06-08 13:22:57 What did that wrong button do? 2020-06-08 13:24:54 i have no idea 2020-06-08 13:25:07 i pushed the spam report button for an entire issue 2020-06-08 13:25:12 instead of the comment 2020-06-08 13:25:42 i dont understand why it doesnt have a rn-rf button directly 2020-06-08 13:25:49 for the comment that is 2020-06-08 13:26:08 That used might now be bugged with captchas 2020-06-08 13:26:34 could be 2020-06-08 13:26:47 algitbot will have a headache 2020-06-08 13:27:43 The spamlog is useless as well 2020-06-08 13:28:01 no way to see the complete comment / content 2020-06-08 13:28:26 yes its a mess 2020-06-08 13:28:33 i tried looking for my report 2020-06-08 13:28:40 if it would be included 2020-06-08 13:28:44 but i gave up 2020-06-08 13:29:43 tbh i dont even know what it really does, the topic spam button. 2020-06-08 13:29:58 i guess its different than report this is spam feature 2020-06-08 15:29:13 I have added an MR that should use releases.json for secdb. https://gitlab.alpinelinux.org/alpine/infra/docker/secdb/-/merge_requests/1 2020-06-08 15:29:29 it will automatically add 'edge' and future new releases (given thye are added to releases.json) 2020-06-08 15:29:51 oh 2020-06-08 15:30:35 i forgot. 
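The candidate secdb version formats discussed here are all plain date formats, so they are easy to compare side by side with busybox date; a quick illustration, all in UTC, with the last line only approximating the "yyyy.mm.dd.ssss" idea:

    date -u +%s                    # unix epoch: trivial for machines, opaque for humans
    date -u +%Y-%m-%dT%H:%M:%SZ    # ISO 8601, as suggested above
    date -u +%Y.%m.%d.%H%M%S       # a dotted variant in the spirit of yyyy.mm.dd.ssss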
we should add a README.md to that project that tells that downstream users should use https://secdb.alpinelinux.org 2020-06-08 15:39:19 Do we know who is behind that secdb scanner? Maybe we can contact them to point that out 2020-06-09 03:28:15 I really need to setup certfp auth so I don't bounce off of channels like this one 2020-06-09 08:00:55 ikke I think it is this one: https://github.com/quay/clair 2020-06-09 08:01:55 they have a list of issues about alpine: https://github.com/quay/clair/issues?q=is%3Aissue+is%3Aopen+alpine 2020-06-09 08:03:16 github search does not reveal that they clone secdb from git.a.o 2020-06-09 08:08:53 ah: https://github.com/quay/clair/blob/c39101e9b8206401d8b9cb631f3aee47a24ab889/ext/vulnsrc/alpine/alpine.go#L38 2020-06-09 08:09:53 So apparently they already use github, but I guess older versions still use git.a.o 2020-06-09 08:10:06 https://github.com/quay/clair/blob/master/ext/vulnsrc/alpine/alpine.go#L38 2020-06-09 08:10:40 yeah, just fond it 2020-06-09 08:10:43 we should send them a feature request to switch to https://secdb.alpinelinux.org 2020-06-09 08:11:14 im working on the gnutls secfix now 2020-06-09 08:11:24 The one about the tls session ticket? 2020-06-09 08:11:32 i think so yes 2020-06-09 08:11:44 https://gitlab.alpinelinux.org/alpine/aports/-/issues/11627 2020-06-09 10:46:25 ncopa: question. I want to use the new zabbix go agent, which supports native plugins. But you have to build a dedicated version of the agent with those plugins added. Do so any issues adding a dedicated zabbix-agent2-alpine package for that? 2020-06-09 10:46:28 to aports 2020-06-09 10:46:53 it builds from the same source as zabbix itself, but just builds the agent with the plugins 2020-06-09 10:47:21 https://tpaste.us/LYlK 2020-06-09 10:50:08 ikke: are we no longer using 80 chars line limit for aports? 2020-06-09 10:51:31 I think we still are, though, some proposed to either remove it, or make it more lenient 2020-06-09 10:52:19 what are thouse plugins? 2020-06-09 10:52:35 why cant we enable them in zabbix-agent package? 2020-06-09 10:53:38 ncopa: You can add your own custom plugins. In this case for example, I've added a plugin to add native support for monitoring openrc services (requires >openrc-41.0). 2020-06-09 10:54:03 The advantage is that you don't need to deploy external scripts with all necessary dependencies 2020-06-09 10:55:38 We could build / host it internally, but then we need to make sure we can build it for all arches 2020-06-09 10:56:24 We could add the plugins to the already existing agent, but then that agent is no longer 'vanilla' 2020-06-09 10:58:11 those plugins are build-time plugins? 2020-06-09 10:58:14 not runtiem lugins 2020-06-09 10:58:18 yes, correct 2020-06-09 10:58:18 runtime* 2020-06-09 10:58:20 ok 2020-06-09 10:59:01 are there any reason to not include them in the "vanilla" package, as distro patch? 2020-06-09 10:59:46 i guess we shoudl shipe an zabbix-agent-alpine build, with the alpine specific plugins enabled 2020-06-09 11:00:34 The plugins themselves will mostly be relevant to anyone who uses alpine, so in that sense, it would make sense to include them 2020-06-09 11:00:51 that is what im thinking 2020-06-09 11:01:10 meeting. 
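For downstream users like the scanner discussed here, pulling the published database is just an HTTP fetch; a sketch, where the per-branch file name is an assumption (check the directory index for the exact layout):

    # list what is published per branch
    curl -s https://secdb.alpinelinux.org/
    # fetch one branch's database, e.g. main for v3.12
    curl -s https://secdb.alpinelinux.org/v3.12/main.json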
brb 2020-06-09 11:01:30 ok 2020-06-09 11:50:36 back 2020-06-09 11:51:43 so the only drawback i can see for including alpine specific plugins to zabbix is if we introduce any vulnerabilities 2020-06-09 12:00:20 yes 2020-06-09 12:13:59 i think im fine with either way 2020-06-09 12:14:12 ok 2020-06-09 12:14:33 the major benefit of keeping it separate apkbuild is that you can update the plugin without needing to rebuild everything 2020-06-09 12:14:59 but the cost is that you need to remember to update the plugin everytime zabbix is updated 2020-06-09 12:15:14 yes, I'm well aware 2020-06-09 12:15:19 can be fixed with relgroups once that is implemented 2020-06-09 13:09:50 I haven an idea for another alpine zabbix plugin, that compares installed packages with secdb 2020-06-09 13:10:15 and generate alert or similar when tehre is something that needs to be updated 2020-06-09 15:01:53 ncopa: what updates secdb? is it just a seperate repo that maintainers keep track of? 2020-06-09 15:03:45 maldridge: https://gitlab.alpinelinux.org/alpine/infra/docker/secdb 2020-06-09 15:22:22 ah, so it parses the APKBUILDs 2020-06-09 15:42:50 yes 2020-06-09 16:23:43 ncopa: I think I will first add a separate agent to testing to get a bit more experience / feeling with it and it allows me to iterate more quickly (without having to push a lot of pkgrel bumps to zabbix) 2020-06-09 18:50:33 is the gitlab runner for ppc64le down or just busy? 2020-06-09 18:51:04 most likely busy 2020-06-09 18:51:09 but it can be stuck at some jobs 2020-06-09 18:51:16 it started now 2020-06-09 18:51:22 ok, good 2020-06-09 18:51:34 yeah, I don't see any jobs that are stuck 2020-06-09 18:51:43 but it only handles 2 concurrent jobs 2020-06-09 18:53:20 ah 2020-06-09 18:54:12 10 jobs running, 11 pending 2020-06-09 18:57:07 backport MRs to stable (3.11 and 3.12) pushed as well for intal-ucode 2020-06-09 18:57:32 O do not usually merge in stable... 2020-06-09 18:57:34 I 2020-06-09 18:58:19 not sure what the rules are (I mostly merge in edge testing and commuity) 2020-06-09 18:58:40 rules are quite simple actually :) 2020-06-09 19:00:52 If it's just a security patch, then you just go ahead and push it 2020-06-09 19:01:28 cpu firmware ofcourse is a bit different 2020-06-09 19:02:06 yes its just blobs 2020-06-09 19:03:42 should I do v3.10 as well 2020-06-09 19:03:57 and I guess also 3.9 2020-06-09 19:03:58 thats the first release we shipped intel-ucode 2020-06-09 19:04:01 ah 2020-06-09 19:04:03 ok 2020-06-09 19:06:16 should we backport the package to v3.9? 2020-06-09 19:06:26 I don't know 2020-06-09 19:06:26 is it in main/ ? 2020-06-09 19:06:41 i.e. move from testing to community and then update? 2020-06-09 19:06:53 no its just in community 2020-06-09 19:07:12 sounds like something that belongs in main 2020-06-09 19:08:00 yes I think so too 2020-06-09 19:09:26 But users still need to do something to load it, or is having it in /boot enough? 2020-06-09 19:10:03 I have patched grub to look for it and add it if installed 2020-06-09 19:10:20 thats in the alpine grub package 2020-06-09 19:10:39 ok 2020-06-09 19:11:03 But I guess that was also added in v3.10 2020-06-09 19:11:38 for xen 4.13 and later it can be loaded at runtime. for the older versions i add it first in initramfs 2020-06-09 19:13:12 I added it "Thu May 16 2019" when was v3.9? 
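The "compare installed packages with secdb" plugin does not exist yet at this point; a much cruder first approximation of the same question (is anything on this host behind what the repositories offer?) is already possible with plain apk, shown here only as a starting point, not as the secdb comparison itself:

    # refresh the indexes, then list installed packages older than the repository version
    apk update --quiet
    apk version -l '<'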
2020-06-09 19:13:37 2019-01-29 2020-06-09 19:13:56 git tag --contains gives v3.10.0 2020-06-09 19:14:12 yes, so possibly to big for v3.9 2020-06-09 19:14:56 I have no v3.9 left that boots with grub 2020-06-09 20:24:32 Are there special accounts for bots in Gitlab? 2020-06-09 20:29:01 Under user statics there is an item for bots, but I haven't found how to create them. 2020-06-09 20:31:23 Apparently according to an open ticket there is no special bot user type 2020-06-09 20:31:54 Ah okie 2020-06-09 20:32:17 I'll just make a new normal user for testing my bot thingie once I get to making that then 2020-06-10 06:52:09 ncopa: did you recently remove 2.x from master mirror? 2020-06-10 06:53:22 not recently 2020-06-10 06:53:24 some time ago 2020-06-10 06:59:55 ok 2020-06-10 07:01:39 ikke: the ci templates, did something change? 2020-06-10 07:02:29 it should only work for the branch that is selected right? 2020-06-10 07:02:51 branch as in ref 2020-06-10 09:42:50 ACTION slaps ikke with a CI pipeline 2020-06-10 09:45:44 ouch 2020-06-10 09:45:52 "branch that is selected"? 2020-06-10 12:11:51 ikke: yes in the yml 2020-06-10 12:11:58 It set a ref 2020-06-10 12:12:10 I guess it acts on that ref only 2020-06-10 12:12:35 With something like `only:` or `rules:`? 2020-06-10 12:37:40 ikke: https://gitlab.alpinelinux.org/ncopa/secdb/-/tree/releases-json 2020-06-10 12:37:58 why does it run ci when the yml targets master? 2020-06-10 12:39:35 ref: master there means the master branch of alpine/infra/gitlab-ci-templates 2020-06-10 12:39:43 meaning, that's where it will look for the file to include 2020-06-10 12:39:54 It does not mean only run on master 2020-06-10 12:40:03 It's part of the include statement 2020-06-10 12:40:11 ah ok, sorry. 2020-06-10 13:06:51 ikke: should be exclude PR's for that type of ci? 2020-06-10 14:06:36 The default mode is to operate on branch pushes, which is what we want in most of the cases 2020-06-10 14:23:22 so if we create a PR it will be triggered it seems. 2020-06-10 14:23:44 Usually it's already triggered before the PR runs 2020-06-10 14:23:48 ie, when you push the branch 2020-06-10 14:24:09 ok 2020-06-10 14:24:12 so can we limit that? 2020-06-10 14:24:26 You only want it to happen on merge requests? 2020-06-10 14:24:37 no only on pushes to master i guess 2020-06-10 14:24:47 this ci is to generate a docker image 2020-06-10 14:25:05 we could build it, but not push it. 2020-06-10 14:25:41 I only see verify and build jobs 2020-06-10 14:25:45 that's already what happens 2020-06-10 14:25:49 push is limited to master 2020-06-10 14:26:16 https://gitlab.alpinelinux.org/alpine/infra/gitlab-ci-templates/-/blob/master/docker-image.yml#L135 2020-06-10 14:26:28 ah nice 2020-06-10 14:26:41 so it only errors cause no runners are assigned 2020-06-10 14:26:55 yes 2020-06-10 14:27:12 need to enable shared runners 2020-06-10 14:27:46 https://gitlab.alpinelinux.org/ncopa/secdb/-/settings/ci_cd 2020-06-10 14:27:56 Under runners, there is a green button: enable shared runners 2020-06-10 14:28:04 nod 2020-06-10 14:40:40 Should I enable that for secdb? 2020-06-10 18:54:06 I just did :) 2020-06-11 14:41:02 Is the s390x Gitlab CI runner down? 
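The "git tag --contains" check used above generalises to any package; a sketch against an aports checkout, where $COMMIT stands for whichever commit hash the first command turned up:

    # first commit that added the file (a move from testing/ also shows up as an add)
    git log --reverse --diff-filter=A --format='%h %ci %s' -- community/intel-ucode/APKBUILD
    # every release tag that contains that commit; the oldest is the first release to ship it
    git tag --contains $COMMIT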
2020-06-11 14:41:47 nope, just busy 2020-06-11 16:45:12 is something changed with dev.a.o, I cannot ssh there 'ps@dev.alpinelinux.org: Permission denied (publickey)' 2020-06-11 16:45:28 mps* 2020-06-11 18:12:17 anyone ^ 2020-06-11 18:18:41 should not 2020-06-11 18:18:48 let me check 2020-06-11 18:20:41 I only see these messages: "Connection closed by authenticating user mps port 43416 [preauth]" 2020-06-11 18:21:53 hmm, I use same key which I use for gitlab and lxc containers 2020-06-11 18:23:21 The key there is different than you have on gitlab 2020-06-11 18:28:09 all sorted out :) 2020-06-11 18:29:35 yes, thanks 2020-06-11 20:44:46 ikke: do you smell gitlab? 2020-06-12 00:56:37 ikke: can you make https://gitlab.alpinelinux.org/kaniini/autoconf-policy public 2020-06-12 04:48:30 clandmeter: Yeah, i do think I smell something 2020-06-12 04:52:01 Ariadne: done 2020-06-12 04:52:09 thx 2020-06-12 06:17:36 any problem with the builders for 3.10? 2020-06-12 06:19:04 last upload to 3.10 seems to be nextcloud, but several commits after that https://gitlab.alpinelinux.org/alpine/aports/-/commits/3.10-stable/ 2020-06-12 06:28:22 algitbot: retry 3.10-stable 2020-06-12 06:48:24 ikke: thx 2020-06-12 06:48:57 Is it alright now? 2020-06-12 06:50:45 looks like https://build.alpinelinux.org/buildlogs/build-3-10-x86_64/main/xen/ 2020-06-12 06:51:03 yeah, I saw it building xen 2020-06-12 06:51:58 ok, the mirror is now copletely gone apparently 2020-06-12 06:52:01 time to remove it 2020-06-12 06:54:10 the "Read more" link here https://alpinelinux.org/ returns empty 2020-06-12 06:54:30 under "ALPINE NEWS" 2020-06-12 07:27:16 HRio: could you create an issue here? https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite 2020-06-12 07:43:29 HRio: less is more ;-) 2020-06-12 07:55:38 issue #1 :-) 2020-06-12 21:39:40 can we get newedge builders uploading their build logs? 2020-06-13 05:28:50 clandmeter: s390x-ci host appears down / unreachable 2020-06-13 05:29:23 and the builder as well 2020-06-13 05:30:07 tmhoang: ping 2020-06-13 12:38:29 ikke: both are still down? 2020-06-13 12:39:03 yes 2020-06-13 12:43:28 email send 2020-06-13 12:43:39 thanks 2020-06-13 13:33:47 ah, great 2020-06-13 13:34:31 The builder is not yet back, thoguh 2020-06-13 13:50:08 ikke: ping 2020-06-13 13:51:45 pong 2020-06-13 13:52:27 clandmeter: ^ 2020-06-13 13:53:50 builder is still down 2020-06-13 13:54:06 Did you see the mail? 2020-06-13 13:54:14 Should be up 2020-06-13 13:54:16 I have not seen a response 2020-06-13 13:54:25 ah, spam.. 2020-06-13 13:55:23 --- s390x.alpinelinux.org ping statistics --- 2020-06-13 13:55:25 12 packets transmitted, 0 received, 100% packet loss, time 11137ms 2020-06-13 13:56:33 Maybe fw? 2020-06-13 13:56:45 Ssh also down? 
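For "Permission denied (publickey)" cases like the one above, it usually helps to see which identities the client is actually offering before assuming the server side is wrong; the key file name below is only an example:

    # watch which keys are offered and which get rejected
    ssh -v mps@dev.alpinelinux.org exit 2>&1 | grep -Ei 'offering|denied'
    # force one specific key instead of whatever the agent tries first
    ssh -i ~/.ssh/id_ed25519_alpine -o IdentitiesOnly=yes mps@dev.alpinelinux.org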
2020-06-13 13:57:59 I don't see a notification here about it being down 2020-06-13 13:58:09 d'oh, I can ssh into it 2020-06-13 13:58:11 just ping 2020-06-13 13:58:58 :) 2020-06-13 13:59:15 It happens to the best of us 2020-06-13 13:59:36 But the builder is not active for some reason either 2020-06-13 13:59:45 Oh, ok, I see it in the logs now 2020-06-13 14:00:50 Ok I'll send email 2020-06-13 14:00:54 I just did :) 2020-06-13 14:01:26 Ok 2020-06-13 14:01:47 Happy campers 2020-06-13 14:01:49 wondering why it's not responding to ping though 2020-06-13 14:03:15 Monitoring is bliss :) 2020-06-13 14:03:29 Heh, I'm working on monitoring 2020-06-13 14:03:50 but it would have reported s390x as down :) 2020-06-13 14:04:20 I don't see anything being dropped in dmesg 2020-06-13 14:04:46 But I see the packets arriving with tcpdump 2020-06-13 14:06:32 It's using nflog 2020-06-13 14:07:30 ah, type 0 and 8 are not allowed... 2020-06-13 14:15:50 ok, fixed ping 2020-06-13 14:16:10 So anoying when ping is blocked by firewalls :) 2020-06-13 14:19:34 yes 2020-06-13 14:20:04 admins don't know they can rate limit it instead of block 2020-06-13 14:57:52 In this case someone most likely just forgot to allow it 2020-06-13 15:02:03 But some times it's also because of a false sense of security 2020-06-13 15:25:03 this is last one is most often reasoning to not allow ICMP 2020-06-13 15:25:18 yes 2020-06-13 15:28:30 in my professional work I meet not small number of admins with some diplomas who do not understand basics 2020-06-13 15:28:59 but we are OT (I fear maxice8 is lurking :) ) 2020-06-13 15:29:27 I only moderate #alpine-linux atm 2020-06-13 15:30:13 anyway, for me is to be 'better safe than sorry' 2020-06-13 15:30:38 maxice8: sorry for joking, I hope you don't take this seriosly 2020-06-13 15:30:46 Like I already said, we should not be too strict about being off-topic, as long as it does not disturb on-topic conversations 2020-06-13 15:31:41 and I think that is good maxice8 accepted that tedious and unwelcomed position 2020-06-13 15:32:41 working with bad people could be unpleasant task 2020-06-13 15:33:14 hmm, duty not task 2020-06-13 15:42:59 @ikke that is how I moderated a telegram group I had created for Gentoo 2020-06-13 15:52:42 huh 2020-06-13 16:11:48 clandmeter: we lost s390x-ci again :( 2020-06-13 16:40:24 Hmm 2020-06-13 16:40:45 Do you want to mail? 2020-06-13 16:40:57 I'm having a 🍻 2020-06-13 16:41:11 That's not a good combo 2020-06-13 16:44:28 I wonder if it has to do with workload 2020-06-13 18:30:38 :( 2020-06-14 05:20:19 s390x-ci: "[11812631.616312] illegal operation: 0001 ilc:1 [#1] SMP" 2020-06-14 05:25:48 clandmeter: do we want to upgrade it to alpine 3.12? 2020-06-14 06:10:50 what is the hardware that's backing the s390x machine? 2020-06-14 06:13:36 I'm not entirely sure 2020-06-14 06:14:17 I was mainly curious if its an actual System Z machine 2020-06-14 06:14:20 IBM/s390 is all I know 2020-06-14 06:14:20 or something a bit more modest 2020-06-14 07:46:43 iirc, tmhoang told it is one of System Z machines, but I forgot which one 2020-06-14 08:06:43 Morning 2020-06-14 08:07:02 Looks like pho kills it? 2020-06-14 08:07:07 Php 2020-06-14 08:08:43 I had that suspicion as well 2020-06-14 08:09:50 Let's upgrade it 2020-06-14 08:12:34 Hardware name: IBM 8561 LT1 400 (z/VM 6.4.0) 2020-06-14 08:12:45 maldridge: ^ 2020-06-14 09:00:59 Who will do the upgrade? 2020-06-14 09:20:44 i have some time now, so i can do it. 2020-06-14 09:20:53 did you turn of the runner?
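The "rate limit it instead of block" point above boils down to a couple of firewall rules; the raw iptables form is sketched below, though the hosts here manage their rules through their own firewall tooling rather than hand-written rules:

    # accept echo requests up to a sane rate and drop only the excess,
    # instead of dropping ICMP type 8 (and the type 0 replies) outright
    iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 10/second --limit-burst 20 -j ACCEPT
    iptables -A INPUT -p icmp --icmp-type echo-request -j DROP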
2020-06-14 09:26:29 No 2020-06-14 09:52:41 ok stopping docker now 2020-06-14 09:55:35 apk is being difficult. ive seen this behavior before. 2020-06-14 09:55:54 cant upgrade when having cmd:xxx in world 2020-06-14 09:56:11 Oh yeah, that's anoying 2020-06-14 09:56:32 is there an outstanding issue for htis that you may know about? 2020-06-14 10:00:20 What cmd; 2020-06-14 10:00:28 Is in there 2020-06-14 10:01:39 sort 2020-06-14 10:01:45 coreutils 2020-06-14 10:02:33 the upgrade fails 2020-06-14 10:03:04 interesting 2020-06-14 10:03:14 (68/149) Replacing py3-cached-property (1.5.1-r1 -> 1.5.1-r1) 2020-06-14 10:03:14 ERROR: py3-cached-property-1.5.1-r1: failed to rename usr/lib/python3.8/site-packages/.apk.2465692beef1da516c1ad2b814a994306f8c2fec7f49e628 to usr/lib/python3.8/site-packages/cached_property-1.5.1-py3.8.egg-info. 2020-06-14 10:03:56 apk fix does fix it. 2020-06-14 10:05:44 ok rebooting it 2020-06-14 10:05:58 More users reported these kinds of issues 2020-06-14 10:06:43 ill try later to reproduce 2020-06-14 10:07:12 cross fingers and hope it returns 2020-06-14 10:07:48 I hope for you as well :-) 2020-06-14 10:10:00 Linux gitlab-runner-s390x 5.4.43-1-lts #2-Alpine SMP Thu, 28 May 2020 20:13:48 UTC s390x Linux 2020-06-14 10:16:32 cool 2020-06-14 10:21:20 So now what, try to run the php CI on s390x again and see if it crashes again? 2020-06-14 10:29:52 i would wait till monday if thats possible 2020-06-14 10:29:57 not sure we have support atm 2020-06-14 10:30:56 ikke: when did you do the sqlite check for pkgs? 2020-06-14 10:32:20 This morning 2020-06-14 10:32:35 did you see it? 2020-06-14 10:33:03 i saw the issue 2020-06-14 10:33:09 and i saw its still broken 2020-06-14 10:33:16 and i saw you did something in history 2020-06-14 10:33:20 aha ok 2020-06-14 10:33:25 thats about all i know 2020-06-14 10:33:33 db seems corrupt 2020-06-14 10:33:50 https://tpaste.us/KyYP 2020-06-14 10:33:50 database disk image is malformed 2020-06-14 10:33:54 This is the output 2020-06-14 10:34:03 yup 2020-06-14 10:35:01 not sure how to read it probably 2020-06-14 10:35:07 sounds like index is corrupt or so 2020-06-14 10:35:27 So either find out how to repair the db, or regenerate it 2020-06-14 10:35:34 which is going to take some time 2020-06-14 10:35:51 yes but we have 2 instances 2020-06-14 10:35:55 so we could regen on the second 2020-06-14 10:36:08 where is the 2nd instance? 2020-06-14 10:36:23 its a bit obvious ;-) 2020-06-14 10:36:44 aports-turbo-test 2020-06-14 10:36:46 :) 2020-06-14 10:37:00 but im not sure what the status is 2020-06-14 10:37:20 i think we can just copy the db to the test, del edge and regen 2020-06-14 10:38:27 and then copy back? 2020-06-14 10:38:32 nod 2020-06-14 10:38:52 but i need to check how the relation with flagged db is 2020-06-14 10:39:20 I'm also kind of interested to see if this is fixable 2020-06-14 10:41:08 right, thats also fine. 2020-06-14 10:41:13 ill stay of it for now. 
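The cmd: entries that block the upgrade above are ordinary world entries, so they can be swapped for the real package before retrying; sort/coreutils is the pair named in the chat, and anything else the grep turns up is handled the same way:

    # world entries created by 'apk add cmd:...' style installs
    grep '^cmd:' /etc/apk/world
    # replace the virtual with the providing package, then retry
    apk del cmd:sort
    apk add coreutils
    apk upgrade --available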
2020-06-14 10:41:25 i copied the db to /root 2020-06-14 10:42:58 you could regen the index 2020-06-14 10:43:18 https://www.sqlite.org/lang_reindex.html 2020-06-14 10:43:41 Let's try 2020-06-14 10:46:53 just repsonds with the same error 2020-06-14 10:47:12 https://stackoverflow.com/a/18260642/20261 2020-06-14 10:47:51 heh 2020-06-14 10:47:56 you are reading same page as me 2020-06-14 10:48:00 im already dumping the db 2020-06-14 10:48:08 ok 2020-06-14 10:48:27 Then I'm doing nothing :) 2020-06-14 10:48:36 go ahead :) 2020-06-14 10:48:51 im just doing it on the one in /root 2020-06-14 10:49:10 probably recreating the index is faster 2020-06-14 10:50:34 you mean drop index + create index? 2020-06-14 10:51:12 new.db is 0 bites 2020-06-14 10:51:17 bytes 2020-06-14 10:51:19 :) 2020-06-14 10:51:24 at least it's not corrupt :P 2020-06-14 10:51:37 what about `.recover`? 2020-06-14 10:52:06 well i guess if you dump to an txt file and back to sqlite problem should be solved. 2020-06-14 10:52:19 if its just the index 2020-06-14 10:53:16 let me dump it to txt first 2020-06-14 10:56:35 that does not look good :) 2020-06-14 10:56:51 last line is ROLLBACK; 2020-06-14 10:56:55 due to errors 2020-06-14 10:57:14 but sqlite does not mention anything itself 2020-06-14 10:58:18 heh 2020-06-14 10:58:30 .recover is not available ;-) 2020-06-14 10:58:44 With Sqlite 3.29.0 2020-06-14 10:58:50 and ofc we have .28 2020-06-14 10:59:08 Bummer 2020-06-14 10:59:22 3.10 2020-06-14 10:59:26 we could... 2020-06-14 10:59:38 or i could just run a container :) 2020-06-14 11:00:59 Yes, sounds a lot easier 2020-06-14 11:07:50 ok now the txt is 8+ gb 2020-06-14 11:12:03 doesnt look like its ok 2020-06-14 11:12:13 lost and found is 8GB+ 2020-06-14 11:15:54 ikke, i dont think i can recover it, if you have ideas go ahead. 2020-06-14 11:16:38 i have moved the tmp db into tmpdb dir for use in docker. 2020-06-14 11:40:20 no, no other ideas, so lets just regen 2020-06-14 11:45:15 its not the indexes that are the issue 2020-06-14 11:48:21 I wonder what caused the corruption 2020-06-14 11:50:02 i need to go soon, if you can regen thats nice, else i need to check it later. 2020-06-14 11:53:08 I can 2020-06-14 11:53:18 What was the plan for that? 2020-06-14 11:53:26 In -test remove the edge db, regen, and copy back? 2020-06-14 12:23:12 i would copy the current db dir to test 2020-06-14 12:23:16 then remove edge 2020-06-14 12:23:31 else it will need to add a lot of missing things 2020-06-14 12:24:17 or comment out all other branches. 2020-06-14 12:25:28 looks like its using a diff tag 2020-06-14 12:25:42 and uses the live db 2020-06-14 12:27:32 you could copy the db dir to some place and just run docker directly and mount that db in the container. rm edge and update. 2020-06-14 12:27:52 you dont need http for updating 2020-06-14 12:30:09 right 2020-06-14 12:30:15 so just the update container 2020-06-14 12:30:41 clandmeter: You did not start it again I suppose? 2020-06-14 13:02:53 Running the update script now in a separate temporary container 2020-06-14 13:04:19 (in a tmux session) 2020-06-14 14:09:09 clandmeter: I can no longer start the runner on s390x with docker-compose 2020-06-14 14:09:18 "Cannot start service gitlab-runner: read unix @->@/containerd-shim/3fedff5d13348b7e084ab6f4a5dd59a191d7d9b4fda6633b89ae1a7106d4eac1.sock: read: connection reset by peer: unknown" 2020-06-14 14:18:33 Cannot start any container with docker 2020-06-14 14:21:42 Docker is running? 2020-06-14 14:21:53 yes 2020-06-14 14:22:30 Anything in docker daemon log? 
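For reference, the salvage attempts on the corrupted sqlite file above correspond to these commands; the database file names are placeholders, and .recover needs sqlite >= 3.29 (hence running it from an edge container, as suggested):

    # quick health check
    sqlite3 aports.db 'PRAGMA integrity_check;'
    # dump and reload; the dump ends in ROLLBACK instead of COMMIT when rows were unreadable
    sqlite3 aports.db .dump > dump.sql
    sqlite3 rebuilt.db < dump.sql
    # newer sqlite can additionally try to salvage what it can into a fresh file
    sqlite3 aports.db .recover | sqlite3 recovered.db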
2020-06-14 14:22:56 I'm checking /var/log/docker.log now 2020-06-14 14:23:02 But more of the same 2020-06-14 14:23:12 level=error msg="failed to kill shim" error="read unix @->@/containerd-shim/bb1a474ba0fd3e538e8e48b9e9884509c061f52a8a6bfaeca02edcb4b120c8a4.sock: read: connection reset by peer: unknown" 2020-06-14 14:23:18 level=error msg="stream copy error: reading from a closed fifo" 2020-06-14 15:04:26 ikke: do you know how to raise the log level of containerd? 2020-06-14 15:07:18 probably in daemon.json 2020-06-14 15:07:20 I guess? 2020-06-14 15:07:40 cant find it 2020-06-14 15:08:19 hmmm 2020-06-14 15:08:26 i can only find https://success.docker.com/article/how-to-enable-debug-mode-containerd-without-docker-engine 2020-06-14 15:08:45 You can specify -l to dockerd 2020-06-14 15:08:58 in /etc/conf.d/docker 2020-06-14 15:09:13 "-l, --log-level string Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") 2020-06-14 15:09:15 " 2020-06-14 15:09:38 for docker? 2020-06-14 15:09:41 or containerd 2020-06-14 15:11:09 Don't know 2020-06-14 15:11:56 I hope docker applies this to containerd as well 2020-06-14 15:12:51 yeah it does 2020-06-14 15:12:57 but its not obbious 2020-06-14 15:13:18 looks like not much improvement in info 2020-06-14 15:15:40 Hmm, ok 2020-06-14 15:19:06 I've tried an strace, but there was a lot of output 2020-06-14 15:19:38 a strace on docker? 2020-06-14 15:19:40 dockerd 2020-06-14 15:23:55 dockerd yes 2020-06-14 15:23:59 it's in the root dir 2020-06-14 15:24:10 /root/docker-strace/ 2020-06-14 15:27:11 do you strace the pid? 2020-06-14 15:27:24 it doesnt show much for me 2020-06-14 15:29:03 yes, with -f 2020-06-14 15:29:43 what do you start? 2020-06-14 15:30:07 I attach to an existing process 2020-06-14 15:30:23 i mean container 2020-06-14 15:30:26 strace -f -p 6005 2020-06-14 15:30:26 jsut docker run? 2020-06-14 15:30:34 alpinelinux/alpine-gitlab-ci 2020-06-14 15:30:37 yes 2020-06-14 15:30:39 docker run --rm -it alpine sh 2020-06-14 15:30:43 should work 2020-06-14 15:31:00 i tried to strace it but i got only a few lines 2020-06-14 15:31:08 did you add -f? 2020-06-14 15:31:11 does strace work correctly on golang? 2020-06-14 15:32:06 no i didnt add -f 2020-06-14 15:32:45 otherwise it will only show the process you attach to, not any subprocesses 2020-06-14 17:45:07 Did you find anything (just back again) 2020-06-14 18:49:33 no i didnt 2020-06-14 18:49:56 hmm, and now? 2020-06-14 18:50:26 im not sure where its happening 2020-06-14 18:50:47 dockerd, containerd, or something different 2020-06-14 18:51:11 yeah, hard to figure out 2020-06-14 18:51:28 we could build an older version of docker and deps 2020-06-14 18:51:43 but maybe its golang related... 2020-06-14 18:51:49 so its a stab in the dark 2020-06-14 18:52:15 yup, it certainly is, but you have to start somewhere 2020-06-14 18:52:19 and we dont really have a place to test 2020-06-14 18:52:55 s390x-ci.. :) 2020-06-14 18:52:58 or downgrade 2020-06-14 18:53:01 it's kind of doing nothign anyway 2020-06-14 18:53:09 yup, that's also an option 2020-06-14 18:53:52 but downgrading and making it work does not fix the long term problem 2020-06-14 18:54:51 we could try play with qemu 2020-06-14 18:56:06 do you remember on which alpine version we were? 
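Raising the daemon log level from the conf.d file, as suggested above, looks roughly like this on a stock Alpine install, assuming the openrc service script passes DOCKER_OPTS through to dockerd as the shipped conf.d file indicates:

    # /etc/conf.d/docker
    DOCKER_OPTS="--log-level debug"

    rc-service docker restart
    tail -f /var/log/docker.log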
2020-06-14 18:56:39 I might still have it in the history 2020-06-14 18:57:04 nope, docker-compose logs overwrote everything :-/ 2020-06-14 18:57:31 it was 10 or 11 2020-06-14 18:57:42 I think 3.11 2020-06-14 18:57:52 we had issues before 2020-06-14 18:58:00 yes, so we decided to upgrade to 3.11 2020-06-14 18:58:02 so i upgraded it not long ago 2020-06-14 18:58:05 heh 2020-06-14 19:00:57 ok im installing qemu and see if i can get something running 2020-06-14 19:01:21 ok 2020-06-14 19:02:41 is it me or is netbox slow 2020-06-14 19:03:53 looks like it's you 2020-06-14 19:43:03 looks like it does not like 3.11 2020-06-14 19:43:08 at least not in qemu 2020-06-14 20:04:56 ikke: 3.11 is ok 2020-06-14 20:04:59 3.12 not 2020-06-14 20:05:27 ok, good to know 2020-06-14 20:05:37 so should we downgrade the ci server again? 2020-06-14 20:06:39 i think we should try to find out where the problem is 2020-06-14 20:06:42 which part 2020-06-14 20:06:49 so maybe build the packages 2020-06-14 20:07:04 docker containerd and runc 2020-06-14 20:07:11 i guess those are sep pkgs 2020-06-14 20:07:39 build them on 3.12 based on 3.11 versions 2020-06-14 20:08:37 zabbix upgraded to 5.0 :) 2020-06-14 20:09:00 so latest version on older alpine, right? 2020-06-14 20:09:17 2https://pkgs.alpinelinux.org/package/edge/community/x86_64/containerd 2020-06-14 20:09:22 https://pkgs.alpinelinux.org/package/edge/community/x86_64/runc 2020-06-14 20:09:55 i think we could try with versions from 3.11 (which we know works) 2020-06-14 20:10:13 maybe its possible to install them already 2020-06-14 20:10:14 on 3.12? 2020-06-14 20:10:16 right 2020-06-14 20:10:20 yes 2020-06-14 20:10:31 apk add -X <3.11-repo> docker? 2020-06-14 20:10:32 its running in tmux 2020-06-14 20:10:47 docker= 2020-06-14 20:11:13 clandmeter: this page is new in zabbix 5.0: https://zabbix.alpinelinux.org/zabbix.php?action=host.view 2020-06-14 20:11:27 perhaps not perfect, but it's kind of what you were missing 2020-06-14 20:11:46 ok nice 2020-06-14 20:12:05 i need to fix something else now. you can try to run those pkgs in qemu 2020-06-14 20:12:11 ok 2020-06-14 20:12:11 there is also a script to restart it 2020-06-14 20:12:22 restart the vm? 2020-06-14 20:12:26 yes 2020-06-14 20:12:28 well start 2020-06-14 20:12:29 ok 2020-06-14 20:12:38 shutdown -f :) 2020-06-14 20:12:43 heh 2020-06-14 20:12:46 in qemu ;-) 2020-06-14 20:12:52 :D 2020-06-14 20:13:01 would be painful if you run it on the host 2020-06-14 20:13:11 (kind of funny, because the host is a vm on it's own)_ 2020-06-14 20:17:14 clandmeter: fyi, only patch-level version difference 2020-06-14 20:17:21 19.03.5 vs 19.03.11 2020-06-14 20:21:41 Hmm, it keeps using the latest version 2020-06-14 20:22:05 apk add -X <3.11-repo> docker=19.03.5-r1 does not work 2020-06-14 20:22:12 and also using a pinned repo does not work 2020-06-14 20:40:08 clandmeter: so it's containerd 2020-06-14 20:40:21 1.3.2 works, 1.3.4 is broken 2020-06-14 20:40:55 Let's see if newer versions of docker work with 1.3.2 2020-06-14 20:41:55 yes 2020-06-14 20:42:08 so only using 3.11 version of containerd fixes the issue 2020-06-14 20:43:22 fixed it on the host as well 2020-06-14 20:43:45 Now the question if it's only s390x, or if it happes on other arches as well 2020-06-14 21:14:30 ikke: I believe it should be all arches that have this problem 2020-06-14 21:14:50 docker/containerd interface is mostly unspecified from my understanding. 
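The log never quotes the invocation that finally pulled containerd out of the 3.11 repository, so this is only one plausible form; the -r0 pkgrel and the CDN mirror URL are assumptions:

    # offer the 3.11 community repo for this one transaction and pin the version
    apk add -X http://dl-cdn.alpinelinux.org/alpine/v3.11/community containerd=1.3.2-r0
    rc-service docker restart

    # confirm what actually got installed, then do a quick functional check
    apk info -vv | grep '^containerd'
    docker run --rm alpine echo containerd ok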
I know there are specific version pairs for x86 2020-06-14 21:15:28 maldridge: yes, that was my expectation, but strange that we did not get any reports about this yet though 2020-06-14 21:15:40 it would mean docker is generally broken on 3.12 2020-06-14 21:17:25 this does mirror my observations 2020-06-14 21:18:01 ok, so you have issues as well? 2020-06-14 21:19:21 yeah, at work we're running docker-in-docker, and we see similar issues with anything past 18.x 2020-06-14 21:20:23 I wonder if 1.3.3 has this issue as well 2020-06-14 21:20:33 Nice 2020-06-14 21:20:48 That you found it 2020-06-14 21:21:00 teamwork :) 2020-06-14 21:21:20 It cross my mind it's all arches 2020-06-14 21:21:37 if that's the case, we should revert containerd, I guess 2020-06-14 21:21:44 but it's so strange that we did not get any complaints 2020-06-14 21:21:47 But as you said. I would expect some reports 2020-06-14 21:21:51 yea 2020-06-14 21:22:34 I think after bad experiences wiht docker in the past folks may be more cautious about updating 2020-06-14 21:22:51 I think work is actually still on 3.11, with some old hosts on 3.9 still 2020-06-14 21:34:23 clandmeter: edge regen is stull running btw 2020-06-14 21:34:34 edge/testing/aarch64 2020-06-14 21:34:36 Yes 2020-06-14 21:34:51 It's very slow via http 2020-06-14 21:35:14 But portable 2020-06-14 21:35:29 We just let it run 2020-06-14 21:35:40 as long as it's stable 2020-06-15 04:38:43 clandmeter: ok, it finished 2020-06-15 05:48:41 Nice 2020-06-15 05:49:02 well done 2020-06-15 05:49:35 should I just copy the edge db back? 2020-06-15 06:55:31 i would first stop it :) 2020-06-15 06:55:58 yes, sounds like a good plan 2020-06-15 06:56:21 also remove releated files, i guess thats logical 2020-06-15 06:56:43 the tmp files related to that db 2020-06-15 06:56:45 indeed 2020-06-15 06:57:03 yes i think those are wall 2020-06-15 08:05:04 clandmeter: done 2020-06-15 08:05:31 nice 2020-06-15 08:05:33 all is ok? 2020-06-15 08:05:48 it's working, I did not verify the corrupted queries yet 2020-06-15 08:07:30 works 2020-06-15 08:07:33 :) 2020-06-15 08:07:35 cool 2020-06-15 08:07:47 and flags? 2020-06-15 08:08:15 its empty :( 2020-06-15 08:08:26 hmm 2020-06-15 08:08:32 weird 2020-06-15 08:08:42 I kept a copy of the old db 2020-06-15 08:08:47 i think it uses pkgname 2020-06-15 08:09:09 its not a big issue though 2020-06-15 08:10:19 ah flagged is just a table 2020-06-15 08:10:25 i thought it was a sep db 2020-06-15 08:10:33 oh ok 2020-06-15 08:10:38 so we would need to copy that 2020-06-15 08:11:46 yeah i guess just export that table and insert again 2020-06-15 08:13:18 Do you have time for that? 2020-06-15 08:42:13 not now, im busy @work 2020-06-15 08:42:55 me too 2020-06-15 10:58:07 clandmeter: ping 2020-06-15 10:58:10 pong 2020-06-15 10:58:20 i just put focus on this screen :) 2020-06-15 10:58:23 clandmeter: there seems to be also a table called flagged.flagged? 2020-06-15 10:58:25 hah 2020-06-15 10:59:00 but it's empty it seems 2020-06-15 11:00:56 right 2020-06-15 11:01:02 i just dumped flagged table 2020-06-15 11:01:17 seems we both have some time availabe :) 2020-06-15 11:01:20 flags are back 2020-06-15 11:01:48 nice 2020-06-15 11:02:37 case closed i guess 2020-06-15 11:03:41 issue closed as well :P 2020-06-15 11:14:37 ikke did you scale the app? 2020-06-15 11:15:22 I just stopped the containers 2020-06-15 11:15:25 and started them again 2020-06-15 11:15:53 ok but with scale? 
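A sketch of carrying the flag data over from the kept copy of the old database into the regenerated one; old.db and new.db are placeholders:

    # dump only the flagged table from the old copy
    sqlite3 old.db ".dump flagged" > flagged.sql

    # the dump contains the CREATE TABLE statement too, so drop the empty table
    # in the regenerated db first, then replay the dump
    sqlite3 new.db "DROP TABLE IF EXISTS flagged;"
    sqlite3 new.db < flagged.sql

    sqlite3 new.db "SELECT count(*) FROM flagged;"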
2020-06-15 11:16:25 It just starts the existing containers again 2020-06-15 11:16:29 so it will use the same 'scale' 2020-06-15 11:16:35 ok 2020-06-15 11:16:42 it was loading a bit slow 2020-06-15 11:17:12 https://tpaste.us/b46e 2020-06-15 11:18:15 seems like the DB is not complete yet 2020-06-15 11:18:39 last update is from yesterday, even though I manually ran the update 2020-06-15 12:11:14 ikke: looks like wal is not commited to db 2020-06-15 12:11:16 not sure why 2020-06-15 12:11:38 ah 2020-06-15 12:11:44 i think i know why 2020-06-15 12:11:49 permissions? 2020-06-15 12:12:11 forgot -a with cp 2020-06-15 12:12:22 yup 2020-06-15 12:12:25 root:1000 2020-06-15 12:12:41 I used rsync -a for the initial copy, so there it was good 2020-06-15 12:12:43 but its weird it does now show an error 2020-06-15 12:12:46 yup 2020-06-15 12:12:59 next sync is in 3 min 2020-06-15 12:13:02 ok 2020-06-15 12:13:29 will sqlite automatically apply the wal? 2020-06-15 12:14:26 lets wait and see :) 2020-06-15 12:14:35 could be it just ignores whats inside atm 2020-06-15 12:14:37 heh 2020-06-15 12:15:10 .db-shm is now being written to 2020-06-15 12:15:18 and the wal as well 2020-06-15 12:17:03 ok. wal is empty now 2020-06-15 12:17:19 success 2020-06-15 12:17:26 its still running 2020-06-15 12:17:31 now its ready :) 2020-06-15 12:17:46 heh 2020-06-15 12:17:50 looks ok 2020-06-15 12:18:59 hmm, last push was for keepalived, but no updates from builders 2020-06-15 12:19:28 I have the idea that the builders are always one update behind 2020-06-15 12:20:28 not exactly like that, but they are not always getting the latest update 2020-06-15 12:26:24 ikke: i think that sounds sane 2020-06-15 12:26:34 i bet they are still pulling from git.a.o 2020-06-15 12:26:45 and when the update comes git.a.o is not yet ready 2020-06-15 12:27:23 but at 13:30 was the last update message from the builder, at 13:40 a new commit was pushed, but nothing happened until I manually retried 2020-06-15 12:28:05 i dont follow 2020-06-15 12:28:15 look back in the #alpine-commit logs 2020-06-15 12:28:38 keepalived was pushed when the builders were (presumably) idle 2020-06-15 12:28:42 but they didn't start building it 2020-06-15 12:28:50 so, that matches my logic? 2020-06-15 12:29:11 push to gitlab, builders pull but update is not yet pushed to git.a.o 2020-06-15 12:29:22 so no update to build 2020-06-15 12:29:31 ah, like that 2020-06-15 12:29:33 1 second later update is pushed to git.a.o 2020-06-15 12:29:38 yeah 2020-06-15 12:29:44 they need to pull from gitlab 2020-06-15 12:29:47 yeah 2020-06-15 12:30:16 best to update it in the pkg 2020-06-15 12:30:31 aports-build? 2020-06-15 12:30:34 else you need to update the config everywhere 2020-06-15 12:31:02 ah you probably need to manually update it 2020-06-15 12:31:24 i guess it needs a remote update 2020-06-15 12:32:00 ok, and use https:// instead of git:// 2020-06-15 17:49:01 So what is the status of docker? 2020-06-15 17:49:17 ikke: ^ 2020-06-15 17:49:31 good question 2020-06-15 17:49:49 I'll install docker in my x86_64 vm and see if has the same issue 2020-06-15 17:50:56 There is no reference to versions that work together? 2020-06-15 17:51:30 Or is it not a compat issue? 
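What the forgotten -a amounts to, sketched out; the db path and the uid/gid 1000 target are taken from the discussion and otherwise assumptions:

    # cp without -a reset ownership to root, which is why the WAL written inside
    # the container was never checkpointed back into the main database file
    ls -l db/*.db*

    chown 1000:1000 db/*.db*
    # next time: cp -a src dst   (or rsync -a src/ dst/) to keep ownership/modes

    # fold any outstanding WAL content back into the database file
    sqlite3 db/aports.db "PRAGMA wal_checkpoint(TRUNCATE);"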
2020-06-15 17:52:04 runs fine on my VM 2020-06-15 17:52:14 so I would say no 2020-06-15 17:52:38 clandmeter: Maybe see if we can test containerd separately 2020-06-15 17:52:42 Hmm 2020-06-15 17:52:51 What did you test 2020-06-15 17:53:01 3.12¿ 2020-06-15 17:53:03 yes 2020-06-15 17:53:23 containerd 1.3.4, which gave issues on s390x 2020-06-15 17:53:38 Maybe check diff arches 2020-06-15 17:53:54 mps: can you try on arm? 2020-06-15 17:53:56 heh :D 2020-06-15 17:54:01 Was thinking the same 2020-06-15 17:54:24 Or did you run back to x86 ;-) 2020-06-15 17:56:22 clandmeter: you mean to run containerd on armv7? 2020-06-15 17:56:34 native 2020-06-15 17:56:35 Yes 2020-06-15 17:56:47 Docker 2020-06-15 17:57:00 apk add docker; rc-service docker start; docker run -it --rm alpine 2020-06-15 17:57:01 ok, will try when I come back home, in about hour or two 2020-06-15 17:57:22 Ok thx 2020-06-15 17:57:38 np 2020-06-15 17:57:41 If it's trouble we can also use qemu 2020-06-15 17:58:16 Although it's not working without kvm on s390x 2020-06-15 17:58:19 no, my arm boxes are ready, just now in shutdown state 2020-06-15 17:58:52 and I can't power-on them remotely 2020-06-15 18:00:42 though I can now do that on aarch64, is that also needed to test or only arm 32bit 2020-06-15 18:01:45 no, aarch64 is fine 2020-06-15 18:01:58 ok 2020-06-15 18:50:00 clandmeter: ikke: looks like it works on 3.12 2020-06-15 18:50:19 nice 2020-06-15 18:50:21 thx 2020-06-15 18:50:27 OK, so works on x86_64 2020-06-15 18:50:32 On aarch64 2020-06-15 18:50:37 Not on s390x 2020-06-15 18:50:54 how can I test it more 2020-06-15 18:51:15 It's fairly self-evident 2020-06-15 18:51:29 The issue is that we cannot start any container 2020-06-15 18:51:49 So if you successfully ran a container, it works 2020-06-15 18:51:55 that is what I see on console https://tpaste.us/evRW 2020-06-15 18:52:06 armv7 2020-06-15 18:52:15 Looks good 2020-06-15 18:53:00 hehe, this box have: 'apk version musl' => musl-1.2.0-r0 2020-06-15 18:53:29 ncie 2020-06-15 19:16:21 I guess we should open an issue on github 2020-06-15 19:16:30 so they can help us debug it 2020-06-15 19:17:07 yeah 2020-06-15 19:17:46 there is an example on the containerd website how to interact with it (via go), might be worth a try to see if we can see if it's an issue with containerd 1.3.4 on s390x in general 2020-06-15 19:20:47 if you are feeling lucky :) 2020-06-15 19:22:21 I'm kind of trying to get all servers in Zabbix 2020-06-15 19:40:36 that sounds like a good investment of spare time :) 2020-06-15 19:41:32 It has been on my mental to-do list for ages :) 2020-06-15 19:41:58 yes, i didnt want to mention it again ;-) 2020-06-15 21:26:18 https://zabbix.alpinelinux.org/items.php?filter_set=1&filter_hostids%5B0%5D=10368 2020-06-15 21:27:01 clandmeter: ^ 2020-06-15 21:27:31 Discovered by the new agent2 plugins 2020-06-16 04:45:00 clandmeter: ikke I have grown more competent at using ctr than I'd like 2020-06-16 04:45:06 what are you trying to do? 2020-06-16 04:46:09 ctr? 
2020-06-16 04:46:39 exactly my question :) 2020-06-16 04:47:15 its the CLI toolset for containerd 2020-06-16 04:47:24 ah 2020-06-16 04:47:40 maldridge: we have an issue with containerd 1.3.4 on s390x 2020-06-16 04:47:50 I've noticed 2020-06-16 04:48:02 where it seems to reset the socket connection when docker tries to talk on it 2020-06-16 04:48:27 hrm 2020-06-16 04:48:32 so we wanted to see if we can reproduce it with containerd in isolation 2020-06-16 04:48:41 well, docker does a lot with containerd 2020-06-16 04:48:50 do you know where in the transaction its failing? 2020-06-16 04:48:59 early on when starting containers 2020-06-16 04:50:04 early enough that its not even getting through setup? 2020-06-16 04:50:42 "containerd-shim/bb1a474ba0fd3e538e8e48b9e9884509c061f52a8a6bfaeca02edcb4b120c8a4.sock: read: connection reset by peer: unknown" 2020-06-16 04:50:50 sorry, that's incomplete 2020-06-16 04:51:08 "read unix @->@/containerd-shim/3fedff5d13348b7e084ab6f4a5dd59a191d7d9b4fda6633b89ae1a7106d4eac1.sock: read: connection reset by peer: unknown" 2020-06-16 04:51:46 maldridge: what do you consider setup? 2020-06-16 04:52:32 later than that 2020-06-16 04:52:44 ok 2020-06-16 04:52:54 setup to me is where it begins to stream instructions to containerd for layering filesysetms 2020-06-16 04:52:57 I have a vm on s390x with docker installed 2020-06-16 04:53:06 this looks like docker may be out of sync with containerd 2020-06-16 04:56:02 clandmeter: the tmux session on s390x-ci should have the VM you created, right? Right now it seems to just have /bin/sh as pid1 2020-06-16 04:56:24 ah, ofcourse 2020-06-16 04:56:34 inception :P 2020-06-16 04:56:47 ? 2020-06-16 04:57:00 it was in a docker container 2020-06-16 04:57:31 not following, docker runs in qemu kvm 2020-06-16 04:57:51 when I attached to tmux, it was already running in a docker container 2020-06-16 04:57:59 on the qemu vm 2020-06-16 04:58:05 yup 2020-06-16 04:58:07 on the alpine vm 2020-06-16 04:58:36 maldridge: the same version seems to work on other arches 2020-06-16 04:58:49 docker-19.03.11-r0 2020-06-16 05:00:13 ikke: have you tried to build it locally? 2020-06-16 05:00:16 i mean in the vm 2020-06-16 05:01:18 no 2020-06-16 05:04:11 would be nice to know if it happens with 1.3.3 2020-06-16 05:11:02 I don't think I can connect to the VM via ssh, right? (the serial console is kind of limiting) 2020-06-16 05:13:55 you can but not in this network setup 2020-06-16 05:14:15 right 2020-06-16 05:14:52 im trying to build containerd 2020-06-16 05:15:01 with build-base 2020-06-16 05:15:03 container 2020-06-16 05:15:15 ah ok 2020-06-16 05:15:16 but it seesm we dont ship stable versions of it 2020-06-16 05:16:47 what do you mean? 2020-06-16 05:16:57 there is only edge? 2020-06-16 05:17:16 ah, for build-base 2020-06-16 05:17:44 yeah, something todo still 2020-06-16 05:18:21 the setup script for alpine-ci was already setup to downgrade 2020-06-16 05:19:48 ikke: I don't have solid advice then. Usually containerd problems arise from mismatched versions since the API is somewhat considered internal 2020-06-16 05:20:16 understood, thanks for your help 2020-06-16 05:20:42 I wouldn't be surprised though if on s390x there's some underlying primitive that isn't behaving as expected 2020-06-16 05:20:48 $work has these problems on power 2020-06-16 05:21:05 it could also be golang update 2020-06-16 05:21:14 yup 2020-06-16 05:21:15 which versions are you on? 2020-06-16 05:21:18 I can compare against void 2020-06-16 05:21:21 1.14 2020-06-16 05:21:41 .3? 
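To separate containerd from dockerd in this failure, the bundled ctr client can drive containerd directly; a sketch, noting that the socket path shown is containerd's default and a dockerd-managed containerd may listen elsewhere (e.g. under /run/docker/containerd/):

    addr=/run/containerd/containerd.sock

    # if this reproduces the shim "connection reset by peer" error, the problem
    # is in containerd/containerd-shim; if it works, suspicion moves to dockerd
    ctr -a "$addr" version
    ctr -a "$addr" image pull docker.io/library/alpine:latest
    ctr -a "$addr" run --rm -t docker.io/library/alpine:latest test1 sh -c 'echo containerd ok'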
2020-06-16 05:21:56 for containerd? 2020-06-16 05:22:06 for go 2020-06-16 05:22:09 3.12. is go 1.13 2020-06-16 05:22:13 we noticed some build breakage at 1.14.3 2020-06-16 05:22:17 .11 2020-06-16 05:22:42 edge is 1.14.3 2020-06-16 05:22:49 but this happened on 3.12 2020-06-16 05:22:50 ah 2020-06-16 05:23:14 we're at 1.3.4 (containerd); 19.03.9 (docker) 2020-06-16 05:23:20 yes, same 2020-06-16 05:23:24 no observed instability, but we also don't build for s390x 2020-06-16 05:23:36 3.11 is 1.13.10 btw 2020-06-16 05:23:41 so only one minor version bump 2020-06-16 05:23:47 or patch rather 2020-06-16 05:29:28 hmm maybe if i want to reproduce something i should start a container with --rm :D 2020-06-16 05:29:38 *not* 2020-06-16 05:29:41 heh 2020-06-16 05:36:23 1.3.3 same issue 2020-06-16 05:40:28 1.3.2 same... 2020-06-16 05:40:46 lots of fun :) 2020-06-16 05:41:15 heh 2020-06-16 05:41:28 ok this makes it a bit more compicated 2020-06-16 05:41:55 yup 2020-06-16 05:43:12 are we actually sure its containerd? 2020-06-16 05:43:51 I only pinned containerd from 3.11, and that fixed it 2020-06-16 05:45:12 ok lets assume its containerd for now :) 2020-06-16 05:48:40 git.a.o is pissing me off 2020-06-16 05:48:44 its slow as... 2020-06-16 06:08:54 its probably not golang 2020-06-16 06:14:54 so docker works with containerd from 3.11, but it does not work with containerd build with same version as 3.11 with same version of golang. 2020-06-16 07:35:40 maybe some other dependency 2020-06-16 07:36:31 Can you downgrade completely to 3.11 and try to build then? 2020-06-16 07:36:41 or did you already do that? 2020-06-16 08:03:27 ncopa: ^ 2020-06-16 08:03:32 hi 2020-06-16 08:04:00 docker on 3.12 works with containerd from 3.11 2020-06-16 08:04:55 i have tried to build the 3.11 version on 3.12 but that does not fix it 2020-06-16 08:05:35 so i tried to build golang from 3.11 on 3.12 and build containerd with it, but does not solve it 2020-06-16 08:07:22 i removed the libseccomp dependency to rule it out 2020-06-16 08:08:25 and i even saw some patches in musl so i build musl from 3.11 2020-06-16 08:09:16 ikke: if i downgrade completely it will be just 3.11? 2020-06-16 08:09:24 and it should probably work 2020-06-16 08:09:44 good to rule out.. 2020-06-16 08:10:01 rule out what exactly? 2020-06-16 08:10:19 that we have a known baseline that works 2020-06-16 08:10:43 i already tried it on 3.11 i believe 2020-06-16 08:12:11 i guess we need to debug the error. 2020-06-16 08:12:15 but i have no clue how 2020-06-16 08:13:38 maldridge: you said you had experience with containerd/ctr/ 2020-06-16 08:13:42 yes 3.11 is already tested. you can run the qemu script with 3.11 or 3.12 2020-06-16 08:14:50 podman on alpine would be fun 2020-06-16 08:15:19 I like it more - especially the rootless feature 2020-06-16 08:16:31 saidly that gitlab doesn't come with native podman support 2020-06-16 08:16:41 :( 2020-06-16 08:16:53 sadly* 2020-06-16 08:17:33 isnt podman a dropin replacement? 2020-06-16 08:18:13 According to this: https://docs.gitlab.com/runner/executors/custom.html 2020-06-16 08:18:24 yoneed to run a custom executor instead of the built-in docker executor 2020-06-16 08:18:50 ah i remember again 2020-06-16 08:18:55 there is some long stranding issue 2020-06-16 08:23:33 does this only affect s390x? 
2020-06-16 08:23:40 yes 2020-06-16 08:23:49 we tried 3 other arches 2020-06-16 08:24:08 and mips64 does not work yet i guess 2020-06-16 08:24:15 that i dont know 2020-06-16 08:24:17 which is the only other big-endian arch 2020-06-16 08:25:13 and this is the error? > 2020-06-16 08:25:14 "read unix @->@/containerd-shim/3fedff5d13348b7e084ab6f4a5dd59a191d7d9b4fda6633b89ae1a7106d4eac1.sock: read: connection reset by peer: unknown" 2020-06-16 08:25:19 yes 2020-06-16 08:25:20 yes sir 2020-06-16 08:26:08 and we have a stable reproducer? 2020-06-16 08:26:15 so it happens every time, not just occationally 2020-06-16 08:26:18 yes 2020-06-16 08:26:21 good 2020-06-16 08:26:55 and if you copy containerd from 3.11 it works 2020-06-16 08:27:07 We installed it from the 3.11 repo, and then it works 2020-06-16 08:27:42 on s390x-ci.a.o itself, it's working now with containerd 1.3.2 2020-06-16 08:27:51 but not if you rebuild the same version on 3.12 it does not work 2020-06-16 08:28:06 nope 2020-06-16 08:28:45 so my idea was that it could be either the app itself, golang or anything other linked to it. 2020-06-16 08:29:46 makes sense 2020-06-16 08:30:36 oh 2020-06-16 08:30:58 one thing that changed from alpine 3.11 -> 3.12 was the GOFLAGS 2020-06-16 08:31:15 ah yes 2020-06-16 08:31:21 and its not defined in the apkbuild 2020-06-16 08:31:30 its in abuild somewhere 2020-06-16 08:31:47 export GOFLAGS="-buildmode=pie" 2020-06-16 08:32:03 try build it on 3.12 but do: unset GOFLAGS 2020-06-16 08:32:30 does it happend with containerd built on edge as well? 2020-06-16 08:32:55 also, do we have the details recorded in an issue? 2020-06-16 08:33:08 may be good in case we need report something upstream 2020-06-16 08:33:20 so we dont need to re-read 500 lines of irc logs 2020-06-16 08:33:20 Not yet, good idea 2020-06-16 08:33:37 crap my container hangs 2020-06-16 08:33:45 and its started with --rm :| 2020-06-16 08:34:03 DOesn't seem to hang for me 2020-06-16 08:34:10 thats not a container 2020-06-16 08:34:12 thats a vm 2020-06-16 08:34:15 ok 2020-06-16 08:34:18 :p 2020-06-16 08:34:25 try ctrl p + q 2020-06-16 08:35:12 hmm 2020-06-16 08:35:57 docker attach :D 2020-06-16 08:38:06 yes 2020-06-16 08:40:57 doesnt look like it works 2020-06-16 08:41:14 if that doesn't work, you can try to exec another shell in the container 2020-06-16 08:41:32 i mean new containerd 2020-06-16 08:41:39 ah ok 2020-06-16 08:43:08 ncopa: any way i can verify that the unset has worked? 2020-06-16 08:44:28 https://tpaste.us/8Xky 2020-06-16 08:59:40 bah 2020-06-16 08:59:47 how do i build hello world in go 2020-06-16 09:01:57 clandmeter: you can use scanelf 2020-06-16 09:02:08 $ scanelf hello 2020-06-16 09:02:09 TYPE FILE 2020-06-16 09:02:09 ET_DYN hello 2020-06-16 09:02:25 if type is ET_DYN, then its a pie build 2020-06-16 09:02:55 $ scanelf hello 2020-06-16 09:02:55 TYPE FILE 2020-06-16 09:02:55 ET_EXEC hello 2020-06-16 09:03:08 ET_EXEC means its a non-pie build 2020-06-16 09:05:04 ET_DYN /usr/bin/containerd 2020-06-16 09:06:06 containerd is linked to both libc and libseccomp 2020-06-16 09:06:25 not mine 2020-06-16 09:06:34 i removed libseccomp as my patch shows 2020-06-16 09:06:36 but its linked to libc, right? 2020-06-16 09:06:39 yes 2020-06-16 09:06:45 so its still dynamically linked 2020-06-16 09:08:02 and it seems to be same version of runc 2020-06-16 09:10:59 someone had similar experience: https://github.com/moby/moby/issues/38742 2020-06-16 09:22:58 ncopa: i dont understand what changed now? 
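A quick way to answer both "how do i build hello world in go" and "did the GOFLAGS change take": build the same trivial program with and without -buildmode=pie and compare the ELF types with scanelf, as described above:

    printf 'package main\n\nimport "fmt"\n\nfunc main() { fmt.Println("hello") }\n' > hello.go

    GOFLAGS="-buildmode=pie" go build -o hello-pie hello.go
    GOFLAGS="" go build -o hello-nopie hello.go

    # expect ET_DYN for the PIE build, ET_EXEC for the other
    scanelf hello-pie hello-nopie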
2020-06-16 09:23:11 if i uset it it should fall back to previous behaviour? 2020-06-16 09:23:21 btw, it hink we should move this to -devel 2020-06-16 11:08:36 clandmeter: the gitlab host is still alpine 3.10. Do we want to upgrade it at some point? 2020-06-16 11:12:27 maybe a good idea to do it with the v13 upgrade 2020-06-16 11:12:42 I can do a test upgrade on test instance 2020-06-16 11:13:06 Do we still want to wait for 13.1 before going to v13? 2020-06-16 11:16:52 hmm 2020-06-16 11:24:09 its not a rule, so you are free to upgrade 2020-06-16 11:24:24 we are already on a few patch releases 2020-06-16 11:24:40 next week it will be 13.1 2020-06-16 16:27:15 mps: https://www.bloomberg.com/news/articles/2020-06-09/apple-plans-to-announce-move-to-its-own-mac-chips-at-wwdc 2020-06-16 16:27:41 You can safely switch to macos now 2020-06-16 17:02:50 clandmeter: why do you think I'm hipster 2020-06-16 17:03:02 :) 2020-06-16 18:37:21 i wanted to share it in offtopic but you are MIA 2020-06-16 18:48:05 yes, sorry. I left alpine-offtopic about week ago because it become more offtopic than alpine-offtopic and quite unpleasant for me to follow it 2020-06-16 19:02:41 ppc64 runner network is slow :( 2020-06-17 10:36:53 clandmeter: ok if I recreaste gitlab-test to get a fresh snapshot? 2020-06-17 10:37:01 recreate* 2020-06-17 10:45:36 Sure 2020-06-17 18:06:34 ikke: its probably best those jobs don't get picked up 2020-06-17 18:06:48 it appears as though this would use my branch to push to real repos without review 2020-06-17 18:08:47 Yes, you are right. It's missing rules on when to run what job 2020-06-17 18:09:32 clandmeter: ^ 2020-06-17 18:14:14 ? 2020-06-17 18:15:48 clandmeter: I was able to send an MR that if it were to go through, would push containers for dabuild 2020-06-17 18:16:07 pipeline 23111 2020-06-17 18:16:23 ci needs an update 2020-06-17 18:18:19 anything I can do to assist with that update? I'm trying to update a bunch of internal packages fro 3.12 2020-06-17 18:20:10 would very much like to remove the remaining 3.8 from my infrastructure 2020-06-17 23:37:25 oh, make_images has bashisms 2020-06-18 05:49:47 removed the neostrada mirror 2020-06-18 12:05:27 The Gitlab CI runners seem to be broken 2020-06-18 12:06:16 PureTryOut[m]: I can look at it later 2020-06-18 12:07:44 Thanks. I was working on the Qt 5.15 upgrade but broken CI makes it hard 😛 2020-06-18 12:08:30 "fatal: unable to access 'https://gitlab.alpinelinux.org/alpine/aports/': error setting certificate verify locations:" 2020-06-18 12:09:30 PureTryOut[m]: I think it's failing here: https://gitlab.alpinelinux.org/alpine/infra/docker/alpine-gitlab-ci/-/blob/master/overlay/usr/local/bin/build.sh#L141 2020-06-18 12:09:43 PureTryOut[m]: can you see if you can reproduce it? 2020-06-18 12:09:53 if you have time :) 2020-06-18 12:12:29 Well I don't know what "CI_MERGE_REQUEST_PROJECT_URL" resolves to so no 😛 2020-06-18 12:12:54 https://gitlab.alpinelinux.org//aports.git :) 2020-06-18 12:18:37 Yeah seems to time out for me too 2020-06-18 12:24:40 just timeout? Because in CI it complains about root ca bundles 2020-06-18 13:25:51 Actually it got through after a few minutes 2020-06-18 13:27:11 Can you try to run it in the alpinelinux/alpine-gitlab-ci docker container? 2020-06-18 13:33:30 Uhh... 
That would be a first, no clue how to do that 2020-06-18 13:40:58 docker run -it --rm alpinelinux/alpine-gitlab-ci 2020-06-18 13:41:58 docker run -it --rm alpinelinux/alpine-gitlab-ci /bin/sh 2020-06-18 13:42:00 to be sure 2020-06-18 13:55:28 does something complains about missing ca certficate bundles? 2020-06-18 13:55:39 i did a change in ca-certificates recently 2020-06-18 13:56:02 yes 2020-06-18 13:56:24 fatal: unable to access 'https://gitlab.alpinelinux.org/alpine/aports/': error setting certificate verify locations: 2020-06-18 13:56:26 CAfile: /etc/ssl/certs/ca-certificates.crt 2020-06-18 13:56:28 CApath: none 2020-06-18 13:57:21 using edge i suppose 2020-06-18 13:57:48 can you see what version of ca-certificates{,-bundle} is installed? 2020-06-18 13:59:01 Purging ca-certificates (20191127-r3) 2020-06-18 13:59:12 Purging ca-certificates (20191127-r3) 2020-06-18 13:59:14 sorry 2020-06-18 13:59:16 Upgrading ca-certificates-bundle (20191127-r3 -> 20191127-r4) 2020-06-18 14:18:11 ncopa: so what should we do now? 2020-06-18 14:19:52 hum 2020-06-18 14:20:45 i suppose its curl that pulls it it in 2020-06-18 14:21:08 ca-certificates -r3 was broken because it purges the certificate when uninstalled 2020-06-18 14:21:17 it is fixed in ca-certificates -r4 2020-06-18 14:21:49 but ca-certificates is still purged atm 2020-06-18 14:22:16 meaning something does not depend on it anymore, right? 2020-06-18 14:36:41 i also did the change in curl to depend on ca-certificates-bundle 2020-06-18 14:36:59 so it got uninstalled, triggering the bug -r4 is fixing 2020-06-18 14:37:22 I suppose we are using a docker image? 2020-06-18 14:37:53 we could build new updated docker image 2020-06-18 14:38:09 but i suppose as quick fix I can revert curl update 2020-06-18 14:58:43 so it's pure the upgrade that is breaking? 2020-06-18 15:05:24 yes 2020-06-18 15:05:32 well 2020-06-18 15:05:51 is is the removal that breaks it 2020-06-18 15:06:10 ca-certificates -r3 has a buggy post-install script 2020-06-18 15:06:14 post-deinstall* 2020-06-18 15:14:59 ok 2020-06-18 15:15:05 Then I will just regen the docker images 2020-06-18 15:16:39 w/win 3 2020-06-18 15:16:43 sorry 2020-06-18 16:02:54 what are our build arm hardware on packet 2020-06-18 16:03:40 one is Ampere iirc, but second one? 2020-06-18 16:04:02 ThunderX 2020-06-18 16:04:15 (funny, I was wondering the same, but could not remember Ampere :D) 2020-06-18 16:04:22 and Ampere run in 32bit mode? 2020-06-18 16:05:41 developer on #crystal-lang asked me, he is working on get sponsorship for crystal 2020-06-18 16:05:52 checking 2020-06-18 16:10:24 Ampere is aarch64 2020-06-18 16:10:36 and both runs aarch64 alpine as base OS? 2020-06-18 16:11:29 yes 2020-06-18 16:11:46 armhf / armv7 is aarch64 in 32-bits mode 2020-06-18 16:11:51 thanks for answers 2020-06-18 16:20:35 We have 2 diff ampere I think 2020-06-18 16:20:50 usa4 seems to be hisilicon 2020-06-18 16:20:59 at least, according to packet 2020-06-18 16:21:27 so we have more than 2 from packet? 2020-06-18 16:21:46 Also x86 2020-06-18 16:22:03 We have a bunch of servers from packet 2020-06-18 16:23:11 aha 2020-06-18 16:50:59 PureTryOut[m]: I see CI is working again (I rebuilt the docker images) 2020-06-18 17:02:56 Awesome! 2020-06-18 17:43:03 ncopa: thanks for pointing that out :) 2020-06-18 19:49:51 clandmeter: I've added nld3 and nld5 to zabbix :) 2020-06-18 20:24:38 clandmeter: do you know what the dmeventd service is? 
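Back on the ca-certificates breakage above: a hedged way to see (and band-aid) it by hand inside the CI image, although rebuilding the image is the real fix; whether apk fix re-extracts the bundle is an assumption worth verifying:

    docker run --rm -it alpinelinux/alpine-gitlab-ci /bin/sh

    # inside the container:
    ls -l /etc/ssl/certs/ca-certificates.crt    # missing after the buggy -r3 post-deinstall ran
    apk info -vv | grep ca-certificates         # shows which revisions are installed
    apk fix ca-certificates-bundle              # should re-extract the purged bundle file
    git ls-remote https://gitlab.alpinelinux.org/alpine/aports/ HEAD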
It's crashed on both nld3 as nld5 2020-06-18 20:25:04 devicemapper, something todo with lvm 2020-06-18 20:31:08 lvm2-dmeventd-2.02.186-r1 description: Device-mapper event daemon 2020-06-18 20:31:35 I wonder why t consistently crashes 2020-06-18 20:32:00 is there any log 2020-06-18 20:32:50 If there was, it's probably long gone 2020-06-18 20:35:09 iirc, it have option to log to stdout 2020-06-18 20:37:07 though I never used it in 'production', just tried for testing how it works and is it good for my use case 2020-06-18 20:37:39 if I just run dmeventd, it exists with 0 2020-06-18 20:38:11 ah, it forks 2020-06-18 20:38:16 hmm, try to look in man page, it should have option to run in foreground 2020-06-18 20:38:25 yeah 2020-06-18 20:39:46 It just runs, says it's ready 2020-06-18 20:39:52 What was your usecase? 2020-06-18 20:40:44 Just restarting it again 2020-06-18 20:41:48 I wanted to look can I use it as some kind of RAID for virtual machines in cloud 2020-06-18 20:43:38 but found that is easier to mount virtual disks under one of them, and more simple which was main reason to forget about device-mapper 2020-06-18 20:44:46 that was about 3-4 years ago 2020-06-18 20:45:50 ok 2020-06-18 20:48:06 so, I can't help much with this, only remember it have different logging options 2020-06-18 21:03:33 olla 2020-06-18 21:03:48 ikke: yes dmeventd likes to crash 2020-06-18 21:04:04 i never put any energy into it 2020-06-18 21:58:17 Fastly TLS $0 for first 5 domains 2020-06-19 04:05:06 iirc its not necessary to keep dmeventd running if you use exclusively fixed disks 2020-06-19 04:56:58 maldridge: ok, so if it crashes any all the time, we better can just stop it 2020-06-19 04:57:18 clandmeter: does that mean we can provide our own certificates or something like that? 2020-06-19 05:34:14 ikke: i already added one 2020-06-19 05:34:47 ok 2020-06-19 05:34:49 But I guess it need to be activated 2020-06-19 05:35:03 Maybe you can help check 2020-06-19 05:35:27 I looked a bit but could not find it that quickly 2020-06-19 07:00:44 ikke: i think we need to update the cname to d.sni.global.fastly.net 2020-06-19 07:00:52 if i understand the document correctly 2020-06-19 07:01:07 but im not able to test cname locally 2020-06-19 07:13:45 i set dl-cdn ttl to 5 min 2020-06-19 07:18:38 will test in an hour or so 2020-06-19 18:26:35 im updating dl-cdn 2020-06-19 18:26:42 cname 2020-06-19 18:27:15 if it doesnt work ill revert, but that means downtime of max 10min 2020-06-19 18:30:13 k 2020-06-19 18:36:08 ikke: it does not list ipv6 addresses 2020-06-19 18:36:13 im not sure thats supported 2020-06-19 18:37:40 ikke: please add a check for https cert expire for dl-cdn 2020-06-19 18:45:05 ok looks like we a prepend dualstack to the hostname 2020-06-19 18:45:14 lets see if that does not break things 2020-06-19 18:48:25 clandmeter: apparently I already have 2020-06-19 18:48:33 Cert days left 2020-06-19 18:48:35 2020-06-19 18:45:3689 days 2020-06-19 18:48:38 2020-06-19 18:45:36 89 days 2020-06-19 18:48:50 nice 2020-06-19 18:49:41 How is that cert updated? 
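A quick way to spot-check the new Fastly setup from any host: follow the CNAME chain and read the certificate expiry that the Zabbix item is watching.

    dig +short dl-cdn.alpinelinux.org CNAME
    dig +short dl-cdn.alpinelinux.org A
    dig +short dl-cdn.alpinelinux.org AAAA    # empty if Fastly really serves no IPv6 here

    echo | openssl s_client -servername dl-cdn.alpinelinux.org \
            -connect dl-cdn.alpinelinux.org:443 2>/dev/null \
        | openssl x509 -noout -enddate -issuer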
2020-06-19 18:51:04 So after this ncopa could revert that change again for docker 2020-06-19 18:54:28 not sure he already applied it for docker 2020-06-19 18:55:01 fastly autoupdates it 2020-06-19 18:55:06 if the record stays in dns 2020-06-19 18:55:19 https://docs.fastly.com/en/guides/serving-https-traffic-using-fastly-managed-certificates#certificate-management-and-renewals 2020-06-19 18:56:11 clandmeter: aha, ok, nice 2020-06-19 19:18:12 I'm trying to get our infra into zabbix, but not all hosts have dmvpn 2020-06-19 20:45:39 ouch, got bitten by a blocking outgoing firewall :( 2020-06-20 09:39:16 ikke: im trying to do child pipelines for dynamic configs 2020-06-20 09:39:27 im bumping to the same issue again 2020-06-20 09:39:37 i assinged runners but it wont run :) 2020-06-20 09:39:45 do i just tags? 2020-06-20 09:39:57 mis tags 2020-06-20 09:40:18 can you show a job? 2020-06-20 09:41:22 https://gitlab.alpinelinux.org/clandmeter/docker-abuild/-/jobs 2020-06-20 09:42:02 i guess its because this is not under alpine it wont accept it 2020-06-20 09:42:54 You are indeed missing tags 2020-06-20 09:43:06 docker-alpine at minimum 2020-06-20 09:43:07 which tags should i sue? 2020-06-20 09:43:14 ok 2020-06-20 09:43:22 for every job right? 2020-06-20 09:43:28 Yes 2020-06-20 09:43:51 The shared runners work for any project (otherwise aports MRs would not have any runners) 2020-06-20 09:45:26 clandmeter: if you look here https://gitlab.alpinelinux.org/clandmeter/docker-abuild/-/settings/ci_cd 2020-06-20 09:45:28 under runners 2020-06-20 09:45:33 you see what tags each runner has 2020-06-20 09:46:40 Oh, if you want to build docker images, then we need to assign the runners 2020-06-20 09:46:47 they are now limited to the docker project 2020-06-20 09:47:00 docker group* 2020-06-20 09:47:10 vim is pissing me off with its auto tab crap in yml 2020-06-20 09:47:51 I have :set sw=2 sts=2 ts=2 ai et 2020-06-20 09:47:53 in my muscle memory 2020-06-20 09:57:10 how can i assign that runner? 2020-06-20 09:59:10 The runners are locked to a group, which means I cannot assign it to random projects :( 2020-06-20 10:00:55 child pipelines are not very obvious... 2020-06-20 10:01:02 https://gitlab.alpinelinux.org/clandmeter/docker-abuild/pipelines/23398 2020-06-20 10:01:57 not obvious in what way? 2020-06-20 10:03:31 why they wont start? 2020-06-20 10:05:01 Ok, nothing happens when you click on them 2020-06-20 10:05:38 i see that the child yml is not correct 2020-06-20 10:05:41 maybe thats the issue 2020-06-20 10:09:03 w00t 2020-06-20 10:09:18 ikke: https://gitlab.alpinelinux.org/clandmeter/docker-abuild/pipelines/23401 2020-06-20 10:11:18 still would be nice if it would mention something about the child if it fails. 2020-06-20 10:12:47 hmm this whole child pipelines is weird 2020-06-20 10:13:01 if the childs fail the main pipeline still shows green 2020-06-20 10:14:22 it shows an error for downstream but when you click on it it will show its running (but its not) 2020-06-20 10:14:47 ah ok, I've not used it yet 2020-06-20 10:15:23 Anoying that you cannot click on the child pipeline 2020-06-20 10:15:31 you can now 2020-06-20 10:15:42 oh yes that one 2020-06-20 10:15:44 no idea 2020-06-20 10:16:17 maybe with 13 it has some improivments 2020-06-20 10:16:27 So the idea is that you dynamically generate the yml for all the different versions/ 2020-06-20 10:16:28 ? 
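One way such dynamic child-pipeline generation is sometimes done is to derive the branch list from the published releases.json; the URL and field names are assumptions, jq is assumed to be available, and the generated job body is only illustrative (docker-alpine being the runner tag mentioned here):

    wget -qO- https://alpinelinux.org/releases.json \
      | jq -r '.release_branches[].rel_branch' \
      | while read -r branch; do
          printf 'build:%s:\n  tags:\n    - docker-alpine\n  script:\n    - ./build.sh %s\n' \
              "$branch" "$branch"
        done > child-pipeline.yml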
2020-06-20 10:16:31 they even mention that child pipes are broken in 12.9 2020-06-20 10:16:37 yes 2020-06-20 10:16:39 aha, ok 2020-06-20 10:16:43 we now have releases.json 2020-06-20 10:16:51 so i read that and generate the yml 2020-06-20 10:16:59 aha, nice 2020-06-20 10:17:21 it can be done much more simple 2020-06-20 10:18:08 but i want to run all arches at ones 2020-06-20 10:18:15 or it will take ages 2020-06-20 10:18:30 yeah, makes sense 2020-06-20 10:18:49 btw, 13.1 is out 2020-06-20 10:18:51 just the tag 2020-06-20 10:19:02 so no official annoucement yet 2020-06-20 10:19:06 ok 2020-06-20 10:19:18 when we have time lets do v13 2020-06-20 10:19:20 Will try the AL upgrade to 3.12 first 2020-06-20 10:19:34 should not take too much time 2020-06-20 10:19:36 ok i need to run to the sligro :) 2020-06-20 10:19:37 (on the test instance) 2020-06-20 10:19:46 well, bike :) 2020-06-20 11:14:37 clandmeter: I have now a project image builder registered for x86_64 which I can assign to individual projects 2020-06-20 11:14:51 If that's working, I can do it for the other arches as well 2020-06-20 11:54:38 clandmeter: any idea what this mount means? "/dev/sda on /var/lib/docker type ext4 (rw,relatime)" 2020-06-20 11:55:01 (note that /dev/sda is mounted on / as well) 2020-06-20 12:38:10 nope, which host? 2020-06-20 12:39:01 deu1-dev1 2020-06-20 12:47:44 ikke: other docker hosts seems to have the same 2020-06-20 12:47:54 yeah 2020-06-20 12:47:58 just wondering what kind of mount it is 2020-06-20 12:48:14 or what the purpose is 2020-06-20 12:48:27 its not done by us i guess, something from within docker. 2020-06-20 12:49:58 I'll just ignore it in Zabbix 2020-06-20 12:50:05 otherwise we get double alerts 2020-06-20 13:52:09 at some point soon i should have a working docker-compose for opencfp 2020-06-20 13:59:22 Ariadne: ok, nice 2020-06-20 15:29:17 clandmeter: upgrade to 3.12 seems to work without issue 2020-06-20 17:14:45 clandmeter: https://gitlab-test.alpinelinux.org/admin 2020-06-20 18:00:11 Nice 2020-06-20 18:00:22 I'm not home ATM 2020-06-20 19:13:53 clandmeter: upgrading tomorrow? 2020-06-20 21:28:45 ikke: ok, but im not home whole day, fathers day tomorrow. 2020-06-20 21:29:39 aha, ok 2020-06-20 21:29:51 armhf does seem stuck though 2020-06-20 21:34:06 clandmeter: armhf builder seems stuck on sync (uniterruptable sleep) 2020-06-20 21:34:30 same with armv7 2020-06-20 21:34:46 git? 2020-06-20 21:35:06 no, just `sync` 2020-06-20 21:35:10 after apk add 2020-06-20 21:35:18 at least, subprocess of apk add 2020-06-20 21:35:31 are they on usa4 ? 2020-06-20 21:35:34 disk issues? 2020-06-20 21:35:55 var/cache/misc/shared-mime-info-1.15-r0.trigger /usr/share/mime 2020-06-20 21:35:57 mps: yes 2020-06-20 21:36:26 I'm building aarch64 linux-edge right now there 2020-06-20 21:36:39 clandmeter: seems so 2020-06-20 21:36:41 ikke: which host is that? 2020-06-20 21:36:50 usa4-dev1 2020-06-20 21:37:04 sd 3:0:0:0: [sdc] tag#32 timing out command, waited 180s 2020-06-20 21:37:23 blk_update_request: I/O error, dev sdc, sector 202196808 op 0x1:(WRITE) flags 0x4a00 phys_seg 2 prio class 0 2020-06-20 21:37:36 doesn't seem healthy 2020-06-20 21:37:50 usa4-dev1 is not in netbox 2020-06-20 21:37:54 only dev3 2020-06-20 21:38:00 just usa4 2020-06-20 21:38:09 usa4.alpin.pw 2020-06-20 21:38:36 multipath issue? 2020-06-20 21:39:13 connection1:0: ping timeout of 3 secs expired, recv timeout 3, last rx 6400330503, last ping 6400331456, now 6400332416 2020-06-20 21:39:17 I guess so 2020-06-20 21:42:33 but it also errors on local apk? 
2020-06-20 21:42:55 local apk? 2020-06-20 21:43:12 apk not in qemu 2020-06-20 21:43:27 yes 2020-06-20 21:43:29 it's not ci 2020-06-20 21:43:32 the actual builders 2020-06-20 21:43:54 I stopped the armhf container, and now lxc is confused 2020-06-20 21:44:11 i tried to use lvs 2020-06-20 21:44:14 but it hangs 2020-06-20 21:44:46 i think we only use multipath (iscsi) for qemu 2020-06-20 21:47:59 should we try a force reboot? 2020-06-20 21:48:17 dmesg complains about sdc, but i think thats iscsi 2020-06-20 21:52:26 i think we got bit by https://status.packet.com/incidents/2vf6gk94b3mh 2020-06-20 21:53:16 ive done a reboot 2020-06-20 21:56:18 ok 2020-06-20 21:56:46 reboot was my suggestion as well 2020-06-20 21:56:57 it's not rebooted yet 2020-06-20 21:57:03 my ssh session is still alive 2020-06-20 21:57:10 ok, now it's gone 2020-06-20 21:58:10 i had to reboot from control panel 2020-06-20 21:58:19 its posting 2020-06-20 21:58:32 but its arm, so will take half a day 2020-06-20 21:59:02 i think ncopa should have gotten an update about this 2020-06-20 21:59:07 hope my last downloaded files are saved 2020-06-20 21:59:19 he is the owner 2020-06-20 21:59:33 i moved ownership long time ago 2020-06-20 21:59:48 but it seems he does not really put attention to these email notifications 2020-06-20 22:01:37 ikke: its up again 2020-06-20 22:03:20 i dont see dmvpn routes 2020-06-20 22:06:59 ok i restarted nhrpd 2020-06-20 22:07:02 seems ok now 2020-06-20 22:08:08 its going to try to build missing pkgs now, first up linux-lts 2020-06-21 14:04:58 ikke: still issues with the host? 2020-06-21 14:14:46 clandmeter: unsure 2020-06-21 14:14:51 I still see those messages in dmesg 2020-06-21 14:16:39 hmm 2020-06-21 14:23:45 I raised a ticket at packet 2020-06-21 14:24:26 haha, it seems not to honor line breaks. 2020-06-21 14:24:33 thats rough when you paste a dmesg 2020-06-21 14:55:47 looks like they are aware of the iscsi issues 2020-06-21 15:13:33 ikke: ack? 2020-06-21 15:14:07 please follow https://status.packet.com/ 2020-06-21 15:14:12 should be added soon 2020-06-21 15:16:57 Maybe turn off the runner 2020-06-21 15:17:14 And turn off iscsi for now 2020-06-21 15:22:40 clandmeter: What do we use iscsi fore? 2020-06-21 15:22:44 the qemu runner? 2020-06-21 15:22:58 Yes 2020-06-21 15:24:09 ok 2020-06-21 15:24:24 do we need to reboot again? 2020-06-21 15:42:54 Yup, it's now on the status page 2020-06-21 15:42:58 "Block Storage connectivity issue in EWR1" 2020-06-21 15:58:57 Yup 2020-06-21 15:59:15 If lxc is ok no need to reboot 2020-06-21 15:59:22 ok 2020-06-22 05:34:34 ikke: got mail about iscsi 2020-06-22 05:34:45 Please restart services and try 2020-06-22 05:36:06 What services? The entire server? 2020-06-22 06:00:45 If you like 2020-06-22 06:00:54 But I mean iscsi 2020-06-22 06:26:30 morning 2020-06-22 07:56:07 ncopa: morning 2020-06-22 08:49:50 clandmeter: How do I start the runner machine again? 
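"Restart services and try" for the block-storage volumes roughly means bouncing the iSCSI initiator and re-logging the sessions; the service and tool names below assume the open-iscsi and multipath-tools packages:

    iscsiadm -m session -P 1     # what is currently logged in
    rc-service iscsid restart
    iscsiadm -m node -L all      # log back into all configured targets
    multipath -ll                # verify the paths came back
    dmesg | tail -n 20           # check that the sdc timeouts stop recurring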
rc-service qemu-runner start says "`/usr/bin/qemu-system-x86_64' is not readable" 2020-06-22 08:50:44 Just restart system 2020-06-22 08:50:58 I have no time to check atm 2020-06-22 08:51:27 ah, there is a runner.start in /etc/local.d 2020-06-22 08:55:45 yes it starts via local service 2020-06-22 08:57:04 I just restarted it 2020-06-22 08:57:09 running that script didn't do a lot 2020-06-22 09:34:08 it's running again 2020-06-22 09:37:41 nice 2020-06-22 09:37:42 thx 2020-06-22 09:37:58 we still need to do 13.0.x 2020-06-22 09:38:05 yes 2020-06-22 16:20:55 https://www.linode.com/spotlight/alpine-linux/ 2020-06-22 16:37:44 Nice 2020-06-22 17:03:48 :) 2020-06-22 17:08:53 :) 2020-06-22 17:15:30 https://about.gitlab.com/releases/2020/06/22/gitlab-13-1-released/ 2020-06-22 17:17:10 "Merge request reviews moved to core" 2020-06-22 17:17:19 nice 2020-06-22 20:39:36 clandmeter: is that behind your new backyard? 2020-06-22 20:42:25 congrats on the limelight 2020-06-22 22:00:52 mps: no my parents garden 2020-06-24 07:58:24 ikke: seems they want to re-write docker-compose in golang. 2020-06-24 08:00:50 aha, ok. Is a nice indication that they intend to keep support docker-compose 2020-06-24 08:01:32 did github uii change? 2020-06-24 08:01:39 yes 2020-06-24 08:01:55 I already had it for a couple of days (opt-in) 2020-06-24 08:02:22 i feels changed 2020-06-24 08:02:31 They moved your cheese 2020-06-24 08:02:35 but cant really see much functional difference 2020-06-24 08:03:40 no, mainly just design change 2020-06-24 08:04:27 i dont see releases anymore 2020-06-24 08:04:36 on the right 2020-06-24 08:04:40 (I was looking for them as well) 2020-06-24 08:04:49 "Latest release" 2020-06-24 08:05:00 sigh 2020-06-24 08:05:15 its one of the most used items for me 2020-06-24 08:07:51 releases are tags now, I think 2020-06-24 08:09:03 That never changed 2020-06-24 08:09:36 hmm, tags then there is releases button 2020-06-24 09:50:03 > ERROR: docker-compose-fish-completion-1.26.0-r0: BAD signature 2020-06-24 09:50:03 Seems like the CDN does things again? 2020-06-24 09:55:16 Cogitri: docker-compose was upgraded, downgraded and upgraded again 2020-06-24 09:55:31 so the cdn still has the previously upgraded package in cache 2020-06-24 09:55:34 Ah yes, we should enforce pkgrel bumps in that case 2020-06-24 09:55:36 just a matter of purging it 2020-06-24 09:55:52 Seems to work now, thanks 2020-06-24 15:14:34 seems like pushing to alpine-mksite does not update https://alpinelinux.org can somebody please regenereate the page manually? 2020-06-24 15:14:51 neither does wwwtest, so i only tested it locally and pushed to prod 2020-06-24 15:15:35 i fixed https://gitlab.alpinelinux.org/alpine/infra/alpine-mksite/-/issues/1 2020-06-24 21:55:17 in last days whenever I push something directly to origin (skipping MR) I have tell algitbot to retry 2020-06-25 04:45:30 mps: yes, we have to update origin on the builders to point to gitlab.a.o instead of git.a.o 2020-06-25 08:18:15 Can I somehow get the test-suite.log from s390x CI? https://gitlab.alpinelinux.org/Cogitri/aports/-/jobs/150914 2020-06-25 08:21:25 the containers are destroyed again afterwards. 
2020-06-25 08:23:05 Ah yes, on some Gitlabs the workspace is saved (but that needs a lot of storage, I guess) 2020-06-25 08:23:22 I'll just see if I can print the test-suite.log to the CLI then 2020-06-25 08:24:01 One option is to find some kind of general pattern that we can supply these things as artifacts 2020-06-25 08:26:38 Cogitri: I guess these projects don't use docker runners 2020-06-25 08:26:48 *.log ? 2020-06-25 08:27:06 I think they save the workspace as artifact 2020-06-25 08:31:41 It doesn't seem you supply recursive patterns 2020-06-25 08:32:13 s/you/you can/ 2020-06-25 08:32:13 ikke meant to say: It doesn't seem you can supply recursive patterns 2020-06-25 08:38:47 Ah, that's unfortunate 2020-06-25 08:39:20 maybe something like */*/src/*/*.log works :P 2020-06-25 08:53:46 ikke: do we have an issue for that? 2020-06-25 08:53:51 reg origin 2020-06-25 08:53:57 clandmeter: no 2020-06-25 08:54:18 i guess we need one. 2020-06-25 08:54:23 yes 2020-06-25 08:54:27 need to reserve some time to fix that 2020-06-25 08:54:31 and update gitlab. 2020-06-25 08:54:37 I have time tonight 2020-06-25 08:54:47 also for gitlab 2020-06-25 08:54:56 let me check my agenda 2020-06-25 08:55:25 i think i can after dinner 2020-06-25 09:37:43 clandmeter: ^ 2020-06-25 09:37:58 crash? 2020-06-25 09:38:55 I cannot ping / ssh the builder either 2020-06-25 09:39:08 probably network 2020-06-25 09:43:17 ok, back 2020-06-25 11:07:32 clandmeter: https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10697 2020-06-25 14:14:54 ikke: nice 2020-06-25 14:15:04 think we could script it to run it from the lxc host itself 2020-06-25 14:15:12 else we get crazy :) 2020-06-25 14:15:22 yup 2020-06-25 14:15:25 btw, i just got invited for a BBQ 2020-06-25 14:15:39 so i think i cannot make it tonight 2020-06-25 14:15:41 Oh nice 2020-06-25 14:15:42 right 2020-06-25 14:15:49 i will be skinny dipping ;-) 2020-06-25 14:16:03 no i wont post pictures 2020-06-25 14:16:11 sorry mps 2020-06-25 14:16:13 :p 2020-06-25 16:08:47 clandmeter: I don't need photos, but translation of 'skinny dipping' would be good to get 2020-06-25 16:13:41 mps: You'd rather not know 2020-06-25 16:14:59 ah, some of the 'games' for big modern people not yet grown, thanks ikke. better if I don't know what is it :) 2020-06-25 17:28:43 mps: naked swimming 2020-06-25 17:32:47 nothing else ? 2020-06-25 17:36:13 we are born naked, I tend to think :) 2020-06-26 09:55:21 ikke, i updated the remotes on armv7 and aarch64 builders 2020-06-26 09:57:35 with something like this https://tpaste.us/YpB4 2020-06-26 10:01:51 Ok, thanks 2020-06-26 10:02:01 will do the other ones as well 2020-06-26 10:48:18 clandmeter: ugh, windows line endings :/ 2020-06-26 10:49:31 Huh? 2020-06-26 10:49:49 or mac line endings, one of both 2020-06-26 10:49:53 in that paste 2020-06-26 10:50:23 tpaste was broeken in that host 2020-06-26 10:50:38 Buy ik was om Linux 2020-06-26 10:50:42 lol 2020-06-26 10:50:48 nice autocorrect 2020-06-26 10:51:07 It did not go to English :( 2020-06-26 10:51:19 I was on Linux 2020-06-26 10:51:29 Not sure where it came from 2020-06-26 10:52:04 so I assume armhf was also fixed, right? 2020-06-26 10:53:07 I'm not home atm 2020-06-26 10:53:17 Should have the script in root 2020-06-26 10:53:39 I assume because armhf and armv7 are one the same host 2020-06-26 11:00:53 ok, done 2020-06-26 11:29:54 nice 2020-06-26 11:30:01 all hosts done now? 
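The tpaste with the remote update is not quoted in the log; a sketch of what the per-builder switch typically looks like (the aports checkout path is an assumption), matching the earlier note to use https:// instead of git://:

    cd ~/aports
    git remote -v                # confirm origin still points at git.alpinelinux.org
    git remote set-url origin https://gitlab.alpinelinux.org/alpine/aports.git
    git fetch origin
    git remote -v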
2020-06-26 11:30:07 yes 2020-06-26 11:30:14 cool 2020-06-26 11:30:24 I did not verify armhf yet though 2020-06-26 11:30:29 well its not that cool, so lets call it nice :) 2020-06-26 11:30:44 yes, done as well 2020-06-26 11:31:08 clandmeter: the heat is affecting you :P 2020-06-26 11:32:15 i dont think its the heat, the effect is even in winter times ;-) 2020-06-26 11:33:21 mps: your commits should be instant now. 2020-06-26 11:52:27 clandmeter: thanks, and thanks for info 2020-06-26 16:54:26 clandmeter: ping 2020-06-26 17:02:09 ikke: pang 2020-06-26 17:13:09 pung 2020-06-26 17:14:01 clandmeter: do you have time to look at the gitlab upgrade? 2020-06-26 17:14:16 :) 2020-06-26 17:14:34 I'm not home 2020-06-26 17:14:38 k 2020-06-26 17:14:38 Later maybe 2020-06-26 17:14:47 I pushed latest versions 2020-06-26 17:15:06 I can do it on my own, as long as nothing breaks :) 2020-06-26 17:15:16 Which I don't necessarily expect (we already tested it) 2020-06-26 17:15:30 (Lee Peng) 2020-06-26 17:15:38 https://en.wikipedia.org/wiki/Li_Peng 2020-06-26 17:15:39 [WIKIPEDIA] Li Peng | "Li Peng (Chinese: 李鹏; pinyin: Lǐ Péng; 20 October 1928 – 22 July 2019) was a Chinese politician. Known as the "Butcher of Beijing" for his role in the Tiananmen Square Massacre, Li served as the fourth Premier of the People's Republic of China from 1987 to 1998, and as the Chairman of the Standing Committee..." 2020-06-26 17:15:54 sorry 2020-06-26 17:15:57 Go for it 2020-06-26 17:16:27 Is it breaks it will also break with me around 2020-06-26 19:03:11 ikke: ping 2020-06-26 19:06:20 pong 2020-06-26 19:06:39 we want to give it a shot? 2020-06-26 19:09:19 sure 2020-06-26 19:10:13 Do we want to upgrade to 3.12 at the same time? 2020-06-26 19:10:23 The test instance went without issue, but not sure if there are differences 2020-06-26 19:10:43 i think we can 2020-06-26 19:10:56 should not really affect docker that much 2020-06-26 19:11:01 it runs or it doesnt :) 2020-06-26 19:11:03 heh 2020-06-26 19:11:07 We noticed that :P 2020-06-26 19:11:58 worse case scenario we revert to a backup 2020-06-26 19:12:09 I hope not :) 2020-06-26 19:14:18 I'll run the system upgrade 2020-06-26 19:16:28 ok, done 2020-06-26 19:16:42 should we reboot it first? 2020-06-26 19:16:55 I think that's a good idea 2020-06-26 19:17:04 i guess there is a kernel update 2020-06-26 19:17:10 No suprises next time we reboot 2020-06-26 19:17:18 did y ou update-conf? 2020-06-26 19:17:24 no, not yet 2020-06-26 19:17:27 k 2020-06-26 19:18:15 rc_cgroup_memory_use_hierarchy="YES 2020-06-26 19:18:23 Did we set that? 
2020-06-26 19:18:34 uhm 2020-06-26 19:18:40 in rc.conf 2020-06-26 19:18:47 i know that setting 2020-06-26 19:18:52 i just dont know why i do 2020-06-26 19:19:03 I'll just leave it 2020-06-26 19:19:12 i think its because of a boot warning 2020-06-26 19:20:23 -xen_opts=dom0_mem=256M 2020-06-26 19:20:27 +xen_opts=dom0_mem=384M 2020-06-26 19:20:33 huh 2020-06-26 19:20:44 that i dont know 2020-06-26 19:20:47 seems like a default change 2020-06-26 19:20:51 k 2020-06-26 19:21:34 would be nice if update-conf could to a 3-way diff 2020-06-26 19:21:45 but that would require us to keep the original somewhere 2020-06-26 19:22:49 -features="ata base ide scsi usb virtio ext4" 2020-06-26 19:22:51 +features="ata base cdrom ext4 keymap kms mmc raid scsi usb virtio" 2020-06-26 19:22:57 mkinitfs.conf 2020-06-26 19:23:02 keep the default 2020-06-26 19:23:06 err 2020-06-26 19:23:10 keep original 2020-06-26 19:23:19 nod 2020-06-26 19:23:25 if it boots it boots :) 2020-06-26 19:24:04 /etc/update-extlinux.conf 2020-06-26 19:24:18 default=lts 2020-06-26 19:24:39 its not yet on lts? 2020-06-26 19:24:52 which version were we on? 2020-06-26 19:25:08 4.19.78-0-virt 2020-06-26 19:25:15 virt apparently 2020-06-26 19:25:18 ah right, keep virt 2020-06-26 19:25:21 its kvm 2020-06-26 19:25:23 But it's 'hardened' now 2020-06-26 19:25:41 ok make it virt 2020-06-26 19:25:43 ok 2020-06-26 19:27:12 I think that's it 2020-06-26 19:27:27 run update-extlinux I guess? 2020-06-26 19:28:39 Ok, do we shutdown gitlab first? (docker-compose down) 2020-06-26 19:28:57 ok you can 2020-06-26 19:29:02 i guess docker will stop the containers 2020-06-26 19:31:42 ok, done 2020-06-26 19:31:49 rebooting the system now 2020-06-26 19:31:59 cross fingers 2020-06-26 19:33:40 ping is back 2020-06-26 19:33:45 and I'm in 2020-06-26 19:34:01 now to upgrade gitlab 2020-06-26 19:34:22 pulling new image for 13.0 2020-06-26 19:34:30 did gitlab start? 2020-06-26 19:34:33 no 2020-06-26 19:34:35 i guess not 2020-06-26 19:34:40 The containers are not there 2020-06-26 19:34:41 due to docker-compose down 2020-06-26 19:34:45 (which was my intention) 2020-06-26 19:35:00 would be nice to know if it would work without an upgrade 2020-06-26 19:35:20 I can change the version back to test 2020-06-26 19:35:34 I did the same on the test instance 2020-06-26 19:35:46 i let you decide 2020-06-26 19:35:55 its ok for me 2020-06-26 19:37:17 started 12.10 2020-06-26 19:39:35 currently waiting on 'gitlab_1 | Updating directories...' 2020-06-26 19:39:40 for a while 2020-06-26 19:40:03 ok, continuing 2020-06-26 19:40:54 'gitlab_1 | s6-svwait: fatal: unable to subscribe to events for /run/s6/web: No such file or directory' 2020-06-26 19:41:13 It's up now 2020-06-26 19:41:21 going for 13.0 now 2020-06-26 19:45:45 Ok, up, 13.0.6 2020-06-26 19:45:50 13.0.7 2020-06-26 19:45:52 hmm 2020-06-26 19:45:53 ok 2020-06-26 19:45:54 :) 2020-06-26 19:56:37 looks good 2020-06-26 19:57:02 its raining cats and dogs here 2020-06-26 19:57:29 No cloud to find here 2020-06-26 19:57:58 looking at the radar, its coming your way 2020-06-26 19:58:37 quite slowly 2020-06-26 20:02:56 Hey, can you maybe give me the IP for my container cogitri-edge-aarch64.usa1.alpin.pw ? Seems like the name doesn't resolve for me even though I have alpine's DNS set 2020-06-26 20:29:02 Ah nevermind, somehow dig can resolve it but ssh not 2020-06-26 20:30:00 strange 2020-06-26 20:55:50 clandmeter: ping 2020-06-26 20:58:17 pang 2020-06-26 20:58:37 did we break it? 
2020-06-26 20:58:37 clandmeter: does e-mail to ticket only work for alpine/aports? 2020-06-26 20:58:40 no 2020-06-26 20:58:43 afaik not 2020-06-26 20:58:53 no its a general thing afaik 2020-06-26 20:59:10 maybe it only shows up after there already is a ticket 2020-06-26 20:59:50 https://gitlab.alpinelinux.org/alpine/infra/mirrors/-/issues does not show the e-mail address to send it to 2020-06-26 21:00:02 or perhaps because it's private 2020-06-26 21:00:13 that would make sense 2020-06-26 21:01:30 Would be nice to be able to forward mirror requests 2020-06-26 21:01:36 https://gitlab.alpinelinux.org/alpine/infra/docker/ansible/-/issues/2 2020-06-26 21:01:37 heh 2020-06-26 21:01:59 why on the ansible project :D 2020-06-26 21:02:31 first one listed or so? 2020-06-26 21:02:50 i have no idea 2020-06-26 21:02:57 brain fart i suspect 2020-06-26 21:03:43 moving it to aports 2020-06-26 21:26:23 question: are all of alpine's build boxes in roughly the same physical place? 2020-06-26 21:27:25 no 2020-06-26 21:27:35 They are all over the world :) 2020-06-26 21:27:43 how do you handle bandwidth issues then? 2020-06-26 21:27:51 this is a problem I've recently started thinking more about with Void 2020-06-26 21:28:32 Honestly we (I) haven't noticed any major bandwidth issues with our builders to be honest 2020-06-26 21:32:31 hmm, Ok 2020-06-26 21:33:06 We have one CI builder in south-america that has issues from time to time 2020-06-26 21:36:19 s390x? 2020-06-26 21:36:33 No, that one has a different issue 2020-06-26 21:36:46 like being an s390 machine :P 2020-06-26 21:37:00 heh 2020-06-27 06:31:55 Sorry, but could you maybe take a look at https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10692 ? pmOS wants to finish their image for the Pinephone pmOS community edition so it'd be nice if they had fresh appstream data on that 2020-06-27 07:17:30 clandmeter: Where do you want to host this ^? 2020-06-27 07:26:45 I don't mind 2020-06-27 07:27:11 We have 2or3 docker boxes 2020-06-27 07:27:14 yeah 2020-06-27 07:27:20 deu1 or gbr2 I guess? 2020-06-27 07:27:41 I don't know from my head 2020-06-27 07:28:10 The cgit one had high cpu 2020-06-27 07:28:23 Cogitri: I left one remark regarding the docker-compose config 2020-06-27 07:28:53 ah, I see it's already fixed 2020-06-27 07:29:07 Or you can try adding it in ansible ;) 2020-06-27 07:31:55 I have no clue how it add it for just a single host, the current ansible code only contains generic definitions 2020-06-27 07:32:22 Probably add a site entry for a single host? 2020-06-27 07:32:49 generally your ansible should be written to take variables 2020-06-27 07:33:24 The one now is very simplistic 2020-06-27 07:33:48 clandmeter: right, deu1 has high cpu indeed 2020-06-27 07:44:24 Do we want to move this project under the alpine infra group btw? 2020-06-27 08:03:35 Cogitri: do you mind me moving the project under alpine/infra/compose? fyi I'll make you maintainer and make the project public then 2020-06-27 08:07:55 Ah sure, no problem with that. Thanks for looking into this :) 2020-06-27 08:13:40 ok, done 2020-06-27 09:58:25 Cogitri: I get a segfault when building the image :-/ 2020-06-27 10:02:03 https://tpaste.us/5Eb9 2020-06-27 10:10:11 Oof 2020-06-27 10:10:33 On x86_64? I can look into it in a bit 2020-06-27 10:10:37 yes 2020-06-27 10:11:18 Cogitri: maybe it's better to separate the docker image from the compose project, like we do for other projects. 
then the image is built and uploaded by CI and we don't need to build it on the hosts 2020-06-27 10:11:26 (not that it would solve this problem necessarily) 2020-06-27 11:00:43 ikke: Huh, I just built the thing locally and that worked just fine on my x86_64 machine 2020-06-27 11:01:06 I did docker build -t appstream . 2020-06-27 11:41:45 As for the splitting: Sure, I don't mind yhat but not sure what's necessary for that 2020-06-27 11:42:32 Another project on gitlab, move the Dockerfile + a simple gitlab-ci.yml file there 2020-06-27 11:42:46 And maybe it only SEGFAULTs with the many buildjobs we have (ninja defaults to -j$(nproc)), so maybe try with ninja -j1 in the Dockerfile? 2020-06-27 11:42:50 And I need to create a repo on docker hub 2020-06-27 11:42:58 I can try 2020-06-27 11:43:18 though nproc is 4 on that host 2020-06-27 11:43:57 Oh, I built with my laptop (so 4 cores/8 threads) 2020-06-27 11:51:22 Also, is pkgs.a.o down? 2020-06-27 11:52:07 no 2020-06-27 12:17:11 Ah weird, seems to work again for me, not sure what was that 2020-06-27 12:27:18 Now it's raining cats and dogs here 2020-06-27 12:30:34 Cogitri: https://gitlab.alpinelinux.org/alpine/infra/docker/appstream-generator 2020-06-27 12:31:29 https://gitlab.alpinelinux.org/alpine/infra/docker/alpine-gitlab-ci/-/blob/master/.gitlab-ci.yml 2020-06-27 12:39:06 Ah nice, thanks 2020-06-27 12:39:25 What do I put it in the 1st repo? 😅 2020-06-27 12:39:40 The Dockerfile? 2020-06-27 12:44:49 yes 2020-06-27 16:47:16 ikke: The dockerfile builds fine now (thanks for the help so far! :) but seems like it's missing some credentials? https://gitlab.alpinelinux.org/alpine/infra/docker/appstream-generator/-/jobs/152549#L32 2020-06-27 17:10:31 Cogitri: ah, forgot to give algitbot access to that docker repo 2020-06-27 17:12:14 Cogitri: job succeded 2020-06-27 17:12:32 strange that it fails on the host itself 2020-06-27 17:13:12 Yup, strange indeed 2020-06-27 17:14:44 Ok, pushed an updated docker-compose.yml 2020-06-27 17:51:47 Cogitri: it's running now 2020-06-27 17:52:03 the appstream generator 2020-06-27 17:54:11 Great, thanks :) 2020-06-27 17:55:04 Once it resolves: https://appstream.alpinelinux.org 2020-06-27 18:17:44 it's working for me now 2020-06-27 18:40:16 Cogitri: it's finished, but the log shows some errors 2020-06-27 18:41:04 not sure if that's expected 2020-06-27 18:42:30 Can you send me the log? 2020-06-27 18:42:36 Seems like 3.12 wasn't generated 2020-06-27 18:47:01 Cogitri: last 200 lines: https://tpaste.us/QnYQ 2020-06-27 18:47:45 let me know if you need more 2020-06-27 18:48:14 That should be fine, just some invalid desktop files 2020-06-27 18:49:05 ok, in the end it says it could not generate the html due to a template missing 2020-06-27 18:55:10 Will look into that in a bit, but that's just an overview of what packages got generated, so not fatal that it's missing 2020-06-27 18:56:14 nod 2020-06-27 19:19:07 Hm, does the log mention something about 3.12? 2020-06-27 19:19:48 no, only edge 2020-06-27 19:22:40 Weird 2020-06-27 19:24:04 the exit code is 1 2020-06-27 19:32:52 Ah, I guess I messed something up in the config then 2020-06-27 19:33:14 Is the job automatically run? 
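[editor's note] Splitting the image into its own repo means the host no longer builds anything; CI only has to build and push. A minimal sketch of that step, with the registry name and credential variable names as assumptions rather than the values in the real .gitlab-ci.yml:

docker build -t alpinelinux/appstream-generator:latest .
echo "$DOCKER_HUB_TOKEN" | docker login -u "$DOCKER_HUB_USER" --password-stdin
docker push alpinelinux/appstream-generator:latest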
2020-06-27 19:34:20 No 2020-06-27 19:35:31 If you want it scheduled, you should add cron in the appstream-generator container that automaticall runs the job 2020-06-27 19:35:39 same as we do for pkgs.a.o 2020-06-27 19:36:40 Oh, somehow I had expected one would schedule the container to be run as cron and not do cron inside the container 2020-06-27 19:36:46 I'll look into that, thanks! 2020-06-27 22:02:38 Ok. pushed a new version which should generate for 3.12 as well now (and put files into a dir that's versioned by date) 2020-06-27 22:06:33 Would it be possible to clean up the current dir and run it again? It should run with cron afterwards 2020-06-27 22:14:54 "appstream-generator_appstream-generator_1 exited with code 0" 2020-06-27 22:15:31 needed to update the compose project 2020-06-27 22:17:04 ok, should be running now 2020-06-27 22:25:51 Great, thanks 2020-06-27 22:27:51 Seems to 404 now, but maybe it just needs a little? 2020-06-27 22:31:12 2020/06/27 22:30:11 [error] 20#20: *3 "/static/export/index.html" is not found (2: No such file or directory), client: 172.23.0.15, server: localhost, request: "GET / HTTP/1.1", host: "appstream.alpinelinux.org" 2020-06-27 22:32:15 Maybe the cronjob didn't run yet (or I messed that up?) 2020-06-27 22:32:31 Oh right, it generates to /static/$date now 2020-06-27 22:33:55 Updated the nginx config 2020-06-27 22:43:32 ok, no more 404 2020-06-27 22:47:03 Okay, patched appstream-generator into shape so it can export to a certain dir, I think with my current setup caching wouldn't work properly. Once the new docker image is done building it should hopefully work. Sorry for the trouble :) 2020-06-28 06:50:55 https://build.alpinelinux.org/buildlogs/build-edge-x86_64/community/firefox/firefox-77.0.1-r3.log seems like x86_64 has no storage left? 2020-06-28 09:52:11 ikke: ^ 2020-06-28 10:02:49 Cogitri: yeah, I already freed up a bit of space 2020-06-28 10:06:54 Ah okie 2020-06-28 10:10:20 Can you also purge the static volume and restart the appstream container? Should export to the correct things then and properly cache 2020-06-28 10:11:40 ok, note that I already purged it last night 2020-06-28 10:11:43 should I do it again 2020-06-28 10:11:51 (but before your latest image) 2020-06-28 11:06:14 Cogitri: recreated the containers and volumes from the latest image 2020-06-28 11:19:14 Okie, thanks! 2020-06-28 11:19:17 ACTION hopes it just works now :) 2020-06-28 15:14:21 clandmeter: ping (wrongly pinged you in #alpine-linux) 2020-06-28 15:14:34 ok 2020-06-28 15:15:35 I did a sys upgrade on gbr2-dev1 last night (frankly just to get an up-to-date tpaste) 2020-06-28 15:15:45 but that also installed a new kernel (with kernel modules) 2020-06-28 15:16:18 I had to restart docker as well 2020-06-28 15:18:38 ok 2020-06-28 15:18:45 thats packet i guess? 2020-06-28 15:18:47 err 2020-06-28 15:18:54 linode 2020-06-28 15:19:00 yeah 2020-06-28 15:20:04 and all ok? 2020-06-28 15:20:15 It just cannot load new kernel modules 2020-06-28 15:20:27 did you reboot? 2020-06-28 15:20:30 no 2020-06-28 15:20:36 you need to 2020-06-28 15:20:41 with new kernel 2020-06-28 15:20:55 alpine does not keep old modules/kernel 2020-06-28 15:20:58 Yeah, I understand 2020-06-28 15:21:02 same with archlinux 2020-06-28 15:21:21 ok, so you have a question or issue? 
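[editor's note] The 404 above comes from nginx still expecting /static/export/index.html while the generator now writes into a dated directory. One way to avoid touching the nginx config on every run is to point a stable symlink at the newest export after each run, for example (paths assumed from the log messages above):

latest=$(ls -d /static/20* 2>/dev/null | sort | tail -n 1)
if [ -n "$latest" ]; then
    ln -sfn "$latest" /static/current   # nginx can then serve /static/current
fi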
2020-06-28 15:21:35 Just wanted to discuss with you what the best option was 2020-06-28 15:21:40 just reboot it is :) 2020-06-28 15:21:49 yeah :) 2020-06-28 15:21:56 but your irc session is on that box 2020-06-28 15:22:02 oh ok 2020-06-28 15:22:14 and pkgs.a.o 2020-06-28 15:22:16 not only mine 2020-06-28 15:22:19 yup 2020-06-28 15:22:25 ncopa and danieli as well 2020-06-28 15:22:40 danieli was angry with me last time ;-) 2020-06-28 15:22:48 haha, not angry, but it was a bit unexpected 2020-06-28 15:22:54 :P 2020-06-28 15:23:02 reboot is ok for me 2020-06-28 15:23:08 ack 2020-06-28 15:23:13 i dont have friends anyway on irc ;-) 2020-06-28 15:26:52 ok, all seems right 2020-06-28 15:26:54 it's alive, it's aliveeee! 2020-06-28 17:05:00 ikke: Sorry, did another small change to the Dockerfile and docker-compose, seems to work locally now 2020-06-28 17:06:24 ok 2020-06-28 17:07:13 purge everything again? 2020-06-28 17:07:56 Shouldn't be necessary I think 2020-06-28 17:07:58 ok 2020-06-28 17:08:23 restarted the container 2020-06-28 17:09:36 Thanks 2020-06-28 17:10:33 Note that the cron will run at 2am 2020-06-28 17:12:16 Ah, thanks for mentioning, somehow I had expect the cronjob would start immediately on start of the container and then wait 24hrs 2020-06-28 17:15:34 the daily periodic cron is just hardcoded to run at 2am 2020-06-28 17:23:45 Oh okie :) 2020-06-28 17:24:56 https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/alpine-baselayout/crontab#L5 2020-06-28 17:28:17 running it manually now 2020-06-28 17:28:44 ugh 2020-06-28 17:28:48 segfault :-/ 2020-06-28 17:29:01 Huh what, how does it SEGFAULT now 2020-06-28 17:29:12 2020-06-28 17:28:37 - INFO: Processed gcr/3.36.0-r0/aarch64, components: 0, hints: 0 2020-06-28 17:29:15 2020-06-28 17:28:37 - DEBUG: Looking for icon 'lftp-icon' for 'lftp/4.9.1-r0/aarch64::lftp.desktop' (XDG) 2020-06-28 17:29:16 Segmentation fault 2020-06-28 17:34:27 [ 7353.411268] appstream-gener[24624]: segfault at 0 ip 00007fa08c0127fe sp 00007f80870753d0 error 4 in ld-musl-x86_64.so.1[7fa08bfdd000+47000] 2020-06-28 17:50:21 How weird, the container seems to work just fine on my laptop and vps 🤔 2020-06-28 17:51:00 I guess maybe purge the volume for good measure, but this sure is weird 2020-06-28 17:56:15 Ah, seems like it SEGFAULTed for me now as well, time to debug, thanks for the message 2020-06-28 18:00:48 np 2020-06-28 18:53:53 Okie, pushed a fix for the asgen-config.json that should mitigate that 2020-06-28 18:54:31 ok 2020-06-28 18:55:21 running it again 2020-06-28 19:13:02 Thanks 2020-06-28 19:14:30 Cogitri: one potential fix, add -p to the mkdir of the current day so that you can run it more then once per day 2020-06-28 19:14:44 ok, it just finished 2020-06-28 19:15:08 did v3.12 now as well 2020-06-28 19:15:47 Nice 2020-06-28 19:16:20 Ah, I didn't really think it'd be relevant since we only run daily (and I think appstream-generator complains about putting data into a non-empty dir) 2020-06-28 19:16:52 yeah 2020-06-28 19:17:00 just noticed it when I tried to run it again after the segfault 2020-06-28 19:54:06 Ah yes, hopefully won't happen again though :) 2020-06-28 19:54:18 Seems like everything works now, thanks for the help! 2020-06-28 19:55:17 Ah, another thing - did you get to making the "make aports forks public" script a cronjob yet or do we still have to ping you for that? 2020-06-28 20:49:44 still manually :) 2020-06-28 21:09:56 ikke: thx for fixing the appstream generator. Cogitri: sorry it took some time (im not that active lately...) 
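[editor's note] Putting the pieces from above together: the stock Alpine root crontab runs run-parts over /etc/periodic/daily at 02:00, and the mkdir -p suggestion makes a same-day re-run harmless. A daily job inside the container could look roughly like this; the wrapper script name and paths are assumptions, not the real image contents:

cat > /etc/periodic/daily/appstream-generate <<'EOF'
#!/bin/sh
out="/static/$(date +%Y-%m-%d)"
mkdir -p "$out"                # -p: running twice on the same day no longer fails
exec /usr/local/bin/run-asgen.sh "$out"   # hypothetical wrapper around the generator
EOF
chmod +x /etc/periodic/daily/appstream-generate
crond -f -l 8                  # crond in the foreground as the container's main process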
2020-06-28 21:42:37 Ah no worries about that :) 2020-06-28 21:52:36 Thanks for all the work infra does, we could hardly do anything without that :) 2020-06-29 06:40:26 morning. the alpine-mirrors package. do we still need it? It fails to build: 2020-06-29 06:40:33 >>> ERROR: alpine-mirrors: Following mirrors failed: 2020-06-29 06:40:33 http://ftp.yzu.edu.tw/Linux/alpine/ 2020-06-29 06:40:33 http://nl.alpinelinux.org/alpine/ 2020-06-29 06:40:33 http://dl-4.alpinelinux.org/alpine/ 2020-06-29 06:40:33 http://dl-5.alpinelinux.org/alpine/ 2020-06-29 06:40:34 http://mirror.rise.ph/alpine 2020-06-29 06:40:56 and i suspect it is not maintained? 2020-06-29 06:58:24 I think mirrors.a.o is now covering that? 2020-06-29 06:58:31 which is maintained 2020-06-29 06:59:07 is setup-alpine still using `/usr/share/alpine-mirrors/*`? 2020-06-29 07:13:06 what is backing mirrors.a.o? 2020-06-29 07:13:54 A repo on gitlab 2020-06-29 07:17:23 interesting 2020-06-29 10:38:45 Cogitri: it ran successfully tonight :) 2020-06-29 11:36:04 Great :) 2020-06-29 21:08:26 clandmeter: TIL https://docs.docker.com/compose/extends/ 2020-06-29 21:10:54 nice, never seen it. 2020-06-29 21:15:58 kde is moving to gitlab as well 2020-06-29 21:16:12 https://about.gitlab.com/blog/2020/06/29/welcome-kde/ 2020-06-29 21:17:50 not sure if they are self-hosting or using gitlab.com 2020-06-29 21:18:12 ah, community edition, so self-hosted 2020-06-30 11:16:53 ikke: yes, https://dot.kde.org/2020/06/30/kdes-gitlab-now-live
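[editor's note] The "Following mirrors failed" error above typically comes from a reachability probe over the package's mirror list. A minimal sketch of such a check, assuming a MIRRORS.txt input and a probe path that may differ from what the real APKBUILD tests:

failed=""
while read -r mirror; do
    [ -z "$mirror" ] && continue
    # probe path is an assumption; the real check may fetch something else
    wget -q -O /dev/null "${mirror%/}/latest-stable/main/x86_64/APKINDEX.tar.gz" \
        || failed="$failed  $mirror\n"
done < MIRRORS.txt
if [ -n "$failed" ]; then
    printf '>>> ERROR: alpine-mirrors: Following mirrors failed:\n'
    printf '%b' "$failed"
    exit 1
fi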