2022-06-02 10:13:55 algitbot: retry master
2022-06-02 13:29:25 algitbot: retry master
2022-06-03 15:02:29 umh...
2022-06-03 16:17:53 algitbot: retry master
2022-06-03 21:32:36 algitbot: retry 3.16-stable
2022-06-04 11:48:30 algitbot: kick master
2022-06-04 12:47:20 algitbot: kick master
2022-06-07 19:56:05 algitbot: retry master
2022-06-07 19:58:05 algitbot: retry master
2022-06-07 20:35:51 algitbot: kick master
2022-06-07 20:35:59 algitbot: retry master
2022-06-07 22:48:05 algitbot: retry master
2022-06-07 23:11:33 algitbot: kick master
2022-06-07 23:11:41 algitbot: retry master
2022-06-08 09:34:12 algitbot: retry master
2022-06-08 09:41:56 algitbot: retry master
2022-06-08 09:53:59 ikke: could you look at that one above
2022-06-08 10:20:53 psykose: what's up?
2022-06-08 10:21:00 the apparmor test
2022-06-08 10:21:04 on armv7
2022-06-08 10:21:10 it passes on the others so it looks strange
2022-06-08 10:21:12 not sure why it fails
2022-06-08 10:23:28 so one failure is 10 apparmor denials versus 20?
2022-06-08 10:23:42 seems to be
2022-06-08 10:25:22 the other one compares like 1k chars
2022-06-08 10:25:45 those are fun
2022-06-08 10:25:50 oh, only 2 tests
2022-06-08 10:25:58 but they are testing a bazillion things
2022-06-08 10:26:16 ah, it's also 10/20
2022-06-08 10:26:21 Don't know if this is worth mentioning: I think our test-aa-notify patch is from https://gitlab.com/apparmor/apparmor/-/issues/220 , but the merge request that closes that issue is https://gitlab.com/apparmor/apparmor/-/merge_requests/848 which does things a little bit differently
2022-06-08 10:27:11 that's related to some help output, i don't see why it would affect the 10/20 difference
2022-06-08 10:27:17 well
2022-06-08 10:27:22 the 'less strict' might make that pass too
2022-06-08 10:47:08 algitbot: retry master
2022-06-08 10:47:20 alas it was unrelated
2022-06-08 10:48:15 algitbot: kick master
2022-06-08 10:48:19 algitbot: retry master
2022-06-08 10:48:26 algitbot: retry master
2022-06-08 11:20:46 Ok, I ran `python3 test-aa-notify.py -v` to get verbose output, and the 2 tests that failed on armv7 (test_entries_since_login and test_entries_since_login_verbose) are actually skipped with the reason "Requires wtmp on system"
2022-06-08 11:23:04 So, probably the question to ask is why test-aa-notify.py finds a /var/log/wtmp on the armv7 builder and doesn't skip the tests
2022-06-08 11:24:19 it means something else just put it there once upon a time
2022-06-08 11:24:22 it doesn't get cleaned or anything
2022-06-08 11:24:28 sadly the builds are not clean chroots
2022-06-08 11:26:29 can't wait for the day it runs rootbld
2022-06-08 11:27:17 if it didn't instantly start with the 'no network' requirement we could have probably deployed it ages ago and went from there
2022-06-08 11:27:37 We can fix the broken APKBUILDs with that easily though
2022-06-08 11:27:47 yes, but so far it's been 90% of the bikeshed
2022-06-08 11:27:57 we could have just.. not, and fixed it after, and it would be a nice improvement
2022-06-08 11:28:02 just add `options="net"` everywhere, at least then it's clear where the network is used as well and we can slowly get rid of it over time
2022-06-08 11:28:20 there's also xvfb-run tests and the like, but yeah, you fix them one by one
2022-06-08 11:28:29 Yeah I'm saying start rootbld'ing everything first, fix issues when they come up. 3.16 is out so no release to care about atm
2022-06-08 11:28:41 my thoughts too, best time to break everything
2022-06-08 11:28:50 but everyone takes a vacation after release :)
2022-06-08 11:29:26 I already know 2 aports that will be needing options="net"
2022-06-08 11:29:34 well then go add them!
2022-06-08 11:29:43 I always add it where I find it
2022-06-08 11:29:51 I've already done tons of packages over time
2022-06-08 11:30:18 everywhere i've wanted to add it was either prepare() fetch (which is planned to be allowed but not implemented..) or go stuff (which we probably need to fix anyway..)
2022-06-08 11:30:20 which is quite ech
2022-06-08 11:30:23 aside from that, yeah
2022-06-08 11:30:59 Just checking, options="net" is needed for check() too, right?
2022-06-08 11:31:13 I mean, if the network access is done in check()
2022-06-08 11:31:25 depends on what you want
2022-06-08 11:31:28 sometimes yes, sometimes no
2022-06-08 11:31:37 yeah it's needed for check if tests require network access
2022-06-08 11:31:51 sometimes the checks are really stupid so you just skip it, or skip those, etc
2022-06-08 11:31:51 shrug
2022-06-08 11:31:54 in some cases only the tests require net, I'd love to be able to enable it just for that phase
2022-06-08 11:32:38 I'm quite certain this check isn't skipped...because I put it in there, to make sure this particular aport is compatible with Busybox's wget
2022-06-08 11:34:39 Most of the time I skip tests if they require network access but sometimes you can't sadly
2022-06-08 11:41:15 ikke: ^ seems it was /var/log/wtmp
2022-06-08 11:43:50 apparmor?
2022-06-08 11:47:09 yeah
2022-06-08 11:47:50 rbq: though, funnily, i have that file and didn't skip the tests and they also passed for me
2022-06-08 11:56:15 How many tests are skipped? (3 for me)
2022-06-08 11:56:34 lost the logs
2022-06-08 11:57:19 I just created an empty /var/log/wtmp
2022-06-08 11:57:43 and the tests are still skipped, for a different reason: "Could not find last login"
2022-06-08 11:58:12 ah, that means not only is it there but it's not empty
2022-06-08 11:58:27 and has some valid date
2022-06-08 11:58:31 data
2022-06-08 11:58:35 or whatever
2022-06-08 11:58:44 Well, the 2 tests that are failing on armv7 (due to /var/log/wtmp) are skipped with that reason, the 3rd is skipped due to "Requires kern.log on system"
2022-06-08 11:59:09 But I saw that there was 1 test skipped on armv7, so I assume that's the kern.log one
2022-06-08 12:03:54 If I'm reading the Python right, it's running `aa-notify -f /var/log/wtmp -l` for test_entries_since_login, and for test_entries_since_login_verbose an additional "-v" is added
2022-06-08 12:05:33 So, the output with "20" instead of "10" is very likely generated by the aa-notify command
2022-06-08 12:05:58 and that causes the tests to fail
2022-06-08 12:33:46 As expected, I was not reading the Python properly, it is not running aa-notify on /var/log/wtmp, but on the test files, it's just that the tests are skipped if (1) wtmp is not found (2) wtmp exists, but get_last_login_timestamp(username) returns 0
2022-06-08 12:35:50 and get_last_login_timestamp is from the apparmor.notify module
2022-06-08 12:38:11 I created an empty /var/log/wtmp, and faked the value of get_last_login_timestamp, and now the tests are failing for me as well
2022-06-08 12:54:15 Ok, whatever
2022-06-08 12:55:53 Look at the timestamp of the build log, it says some time on 15 April 2022, I'm now faking values of get_last_login_timestamp (in Unix time)...and if it's past a certain value, the test fails (at least that's what I can conclude now)
2022-06-08 12:56:21 I mean, the timestamp of the part where it says "AppArmor denials"
2022-06-08 13:05:55 If the time since last login is more than 30 days, then the tests will fail
2022-06-08 13:10:53 should just delete the log
2022-06-08 13:15:03 Probably :)
2022-06-08 13:18:07 iiikkeeeee
2022-06-08 13:26:08 Alright, I roughly understand what's going on...each test log file is 33 lines long, consisting of 10 lines that have the current timestamp, 10 lines that are 30 days in the past, 10 lines that are 999 days in the past, and 3 "unrelevant entries"
2022-06-08 13:26:50 So, that's where "10 denials" comes from
2022-06-08 13:27:14 and if the last login time is over 30 days, then another 10 lines get added, and you get "20 denials"
2022-06-08 13:27:27 s/get added/fit the criteria/
2022-06-08 13:27:45 ACTION is speechless
2022-06-09 00:42:13 charming
2022-06-09 00:51:29 all of this because openwrt people don't know how to use SONAMEs smh
2022-06-09 06:20:16 algitbot: kick master
2022-06-09 06:20:23 algitbot: retry master
2022-06-10 09:06:25 algitbot: retry master
2022-06-10 09:21:59 algitbot: retry master
2022-06-10 09:27:51 algitbot: retry master
2022-06-10 10:26:35 algitbot: retry master
2022-06-10 10:42:46 algitbot: retry master
2022-06-10 11:28:09 algitbot: retry master
2022-06-10 11:37:53 algitbot: retry master
2022-06-10 11:38:09 algitbot: retry 3.16
2022-06-10 11:38:14 * retry 3.16-stable
2022-06-10 11:38:21 algitbot: retry 3.16-stable
2022-06-10 12:03:01 algitbot: retry master
2022-06-10 12:12:15 algitbot: retry master
2022-06-10 12:18:23 algitbot: retry master
2022-06-10 12:40:50 algitbot: retry master
2022-06-10 13:23:46 algitbot: retry master
2022-06-10 13:56:33 algitbot: retry master
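A minimal sketch of the 10-versus-20 behaviour described in the test-aa-notify discussion above, assuming the layout rbq describes (10 entries at "now", 10 from 30 days ago, 10 from 999 days ago): how many entries count as denials depends on how old the last login recorded in /var/log/wtmp is. The function and entry list below are made up for illustration; they are not the actual apparmor test code or the apparmor.notify API.

    #!/usr/bin/env python3
    # Illustrative only: mimics counting test-log entries newer than the
    # last login, which is what produces "10" vs "20" AppArmor denials.
    import time

    DAY = 86400
    NOW = time.time()

    # Stand-in for the generated test log: 10 current entries, 10 entries
    # from 30 days ago, 10 from 999 days ago (the 3 unrelated lines omitted).
    entries = [NOW] * 10 + [NOW - 30 * DAY] * 10 + [NOW - 999 * DAY] * 10

    def denials_since_login(last_login_ts):
        """Count entries newer than the last login timestamp."""
        return sum(1 for ts in entries if ts >= last_login_ts)

    # Recent login: only the 10 current entries match -> the expected "10".
    print(denials_since_login(NOW - 1 * DAY))    # 10
    # Login more than 30 days ago (a stale wtmp on a long-running builder):
    # the 30-day-old entries also match -> "20", and the test fails.
    print(denials_since_login(NOW - 40 * DAY))   # 20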
2022-06-10 14:51:46 algitbot: retry master
2022-06-10 15:42:50 algitbot: retry master
2022-06-10 16:29:07 algitbot: retry master
2022-06-10 19:32:22 ikke: you left the echo in there :)
2022-06-10 19:32:40 ouch
2022-06-11 00:16:23 Hmm, are tests disabled by default for riscv64?
2022-06-11 00:20:06 perl-ldap failing on riscv64 seems to be due to perl-text-soundex not being installed, it is listed as checkdepends...probably that is wrong, and it should be depends instead
2022-06-11 05:25:45 algitbot: retry master
2022-06-11 06:14:43 rbq: yes, they are
2022-06-11 06:15:24 rbq: ABUILD_BOOTSTRAP is set on riscv64, which results in checks not being run
2022-06-11 06:23:38 rbq: hmm, it seems to have fixed itself?
2022-06-11 06:39:53 algitbot: retry master
2022-06-11 07:53:59 I'll try to have a look at whether perl-ldap is usable without perl-text-soundex as depends later on, however, I don't use ldap, so I don't know how well my judgment is in this area...perhaps perl-ldap's maintainer should be notified about this?
2022-06-11 07:55:16 Looking at the build logs, perl-ldap's install script is still trying to install Text::Soundex by itself...it's just that it no longer causes the whole build to fail
2022-06-11 07:55:44 s/well/good/
2022-06-11 08:29:20 Judging from how Text::Soundex is loaded, it doesn't seem to be a hard requirement...
2022-06-11 08:40:26 and perl-ldap's APKBUILD contains comments on how confusing all this is (make installs soundex to usr/local, and make check removes it)...oh well, I guess nothing much can be done about it
2022-06-11 13:39:13 ikke: seems busybox is stuck
2022-06-11 13:42:09 algitbot: retry master
2022-06-11 13:47:31 That never happened before
2022-06-11 13:53:51 not once!
2022-06-11 14:13:40 Yep, all builders stuck on it for ~10 hours
2022-06-11 14:15:11 i picked a real good time to go to bed then :)
2022-06-11 14:18:36 I was wondering why r1 of perl-ldap only appeared on riscv64...now I know what happened
2022-06-11 14:24:14 So, what are we going to do about it?
2022-06-11 14:24:25 It's annoying that it happens every time
2022-06-11 14:24:48 (symptom: SIGPIPE gets ignored)
2022-06-11 14:27:29 algitbot: retry master
2022-06-11 14:28:56 It seems s390x and riscv64 were not affected?
2022-06-11 14:29:06 correct
2022-06-11 14:29:47 We did notice in the past that mqtt-exec, through its use of mosquitto, did ignore SIGPIPE, which children inherited
2022-06-11 14:29:51 that was fixed
2022-06-11 14:29:57 But it still happens
2022-06-11 15:19:16 psykose: the grub issue might also be a gcc bug: project/submodule/.git/
2022-06-11 15:19:27 http://dup.pw/alpine/aports/e827b100297b
2022-06-11 15:19:36 sorry, i cannot compute today
2022-06-11 15:19:39 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104853
2022-06-11 15:20:24 ah, maybe, though it's the same thing as before for u-boot
2022-06-11 15:21:50 algitbot: retry master
2022-06-11 15:23:42 also looks like bb is stuck again anyway, hehe
2022-06-11 15:23:58 And it will be stuck until we fix it
2022-06-11 15:28:19 if by fix it you mean restart it ~2 more times until the next month :p
2022-06-11 15:29:12 It's not random
2022-06-11 15:30:04 it has been so far, though
2022-06-11 15:36:30 So right now on x86_64, I see buildrepo (and all children) ignoring SIGPIPE
2022-06-11 15:36:49 So no matter how often I kill the build, it will stay stuck, unless I kill buildrepo as well
2022-06-11 15:37:55 So something is triggering sigpipe being ignored in buildrepo
2022-06-11 15:43:06 yeah, but what i mean is we've been in this state before
2022-06-11 15:43:20 and then after 2 hours of debugging it, you randomly kill it and the sigign is gone and it passes
2022-06-11 15:43:36 we've been through this rodeo like 4 times since after the original sigign fix
2022-06-11 15:43:43 I can just restart mqtt-exec and it will probably go through
2022-06-11 15:43:46 mhm
2022-06-11 15:44:10 In that sense, it's not random
2022-06-11 15:44:26 It will fail until it's reset
2022-06-11 15:45:01 Something that nmeum was thinking of last time is whether buildrepo through some mechanism uses mosquitto
2022-06-11 15:45:11 which is what caused the issue in mqtt-exec
2022-06-11 15:46:40 mhm
2022-06-11 15:46:59 what's the process hierarchy again?
2022-06-11 15:47:02 I keep on forgetting
2022-06-11 15:47:32 maybe we should open an issue on gitlab somewhere to collect this sort of information about this bug
2022-06-11 15:56:56 so, aports-build has a dependency on lua5.2-mqtt-publish which in turn uses mosquitto (which will ignore sigpipe) but how this relates to the process hierarchy spawned by the builder
2022-06-11 15:57:06 s/but/but not sure/
2022-06-11 16:07:04 nmeum: https://tpaste.us/gZp6
2022-06-11 16:07:55 build-edge-x86_64 [~]# sign 61417
2022-06-11 16:07:57 SigIgn: 0000000000000004
2022-06-11 16:08:48 build-edge-x86_64 [~]# sign 61545
2022-06-11 16:08:50 SigIgn: 0000000000001006
2022-06-11 16:10:22 I created https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10758 to collect more info
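For reference, a small sketch of the check behind the `sign` output pasted above. `sign` itself appears to be a local helper on the builders; the function below is a made-up equivalent that decodes the SigIgn mask from /proc/<pid>/status and reports whether SIGPIPE (signal 13) is being ignored.

    #!/usr/bin/env python3
    # Sketch: decode SigIgn for a given pid; not the builders' actual tool.
    import signal
    import sys

    def ignored_signals(pid):
        """Return the signal numbers listed in the process's SigIgn mask."""
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("SigIgn:"):
                    mask = int(line.split()[1], 16)
                    return [n for n in range(1, 65) if mask & (1 << (n - 1))]
        return []

    if __name__ == "__main__":
        ign = ignored_signals(int(sys.argv[1]))
        print("ignored signals:", ign)
        # The 0x1006 mask pasted above includes the SIGPIPE bit (signal 13).
        print("SIGPIPE ignored:", signal.SIGPIPE in ign)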
2022-06-11 16:12:56 https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/aports-build/report-build-errors.lua
2022-06-11 16:12:59 how is this invoked?
2022-06-11 16:13:13 > publish.single(topic, payload, nil, true, conf.mqttbroker)
2022-06-11 16:14:04 ^ this line might ignore sigpipe depending on how lua5.2-mqtt-publish works (haven't looked that up yet)
2022-06-11 16:14:39 usr/share/buildrepo/plugins/report-build-errors.lua
2022-06-11 16:15:21 is this called from aports-build or how does this work?
2022-06-11 16:15:21 nmeum: it's a drop-in plugin for buildrepo
2022-06-11 16:15:43 ah
2022-06-11 16:15:45 nmeum: https://gitlab.alpinelinux.org/alpine/lua-aports/-/blob/master/bin/buildrepo.lua#L38
2022-06-11 16:16:22 ok, but the postbuild function is run in the same process that later forks() to execute abuild -r?
2022-06-11 16:16:39 I think so, buildrepo is just a single process
2022-06-11 16:17:11 (btw, this also answers a question I had, as the plugin used to be placed in /etc/buildrepo/plugins, but apparently that is not necessary anymore)
2022-06-11 16:17:12 because in this case what might be happening is: (a) a build fails (b) postbuild is run (c) postbuild runs publish.single from report-build-errors.lua (d) publish.single ignores SIGPIPE (e) the next build executed by buildrepo runs with SIGPIPE ignored
2022-06-11 16:17:27 Sounds plausible
2022-06-11 16:17:27 which could also explain why it doesn't happen when you restart mqtt-exec
2022-06-11 16:19:13 Why is it a good idea that invoking some mosquitto function can cause the process to ignore sigpipe
2022-06-11 16:19:56 well, yeah, that's why it is not a good idea to modify the signal ignore mask in a library
2022-06-11 16:20:02 nod
2022-06-11 16:20:39 I mean we could complain to the mosquitto folks about this but I doubt that they will listen ':D
2022-06-11 16:21:17 we could also restore the default sigpipe signal handler in the lua wrapper for mosquitto https://github.com/flukso/lua-mosquitto/blob/master/lua-mosquitto.c#L287-L300
2022-06-11 16:21:27 but not sure when ctx_destroy is invoked actually
2022-06-11 16:21:33 nmeum: We can at least try :)
2022-06-11 16:22:02 do we have a testbed where we can easily reproduce this to see what changes fix the issue?
2022-06-11 16:22:28 so testbed should be: build fails -> afterwards sigpipe is in SigIgn
2022-06-11 16:22:50 build succeeds -> report-build-errors.lua is not invoked and sigpipe should not be in sigign
2022-06-11 16:23:38 We can setup a dummy builder somewhere
2022-06-11 16:25:21 if that's not a lot of work that might be an idea
2022-06-11 16:25:31 :D
2022-06-11 16:26:39 I mean if it is difficult to setup a dummy builder then we might as well just patch lua-mosquitto
2022-06-11 16:27:02 we can also add a dependency on luaposix to aports-build and reset the default SIGPIPE handler from there
2022-06-11 16:28:10 nmeum: so the idea is to setup a builder that fails building a package, we then fix it, and then try to build something that would invoke something like yes 'foo bar' | head -n10, right?
2022-06-11 16:28:37 basically yes
2022-06-11 16:28:45 Should be doable
2022-06-11 16:28:50 but I am like 70% sure that my assessment of the issue is correct
2022-06-11 16:29:03 To me, it at least sounds plausible
2022-06-11 16:29:21 so if no new buildrepo process is spawned after a build failure then the problem really is report-build-errors.lua
2022-06-11 16:29:50 I think I can also just do some tests with lua-mosquitto directly without setting up a dummy builder
2022-06-11 16:30:01 ok
2022-06-11 16:30:47 On an idle builder, there is no buildrepo process
2022-06-11 16:31:03 So as soon as all packages have been built, everything stops except mqtt-exec
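To make the suspected failure mode concrete, here is a small illustration of the (a)-(e) hypothesis above, written in Python rather than the actual Lua/buildrepo code: once SIGPIPE is flipped to SIG_IGN in the parent process, the disposition survives fork() and exec() into every child build, and restoring the default (which is what the luaposix idea amounts to) clears it again. The `child_sigign` helper is made up for illustration.

    #!/usr/bin/env python3
    # Illustration only: SIG_IGN dispositions are inherited across exec.
    import signal
    import subprocess

    def child_sigign():
        # Print the child's SigIgn mask; restore_signals=False keeps whatever
        # disposition the parent had, like a plain fork()+exec() would.
        subprocess.run(["sh", "-c", "grep SigIgn /proc/self/status"],
                       restore_signals=False)

    # Start from the default disposition (CPython itself ignores SIGPIPE at
    # startup, so undo that first to mimic a plain process like buildrepo).
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)
    child_sigign()                                 # no SIGPIPE bit (0x1000)

    # Simulate what the mosquitto code path is suspected to do in the parent.
    signal.signal(signal.SIGPIPE, signal.SIG_IGN)
    child_sigign()                                 # SigIgn now includes 0x1000

    # The proposed workaround boils down to putting the default back before
    # the next build is spawned, so pipelines like `yes foo | head -n10`
    # terminate normally again.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)
    child_sigign()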
2022-06-11 16:31:17 but if 5 new packages are pushed these are all built by the same buildrepo process?
2022-06-11 16:31:31 yes, and I think even new packages pushed afterwards
2022-06-11 16:31:35 so if package 3 fails to build, package 4 will be invoked with an ignored sigpipe
2022-06-11 16:31:39 yes
2022-06-11 16:31:47 hmm
2022-06-11 16:32:16 If a package fails, I think buildrepo stops
2022-06-11 16:32:20 (unless -k is provided)
2022-06-11 16:33:19 hmm
2022-06-11 16:33:30 Now only mqtt-exec is left
2022-06-11 16:33:43 algitbot: retry master
2022-06-11 16:34:57 nmeum: after retrying, immediately buildrepo is ignoring sigpipe
2022-06-11 16:35:09 hm, maybe that's not it then
2022-06-11 16:35:13 https://tpaste.us/k6Y6
2022-06-11 16:35:19 did mqtt-exec spawn a new buildrepo instance?
2022-06-11 16:35:33 yes, compare the pids
2022-06-11 16:37:35 buildrepo is the first process with sigpipe ignored though, right?
2022-06-11 16:38:12 so I strongly suspect that it called some mosquitto mqtt function (maybe that can be checked somehow by setting a breakpoint in buildrepo, I don't know how one debugs lua code)
2022-06-11 16:38:38 nmeum: the builder does publish to build.a.o
2022-06-11 16:38:43 so maybe that is causing it?
2022-06-11 16:38:56 though, it would not explain why restarting mqtt-exec would fix it
2022-06-11 16:41:44 we can also just patch mosquitto to never ignore SIGPIPE and be done with it 😈
2022-06-11 16:42:01 Totally fine with me 🤐
2022-06-11 16:42:39 It should be the service that determined whether to ignore it
2022-06-11 16:42:48 determines*
2022-06-11 16:44:05 probably not a good idea though as doing that will also affect other applications using mosquitto
2022-06-11 16:55:04 ikke: I opened an issue upstream https://github.com/eclipse/mosquitto/issues/2564
2022-06-11 17:00:49 let's hope the 3.16 builder doesn't get stuck too :S
2022-06-11 20:28:33 algitbot: retry master
2022-06-14 04:57:57 algitbot: kick master
2022-06-14 10:30:58 algitbot: retry master
2022-06-14 10:31:04 algitbot: retry 3.16-stable
2022-06-14 10:31:06 algitbot: retry 3.15-stable
2022-06-15 05:57:34 algitbot: retry master
2022-06-15 07:18:28 algitbot: retry master
2022-06-15 14:34:06 algitbot: retry master
2022-06-16 02:04:32 What a coincidence, the test/ssl/peer_certificate.pem included in libtorrent-rasterbar expired 4 days ago
2022-06-16 02:13:06 ...and the certificates have been rebuilt upstream: https://github.com/arvidn/libtorrent/pull/6925
2022-06-16 07:24:02 rbq: pushed ^
2022-06-16 09:36:48 algitbot: retry master
2022-06-16 09:41:26 nmeum: thank you
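On the expired libtorrent-rasterbar test certificate above: a small sketch for spotting bundled PEM files that are about to expire. The `days_until_expiry` helper is made up for illustration and assumes a recent release of the `cryptography` package (older releases only expose the naive `not_valid_after` attribute).

    #!/usr/bin/env python3
    # Sketch: report how many days remain before each PEM certificate expires.
    import sys
    from datetime import datetime, timezone
    from cryptography import x509

    def days_until_expiry(pem_path):
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        return (cert.not_valid_after_utc - datetime.now(timezone.utc)).days

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(f"{path}: expires in {days_until_expiry(path)} days")

Running something like this over a package's bundled test certificates (e.g. the files under test/ssl/) before an upgrade would catch this class of check() failure early.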
2022-06-16 09:42:05 why was this backported?
2022-06-16 09:42:12 mps: for what? :D
2022-06-16 09:45:27 nmeum: I thought you backported it but later saw that it was ncopa
2022-06-16 09:45:42 why does it need to be backported though?
2022-06-16 09:46:46 nmeum: when I upgrade linux-edge in edge I usually (always) just cherry-pick it to the latest stable
2022-06-16 09:47:30 without this backport I will always have to make the change manually
2022-06-16 09:48:01 ah
2022-06-16 10:09:33 algitbot: retry master
2022-06-16 21:50:06 algitbot: retry master
2022-06-17 09:36:34 algitbot: retry master
2022-06-17 10:23:58 algitbot: retry master
2022-06-17 10:32:04 algitbot: retry master
2022-06-17 10:42:04 algitbot: retry master
2022-06-17 10:51:17 algitbot: retry master
2022-06-17 11:57:54 algitbot: retry master
2022-06-17 15:37:15 algitbot: retry master
2022-06-19 07:24:55 algitbot: retry master
2022-06-20 16:47:23 algitbot: retry master
2022-06-20 16:54:43 algitbot retry master
2022-06-20 16:54:48 algitbot: retry master
2022-06-21 06:50:02 ikke, msg.alpinelinux.org outage?
2022-06-21 07:02:50 umh, seems also gitlab.a.o is down
2022-06-21 07:05:07 ah...it seems to be a dns issue
2022-06-21 07:05:15 works here
2022-06-21 07:05:31 it's cached then
2022-06-21 07:06:41 it resolves from non-cached hosts
2022-06-21 07:07:06 For me it fails from 8.8.8.8 but works from 208.67.220.220
2022-06-21 07:07:27 no problem from 8.8.4.4
2022-06-21 07:07:48 https://tpaste.us/endX
2022-06-21 07:08:12 from my local 8.8.4.4, could be a local google server issue
2022-06-21 07:09:06 must have been a glitch on google dns
2022-06-21 07:09:07 https://tpaste.us/k6vw
2022-06-21 07:09:13 now works
2022-06-21 07:09:39 probably related to the current cloudflare outage, given the timing..
2022-06-21 07:09:49 or just a hell of a coincidence :p
2022-06-21 07:11:15 https://tpaste.us/OqBX
2022-06-21 07:11:16 umh...
2022-06-21 07:17:17 anyway msg.a.o does not work
2022-06-22 10:46:14 ncopa: fyi, the [pwd] in the script section is just because it's required to have a script section in gitlab (for now).
2022-06-22 10:47:25 oh, ok. so it's not executed?
2022-06-22 10:48:47 ncopa: the image sets an entrypoint
2022-06-22 10:48:49 https://gitlab.alpinelinux.org/alpine/infra/docker/abuild-ci/-/blob/master/overlay/entrypoint
2022-06-22 10:49:58 maybe better to get rid of that
2022-06-22 10:50:27 or at least, forward "$@" to sh
2022-06-22 10:55:54 i'm not able to reproduce the failure locally
2022-06-22 10:57:02 i suspect it has to do with permissions or similar. what user checks out the git tree?
2022-06-22 10:58:50 ncopa: I believe root
2022-06-22 10:59:32 ncopa: git config --global safe.directory *
2022-06-22 11:11:17 no, the problem was my cp .abuild .
2022-06-22 11:18:12 Ah
2022-06-22 11:18:40 slightly annoyed that kyua didn't print what was on stderr
2022-06-22 15:20:27 algitbot: retry master
2022-06-22 17:03:48 algitbot: retry master
2022-06-22 21:42:56 algitbot: retry master
2022-06-24 16:15:15 algitbot: retry master
2022-06-27 19:31:18 algitbot: retry master
2022-06-28 06:49:43 algitbot: kick master
2022-06-28 06:49:51 algitbot: retry master
2022-06-28 16:12:35 algitbot: retry master
2022-06-28 16:28:30 algitbot: retry master
2022-06-28 16:35:31 algitbot: retry master
2022-06-28 16:39:33 algitbot: retry master
2022-06-28 16:44:24 algitbot: retry master
2022-06-28 16:45:55 algitbot: retry master
2022-06-28 19:32:52 algitbot: retry master
2022-06-30 05:17:57 algitbot: kick master