2021-06-01 14:30:12 ikke: can't push to secfixes-tracker docker again. please apply: https://tpaste.us/9j60 2021-06-01 14:37:23 i also have another service which translates the DWF CVE database into the same format the NVD/CIRCL feed uses. 2021-06-01 14:37:35 i'll host that myself though, if you guys don't want it 2021-06-01 16:20:24 clandmeter: I guess we could clean this up now, right? 2021-06-01 16:20:26 usa5-dev1: Free disk space is less than 5% on volume /srv/mirror (3.0358 %), open for 1y 3m 29d 2021-06-01 16:20:34 only 1 year and 3 months open :D 2021-06-01 16:20:57 whats that? 2021-06-01 16:21:00 zabbix? 2021-06-01 16:21:05 That came from zabbix, yes 2021-06-01 16:21:19 https://zabbix.alpinelinux.org/tr_events.php?triggerid=15751&eventid=287181 2021-06-01 16:21:46 i added 100G 2021-06-01 16:21:58 but thats not that much anymore now :) 2021-06-01 16:22:06 heh 2021-06-01 16:22:34 we are really hitting limits 2021-06-01 16:22:40 Yes, we are 2021-06-01 16:22:46 we loosen them from time to time, but the vg's are also getting full 2021-06-01 16:23:27 we still have the mirrors that are kind of ready to serve 2021-06-01 16:23:31 The new geo located mirrors, do they have enough space? 2021-06-01 16:23:38 yes 2021-06-01 16:23:42 i increased them last time 2021-06-01 16:23:44 Ok 2021-06-01 16:23:46 i think to 1.5 2021-06-01 16:23:50 or more 2021-06-01 16:23:57 So we could decommission the mirrors on the other hosts, right?
2021-06-01 16:24:04 not yet 2021-06-01 16:24:08 at some point, at least, I mean 2021-06-01 16:24:13 cdn uses them 2021-06-01 16:24:16 yes 2021-06-01 16:24:30 i was looking into having ipv6 on them 2021-06-01 16:24:33 which it does now 2021-06-01 16:24:36 nld3 is full as well 2021-06-01 16:24:46 but we need to think how we handle multiple services 2021-06-01 16:25:10 or we need to use ipv6 nat 2021-06-01 16:25:17 ACTION hides 2021-06-01 16:34:14 for http, we can let traefik handle it, but the issue was non-http traffic 2021-06-01 16:34:40 Would there be anything against pointing the AAAA directly to the container IP? 2021-06-01 16:35:28 i don't think so 2021-06-01 16:35:53 but how would i access rsync.a.o via http? 2021-06-01 16:37:13 they are all different containers 2021-06-01 16:38:37 how does that work with ipv4? 2021-06-01 16:38:55 connect to the host 2021-06-01 16:39:05 then the port gets mapped to the container 2021-06-01 16:39:09 (thinking out loud) 2021-06-01 16:39:39 iptables 2021-06-01 16:40:00 this is kind of what the ipv6-nat docker thingy does 2021-06-01 16:40:31 its not the same, but the outcome is similar 2021-06-01 16:40:52 you define the ports in docker and something does magic to expose them 2021-06-01 18:34:58 Ariadne: I've upgraded the secfixes tracker with your changes 2021-06-01 18:35:05 thx 2021-06-01 18:36:25 build-3-14-ppc64le seems missing in action 2021-06-01 18:38:10 just started it again 2021-06-01 18:38:44 thanks 2021-06-02 07:14:29 is armhf CI stuck or busy 2021-06-02 08:49:51 im following up on the libftdi1 bad signature issue. I wonder what host is the backend of dl-cdn? 2021-06-02 08:50:19 it used to be dl-4 2021-06-02 08:50:39 But I think it's now supported by nld3 and nld5? 2021-06-02 09:13:23 i found dl-t1-2.nld3.alpin.pw 2021-06-02 09:15:17 but i have a problem: it has the timestamps in whole seconds, while build-edge-armhf has the timestamps in milliseconds 2021-06-02 09:17:14 ikke: do you mind if i upgrade dl-t1-2.nld3.alpin.pw?
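[editor's note] The ipv4 flow sketched above (connect to the host, port mapped into the container) boils down to a DNAT rule, and docker-ipv6nat applies the same idea with ip6tables. A rough sketch of the kind of rules involved; the container addresses and the rsync port mapping here are made-up examples, not our actual configuration:

```sh
# ipv4: forward rsync (873/tcp) arriving on the host to the rsync container
# (172.17.0.5 is a hypothetical container address)
iptables -t nat -A PREROUTING -p tcp --dport 873 \
    -j DNAT --to-destination 172.17.0.5:873

# ipv6 nat does the equivalent with ip6tables, so published ports work
# over ipv6 without giving every container its own public AAAA record
# (fd00::5 is a hypothetical ULA container address)
ip6tables -t nat -A PREROUTING -p tcp --dport 873 \
    -j DNAT --to-destination "[fd00::5]:873"
```

Pointing the AAAA straight at a container, as suggested above, avoids the NAT layer entirely but means one address per container instead of one per host.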
2021-06-02 09:17:43 No, go ahead 2021-06-02 09:18:03 to 3.13 2021-06-02 09:18:15 v3.7 -> v3.13 2021-06-02 09:18:35 Yeah, I have a ticket open to upgrade all containers there 2021-06-02 09:19:27 * 2021-06-02 09:19:27 * The default and preferred location for nginx vhost configs has been changed 2021-06-02 09:19:27 * from /etc/nginx/conf.d to /etc/nginx/http.d. Although we did our best to not 2021-06-02 09:19:27 * break existing setups by this upgrade, we strongly recommend to verify it. 2021-06-02 09:19:28 * 2021-06-02 09:25:17 Yes, I'm aware 2021-06-02 09:51:49 im gonna fix that now 2021-06-02 11:10:33 ok, i think im gonna reboot the dl-t1-2.nld3.alpine.pw instance 2021-06-02 11:10:42 it was upgraded from 3.7 to 3.13 2021-06-02 11:11:31 its kinda busy 2021-06-02 11:13:47 dl-t1-2 was upgraded. should I update a ticket/doc that it was done? 2021-06-02 11:14:09 https://gitlab.alpinelinux.org/alpine/infra/infra/-/issues/10719 2021-06-02 15:39:32 i notice that algitbot reports rather verbosely about the mailing list patches 2021-06-02 15:39:39 is there a way to filter those out 2021-06-02 15:41:54 https://gitlab.alpinelinux.org/alpine/infra/compose/webhook/-/blob/master/config/scripts/mqtt-publish.lua 2021-06-02 16:12:02 probably best we never worked out a deal with fosshost anyway 2021-06-02 16:12:04 https://fosshost.org/news/freenode-faq 2021-06-02 17:08:57 ddevault: Do you know why Roman's commits showed up in the patch branch? 2021-06-02 17:09:12 context?
2021-06-02 17:09:13 patches/3537 | prspkt | community/libinput: upgrade to 1.18.0 2021-06-02 17:09:22 At least, it appeared so 2021-06-02 17:09:25 can you rephrase that in the form of a link 2021-06-02 17:10:31 I'm on a slow computer right now and navigating gitlab's javascript-laden interface is a challenge 2021-06-02 17:10:41 https://tpaste.us/d5nx 2021-06-02 17:11:14 hm, not sure 2021-06-02 17:11:27 if you send me an email about it I will investigate in a few days 2021-06-02 17:14:36 oh, maybe because it was rebased 2021-06-02 17:14:56 948249ae...0a568a04 - 5 commits from branch master 2021-06-02 17:14:58 93ff8d37 - community/py3-prometheus-client: upgrade to 0.11.0 2021-06-02 18:39:56 ikke: i rebased the branch 2021-06-02 18:40:12 yes, I noticed 2021-06-03 06:24:15 good morning. the build-*-x86* servers will be shut down for maintenance in a few minutes 2021-06-03 06:34:12 build-*-x86* is now off 2021-06-03 07:48:50 servers are back 2021-06-03 07:49:02 need to verify that all services are back too 2021-06-03 07:53:04 distfiles not yet it seems 2021-06-03 07:53:45 build.a.o 2021-06-03 08:18:14 distfiles still down? 2021-06-03 08:20:12 and its up it seems :) 2021-06-03 08:41:43 distfiles is still down for some reason 2021-06-03 09:35:47 ncopa: dev.a.o still down? 2021-06-03 09:37:34 i think so. I think the network is borked 2021-06-03 09:37:52 might be ip address conflict 2021-06-03 09:48:35 This solved it: https://tienbm90.medium.com/resolved-br0-received-packet-on-xx-with-own-address-as-source-address-fad895d410a4 2021-06-03 09:52:23 and this just happened by only moving them? 2021-06-03 09:52:23 hmm 2021-06-03 09:52:23 We had the issue before 2021-06-03 09:52:23 Probably due to the reboot 2021-06-03 10:03:18 we should probably add that setting to /etc/network/interfaces 2021-06-03 10:08:39 maybe we should upgrade nld8-dev1 to alpine 3.13 and use ifupdown-ng? 
2021-06-03 10:09:03 i updated the network config and closed the issue 2021-06-03 10:09:21 Fine with me 2021-06-03 10:37:50 clandmeter: what syncs gitlab.a.o again to git.a.o? 2021-06-03 10:38:10 gitlab itself? 2021-06-03 10:38:15 not sure 2021-06-03 10:38:20 we had a service before 2021-06-03 10:38:30 oh git.a.o 2021-06-03 10:38:39 uhm 2021-06-03 10:38:43 We have mirroring setup to github.a.o 2021-06-03 10:38:46 github.com* 2021-06-03 10:39:00 maybe it pulls based on mqtt? 2021-06-03 10:39:12 I thought we had the same for git.a.o 2021-06-03 10:39:20 i recall that we left algitbot to allow to push to git.a.o 2021-06-03 10:39:48 algitbot was responsible for that? 2021-06-03 10:39:53 hmm 2021-06-03 10:41:30 I don't see any reference 2021-06-03 10:43:14 where does the bot run anyway? 2021-06-03 10:43:20 deu1 2021-06-03 10:44:29 we have dns for deu1? 2021-06-03 10:45:54 ah right 2021-06-03 10:45:58 typo :) 2021-06-03 10:46:11 you typed due, didn't you? 2021-06-03 10:46:28 nope :p 2021-06-03 10:46:36 forgot the 1 from dev1 2021-06-03 10:46:55 those devX things should just go 2021-06-03 10:47:01 pretty useless :) 2021-06-03 10:47:03 yeah 2021-06-03 10:47:33 it's kinda due to 1 site per device that defeats the purpose 2021-06-03 10:49:11 I see an mqtt-exec.alerts process that failed on git-old.a.o, but it seems to be hooked to sircbot, and I don't see any reference to updating the repo there as well 2021-06-03 10:51:33 oh, it's not git-old anymore, of course 2021-06-03 10:51:43 so git is in docker and i think it pulls from gitlab 2021-06-03 10:52:35 Fetching: alpine/aports 2021-06-03 10:52:37 Fetching origin 2021-06-03 10:52:39 From https://gitlab.alpinelinux.org/alpine/aports 2021-06-03 10:55:28 ok, now it works 2021-06-03 10:55:40 it seems to have missed your previous push 2021-06-03 11:00:00 i can live with it 2021-06-03 11:00:55 yeah, eventual consistency :D 2021-06-03 11:01:26 clandmeter: What do you think of having our own docker registry for our custom images?
2021-06-03 11:01:46 especially since we probably have to host riscv64 and mips64 ourselves 2021-06-03 11:02:07 gitlab has support for it, we just need to enable it 2021-06-03 11:02:25 (helps with the new docker limits as well) 2021-06-03 11:23:19 sounds ok to me 2021-06-03 11:23:36 doesn't need anything special i guess 2021-06-03 11:23:40 just a simple linode 2021-06-03 11:24:07 i was thinking how to setup the runner for riscv 2021-06-03 11:24:29 the runner needs to use a custom apk repo 2021-06-03 11:24:42 as i prefer to keep it separate for now 2021-06-03 11:25:19 i do plan to add the riscv apk key to aports 2021-06-03 11:26:00 im also looking into the MRs of Roman 2021-06-03 11:26:26 the autotools mr fixes can be merged 2021-06-03 11:26:59 the rest im not sure, maybe better to split them into more logical mrs 2021-06-03 11:27:07 need to run bbl 2021-06-03 16:39:48 Is there a way in GitLab to prevent people making merge requests from protected branches? 2021-06-03 16:43:41 not that I'm aware of 2021-06-03 16:51:11 thanks 2021-06-03 16:52:25 I could disable default branch protection, though 2021-06-03 17:35:59 https://gitlab.alpinelinux.org/Leo/aports-proxy-bot/-/jobs/407658 weird error 2021-06-03 17:37:17 oh, new riscv64 runner we added 2021-06-03 17:38:07 removed the tags, so it should not be used atm 2021-06-03 18:03:19 re: protected branches. think it is better to keep them enabled, just wanted to know if there was a GitLab way to prevent users from creating merge requests from it 2021-06-04 08:13:27 good morning! 2021-06-04 08:14:06 do we have resources for another riscv64 builder? I'm thinking of cloning the build-edge-riscv64 to build-3-14-riscv64 2021-06-04 08:14:20 but do we have enough hardware and diskspace for that? 2021-06-04 08:16:36 This is now all done in docker 2021-06-04 08:24:58 ncopa: why do you want a stable builder?
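[editor's note] Keeping the riscv64 runner on a separate apk repo, as mentioned above, only needs the repositories file inside the runner image swapped out. A sketch; the URL below is a placeholder, since the actual repo location is not stated in the log:

```
# /etc/apk/repositories inside the riscv64 runner image
# (hypothetical URL; the real separate riscv64 repo was kept private)
https://example.org/alpine-riscv64/edge/main
https://example.org/alpine-riscv64/edge/community
```

The matching signing key would go in /etc/apk/keys/ until, as planned above, the riscv apk key lands in aports.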
2021-06-04 08:25:30 ikke: seems the builder is running on the os disks 2021-06-04 08:25:54 i killed the mirror on the disk, we should move it to that VG instead 2021-06-04 08:41:48 I'm thinking that since the riscv64 now builds the entire main and community, why not ship it with 3.14? 2021-06-04 08:42:06 or does it build with ignore errors? 2021-06-04 08:43:35 i think its far from ready 2021-06-04 08:44:05 also need to look into Roman's changes and how to nicely get them into aports 2021-06-04 09:08:28 what is the internal DNS server IP again? 2021-06-04 09:08:41 can't resolve e.g. netbox without it 2021-06-04 09:10:07 172.16.8.3 2021-06-04 09:10:15 thanks 2021-06-04 09:13:55 gitlab is going AI :| 2021-06-04 09:16:49 clandmeter: please revert to patchwork 2021-06-04 09:22:49 do i still have an aarch64 container somewhere? i can't find it in netbox or dns anywhere 2021-06-04 09:27:18 its probably moved to the new box 2021-06-04 09:27:59 kunkku: ping 2021-06-04 09:28:40 danieli: 172.16.23.17 2021-06-04 09:29:02 thanks! 2021-06-04 09:29:11 i definitely lost track, and i can't log in anywhere to check :) 2021-06-04 09:29:26 i know the feeling 2021-06-04 12:39:47 I at least tried to keep netbox up-to-date, though, not necessarily the containers 2021-06-04 14:53:52 Disk full 2021-06-04 14:53:54 oh well 2021-06-04 15:04:43 What disk? 2021-06-04 15:06:44 not sure. it was on my ncopa-edge-aarch64 2021-06-04 15:06:59 but i wonder if it was when building the iso image? 2021-06-04 15:07:06 i have cleaned up a bit 2021-06-04 15:07:34 there are 116G free now 2021-06-04 15:08:25 is it just me or is dl-cdn.a.o really slow?
2021-06-06 07:35:36 Ariadne: we lost the mips builders 2021-06-06 07:35:54 i'm aware 2021-06-06 07:35:57 ok 2021-06-06 07:35:59 i have already pinged aag about it 2021-06-06 07:36:19 i plan to set up a new mips64 builder in my colo this summer 2021-06-06 07:36:43 i already have an edgerouter infinity new in box 2021-06-06 07:37:14 that will be on my network instead of the current dodgy situation ;) 2021-06-06 07:37:50 Nice 2021-06-06 09:21:55 Ariadne: x86 fails, on edge it has been disabled 2021-06-08 08:53:22 interesting, those build-3-[2-4]-armhf builders are the really old wandboard ones, that for some reason have resurrected. 2021-06-08 08:54:31 where are they hosted? 2021-06-08 09:02:12 in norway, 35 mins from where i live 2021-06-08 09:02:29 i will take care of shutting them down permanently 2021-06-08 09:02:41 i found some other leftover machines of mine there 2021-06-08 09:37:30 cool 2021-06-08 09:37:36 i mean.. s/cool/interesting/g 2021-06-08 10:04:17 FYI fastly is having major issues atm 2021-06-08 10:04:32 yeah 2021-06-08 10:05:04 seems like DNS issues judging by some of the error messages i've seen, possibly triggered by BGP issues 2021-06-08 10:05:28 millions of docker users having issues now :D 2021-06-08 10:05:48 there has apparently been a 20% drop in ISP traffic due to the outage 2021-06-08 10:06:20 https://docs.fastly.com/en/guides/fastlys-network-status 2021-06-08 10:06:23 connection failure :D 2021-06-08 10:06:25 effective status page 2021-06-08 10:07:09 ^ 2021-06-08 10:08:03 the issues are way bigger than just those regions listed with issues 2021-06-08 10:08:12 clandmeter: should we post a news item on the website? 2021-06-08 10:08:37 ncopa: ^ 2021-06-08 10:08:38 ah so its really bigger 2021-06-08 10:09:17 sounds like somebody pushed the wrong branch to the fastly backend :) 2021-06-08 10:10:16 Fastly error: unknown domain: bbc.co.uk.
2021-06-08 10:10:16 Details: cache-osl6533-OSL 2021-06-08 10:10:27 that smells like DNS 2021-06-08 10:12:22 https://news.ycombinator.com/item?id=27432408 2021-06-08 10:12:43 yup, it's bad 2021-06-08 10:12:54 yup 2021-06-08 10:12:55 i've been getting various error messages 2021-06-08 10:12:57 reddit also down 2021-06-08 10:13:04 pinterest, the guardian, bbc, financial times, ny times, reddit, twitch, twitter, github, gov.uk, stack overflow, spotify, ++ 2021-06-08 10:13:12 https://status.fastly.com/ 2021-06-08 10:13:58 ikke: you can do, but i think in a few minutes everybody knows whats going on. 2021-06-08 10:14:14 which servers are Alpine mirrors using to sync data? fastly? 2021-06-08 10:14:20 dl-cdn is fastly 2021-06-08 10:14:29 but other mirrors are behind fastly 2021-06-08 10:14:32 fastly is caching it for us 2021-06-08 10:14:38 oh, ok, thanks 2021-06-08 10:14:40 it's not a mirror itself 2021-06-08 10:14:43 our normal mirrors should be fine 2021-06-08 10:14:56 just not the default one :| 2021-06-08 10:15:00 unless it really is BGP-related and propagates 2021-06-08 10:15:13 but that'd be a screw-up on an even more massive scale 2021-06-08 10:15:41 investigating performance... 2021-06-08 10:16:06 "degraded performance" well it's a bit worse than that 2021-06-08 10:16:13 :) 2021-06-08 10:16:19 half the internet is down 2021-06-08 10:16:37 half would be with cloudflare :P 2021-06-08 10:16:44 xkcd is ironically down 2021-06-08 10:16:51 MY-R: well, this is the other half :P 2021-06-08 10:16:53 alpine is half my life ;-) 2021-06-08 10:17:03 other half I bet with amazon :P 2021-06-08 10:17:19 danieli: can xkcd be unironically down?
2021-06-08 10:17:26 ikke: touché 2021-06-08 10:20:53 I guess our CI is affected as well 2021-06-08 10:21:57 guess i'll have to make a cup of coffee with some diesel in it 2021-06-08 10:30:26 seems its not that big of an issue, they still have somebody updating the status page every 3 minutes 2021-06-08 10:30:45 :D 2021-06-08 10:33:32 https://downdetector.com/ 2021-06-08 10:34:24 lol, even cloudflare has issues due to fastly? 2021-06-08 10:39:13 now they list all of their locations as in degraded performance 2021-06-08 10:39:50 their business continuity regarding covid-19 is also operational :) 2021-06-08 10:40:29 ikke: not that i can see 2021-06-08 10:40:50 danieli: downdetector reports it 2021-06-08 10:40:56 i saw 2021-06-08 10:41:12 i got into the dash and their site fine though, and their NS services work fine 2021-06-08 10:43:45 would be nice if the whole net went down for a week, so people could get a small piece of real life 2021-06-08 10:44:01 real lockdown, how nice :) 2021-06-08 10:44:09 The big depression 2021-06-08 10:44:13 :P 2021-06-08 10:44:28 no, it will be real freedom 2021-06-08 10:44:54 mps: become a farmer and be happy :) 2021-06-08 10:45:29 I'd rather choose my old 'job', soldier 2021-06-08 10:45:38 i bet you will set up a vineyard 2021-06-08 10:46:31 no, I'm not the kind of people who 'get their bread in the sweat of their face' 2021-06-08 10:46:39 Identified - The issue has been identified and a fix is being implemented.
2021-06-08 10:46:39 Jun 8, 10:44 UTC 2021-06-08 10:46:52 and now its completely down 2021-06-08 10:47:06 https://bbc.co.uk/ 2021-06-08 10:47:20 that was completely down all along 2021-06-08 10:47:28 not for me 2021-06-08 10:47:34 lingering cache probably 2021-06-08 10:47:37 yeah 2021-06-08 10:47:39 thats the fun of caching 2021-06-08 10:48:00 ha 2021-06-08 10:48:04 things don't replicate as expected 2021-06-08 10:49:21 Curious about the postmortem 2021-06-08 10:50:34 i wonder how they fixed it with stack overflow being down :) 2021-06-08 10:54:40 i hear it was related to their WAF 2021-06-08 10:54:45 but it smells like DNS to me 2021-06-08 10:55:02 wife acceptance factor? 2021-06-08 10:59:20 web application firewall 2021-06-08 11:03:35 seems like dl-cdn.alpinelinux.org is still up 2021-06-08 11:03:58 or came back 2021-06-08 11:04:48 just came back I guess 2021-06-08 11:04:50 I had issues 2021-06-08 11:06:11 alpinelinux.org has been up all the time right? 2021-06-08 11:07:17 https://tpaste.us/ZEll 2021-06-08 11:07:20 ncopa: yes 2021-06-08 11:08:47 oh, downloads is using dl-cdn 2021-06-08 11:12:00 clandmeter: I like your sense of humor :) (WAF) 2021-06-08 11:28:13 clandmeter: someone else i know made the same joke 2021-06-08 11:29:03 It's probably all over the place 2021-06-08 11:43:19 probably 2021-06-08 15:21:12 https://lists.alpinelinux.org/~alpine/devel/%3C0a7922233331ce1025410f5d117448f3%40disroot.org%3E 2021-06-08 15:21:19 can someone check the gitlab webhook deliveries associated with this discussion? 2021-06-08 17:28:59 ddevault: checking 2021-06-08 17:48:42 ddevault: I see sometimes a 500 response is returned, details follow 2021-06-08 17:49:58 ddevault: https://tpaste.us/0EQN 2021-06-08 17:55:09 thanks ikke 2021-06-08 17:55:16 hm, that link does not work 2021-06-08 17:55:18 404 2021-06-08 17:55:52 ddevault: https://tpaste.us/N8lm 2021-06-08 17:56:02 works, thanks 2021-06-08 17:56:18 do you have a timestamp for this request?
2021-06-08 17:57:20 2021-05-30 13:32:26.232071 2021-06-08 17:57:26 ty 2021-06-08 17:57:43 seems to be an issue with their unicode username 2021-06-08 17:57:51 python's mail decoding really fucking sucks 2021-06-08 17:57:55 thanks, I'll look into it 2021-06-08 18:00:21 yw 2021-06-08 18:00:25 the exception, for posterity: https://paste.sr.ht/~sircmpwn/583080e5276d56f71b73359d140621d8e3cf8164 2021-06-08 18:35:19 i have seen that same exception message more times than i'd like to recount 2021-06-08 18:42:11 oh, don't get me started, danieli 2021-06-08 18:42:52 imagine maintaining a python email processor which has seen 1.5 million emails to date 2021-06-08 18:44:56 i've maintained some pretty big python $things too, it's... not an easy task 2021-06-08 18:45:06 hence gradually rewriting everything in Go 2021-06-08 18:45:09 though dispatch may escape that 2021-06-08 19:44:36 it's a bit off topic for this channel, but how come you chose golang over $lang? 2021-06-08 19:52:09 $lang is not yet developed enough, and does not target the web niche 2021-06-09 13:50:24 clandmeter: ^ 2021-06-10 04:53:04 ncopa: for some reason, rsync on ppc64le/edge is not updating files, even though it says it's updating them. manually running the rsync command that aports-build does, each time returns the same list of files 2021-06-10 04:57:40 ncopa: it has to do with --delay-updates 2021-06-10 04:57:57 without --delay-updates the file is actually updated 2021-06-10 05:08:01 https://tpaste.us/PK4D 2021-06-10 06:17:17 ok? what about build-3-14-ppc64le? 2021-06-10 06:20:33 I have not checked that one yet 2021-06-10 06:20:46 PureTryOut noticed this failing in CI 2021-06-10 06:21:15 because APKINDEX was not updated 2021-06-10 06:30:13 could it be the pmtu issue? 2021-06-10 06:31:14 Should it just hang, then?
2021-06-10 06:31:22 or give errors, as the packets never arrive 2021-06-10 06:31:36 and I'm not sure how removing --delay-updates would fix that 2021-06-10 06:31:41 i would expect that yes 2021-06-10 06:32:14 and mtu on that builder is set to 1476 2021-06-10 06:32:23 ok so its the --delay-updates that makes the difference 2021-06-10 06:32:34 it seems so 2021-06-10 06:32:57 I tested it on APKINDEX.tar.gz, and after removing that option, it was updated 2021-06-10 06:35:05 what about the time stamps? 2021-06-10 06:35:13 and date on the machines 2021-06-10 06:35:27 im trying to find the hostname for the machines 2021-06-10 06:41:06 the file timestamps on dl-master were older 2021-06-10 06:41:36 do you have a log or anything? a dry-run seems to do nothing currently 2021-06-10 06:42:36 for me it does 2021-06-10 06:42:36 oh, community has lots of packages not synced 2021-06-10 06:42:38 yes 2021-06-10 06:42:41 exactly 2021-06-10 06:42:59 APKINDEX.tar.gz used to be in that list as well 2021-06-10 06:44:56 the mqtt-exec.aports-build is not running. did you stop it? 2021-06-10 06:45:06 yes 2021-06-10 06:45:47 do you mind if I try to run aports-build manually? 2021-06-10 06:45:55 to see what it does 2021-06-10 06:46:13 i suspect build fails for some reason 2021-06-10 06:46:18 I did that yesterday 2021-06-10 06:46:23 it took quite some time for some reason 2021-06-10 06:46:28 but it succeeded 2021-06-10 06:46:34 may I try? 2021-06-10 06:46:59 sure 2021-06-10 06:49:28 ncopa: fyi, this is my ssh config for the ppc64le builders: https://tpaste.us/6Pbr 2021-06-10 06:52:47 thanks! 2021-06-10 06:52:56 could it be that the target disk is full? 2021-06-10 06:53:29 the --delay-updates will copy all the files with different names first, and then delete/rename 2021-06-10 06:53:40 I checked, 85% 2021-06-10 06:53:47 how much is free? 2021-06-10 06:53:51 in GB 2021-06-10 06:54:03 226G 2021-06-10 06:57:14 can i try to run the rsync manually, with --delay-updates?
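[editor's note] For reference on the MTU questions in this exchange: the builder's interface was set to MTU 1476, and the TCP MSS a peer learns is derived from that MTU (MSS = MTU minus the IPv4 and TCP headers, 20 bytes each without options). A quick sanity check of the numbers:

```python
# MSS advertised in a TCP SYN is derived from the interface MTU:
# MSS = MTU - IP header (20) - TCP header (20), for IPv4/TCP
# without options.

def mss_for_mtu(mtu: int, ip_hdr: int = 20, tcp_hdr: int = 20) -> int:
    """Largest TCP payload that fits in one packet at this MTU."""
    return mtu - ip_hdr - tcp_hdr

# MTU 1476 (as on the builder) advertises an MSS of 1436, so the peer
# never sends segments that would exceed the path MTU:
print(mss_for_mtu(1476))   # 1436
print(mss_for_mtu(1500))   # standard ethernet: 1460
```

This is why lowering the interface MTU works around blackholed replies: the peer sizes its segments from the advertised MSS, which is the same effect MSS clamping achieves at the firewall.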
2021-06-10 06:57:45 of community 2021-06-10 06:57:55 the aports-build is working on testing now 2021-06-10 07:03:29 Yes, I've done that before 2021-06-10 07:16:21 im trying to rsync without the --delay-updates and it hangs on breeze 2021-06-10 07:16:44 hmm 2021-06-10 07:16:46 on the target there is a .~tmp~ directory with the previously uploaded files 2021-06-10 07:17:01 which i think --delay-updates created 2021-06-10 07:17:22 we set the mtu lower on the host, riehr? 2021-06-10 07:17:24 right? 2021-06-10 07:17:52 do we do clamp-mss? 2021-06-10 07:18:49 No, for testing, I just set the mtu on the interface 2021-06-10 07:18:59 but clamp-mss is better long-term solution 2021-06-10 07:20:48 i think its the return packet that gets black-holed 2021-06-10 07:21:20 yeah, makes sense 2021-06-10 07:21:31 the other side has no knowledge of the MTU without mss 2021-06-10 07:21:43 but 2021-06-10 07:22:02 it did fix the issue I had with httpbin, where it was also the return packet that was too big 2021-06-10 07:22:26 so I guess setting the MTU on the interface also sets the mss in the tcp/ip headers? 2021-06-10 08:05:24 i'd like to replace the ubuntu host with alpine somehow 2021-06-10 08:13:02 We still have vm3 to test it out, right/ 2021-06-10 08:13:19 yup 2021-06-10 08:13:54 meh... my ssh sessions died while eating breakfast 2021-06-10 08:14:29 i think we should ask IBM if they can help us make alpine an option for their cloud/vm offering 2021-06-10 09:03:40 so this was a MTU / MSS issue? 2021-06-10 12:32:01 algitbot: liar 2021-06-10 14:06:51 ikke: something is kaput 2021-06-10 14:06:56 http_4 | [W 2021/06/10 08:03:11] Error on fd nil. 
timeout 2021-06-10 16:41:06 hi o/ i briefly discussed this with ncopa earlier today, but i wanted to bring up the issue of the nonfree/proprietary google captcha for signing up on the alpine gitlab 2021-06-10 16:41:45 i did a bit of digging, and it seems like gitlab has an "invisible captcha" feature, acting as sort of a honeypot for trapping bots 2021-06-10 16:42:19 i was wondering if that feature was tested/considered for the alpine gitlab, and how effective it was, before turning on google's recaptcha? 2021-06-10 16:44:40 relevant links: https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/31625 https://gitlab.com/gitlab-org/gitlab/-/commits/master/app/controllers/concerns/invisible_captcha_on_signup.rb 2021-06-10 16:56:00 I've read the hackernews post where this was suggested to sytze 2021-06-10 16:56:40 Oh, no, that was something different 2021-06-10 16:58:09 bandali: do you happen to know what version of gitlab this was added in? 2021-06-10 16:58:31 ha, i'm not familiar. but i think this one can either be enabled or may already be enabled. so i think it'd be interesting if y'all could look into it, and if it is indeed enabled, turn off recaptcha for a while and see how effective the invisible captcha/honeypot is 2021-06-10 16:58:41 ikke, hm i don't, but i can try to find out 2021-06-10 16:59:08 If our instance supports it, I'm willing to try it out and see how effective it is 2021-06-10 16:59:24 note that we haven't received any spam for quite some time, so it seems the current situation is effective 2021-06-10 16:59:37 thanks. can you share which gitlab version alpine is running? 2021-06-10 17:00:24 13.9 2021-06-10 17:01:10 alrighty, thanks. i'll let you know if i can find out which version this other captcha was added in 2021-06-10 17:04:37 bandali: I've found the settings 2021-06-10 17:04:55 i think 12.2 or 12.3 ? not 100% sure 2021-06-10 17:05:00 ikke, oh neat! 2021-06-14 07:02:28 clandmeter: did you fix that ^?
2021-06-14 07:46:42 yes 2021-06-14 07:54:04 ikke: the other one should also be solved soon 2021-06-14 07:57:47 but those mirrors are at their max 2021-06-14 07:57:53 we are nearing 1.5TB 2021-06-14 07:59:29 Yeah, I can imagine 2021-06-14 08:02:11 We need to figure out how to deal with that in the long term 2021-06-14 08:03:48 distributed filesystems :) 2021-06-14 08:17:43 is the VPN reliable enough or are you thinking of keeping it in one DC? what about volunteer mirrors, will they keep up? 2021-06-14 08:18:44 we have 3 new mirrors ready, but it's not yet time to activate them 2021-06-14 08:18:57 they are on linode 2021-06-14 08:19:56 ikke: i notice that git pull on gitlab/aports is kind of slow initially, is that something we can fix? 2021-06-14 08:25:08 commitGraphs are already enabled 2021-06-14 10:42:08 it looks like pulling from github is faster 2021-06-14 11:11:18 oh reminds me, i got an email from the riscv dev project 2021-06-14 11:15:20 ncopa, Ariadne, ikke, do we need an FPGA for riscv64? 2021-06-14 11:15:42 uhhh 2021-06-14 11:15:46 idk 2021-06-14 11:15:51 like what exactly are they offering 2021-06-14 11:15:52 lol 2021-06-14 11:16:06 not sure how an fpga is going to help us? 2021-06-14 11:16:18 i dont think anything 2021-06-14 11:16:24 thats why im asking :D 2021-06-14 11:16:38 In the coming weeks, we'll be reaching out to start placing boards. We need to understand if the project you're contemplating requires an FPGA. 2021-06-14 11:16:42 We need a generic CPU, not something specifically hardware accelerated 2021-06-14 11:16:56 yes, we just need CPU 2021-06-14 11:17:02 as fast a CPU as possible 2021-06-14 11:17:12 if somebody wants to do some hardcore riscv64 stuff, it could come in handy ;-) 2021-06-14 11:17:46 ill *not* press the button to request one. 2021-06-14 11:17:53 :) 2021-06-14 11:18:20 i find it weird projects like docker didn't get one yet.
2021-06-14 11:26:44 if we have time we could add a forth interpreter in this fpga :) 2021-06-15 14:02:07 i rebooted nld5 due to a kernel upgrade 2021-06-15 14:09:02 👍 2021-06-15 14:09:04 thanks! 2021-06-15 16:05:04 clandmeter: one thing that is still not working properly is webhook notifications for alpine-mksite (I guess because it's not directly under alpine/) 2021-06-15 17:57:40 I don't understand why it's not published. The webhook receives the event. The mqtt publish script doesn't seem like it should filter it out 2021-06-15 17:57:46 but I don't see anything published 2021-06-15 17:58:10 webhook_1 | [webhook] 2021/06/15 17:55:25 [801c86] command output: push message has been send 2021-06-15 17:58:48 clandmeter: can it be mqtt permissions? 2021-06-15 17:59:49 uhm 2021-06-15 18:00:04 not sure 2021-06-15 18:00:08 just woke up 2021-06-15 18:01:22 https://tpaste.us/qgD5 2021-06-15 18:01:36 I don't see this message being published on mqtt 2021-06-15 18:02:54 So now I need to manually trigger the rebuild each time 2021-06-15 18:03:05 you mean it does not list here? 2021-06-15 18:03:09 or does not build? 2021-06-15 18:03:19 It's never received 2021-06-15 18:03:26 by? 2021-06-15 18:03:28 mosquitto_sub -t '#' does not list it 2021-06-15 18:03:41 oh ok 2021-06-15 18:05:22 When someone makes a commit (so the commit list on a.o is updated), the site gets automatically updated 2021-06-15 18:05:30 a commit to aports* 2021-06-15 18:05:41 but when you push to the alpine-mksite repo, nothing happens 2021-06-15 18:15:04 ikke: its not in production? 2021-06-15 18:15:18 what is not in production? The post? 2021-06-15 18:15:24 yes 2021-06-15 18:15:27 correct 2021-06-15 18:16:16 now it is 2021-06-15 18:17:59 but the site is not updated ;-) 2021-06-15 18:19:43 topic git/# is anonymous 2021-06-15 18:22:02 i dont think its the git topic that gets used?
2021-06-15 18:23:08 there is nothing else that would match 2021-06-15 18:23:11 i dont think its sent at all 2021-06-15 18:23:26 only 3 types are sent 2021-06-15 18:23:35 push, tag_push or merge_request 2021-06-15 18:23:59 sigh 2021-06-15 18:24:01 i need to wake up 2021-06-15 18:25:00 these are push events :) 2021-06-15 18:25:40 i know 2021-06-15 18:25:48 like i said, i have sleep in my eyes 2021-06-15 18:26:02 hence the :) 2021-06-15 18:28:15 ok so the script should send a msg on gitlab/push/%s 2021-06-15 18:29:18 right? 2021-06-15 18:30:53 ah 2021-06-15 18:31:24 yes, or git/#, but neither are sent 2021-06-15 18:31:33 i think they are 2021-06-15 18:31:42 but we dont see them 2021-06-15 18:31:48 due to permissions? 2021-06-15 18:31:54 if we dont use a user/pass 2021-06-15 18:32:01 yes 2021-06-15 18:32:12 there should be many more msgs 2021-06-15 18:32:23 and i remember now that i disabled it 2021-06-15 18:32:39 because it will also publish hidden data 2021-06-15 18:32:55 well, if we would have private repos 2021-06-15 18:33:12 right 2021-06-15 18:33:14 i think there is an infra user and pass 2021-06-15 18:33:19 use that and you would see it 2021-06-15 18:33:23 ok 2021-06-15 18:33:39 you have that info? 2021-06-15 18:35:01 not sure 2021-06-15 18:37:35 yep 2021-06-15 18:37:41 its much more verbose now 2021-06-15 18:38:56 and how is this determined? 2021-06-15 18:39:11 I see the acl file 2021-06-15 18:39:24 gitlab is not anon 2021-06-15 18:39:36 ok 2021-06-15 18:39:44 to get on the anon list, you need to define it 2021-06-15 18:40:12 see, takes time to wake up. 2021-06-15 18:40:18 :) 2021-06-15 18:40:34 i need to paint my house, its good i made my mistakes here first. 2021-06-15 18:40:40 so how to solve this for the site builders, provide them with a username / password? 2021-06-15 18:40:54 yes 2021-06-15 18:41:00 or define a topic for it 2021-06-15 18:41:11 so its anonymous 2021-06-15 19:11:25 what needs to be done for v3.14 to show up on https://pkgs.alpinelinux.org/packages ?
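[editor's note] The anonymous-vs-authenticated behaviour worked out in the mqtt discussion above maps onto mosquitto's acl_file format: topic rules listed before the first `user` line apply to anonymous clients, and per-user sections follow. A minimal sketch; the `infra` user name is inferred from the conversation, not confirmed:

```
# mosquitto acl_file sketch
# rules before the first "user" line apply to anonymous clients,
# so git/# is visible without credentials:
topic read git/#

# an authenticated infra user sees the full gitlab event stream:
user infra
topic readwrite gitlab/#
```

This matches what was observed: `mosquitto_sub -t '#'` without credentials misses the gitlab/push/... messages, while subscribing with the infra user/pass is "much more verbose".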
2021-06-15 19:13:01 ncopa: the config needs to be updated 2021-06-15 19:13:58 maybe it can extract it from https://alpinelinux.org/releases.json ? 2021-06-15 19:14:24 yeah, good idea 2021-06-15 20:27:34 that was the whole idea why i requested releases.json :D 2021-06-15 20:27:39 :D 2021-06-15 20:28:47 alpine-www update is working now clandmeter 2021-06-15 20:30:04 anyone can help with s/packet.net/equinix metal/? https://twitter.com/w8emv/status/1404885496875163653 2021-06-15 20:31:35 I can update the 3.14 post 2021-06-15 20:35:41 ikke: i dont think its just a config update 2021-06-15 20:35:48 for pkgs. 2021-06-15 20:35:53 clandmeter: for pkgs.a.o? 2021-06-15 20:35:55 we had that issue before 2021-06-15 20:36:02 Yes, I've read the logs back 2021-06-15 20:36:15 you said you stopped the update container and ran the update manually in tmux 2021-06-15 20:36:22 which is what I'm doing now 2021-06-15 20:37:04 Adding: v3.14/main/armhf/rsync-zsh-completion-5.8-r2 2021-06-16 04:42:32 clandmeter: fyi, pkgs.a.o now shows v3.14 2021-06-16 06:18:48 ikke: thx 2021-06-16 06:20:23 i think the master and production branches of alpine-mksite had diverged (not sure why). i think i have fixed it 2021-06-16 06:21:06 i added a commit yesterday 2021-06-16 11:03:39 something weird is happening with tpaste 2021-06-16 11:04:48 mps: can you download your patch?
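Extracting the branch list from releases.json, as suggested above, could look like the sketch below. The field names ("release_branches", "rel_branch") are an assumption about the file's shape and should be verified against the live file before anything depends on them; the inline fragment is illustrative only.

```python
import json

# Illustrative fragment in the assumed shape of https://alpinelinux.org/releases.json
RELEASES = json.loads("""
{
  "release_branches": [
    {"rel_branch": "edge"},
    {"rel_branch": "v3.14"},
    {"rel_branch": "v3.13"}
  ]
}
""")

def branches(doc):
    """Return the branch names a pkgs.a.o-style config would need,
    so a new release no longer requires a manual config update."""
    return [b["rel_branch"] for b in doc["release_branches"]]

print(branches(RELEASES))  # ['edge', 'v3.14', 'v3.13']
```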
https://tpaste.us/K51P 2021-06-16 11:11:04 502 bad gateway 2021-06-16 11:11:12 https://tpaste.us/byde 2021-06-16 11:11:16 ikke: ^ 2021-06-16 11:11:40 this works 2021-06-16 11:11:55 now works previous one 2021-06-16 11:12:03 I guess clandmeter restarted the http services 2021-06-16 11:12:41 ikke: the error does not go away 2021-06-16 11:12:53 +# CONFIG_SCSI_ESAS2R is not set 2021-06-16 11:12:53 +# COcurl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1) 2021-06-16 11:13:01 thats what i get from that paste via curl 2021-06-16 11:13:07 and the error is still on tpaste 2021-06-16 11:16:55 both links working for me, but for about a month i've been randomly getting 502 bad gateway too 2021-06-16 22:18:33 rebooting it again? 2021-06-16 22:19:02 yep something is wrong with lxc, not sure what. should be back shortly 2021-06-16 22:19:09 ok 2021-06-16 22:21:29 i was playing with binfmt settings and lxc was acting strange. i could stop the container but cannot start it again. 2021-06-16 22:21:31 its back 2021-06-16 22:23:14 weird 2021-06-16 22:23:17 the container is broken 2021-06-17 10:15:23 ikke: i have adjusted the dhcp range on nld5 2021-06-17 10:15:41 just we have some more addresses to reserve for static 2021-06-17 10:15:50 Alright 2021-06-17 10:16:00 rv64 does not like dhcpc 2021-06-17 10:16:14 I wonder if its related to qemu user 2021-06-17 10:16:20 maybe mps can test it 2021-06-17 10:16:41 i gave mps a rv64 container 2021-06-17 10:17:03 👍 2021-06-17 10:17:27 if you want one, you can check history how to do the oneliner :) 2021-06-17 10:18:17 one thing thats weird is that yesterday after playing with binfmt settings, the container got corrupted and i have no idea how that happened. 2021-06-17 10:22:24 clandmeter: I'm there. thank you 2021-06-17 10:29:47 clandmeter: did that binfmt change fix the profile issue?
2021-06-17 10:31:44 it works in local chroot 2021-06-17 10:32:07 but not on new lxc clandmeter just created 2021-06-17 10:33:06 chroot will keep your existing env 2021-06-17 10:33:34 ikke: no only broke things 2021-06-17 10:33:39 oh 2021-06-17 10:38:33 yes, that is what i noticed in local chroot 2021-06-17 13:45:25 'make -j16' in riscv64 lxc runs single threaded 2021-06-17 13:46:10 oh no, I was wrong, now it 'fired up' 2021-06-17 13:51:25 configure is single threaded 2021-06-17 13:55:23 yes 2021-06-17 13:56:11 building u-boot worked multi but tcsh build was slow and i didn't looked at ps 2021-06-17 13:56:16 so not sure 2021-06-17 13:57:46 btw, ssh to mps account where tcsh is login shell worked fine, i.e. it sources all cshrc files 2021-06-17 22:18:50 clandmeter: I've added 3.13 and 3.14 to mirrors.a.o, apparenty we forgot to add it (and we should update it to use the releases.yml anyway 2021-06-18 06:30:10 I'm investigating https://gitlab.alpinelinux.org/alpine/aports/-/issues/12767 2021-06-18 06:32:08 something is weird 2021-06-18 06:32:35 curl-* exists on build-3-14-ppc64le, but it does not exist on dl-master 2021-06-18 06:43:28 its the delay-updates issue 2021-06-18 06:43:40 files are not getting rsynced properly 2021-06-18 06:43:52 i wonder if its the PMTU issue 2021-06-18 06:47:59 Try lowering the mtu on the interface of the container to see if it helps 2021-06-18 06:54:12 so... 
2021-06-18 06:54:33 i was this close to send an angry email to ibm 2021-06-18 06:54:46 but then i realized I havent eaten breakfast yet 2021-06-18 06:56:43 i am not happy for doing MTU hacks 2021-06-18 06:56:47 this will bite us again 2021-06-18 06:57:16 maybe we should set up clamp-mss on the host if it is not already done 2021-06-18 06:57:27 i'd also like to have alpine hosts, but dont know how 2021-06-18 07:17:04 must be some way to get OOB acccess 2021-06-18 07:55:07 i dont know if i have energy for ppc64le now 2021-06-18 07:55:41 i am this close to kick ppc64le out the window 2021-06-18 07:55:50 would be good if someone else follows up 2021-06-18 07:56:07 I can 2021-06-18 07:56:13 we didnt get any response from ibm 2021-06-18 07:56:16 Subject: Re: Alpine pp64le host unreachable 2021-06-18 07:56:16 Date: Sat, 5 Jun 2021 11:57:55 +0200 2021-06-18 07:56:20 We did get one 2021-06-18 07:56:26 did we? 2021-06-18 07:56:27 I at least 2021-06-18 07:56:45 ah, we did 2021-06-18 07:56:46 sorry 2021-06-18 07:56:53 my email filter 2021-06-18 07:56:55 But more like a generic acknowledgement 2021-06-18 07:57:39 but i find it weird, because MTU on ibm host is 1500 and mtu on dl-master is also 1500 2021-06-18 07:58:40 the rsync --delay-updates creates a .~tmp~ directory 2021-06-18 07:59:41 ikke: what i did so far: stop mqtt-exec.aports-build on ppc64le-3-14-ppc64le 2021-06-18 08:00:32 i deleted .~tmp~ on dl-master, deleted /tmp/upload* on build-3-14-ppc64le (to force upload even if nothing got built) 2021-06-18 08:00:42 i ran sh -x /usr/bin/aports-build manually 2021-06-18 08:00:52 and watched that it hangs on rsync 2021-06-18 08:01:06 the .~tmp~ got recreated on dl-master 2021-06-18 08:01:10 and thats it 2021-06-18 08:01:41 i havented done any tcpdump or anything, but I suspect there is a PMTU issue 2021-06-18 08:01:50 have you diagnosed it? do you know where the issue is? 
2021-06-18 08:02:08 no, i havent diagnosed it, but I suspect its PMTU 2021-06-18 08:02:10 traceroute --mtu might help figure out exactly where 2021-06-18 08:02:29 then you can force it to the lowest common denominator 2021-06-18 08:02:40 on both ends? 2021-06-18 08:03:05 first on the box that's having issues, but down the line both if troubleshooting requires it 2021-06-18 08:03:07 i know i can manually reduce MTU everywhere, but I dislike those workarounds 2021-06-18 08:03:38 it's preferable to do it directly on the physical/virtual switches and routers, i've been fighting some bad MTU lately at work 2021-06-18 08:03:58 it often differs between network appliance vendors and (virtual) switches by default 2021-06-18 08:05:19 hum, we dont have traceroute with --mtu 2021-06-18 08:05:27 the telltale sign of bad MTU is that traffic works one way but not the other, and traffic with smaller packets tends to work fine 2021-06-18 08:05:29 we do have a `mtu` utility 2021-06-18 08:05:51 but it claims that host is not up 2021-06-18 08:05:59 Host is not up 2021-06-18 08:05:59 build-3-14-ppc64le [~]# mtu -d dl-master.alpinelinux.org 2021-06-18 08:06:09 do we have tracepath from iputils? 2021-06-18 08:06:23 it should do MTU discovery as well 2021-06-18 08:06:51 we do 2021-06-18 08:06:53 the only potential issue with tracepath is that it doesn't allow you to select the source interface, it'll follow the local routing table 2021-06-18 08:07:23 5: 10.12.253.89 0.185ms pmtu 1476 2021-06-18 08:07:55 i think that is in IBMs network 2021-06-18 08:08:51 1476 is 1500 - 24, it might be a GRE tunnel as its header adds 24 bytes 2021-06-18 08:09:38 i wonder how we fix this 2021-06-18 08:09:47 is that all tracepath shows? 2021-06-18 08:09:56 no. 
2021-06-18 08:11:02 build-3-14-ppc64le [~]# tracepath -n dl-master.alpinelinux.org | tpaste 2021-06-18 08:11:02 https://tpaste.us/6PZZ 2021-06-18 08:11:05 At least as a workaround setting mtu on the interface fixed it 2021-06-18 08:11:14 try ifconfig eth0 mtu 1476 up 2021-06-18 08:11:15 yeah 2021-06-18 08:11:42 the only thing you can do is pretty much a workaround as it has to be changed on the network equipment handling the bad paths 2021-06-18 08:13:48 ncopa mentioned clamp-mss 2021-06-18 08:14:06 but i giess clam-mss does not help in this case 2021-06-18 08:14:27 since the lower MTU is between the hosts 2021-06-18 08:14:35 MSS is only for TCP segments while MTU is where IP packets are fragmented 2021-06-18 08:14:50 in either case, i think we should report our findings to IBM 2021-06-18 08:14:54 i agree 2021-06-18 08:15:13 reducing MTU on the path is okish, as long as PMTU packets are not filtered 2021-06-18 08:15:28 i know fastly filters PMTU packets 2021-06-18 08:15:58 it's fairly common for 'public services' to filter them, but if they use GRE tunnels there's at least some unusual wonkiness to it 2021-06-18 08:16:34 i'm just wondering if this is between the host and the first hop or between intermediate hops 2021-06-18 08:16:43 if it's the latter, they've probably experienced similar issues themselves 2021-06-18 08:16:58 i think they have 2021-06-18 08:17:18 They mentioned issues with openstack 2021-06-18 08:17:46 ikke will you follow up with IBM? 
i think i need to go work on other things now 2021-06-18 08:18:32 i'd be ok with setting MTU down to 1476 on the network interface for now, even if i dont like it 2021-06-18 08:18:42 we probably need to do it on all 3 hosts 2021-06-18 08:19:54 at least things would work until they fix their stuff 2021-06-18 08:19:58 the usual "failsafe MTU" is 1400 2021-06-18 08:20:17 that'll reduce it past the header size of all common encapsulation protocols 2021-06-18 08:20:37 but if you're sure the lowest one is 1476 it's GRE and that should be fine 2021-06-18 08:21:32 1476 fixed another mtu issue we encountered 2021-06-18 08:21:40 With http 2021-06-18 08:22:45 (if it's MPLS the MTU can be even lower if you hit another route because it adds N bytes per MPLS label, but you can usually see those with 'mtr -e') 2021-06-18 08:33:57 nice debugging, i need to follow this channel more often :) 2021-06-18 09:01:37 danieli: thank you for your help and good tips 2021-06-18 09:01:49 np :) 2021-06-18 11:03:16 ncopa: did you already set mtu on certain interfaces? 2021-06-18 11:05:01 The host vms all seem to have mtu 1476 on the outgoing interfaces 2021-06-18 11:08:20 no, i didnt 2021-06-18 11:09:50 ubuntu@alpine-linux-02:~$ ip link show dev ibmveth2 2021-06-18 11:09:50 link/ether fa:4c:3b:05:af:20 brd ff:ff:ff:ff:ff:ff 2021-06-18 11:09:50 2: ibmveth2: mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000 2021-06-18 11:09:57 looks like 1500 to me?
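The overhead arithmetic discussed above (1500 - 24 = 1476 for GRE, a few bytes per MPLS label) can be sketched as a small helper; the overhead figures are the conventional ones, with the GRE number including the outer IPv4 header so it matches the 1476 seen via tracepath:

```python
# Common encapsulation overheads in bytes. The GRE figure includes the
# 20-byte outer IPv4 header plus the 4-byte GRE header, matching the
# pmtu 1476 that tracepath reported inside IBM's network.
OVERHEAD = {
    "gre": 24,        # outer IPv4 (20) + GRE header (4)
    "mpls_label": 4,  # per label pushed onto the stack
}

def effective_mtu(base: int, *encaps: str) -> int:
    """MTU left for the inner packet after stacking encapsulations."""
    return base - sum(OVERHEAD[e] for e in encaps)

print(effective_mtu(1500, "gre"))                              # 1476
print(effective_mtu(1500, "gre", "mpls_label", "mpls_label"))  # 1468
```

This is also why 1400 works as a failsafe: it sits below 1500 minus the header size of all the common tunnel protocols.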
2021-06-18 11:12:13 bmveth2: mtu 1476 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000 2021-06-18 11:12:25 That's what I see 2021-06-18 11:12:30 on all 3 hosts 2021-06-18 11:31:28 ncopa-desktop:~$ for i in 1 2 3; do ssh ubuntu@gbr3-vm$i.alpinelinux.org ip link 2021-06-18 11:31:28 sho dev ibmveth2; done | tpaste 2021-06-18 11:31:28 https://tpaste.us/V7PN 2021-06-18 11:31:54 i only see mtu 1476 on gbr3-vm1 2021-06-18 12:28:50 i think its a question of time before we will have to deal with cryptominer abuse in our gitlab 2021-06-18 12:31:17 they issue jobs in ci? 2021-06-18 12:53:13 that's how i understand it. i know free providers need to close down due to crypto miner abuse 2021-06-18 13:05:53 Yes, indeed 2021-06-18 13:06:41 github already made changes to cope with this 2021-06-18 15:09:35 ncopa: apparently I had all 3 aliases pointed to vm1 :D 2021-06-18 15:23:41 clandmeter: hmm, mirrors.a.o is still not showing 3.13 and 3.14 2021-06-18 15:23:45 Am I missing something? 2021-06-18 19:24:15 ikke: probably 2021-06-18 19:25:48 where does it run? 2021-06-18 19:25:58 netbox info is incorrect? 2021-06-18 19:26:53 deu1 afaik 2021-06-18 19:38:36 its a git clone permission issue 2021-06-18 19:38:51 did permissions of mirrors repo change? 2021-06-18 19:39:46 the compose has an ssh key 2021-06-18 19:39:50 not sure which user it belongs to 2021-06-18 19:39:55 maybe the infrastructure user 2021-06-18 19:40:05 i need to go out bbl 2021-06-18 19:40:10 you can check compose logs 2021-06-18 22:02:32 its running now 2021-06-18 22:02:42 it was not using the correct remote 2021-06-18 22:16:27 👍 2021-06-19 10:30:26 clandmeter: tried to build a new gitlab for 13.11, but bundle / gem is giving some issues 2021-06-19 12:20:17 Ok 2021-06-19 12:20:51 New Alpine release?
2021-06-19 15:44:25 still uses 3.13 2021-06-19 17:29:12 Just setting the MTU on the ppc64le host does not fix uploading packages with --delay-updates 2021-06-19 17:29:24 either on the host, or the lxc container 2021-06-19 17:37:49 and the fact that removing --delay-updates makes it work leads me to believe MTU is not the issue there 2021-06-20 07:46:07 strange behavior with builders this morning 2021-06-20 07:46:27 anything specific? 2021-06-20 07:47:50 build.a.o says cannot upload files 2021-06-20 07:50:27 I see 2021-06-20 08:10:14 Trying to figure out why the builders suddenly cannot upload packages 2021-06-20 08:39:19 It's not the rsync command that is failing 2021-06-20 08:40:22 https://tpaste.us/nWvM 2021-06-20 08:49:19 aaaah, syntax error in aports 2021-06-20 08:49:29 sh: ./APKBUILD: line 52: syntax error: unterminated quoted string 2021-06-20 08:50:13 What APKBUILD? 2021-06-20 08:52:37 trying to find out 2021-06-20 08:53:16 Oh it doesn't mention it? 2021-06-20 08:53:19 nope 2021-06-20 08:55:40 lingot 2021-06-20 08:56:18 Well I definitely touched that package yesterday, but I don't see my changes there currently 2021-06-20 08:56:19 94b9b31c7e5693433b88ab6a87126676196eb7e1 2021-06-20 08:56:24 :P 2021-06-20 08:56:34 - json-c-dev alsa-lib-dev pulseaudio-dev jack-dev fftw-dev" 2021-06-20 08:56:35 + json-c-dev alsa-lib-dev pulseaudio-dev jack-dev fftw-dev 2021-06-20 08:56:43 Oh yeah I lost a line there 2021-06-20 08:56:46 Will fix, give me a sec 2021-06-20 08:57:16 👍 2021-06-20 08:57:20 Done 2021-06-20 08:57:28 Not sure how that line got lost 2021-06-20 08:58:00 Yup it works again, great success! 2021-06-20 09:06:21 Fixed 2 issues in one go :P 2021-06-20 09:06:34 (squashed 2 flies in one clap :P) 2021-06-20 09:10:42 I submitted two MRs for Alpine regarding osinfo-db: https://gitlab.com/libosinfo/osinfo-db/-/merge_requests/321 and https://gitlab.com/libosinfo/osinfo-db/-/merge_requests/322 - once those are merged, how can I update our osinfo-db package?
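The unterminated-quote failure above ("sh: ./APKBUILD: line 52: syntax error: unterminated quoted string") is catchable before pushing with a parse-only shell check. A sketch assuming a POSIX sh on PATH (this is not what the builders run, just the same class of check):

```python
import os
import subprocess
import tempfile

def shell_syntax_ok(path):
    """Parse-check a shell script (such as an APKBUILD) without running it:
    'sh -n' only reads the file and reports syntax errors."""
    res = subprocess.run(["sh", "-n", path], stderr=subprocess.DEVNULL)
    return res.returncode == 0

# An APKBUILD that lost a line, leaving makedepends= without its closing
# quote -- the same class of error as the lingot breakage above:
with tempfile.NamedTemporaryFile("w", suffix=".APKBUILD", delete=False) as f:
    f.write('pkgname=demo\nmakedepends="json-c-dev alsa-lib-dev\n')
print(shell_syntax_ok(f.name))  # False: unterminated quoted string
os.unlink(f.name)
```

Running this over changed APKBUILDs in CI would flag the error with a filename, which the builder log above did not provide.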
I see that is hosted in dev.a.o 2021-06-20 09:11:13 btw: good morning :) 2021-06-20 09:12:15 fcolista: you should have an account on dev.alpinelinux.org 2021-06-20 09:12:18 well, I see you have one 2021-06-20 09:12:24 i have 2021-06-20 09:12:50 I mean, I should download the git where the patches are merged and put in dev.a.o 2021-06-20 09:13:22 and update the pkgver with the date i'll do that 2021-06-20 09:14:44 The repo does have tags: https://gitlab.com/libosinfo/osinfo-db/-/archive/v20210531/osinfo-db-v20210531.tar.gz 2021-06-20 09:15:15 yup 2021-06-20 09:15:30 we have pkgver=20210105 2021-06-20 09:36:51 https://build.alpinelinux.org/buildlogs/build-edge-aarch64/testing/clojure/clojure-1.10.3-r0.log 2021-06-20 09:37:03 look like is stuck 2021-06-20 09:37:33 Killed 2021-06-20 09:37:34 >>> ERROR: clojure: build failed 2021-06-20 09:38:16 lets see will retry help 2021-06-20 09:39:01 no, aarch64 is not restarted 2021-06-20 09:39:17 probably the previous build 2021-06-20 09:39:45 can it be killed 2021-06-20 09:39:57 done 2021-06-20 09:40:13 but it needs to be fixed, probably will hang the next time 2021-06-20 09:40:17 omg, I'm using non CoC words (killed) :) 2021-06-20 10:37:40 I killed it last time 2021-06-20 10:37:57 ikke: ^ 2021-06-20 10:38:38 right 2021-06-20 10:38:49 Hangs for half a day last time 2021-06-20 10:39:35 clandmeter: can you help debugging an rsync issue? 2021-06-20 10:39:59 I would but it's father's day 2021-06-20 10:40:03 aha 2021-06-20 10:40:09 Need to travel :) 2021-06-20 10:40:31 I saw backlog 2021-06-20 10:40:43 So it's rsync that is the issue? 2021-06-20 10:40:46 yeah 2021-06-20 10:41:03 it updates when I remove --delay-updates 2021-06-20 10:44:19 maybe I can try to reproduce it 2021-06-20 10:46:56 Only on ppc? 2021-06-20 10:48:17 no 2021-06-20 10:48:25 x86 as well 2021-06-20 10:48:39 and maybe others 2021-06-20 10:53:16 Does master mirror say anything? 
2021-06-20 10:54:48 That's the dst right 2021-06-20 10:55:24 sautes 2021-06-20 10:55:28 yes* 2021-06-20 10:57:08 clandmeter: the rsync output is as if it's uploading the files, but effectively nothing happens 2021-06-20 10:57:34 What happens if you change dst? 2021-06-20 10:57:54 That's a bit hard I guess 2021-06-21 06:46:02 ikke: im having difficulties getting on ppc host 2021-06-21 06:46:05 or hosts 2021-06-21 06:46:16 not sure which address or hostname to use? 2021-06-21 06:46:49 gbr3-vm[1-3].a.o 2021-06-21 06:47:27 you did not add my keys? 2021-06-21 06:47:43 Username ubuntu 2021-06-21 06:47:52 oh yes 2021-06-21 06:48:36 Maybe add that to netbox? 2021-06-21 06:49:06 right 2021-06-21 06:49:10 which box has upload issues? 2021-06-21 06:49:18 vm1? 2021-06-21 06:56:21 ikke: which repo was out of date? 2021-06-21 07:03:31 Right now I know v3. 14/community/x86/ 2021-06-21 11:46:44 hmm, can somebody make sure build-3-14-mips64 actually tracks 3.14-stable 2021-06-21 12:14:27 ok 2021-06-21 12:15:21 looks like it is 2021-06-21 19:18:30 dns on gbr3-vm1 is borked 2021-06-21 19:18:42 the server that is returned by dhcp does not resolve 2021-06-21 19:19:01 I have set RESOLV_CONF=no for now and manually set the dns servers 2021-06-22 12:45:19 speaking of spam on channels, that could help: https://oftc.net/AntiSpamBot/ 2021-06-22 12:46:46 but then not registered people will get +v if understand it correctly 2021-06-22 12:49:37 but ops still will see spam so... whaaatever 2021-06-22 12:56:11 MY-R: As of June 21st, 2021, it seems to be done for now. AntiSpamBot is offline until needed again. 2021-06-22 13:04:45 clandmeter: yeah... still that bot wouldnt be comfortable for channel 2021-06-22 17:31:31 build-edge-aarch64 is stuck on community/java-lz4 1.7.1-r1 2021-06-22 17:41:10 Something is wrong with all these java things 2021-06-22 17:45:04 agree, and I mean 'in principle all these java things' :) 2021-06-23 08:07:33 Hi! 
2021-06-23 08:07:48 cat blabla.txt | curl -F 'tpaste=<-' https://tpaste.us/ 2021-06-23 08:07:50 curl: (35) gnutls_handshake() failed: An illegal parameter has been received. 2021-06-23 08:08:40 it is from host with curl compiled with gnutls, and from Alpine host curl working fine 2021-06-23 08:12:03 with http://sprunge.us/ working fine 2021-06-23 08:12:53 echo foobar | tpaste 2021-06-23 08:12:55 https://tpaste.us/5VO9 2021-06-23 08:12:56 ah nah, httpS sprunge doesnt work either, ok so I will better recompile curl without gnutls, sorry for noise! 2021-06-23 08:13:17 weird because I remember it was working before with gnutls curl 2021-06-23 08:26:12 sprunge doesnt support httpS, omg... whatever, will recompile curl with openssl and will see 2021-06-23 08:43:52 btw curl with openssl working fine with tpaste.us, looks like gnutls version just suck ;) 2021-06-24 11:06:42 Trying to debug rsync with: rsync -rui --delete-delay --delay-updates --debug=all4 -M --debug=all4 community/x86/APKINDEX.tar.gz dl-master.alpinelinux.org:alpine/v3.14/community/x86/ 2021-06-24 11:06:49 but that's not reveiling 2021-06-24 11:06:54 revealing* 2021-06-24 11:08:30 https://tpaste.us/5VON 2021-06-24 11:23:20 the transfer itself looks fine, is the problem that specific files don't get synchronized and are missing completely? 
2021-06-24 11:24:04 danieli: My feeling is that the file is never moved to the final destination (with --delay-updates) 2021-06-24 11:24:49 gen mapped .~tmp~/APKINDEX.tar.gz of size 1457261 2021-06-24 11:26:43 danieli: I see that .~tmp~/APKINDEX.tar.gz still exists 2021-06-24 11:26:52 looks like you have to tell rsync to clean them up 2021-06-24 11:26:54 or do it manually 2021-06-24 11:27:19 it's only that file that still exists, and that's the file that's not up-to-date 2021-06-24 11:27:28 it's described under '--partial-dir=DIR' in the manual 2021-06-24 11:28:04 This is not the same afaik 2021-06-24 11:28:46 ah yes, i misread, that's only for the resumption aspect and lq.~tmp~rq 2021-06-24 11:30:12 So the final operation, moving that file to the actual file, is somehow not working 2021-06-24 16:56:27 strange, packages on x86_64 were not synced, but when I synced it manually with --delay-updates, everything was uploaded 2021-06-24 16:58:11 ooh, I guess because of failing builds 2021-06-25 10:29:43 what is the status of rsync problem? 2021-06-25 10:29:49 is it reported upstream? 2021-06-25 10:36:06 No, not yet, wanted to get more details 2021-06-25 10:36:12 or a way to reproduce 2021-06-25 10:37:07 But maybe it's good to open an issue anyway 2021-06-25 10:48:05 hi 2021-06-25 10:48:24 ikke: its for sure an rsync issue? 2021-06-25 10:48:55 as far as I can see, it is 2021-06-25 10:49:11 - removing --delay-updates fixes the issue 2021-06-25 10:49:38 - I see a file called .~tmp~/APKINDEX.tar.gz left behind with --delay-updates 2021-06-25 10:49:47 so it seems the file is synchronized, but never moved into the final destination 2021-06-25 10:54:17 its only the APKINDEX that is affected? 
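For reference, the final phase that --delay-updates is supposed to perform, renaming everything staged under .~tmp~/ into place in one pass, can be simulated; a leftover .~tmp~/ directory (as seen on dl-master) means rsync never completed this phase. A simplified sketch, not rsync's actual receiver code:

```python
import os
import tempfile

def finalize_delayed_updates(dest_dir, tmp_name=".~tmp~"):
    """Move every file staged under dest_dir/.~tmp~/ to its final name,
    mimicking the last phase of rsync --delay-updates."""
    staged = os.path.join(dest_dir, tmp_name)
    moved = []
    for name in sorted(os.listdir(staged)):
        os.rename(os.path.join(staged, name), os.path.join(dest_dir, name))
        moved.append(name)
    os.rmdir(staged)  # a completed run leaves no staging directory behind
    return moved

# Stage an index the way rsync would, then finalize.
dest = tempfile.mkdtemp()
os.mkdir(os.path.join(dest, ".~tmp~"))
open(os.path.join(dest, ".~tmp~", "APKINDEX.tar.gz"), "w").close()
print(finalize_delayed_updates(dest))  # ['APKINDEX.tar.gz']
```

The observed symptom fits a transfer that stalls before this step: the staged copy exists in .~tmp~/, but the rename pass never runs, so the destination keeps serving the stale file.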
2021-06-25 10:54:46 no 2021-06-25 11:16:58 I did check the bugtracker to see if similar issues were already reported 2021-06-25 11:21:30 hmm, x86 edge is now up-to-date 2021-06-25 11:22:07 oh, it was 3.14 2021-06-26 16:33:07 ceph ^ 2021-06-28 15:04:32 clandmeter: even when removing the .~tmp~/ dir, ppc64le still has sync issues (hanging during sync) 2021-06-28 15:04:44 That might still be related to pmtu issues 2021-06-28 15:05:04 nod 2021-06-28 15:05:22 what happens if we sync over vpn? 2021-06-28 15:05:34 uhm mirror is not on vpn... 2021-06-28 15:12:35 ikke: try: sysctl net.ipv4.ip_no_pmtu_disc=1 and watch what will happen :P 2021-06-28 15:24:37 mss / mtu is even 1376 to dl-master 2021-06-28 15:25:58 but lowering the mtu on the interface does not fix it 2021-06-28 15:29:19 dl-master -> ppc64le mtu 1448 2021-06-28 15:30:12 1448 both ways 2021-06-28 15:48:05 ok, it seems these files do sync, but just taking very long 2021-06-28 15:48:45 or stalling 2021-06-28 15:49:54 would it matter if we use some sort of vpn? 2021-06-28 15:50:44 the vpn traffic itself would hit the mtu issues, I guess 2021-06-28 15:50:49 unless we fix it there 2021-06-28 15:51:21 What I see is that a file syncs to 51% with 32M/s, and then hangs 2021-06-28 15:51:25 that's not an MTU issue 2021-06-28 16:16:54 client_loop: send disconnect: Broken pipe 2021-06-28 16:16:56 2021-06-28 16:16:58 rsync: [sender] write error: Broken pipe (32) 2021-06-28 16:17:02 Looks like a stateful firewall 2021-06-28 16:23:20 routing loop? 2021-06-28 16:23:34 It seems to just happen with larger files 2021-06-28 17:00:40 ikke: can you use some keepalive setting like ssh has? 2021-06-28 17:00:56 clandmeter: but there is enough traffic going on 2021-06-28 17:01:20 so then its not because of statefulness? 2021-06-28 17:03:00 ikke: the time it hangs is always similar? 
2021-06-28 17:06:05 no 2021-06-28 17:06:17 bullet-doc, that hanged earlier, just sycned without issue now 2021-06-28 17:06:29 but before it would hang, multiple times 2021-06-28 17:07:54 we are talking about ppc only right? 2021-06-28 17:08:28 afaik, yes 2021-06-28 17:08:36 I have not enountered this yet on other arches 2021-06-28 17:08:45 or builders, I must say 2021-06-28 17:09:38 maybe we should report it again, this is taking a lot of time to debug and its not the first time we have network issues on their network. 2021-06-28 17:10:06 another issue is dns 2021-06-28 17:10:26 the default resolver on one of the hosts stopped resolving 2021-06-28 17:10:46 network, dns, alpine install. those are the 3 major issues? 2021-06-28 17:12:04 pmtu 2021-06-28 17:12:08 well, that's network 2021-06-28 17:12:14 but a separate issue 2021-06-28 17:12:18 nod 2021-06-28 17:12:43 from their last email, they showed they do not want to drop support for alpine. 2021-06-28 17:13:06 so my suggestion is to let them fix their issues instead of us. 2021-06-28 17:13:19 ahuh 2021-06-28 17:13:26 if you agree :) 2021-06-28 17:13:45 Would certainly be nice if they fixed it 2021-06-28 17:13:54 if you prefer debugging this stuff until you get grey hair.... 2021-06-28 17:14:27 is that a translatable? :) 2021-06-28 17:14:30 -a 2021-06-28 17:49:02 now everything seems fine 2021-06-30 01:45:48 there is a gitlab bug in 3.12.x (I think fixed in .3?) which causes Safari to fail to render diffs. e.g. https://gitlab.com/gitlab-org/gitlab/-/issues/332005 https://gitlab.com/gitlab-org/gitlab/-/issues/331692 2021-06-30 01:46:09 3.12.5 definitely works, and 14.x works too if y'all want to upgrade to latest 2021-06-30 01:46:37 13.12.** not 3.12.** 2021-06-30 12:07:10 Thalheim: we prefer not run the latest and greatest 2021-06-30 12:07:27 as gitlab has lots of nice little bugs 2021-06-30 12:07:59 ikke: did you mention we had issues with the latest minor tag? 
2021-06-30 12:08:34 We are running 13.9 now 2021-06-30 12:08:51 ah ok 2021-06-30 12:08:55 which version failed? 2021-06-30 12:09:00 Was trying to build 13.11.5 2021-06-30 12:09:27 it failed on what? 2021-06-30 12:09:53 It could not find the rake gem, from memory 2021-06-30 15:36:54 clandmeter: 2021-06-30 15:36:56 bundler: failed to load command: rake (/usr/local/bundle/bin/rake) 2021-06-30 15:36:58 Bundler::GemNotFound: Could not find rake-13.0.3 in any of the sources 2021-06-30 19:52:03 clandmeter: `bundle exec rake gettext:compile RAILS_ENV=production` is the command that fails 2021-06-30 19:52:25 ok 2021-06-30 19:52:51 trying to find which version of rake (if any) is installed 2021-06-30 19:54:59 rake 13.0.1 2021-06-30 19:55:27 sounds like a bug in deps? 2021-06-30 19:56:03 Fetching rake 13.0.3 2021-06-30 19:56:05 Installing rake 13.0.3 2021-06-30 19:56:07 hmm 2021-06-30 19:57:09 so it is installed? 2021-06-30 19:57:14 or you did that manually? 2021-06-30 19:57:26 rake 13.0.3 is installed in /usr/local/bundle, while 13.0.1 is installed in /usr/local/lib/ruby/gems/2.7.0 2021-06-30 19:57:35 the latter is earlier in the search path 2021-06-30 19:57:44 /root/.gem/ruby/2.7.0:/usr/local/lib/ruby/gems/2.7.0:/usr/local/bundle 2021-06-30 19:59:08 I did not install it manually 2021-06-30 19:59:46 i wonder why its installing 2 versions of rake 2021-06-30 20:00:15 I don't see other mentions of rake in the log 2021-06-30 20:01:17 what if we jump to 13.12? 2021-06-30 20:03:18 let me try 2021-06-30 20:04:43 im also trying, but its been too long. :) 2021-06-30 20:06:31 well, I just bump the version and run docker-compose build :P 2021-06-30 20:06:57 yep 2021-06-30 20:06:59 same here 2021-06-30 20:09:24 we use bundle to install deps into similar locations for multiple apps 2021-06-30 20:09:34 maybe they use diff rake versions? 2021-06-30 20:09:40 yes, possibly 2021-06-30 20:11:54 but its still strange that bundle cannot find the correct version.
i dont think it looks for rake by using something from PATH 2021-06-30 20:12:22 There is a Gempaht 2021-06-30 20:12:28 bundle env returns it 2021-06-30 20:13:18 it looks like the ruby deps take longer than before, i guess gitlab is still growing... 2021-06-30 20:14:07 not unlikely 2021-06-30 20:18:13 build failed 2021-06-30 20:18:31 bundler: failed to load command: rake (/usr/local/bundle/bin/rake) 2021-06-30 20:18:58 on 13.12? 2021-06-30 20:20:32 nod 2021-06-30 20:21:02 hmm 2021-06-30 20:21:07 here it's still running 2021-06-30 20:22:04 my i7 is faster :) 2021-06-30 20:23:16 :) 2021-06-30 20:23:36 I have to do with an i5 2021-06-30 20:32:41 yup, failed for me as well :P 2021-06-30 20:33:08 clandmeter: were you already digging? 2021-06-30 20:33:17 kind of 2021-06-30 20:33:32 i have no idea what im doing 2021-06-30 20:33:38 heh 2021-06-30 20:33:41 join the club 2021-06-30 20:35:13 i made a docker commit 2021-06-30 20:35:22 hah, I did exactly the same before :P 2021-06-30 20:35:27 entered the broken container 2021-06-30 20:35:35 and have the same error 2021-06-30 20:35:42 both versions are installed 2021-06-30 20:35:47 yes, exactly 2021-06-30 20:35:49 but bundler is too stupid 2021-06-30 20:35:50 in different dirs, right? 
2021-06-30 20:36:10 i didntg check the dirs 2021-06-30 20:36:14 ok 2021-06-30 20:36:14 but gem shows both 2021-06-30 20:44:57 ah 2021-06-30 20:45:17 i wonder if our base images ships it 2021-06-30 20:46:27 yup 2021-06-30 20:46:31 rake (13.0.1) 2021-06-30 20:46:33 aha 2021-06-30 20:47:59 hmm 2021-06-30 20:48:12 13.0.1 is in /usr/local/bin/rake 2021-06-30 20:48:17 you mention its not 2021-06-30 20:50:48 ls /usr/local/lib/ruby/gems/2.7.0/gems/r* 2021-06-30 20:50:59 racc-1.4.16/ rake-13.0.1/ rdoc-6.2.1/ readline-0.0.2/ readline-ext-0.1.0/ reline-0.1.5/ rexml-3.2.3.1/ rss-0.2.8/ 2021-06-30 20:51:46 Hmm, the output for /usr/local/bundle was overwritten :( 2021-06-30 20:52:26 ok 2021-06-30 20:53:11 so the container has /usr/local/lib vs usr/local/bundle for bundler 2021-06-30 20:53:23 Can we just override gempath? 2021-06-30 20:53:29 or is that going to break things/ 2021-06-30 20:53:36 well 2021-06-30 20:53:42 this is how ruby wants it 2021-06-30 20:53:46 its set by them not us 2021-06-30 20:53:51 yeah 2021-06-30 20:54:06 maybe ask in #ruby how this is supposed to work 2021-06-30 20:54:06 i wonder if this is a bug 2021-06-30 20:54:28 it should be "easy" to reproduce 2021-06-30 20:54:33 right 2021-06-30 20:54:42 notice "easy" 2021-06-30 20:54:44 its ruby 2021-06-30 20:54:47 I know 2021-06-30 20:54:55 in the easy case, it might just work 2021-06-30 20:55:11 i wonder if there is alraedy an issue 2021-06-30 20:55:26 did we change image recently? 2021-06-30 20:55:29 No 2021-06-30 20:55:42 2.7 is already for some time i ugess? 2021-06-30 20:56:30 Feb 2021-06-30 20:57:03 used it for 13.6, 13,7, 13.9 2021-06-30 20:57:14 and now with >13.11 it fails 2021-06-30 20:57:33 maybe because gitlab bumped the version requirement? 2021-06-30 20:57:39 yep 2021-06-30 20:57:42 same mind here 2021-06-30 20:57:53 can you check the gemspec? 2021-06-30 20:58:00 probably used same version 2021-06-30 20:58:52 which was the last that worked? 
2021-06-30 20:59:02 13.9.6 2021-06-30 20:59:04 that we use 2021-06-30 20:59:16 not sure about 13.10 2021-06-30 21:01:34 rake is not a dep 2021-06-30 21:01:48 hmm 2021-06-30 21:01:50 not directly at least 2021-06-30 21:01:58 right, it's in the lockfile though 2021-06-30 21:02:06 yeah 2021-06-30 21:03:09 https://gitlab.com/gitlab-org/gitlab/-/blob/13-9-stable-ee/Gemfile.lock#L948 2021-06-30 21:03:28 what do we have on our running one? 2021-06-30 21:04:09 heh 2021-06-30 21:04:13 both versions 2021-06-30 21:04:13 rake (13.0.3, 13.0.1) 2021-06-30 21:04:15 yea 2021-06-30 21:04:39 so why is this suddenly breaking 2021-06-30 21:05:03 "no space left on device" :| 2021-06-30 21:05:08 and i think path shouldnt matter 2021-06-30 21:05:13 which device? 2021-06-30 21:05:21 My local docker volume 2021-06-30 21:05:31 you need to upgrade 2021-06-30 21:05:32 building gitlab takes a lot of space 2021-06-30 21:05:49 I already added 5G extra 2021-06-30 21:06:16 i guess something in gitlab changed and cannot find the correct rake 2021-06-30 21:06:18 clandmeter: need to upgrade? 2021-06-30 21:06:45 you need to upgrade your disks 2021-06-30 21:06:59 It's by design 2021-06-30 21:07:11 I put docker on a separate volume to prevent it from eating up all disk space 2021-06-30 21:07:31 you do that so you bump into limits all the time ;-) 2021-06-30 21:07:40 yes 2021-06-30 21:07:48 like on our mirrors 2021-06-30 21:08:08 everytime i resize its like, its almost time to upgrade :) 2021-06-30 21:08:13 yes, but better than running at the limits when the disk is 100% in use 2021-06-30 21:08:40 which will happen soon anyway 2021-06-30 21:08:43 i guess the path should matter 2021-06-30 21:08:59 should not* 2021-06-30 21:09:16 The path does indicate some kind of hierarchy 2021-06-30 21:09:26 or is it pure a list of locations to look for gems? 
2021-06-30 21:09:36 i think so 2021-06-30 21:09:39 im not 100% sure 2021-06-30 21:10:02 it will select a gem from whichever locations are available 2021-06-30 21:10:46 maybe a dependency conflict which causes bundler to deselect 13.0.3? 2021-06-30 21:10:50 and the gem is inside the bundle 2021-06-30 21:12:19 its probably because we use a single bundle for multiple projects 2021-06-30 21:12:33 probably not supported 2021-06-30 21:13:11 I suppose 2021-06-30 21:13:18 though 2021-06-30 21:13:24 gems are versioned 2021-06-30 21:13:57 it should be able to select the versions that it needs for each project 2021-06-30 21:19:42 i see some small differences in bundle env 2021-06-30 21:21:10 think we need to debug this another day 2021-06-30 21:21:13 its getting late 2021-06-30 21:21:20 yup 2021-06-30 21:24:33 path 2021-06-30 21:24:33 Set for your local app (/usr/local/bundle/config): "vendor/bundle" 2021-06-30 21:24:41 this is not set on production 2021-06-30 21:24:55 https://stackoverflow.com/questions/23801899/bundlergemnotfound-could-not-find-rake-10-3-2-in-any-of-the-sources
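One plausible reading of the rake failure above (hypothetical, not a confirmed diagnosis): with two rake versions on the gem search path, an activation that honors path order can pick 13.0.1 from the earlier directory first and then fail the lockfile's 13.0.3 requirement. A toy sketch of that failure mode, not RubyGems' actual resolution logic:

```python
# gem_path mirrors the observed search order: the system gem dir
# (with rake 13.0.1) comes before /usr/local/bundle (with rake 13.0.3).
def activate(gem, required, gem_path):
    """gem_path: list of (directory, {name: version}) pairs in search order.
    Takes the first directory containing the gem, then refuses if the
    version there does not match what the lockfile pins."""
    for directory, gems in gem_path:
        if gem in gems:
            found = gems[gem]
            if found != required:
                raise LookupError(
                    f"Could not find {gem}-{required} in any of the sources "
                    f"(picked {gem}-{found} from {directory} first)")
            return directory, found
    raise LookupError(f"{gem} not installed")

GEM_PATH = [
    ("/usr/local/lib/ruby/gems/2.7.0", {"rake": "13.0.1"}),  # earlier in path
    ("/usr/local/bundle",              {"rake": "13.0.3"}),  # lockfile's version
]
```

Under this model, activate("rake", "13.0.3", GEM_PATH) raises the same shape of error as the build ("Could not find rake-13.0.3 in any of the sources"), while removing or demoting the stale 13.0.1 would let it succeed, which is consistent with the stackoverflow thread linked above.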