2024-03-01 08:53:10 https://appealforassistance.notion.site/Appeal-for-Assistance-abcca31346e944b38e09ebabb4208152
2024-03-01 10:58:05 clandmeter: would it be possible to get a login to the milk pioneer boxes?
2024-03-01 11:19:17 ncopa: 172.16.30.2,3
2024-03-01 11:20:35 thanks
2024-03-01 11:20:44 where are the kernel sources/patches?
2024-03-01 11:21:25 oh, we have a gitlab runner up already?
2024-03-01 11:21:30 yes
2024-03-01 11:21:38 one on each box
2024-03-01 11:22:06 but they are not enabled by default?
2024-03-01 11:22:17 they dont show up on the build jobs
2024-03-01 11:28:04 hmm, they should pick up jobs
2024-03-01 11:28:34 does not show up in https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/61489
2024-03-01 11:28:54 the job is still called emulated
2024-03-01 11:29:41 and still needs to be manually triggered?
2024-03-01 11:30:05 yes, we still needed to adjust the .gitlab-ci.yml
2024-03-01 11:31:58 maybe we can rename it (remove the -emulated in the name), and enable it by default
2024-03-01 11:32:04 and make sure the tests run properly
2024-03-01 11:32:17 that was the idea, but I guess carlo didn't get to that yet
2024-03-01 11:33:36 i see that there are nvme related kernel messages on one of the boxes, but not the other
2024-03-01 11:33:49 i suppose those are the "frequent IO stalls"
2024-03-01 11:34:06 yeah, carlo mentioned there being some cheap chinese nvmes
2024-03-01 11:34:40 i suppose those could be replaced with something better?
2024-03-01 11:35:59 He was not sure if it was the disks or the pci drivers
2024-03-01 11:51:24 https://github.com/sophgo/linux-riscv/issues/104#issuecomment-1939251254
2024-03-01 11:51:27 yup
2024-03-01 12:01:42 i wonder if we should maintain the sophgo kernel similar to how we do with rpi
2024-03-01 12:02:26 that we create a diff from current release
2024-03-01 12:02:52 makes it easier to rebase to current release
2024-03-01 12:02:59 which is 6.1.79
2024-03-01 12:03:17 maybe I should try to rebase the patches for the linux-6.6.y branch
2024-03-01 12:05:28 seems like there have been no commits to the sg2042-dev branch the last two months
2024-03-01 12:05:40 but 2260-pld has commits
2024-03-01 12:15:56 ncopa: clandmeter maintains linux-sophgo in testing
2024-03-01 12:16:27 i saw. I was looking into upgrading it to 6.1.79
2024-03-01 12:17:23 i have a patch on top of 6.1.79 for it now: https://dev.alpinelinux.org/archive/sophgo-patches/
2024-03-01 12:17:48 built same way as https://dev.alpinelinux.org/archive/rpi-patches/
2024-03-01 12:18:48 you want it in main?
2024-03-01 12:18:57 not necessarily
2024-03-01 12:19:12 i just want a convenient way to keep it up to date
2024-03-01 12:19:44 then you can add the patch to linux-sophgo I think
2024-03-01 12:20:01 thats what im doing
2024-03-01 12:20:14 i mean, i do it similar to what I do with rpi
2024-03-01 12:20:45 I thought to use their patches for 6.8 and add them to 6.8 mainline when it is released
2024-03-01 12:21:15 6.8 has a lot of features for riscv
2024-03-01 12:21:16 meaning I checkout a new branch based on linux-sophgo 6.1.y (currently 6.1.72) and then I merge in v6.1.79
2024-03-01 12:21:38 ncopa: I think so
2024-03-01 12:22:11 but for me access to the sophgo lxc doesn't work for some time
2024-03-01 12:22:16 after that i do a diff against v6.1.79 and upload it to the dev.a.o archive
2024-03-01 12:22:34 i created a new lxc for me on the other host
2024-03-01 12:22:38 sounds like a sound workflow
2024-03-01 12:22:54 i already do something similar for rpi
2024-03-01 12:23:13 linux-starfive is similar
2024-03-02 06:08:10 related to what you were talking about: https://www.cnx-software.com/2024/03/01/scaleway-hosted-risc-v-servers/
2024-03-02 08:43:42 so they probably have 16 3U chassis with each chassis having 6 cluster boards holding 7 riscv modules. impressive.
2024-03-02 08:43:51 the rack is probably not very deep
2024-03-02 09:53:42 ncopa: i think the 6.6 branch has issues with numa
2024-03-02 09:53:56 or at least related to cpu
2024-03-02 09:54:26 as smp seems non-functional, as only cpu0 will come online
2024-03-02 13:50:13 6.6 needs to be 6.8
2024-03-04 11:03:56 clandmeter: so which kernel branch do you think we should aim for?
2024-03-04 11:04:01 6.1.y?
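The merge-and-diff workflow described above (branch from the vendor tree, merge the current stable tag, then diff against that tag to get one flat patch) can be sketched roughly like this. It is simulated here on a throwaway repo so it runs anywhere; in reality the repo would be the sophgo linux-riscv tree plus the upstream stable tags, and all names below are illustrative:

```shell
#!/bin/sh -e
# Simulate the rpi-style patch workflow on a scratch repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main kernel && cd kernel
git config user.email you@example.org
git config user.name you

# pretend upstream stable release v6.1.72
echo 'stable 6.1.72' > Makefile
git add Makefile && git commit -qm 'fake v6.1.72' && git tag v6.1.72

# vendor branch (think linux-sophgo 6.1.y) carrying the board patches
git checkout -qb sophgo-6.1.y
echo 'sg2042 support' > sg2042.c
git add sg2042.c && git commit -qm 'vendor: sg2042 patches'

# upstream moves on to a newer stable release
git checkout -q main
echo 'stable 6.1.79' > Makefile
git commit -qam 'fake v6.1.79' && git tag v6.1.79

# merge the new stable tag into the vendor branch...
git checkout -q sophgo-6.1.y
git merge -q -m 'merge v6.1.79' v6.1.79

# ...then diff against the tag: one flat patch to upload to the archive
git diff v6.1.79 > sophgo-6.1.79.patch
```

The resulting single patch can then be applied on top of the plain stable tarball in the APKBUILD, the same way the rpi-patches archive is consumed.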
2024-03-04 11:09:58 we could try 6.8 which will be released next monday probably
2024-03-04 11:10:08 with patches ofc
2024-03-04 13:41:59 ncopa: i think for now we should stick to 6.1 until there is a msg about proper 6.8 support
2024-03-04 13:48:58 some info about progress on sophgo hardware: https://github.com/sophgocommunity/SG2042-Newsletter
2024-03-04 19:08:23 ncopa: sophgo just rebased on top of .80
2024-03-04 19:18:17 ah, great! I was thinking to refactor the APKBUILD a bit to use an external patch similar to what we do with RPI
2024-03-06 11:05:47 clandmeter: I'd like to get the riscv64 machines up and running this and next week. get the CI enabled by default and have the official builder running.
2024-03-06 11:06:14 How do I rename the gitlab runner (remove the -emulated suffix) and enable it to also run tests?
2024-03-06 11:19:24 ncopa: the job name is defined in .gitlab-ci-yaml
2024-03-06 11:19:35 .gitlab-ci.yml
2024-03-06 11:21:26 thank you! I'm renaming it
2024-03-06 11:22:29 do you mind if I remove the `when: manual`?
2024-03-06 11:22:45 No
2024-03-06 11:23:20 ok. Lets see what breaks :)
2024-03-06 11:29:41 would be nice if I can connect to my lxc there
2024-03-06 11:31:37 On the pioneer boxes? Why can't you?
2024-03-06 11:32:19 172.16.30.101/102
2024-03-06 11:36:19 ikke: why? because I use the old IP address ;p
2024-03-06 11:36:40 Something is weird with the riscv64 CI: https://gitlab.alpinelinux.org/alpine/aports/-/jobs/1294938
2024-03-06 11:37:07 it cannot set permissions on the files
2024-03-06 11:37:21 ikke: thanks
2024-03-06 11:37:38 ncopa: we had similar issues with armv7 and armhf, upgrading the OS (docker?) fixed it
2024-03-06 11:37:49 ncopa: possibly related to the musl 1.2.5 upgrade
2024-03-06 11:38:20 The problem is that these boxes are running edge, so that's not the problem
2024-03-06 11:38:24 musl-1.2.4_git20230717-r6 < 1.2.5-r0
2024-03-06 11:38:28 hmm
2024-03-06 11:38:33 its not musl
2024-03-06 11:38:58 but there is a new docker available
2024-03-06 11:39:03 I'll try to upgrade docker
2024-03-06 11:39:16 i'll upgrade the system
2024-03-06 11:44:36 updating docker appeared to solve it
2024-03-06 11:44:38 thanks!
2024-03-06 11:45:38 ncopa: It's musl in the container of course that's the problem, not on the host
2024-03-06 11:45:59 ah, ofc :)
2024-03-06 11:46:48 It's similar to the issues we had a couple of years ago with new syscalls being blocked by docker with EPERM
2024-03-06 11:46:55 fchmodat2(3, "direnv-2.34.0/test/scenarios/symlink-dir/bar", 0755, AT_SYMLINK_NOFOLLOW) = -1 EPERM (Operation not permitted)
2024-03-06 11:47:03 right
2024-03-06 11:47:13 im glad it was fixed so quickly
2024-03-06 11:47:38 Interestingly we do not have the issue on all arches
2024-03-06 11:48:01 ncopa: I'll add monitoring this evening
2024-03-06 11:48:46 awesome!
2024-03-06 15:29:23 the PCI or nvme driver is the problem for the occasional delays
2024-03-06 15:48:12 do we run check() on riscv64? or only on the CI?
2024-03-06 15:48:16 *now
2024-03-06 16:01:01 AFAIK the build-edge-riscv64 is still emulated and without tests
2024-03-06 16:11:18 hmhmhm, the Go riscv64 CI seems to run check but it doesn't install checkdepends (causing check() to fail) https://gitlab.alpinelinux.org/alpine/aports/-/jobs/1295040/raw
2024-03-06 16:12:42 aaaahhhhh
2024-03-06 16:13:02 there is a switch/case which unsets checkdepends on riscv64 :D
2024-03-06 16:16:55 where?
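Coming back to the CI job change from earlier (dropping the -emulated suffix and the manual trigger): in .gitlab-ci.yml it would look roughly like the fragment below. This is an illustrative sketch rather than the real file; the template name is an assumption, and the tags are the ones quoted for the runners further down in this log:

```yaml
# sketch only -- not the actual .gitlab-ci.yml job definition
build-riscv64:              # was: build-riscv64-emulated
  extends: .build-apkbuild  # assumed shared job template
  # `when: manual` removed, so the job starts on every pipeline
  tags:
    - docker-alpine
    - ci-build
    - riscv64
```

With `when: manual` gone, GitLab falls back to the default `when: on_success`, which is what makes the job run automatically.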
2024-03-06 16:17:25 https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/61727/diffs?commit_id=edf251bbd0f5859a03defe38f6181d9236fc9768
2024-03-06 16:17:38 sorry, I meant within the Go aport (in case that wasn't clear)
2024-03-06 16:17:47 aha
2024-03-06 16:18:11 but cool stuff that we can run check() on the CI now!
2024-03-06 16:18:35 excited to see how many bugs we find in upstream software with this :)
2024-03-06 16:23:50 I was trying to remember how (where) we disabled checks in CI in the first place
2024-03-06 16:24:19 On the builder, we set ABUILD_BOOTSTRAP=1
2024-03-06 21:01:14 yup
2024-03-06 21:01:27 btw, there is now a 6.6 branch https://github.com/sophgo/linux-riscv/tree/sg2042-dev-6.6
2024-03-06 21:01:35 but it does not carry many patches
2024-03-06 21:06:39 ncopa: sure, but please note im very busy and will not be in the office all the time.
2024-03-06 21:07:04 so in case the boxes hang somebody will need to reset them
2024-03-06 21:07:49 there is a wifi socket between the 2 boxes, so you can power them off/on remotely
2024-03-07 20:02:27 how many riscv64 runners do we have? one on each box or only a single one on one of the boxes?
2024-03-07 20:02:49 There should be 2, but I only see one active
2024-03-07 20:03:33 thats why im asking
2024-03-07 20:10:10 the .3 is running a job
2024-03-07 20:10:10 Not sure why yet
2024-03-07 20:10:43 did you just restart the running on .2?
2024-03-07 20:11:17 the *runner
2024-03-07 20:11:47 yes
2024-03-07 20:13:53 they both show up now
2024-03-07 20:16:46 but it does not appear to take any jobs?
2024-03-07 20:17:39 tags were wrong, spaces instead of commas
2024-03-07 20:18:07 i just saw that
2024-03-07 20:18:16 how, where are those set?
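Going back to the switch/case that unset checkdepends: APKBUILDs are plain shell, so the pattern that bit the CI looks roughly like this. This is a hedged sketch, not the actual Go APKBUILD; the variable values are illustrative:

```shell
# Sketch of an APKBUILD arch switch that disables tests on riscv64.
CARCH=riscv64            # abuild sets this to the target architecture
options=""
checkdepends="perl git"  # dependencies only needed by check()

case "$CARCH" in
riscv64)
	# meant for a builder that never runs tests: skip check()
	# entirely and drop its dependencies
	options="$options !check"
	checkdepends=""
	;;
esac

echo "options='$options' checkdepends='$checkdepends'"
```

This explains the failure above: once the CI started running check() anyway, the emptied checkdepends meant the test dependencies were never installed, so check() broke.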
2024-03-07 20:18:27 docker-alpine ci-build riscv64
2024-03-07 20:18:36 Or do you mean, technically
2024-03-07 20:18:40 yes
2024-03-07 20:19:07 if I wanted to fix that, where do i change the tags for that specific runner
2024-03-07 20:19:41 https://gitlab.alpinelinux.org/admin/runners/190#/
2024-03-07 20:19:45 edit button top right
2024-03-07 20:20:24 aha, admin area, runners. got it
2024-03-07 20:20:52 These are instance runners, so you manage them in the admin panel
2024-03-07 20:21:56 let me try to cancel one of the pending jobs and restart
2024-03-07 20:22:31 I saw it already picking up jobs
2024-03-07 20:24:30 ha, the other job got picked. thats why the one i tried to start still went to pending :)
2024-03-07 20:24:48 ok good. they are both running now
2024-03-07 20:25:20 If the boxes can handle it, we could even increase it to 2 concurrent jobs per runner
2024-03-07 20:25:46 The runners are set for 2, but it's globally limited at 1 (by default; you can / could not change that with a cli argument)
2024-03-07 20:28:56 hm
2024-03-07 20:29:34 they have 64 cpus, and i think the single core performance may be the bottleneck
2024-03-07 20:30:40 might help if we have cpu usage statistics graphs
2024-03-07 20:30:51 we can leave them as is for now i think
2024-03-07 20:31:08 now we have two, which should double the capacity
2024-03-07 20:32:32 nod
2024-03-09 11:59:23 the cpus support numa, at least with this kernel
2024-03-09 11:59:30 so you could partition them
2024-03-15 09:37:43 our build-edge-riscv64 is still qemu emulated. Can i move it to one of the milk-v machines?
2024-03-15 09:45:05 not sure everything is 100% with firewall and things. docker vs lxc
2024-03-15 09:45:25 havent touched them for a few weeks due to work
2024-03-15 10:02:23 i have a ncopa-edge-riscv64 there in lxc, and it appears to work ok
2024-03-15 13:06:39 be my guest to set one up
2024-03-15 13:06:45 not sure your keys are added
2024-03-15 13:08:56 ah i see they are added
2024-03-18 09:43:50 Hello there! Quick and simple question: I have prepped a "bootstrap container" on my RISC-V board, VisionFive2, in anticipation that I could use that to build my way through the 3.19 aports branch - but I can't figure out how to bootstrap abuild properly. I do have a key, but couldn't find out how to create a repo - so I could reproduce the alpine:3.19 docker image to use it with k3s and many other containers that use it as
2024-03-18 09:43:51 a base. So how do I properly set up a repo to store finalized APKs so I can build a lot of packages from aports/main? Thanks!
2024-03-18 09:43:52 Image is here https://hub.docker.com/r/ingwiephoenix/openadk-abuild
2024-03-18 09:44:08 I was directed here from the #alpine-devel channel - mainly because of #
2024-03-18 09:44:24 ...of +M being set and NickServ not playing nice with my Matrix login.
2024-03-18 09:50:36 IngwiePhoenix[m]: have you seen https://arvanta.net/alpine/alpine-on-visionfive/
2024-03-18 09:51:28 No, i haven't - it never showed up when I tried to find out why there was no tagged release for riscv64. Thanks, will take a look!
2024-03-18 10:02:56 Interesting, they use the buildrepo command - which i haven't found in either apk-tools nor abuild. Has it been moved or deprecated perhaps?
2024-03-18 10:03:01 ref https://gitlab.alpinelinux.org/nmeum/alpine-visionfive/-/blob/main/build-pkgs.sh?ref_type=heads
2024-03-18 10:03:46 oh, nvm. i kept trying to use "repobuild". That one is on me... xD
2024-03-18 10:05:50 a tagged release doesn't exist because we don't have a hardware builder for riscv. maybe we will have it for the 3.20 release
2024-03-18 10:06:40 IngwiePhoenix[m]: all needed packages for VF2 we have in aports
2024-03-18 10:09:10 Setting up a screen right now - will start the build shortly. Constantly running into the issue of the alpine:3.19 tag not being there for containers is annoying, so I will just see to make it myself now lol.
2024-03-18 10:11:42 also, all packages are from upstream except the kernel for VF2, because the kernel still needs some number of patches, about 25
2024-03-18 10:21:56 I know. :) Been running a custom 6.6.0 (their upstream branch) for a while now.
2024-03-18 10:25:51 Hm... buildrepo wasn't the solution either.
2024-03-18 10:25:52 ERROR: Unable to read database state: No such file or directory
2024-03-18 10:25:52 ERROR: Failed to open apk database: No such file or directory
2024-03-18 10:26:30 I want to build a repo, so there literally isn't one (well, there is edge, but i am trying to get 3.19...). How can I tell apk to shut up and fall back to just building the missing deps instead?
2024-03-18 16:07:32 IngwiePhoenix[m]: you could just retag the :edge image as :3.19... I doubt you'd have any issues
2024-03-18 16:12:20 bootstrapping a new release is a complicated process that I'm not sure is thoroughly publicly documented
2024-03-19 05:00:03 Documented it is not - but I am very interested in learning how it works.
2024-03-19 05:00:30 Sure - the 3.19 tagged container is what I am after, mainly. But I would also like to learn more about how the different OSes are put together and built.
2024-03-19 05:00:53 So... Bootstrapping the entire 3.19 Alpine seems like a good rabbit hole to go down. ^^
2024-03-19 05:02:01 The process does not seem to be too complicated, the tooling is pretty nice. Just a few things are missing (setting up the initial repository, pointing abuild/buildrepo at that, building missing dependencies.)
2024-03-19 05:02:52 I kinda wish there was like an apk-src that just invoked abuild each time an unmet dependency is discovered; would probably help. But - anyway. What other tools and documents will I need to get this project off the ground?
2024-03-19 05:03:15 I do have a key, aports, abuild, apk and a suitable toolchain. Anything else?
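On the local-repo question above: `abuild -r` writes built packages to `$REPODEST/<repo>/$CARCH` and (re)generates a signed APKINDEX.tar.gz there, and that directory is what gets listed in /etc/apk/repositories. A hedged sketch of the layout, using a temp dir instead of the usual ~/packages so it is self-contained (all paths illustrative):

```shell
#!/bin/sh -e
# Sketch of the repository layout abuild expects. REPODEST normally
# defaults to ~/packages (configurable via /etc/abuild.conf or
# ~/.abuild/abuild.conf); a temp dir stands in for it here.
REPODEST=$(mktemp -d)/packages
CARCH=riscv64

# `abuild -r` (run inside aports/main/<pkg>) drops .apk files into
# $REPODEST/<repo>/$CARCH and regenerates APKINDEX.tar.gz there
mkdir -p "$REPODEST/main/$CARCH"

# to consume the repo, the repo dir goes into /etc/apk/repositories:
echo "$REPODEST/main" > repositories.example
cat repositories.example
```

The "Unable to read database state" errors above usually mean apk has no initialized database at the root it is pointed at; buildrepo expects to run inside a working Alpine environment (or chroot) where /etc/apk and the apk database already exist.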
2024-03-22 19:27:28 I've been testing linux-starfive 6.8 for a few days and didn't notice any problems, so I think it should be pushed to aports
2024-03-22 22:34:00 Push it
2024-03-23 08:01:17 Pushed it yesterday around 20 hours CET, now we wait for the builder to build it all
2024-03-23 09:26:57 There seems to be some discussion about the vector capabilities of the pioneer cpu
2024-03-23 09:27:38 upstream has disabled it in some branch
2024-03-23 09:29:07 while Linux has just introduced vector support for crypto stuff
2024-03-23 09:34:33 clandmeter: this is for 6.9 linux
2024-03-23 09:47:06 in any case, vector will be disabled in the future it seems
2024-03-23 09:47:41 this pioneer thead is only doing vector v0.71
2024-03-23 09:48:06 while software would assume v1.0 support, so it would crash applications
2024-03-23 09:48:19 https://lore.kernel.org/linux-riscv/20240223-tidings-shabby-607f086cb4d7@spud/
2024-03-23 17:15:22 that doesn't affect you if you're using riscv,isa-extensions and it won't affect "xtheadvector" if you have a compiler that supports that
2024-03-29 11:24:17 Oh, it is our kernel that says it has RVV 1.0 while it in fact does not really have it? https://code.videolan.org/videolan/dav1d/-/merge_requests/1629#note_431594
2024-03-29 12:37:18 im trying to patch ffmpeg's RVV assembly so it works on sophgo
2024-03-29 12:38:31 i dont really know what I'm doing, but I doubt I can make it worse than it is...
2024-03-29 12:44:05 ok next instruction is also invalid
2024-03-29 12:44:14 vlsseg8e8.v v16, (a1), a2
2024-03-29 12:44:24 i think its time to give up :-/
2024-03-29 13:12:33 clandmeter: i think I will rebuild the kernel to latest sophgo git, and apply the disable-RVV patch
2024-03-29 13:13:03 and then try to reboot at least one of the machines and see how it goes
2024-03-29 14:26:13 ncopa: ok go ahead
2024-03-29 14:26:21 its still based on .80
2024-03-29 14:26:33 yeah
2024-03-29 14:26:53 i have it built. im afraid of the upgrade and reboot
2024-03-29 14:26:54 I would not try the other newer branch
2024-03-29 14:27:27 Its not onsite for me, so yes its a bit risky
2024-03-29 14:27:34 btw
2024-03-29 14:27:41 there is another issue
2024-03-29 14:28:03 lets wait with the reboot til you are onsite
2024-03-29 14:28:25 if the rumors are true, which i think they are, the dtb is not loaded on kernel load
2024-03-29 14:28:45 zsbl will load the dtb from fw
2024-03-29 14:30:11 and when u-root boots the os it does not load the kernel-shipped dtb but just uses the fw version
2024-03-29 14:31:42 i have not verified this behavior myself, so we would need to test it.
2024-03-29 14:49:50 \o/ my ffmpeg patch seems to work
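One way to test the firmware-dtb theory above would be to compare the device tree the running kernel actually used (exposed at /sys/firmware/fdt when the kernel supports it) against the dtb shipped with the kernel package. A hedged sketch: the shipped-dtb path in the comment is an assumption that varies per kernel package, so the helper is exercised on temp files here to keep the example runnable anywhere:

```shell
#!/bin/sh -e
# Compare two flattened device trees by hash. On a pioneer box one
# would call it as, for example:
#   dtb_matches /sys/firmware/fdt /boot/dtbs/sophgo/sg2042-milkv-pioneer.dtb
# (the second path is an assumed location, not a verified one)
dtb_matches() {
	a=$(sha256sum < "$1") && b=$(sha256sum < "$2")
	[ "${a%% *}" = "${b%% *}" ]
}

# demo on temp files so the sketch runs without the hardware
tmp=$(mktemp -d)
printf 'fdt-v1' > "$tmp/fw.dtb"
printf 'fdt-v1' > "$tmp/shipped.dtb"
printf 'fdt-v2' > "$tmp/other.dtb"

dtb_matches "$tmp/fw.dtb" "$tmp/shipped.dtb" && echo "same dtb"
dtb_matches "$tmp/fw.dtb" "$tmp/other.dtb" || echo "firmware dtb differs"
```

If the hashes differ on the real machine, that would support the claim that the firmware-provided dtb wins over the kernel-shipped one.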