2021-03-01 11:23:59 ncopa: builder disks are quite full
2021-03-01 11:24:09 I've made some space, but maybe there is something more that can be removed
2021-03-01 14:22:18 hi
2021-03-01 14:22:20 which builder?
2021-03-01 14:22:37 nld9 primarily
2021-03-01 14:22:49 But also usa4
2021-03-01 14:24:17 i think we can archive the build-2-*
2021-03-01 14:24:33 and build-edge-uclibc*
2021-03-01 14:26:03 reminds me, we need a new host for ancient.alpinelinux.org
2021-03-01 14:26:10 to archive the old releases
2021-03-01 14:26:28 only needs big disks
2021-03-01 14:27:08 currently it is only ~200G IIRC but I'd like to archive some more old releases to free space for new ones
2021-03-01 15:05:57 i have cleaned up nld9 and usa4 a bit
2021-03-01 15:07:20 deleted old temp dirs isotmp.*
2021-03-01 15:07:27 deleted *.xdelta files
2021-03-01 15:07:34 deleted *_rc*
2021-03-01 15:07:53 and aports/*/*/src aports/*/*/pkg
2021-03-01 15:08:03 I also cleaned up the compress folder on distfiles
2021-03-01 15:08:11 good, thanks
2021-03-01 15:09:21 were we getting a new, big server for our two arm build servers?
2021-03-01 15:10:26 Yes, one single build server
2021-03-01 18:09:09 ncopa: https://ibb.co/1rVQ0jS :-)
2021-03-02 09:40:51 It looks like we're not the only ones with disk space issues
2021-03-04 21:08:50 New gitlab security release
2021-03-04 21:35:15 hmm, annoying. Apparently the source refers to gitlab-workhorse 8.58.4, but the latest tag available is 8.58.2
2021-03-05 09:46:27 ikke: #12494 Do you know if that's just fallout from switching git.a.o over to gitlab, or if Gitlab actually changes checksums of tags over time (possibly if git versions change)?
2021-03-05 09:48:11 Cogitri: I have a feeling gitlab is less stable than github
2021-03-05 09:48:30 Yeah :/
2021-03-05 20:09:42 CIs are stuck
2021-03-05 20:25:18 mps: did you pay them?
2021-03-05 20:26:30 ikke: ?
2021-03-05 20:26:46 mps: something about a sense of humor :P
2021-03-05 20:27:30 heh, strange sense, I thought I really had to pay some company
2021-03-05 20:28:01 No, more like the runners
2021-03-05 20:28:33 but I think you deserve 'something' for the hard work on them
2021-03-10 12:35:51 I'm gonna shut down and decommission ancient.alpinelinux.org today. I have the data backed up here locally if we ever find somewhere else to host it
2021-03-10 12:37:57 Ok
2021-03-10 12:56:03 sorry guys, i am currently unable to help.
2021-03-11 11:30:45 clandmeter: We'll try to cover as best as we can :-)
2021-03-12 10:13:01 yeah, no worries
2021-03-12 21:26:21 ugh
2021-03-13 12:26:38 Do you use Ansible for deployments or something else?
2021-03-13 12:29:02 https://gitlab.alpinelinux.org/alpine/infra/ansible
2021-03-13 12:29:29 That 404s, I guess I don't have the permissions to look at that
2021-03-13 12:29:46 apparently
2021-03-13 12:29:56 Thanks for the info anyway, was just curious what you use since I figured manually installing things works for now but will be a massive pain in case I have to re-install at some point
2021-03-13 12:30:07 As in manually installing things on my servers
2021-03-13 12:30:22 We mostly do things manually still
2021-03-13 12:30:35 but yes, having something like that would be helpful
2021-03-13 12:30:51 not only for new installations, but also to keep the existing infra in line
2021-03-13 12:31:19 Yup
2021-03-14 19:05:31 clandmeter: we desperately need some kind of spam filter for the mirror requests
2021-03-14 19:08:30 We do?
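For reference, the cleanup ncopa describes above corresponds roughly to the commands below. This is only a sketch: the exact directory layout on nld9/usa4 is not in the log, so the paths are assumed.

    # run as the build user on the affected builders; paths are illustrative
    rm -rf ~/isotmp.*                        # stale temporary ISO build dirs
    find ~/ -name '*.xdelta' -delete         # xdelta diffs between releases
    rm -rf ~/*_rc*                           # release-candidate leftovers
    rm -rf ~/aports/*/*/src ~/aports/*/*/pkg # abuild work dirs (unpacked sources, staged packages)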
2021-03-14 19:08:44 yes
2021-03-14 19:09:03 I just cleaned up a bunch of spam issues again
2021-03-14 19:09:22 (gmail normally filters out the spam, but they do end up on gitlab)
2021-03-21 10:08:17 ikke: would it be possible to set up a gitlab org + repo for collecting proposals for alpineconf (which i tentatively proposed for the weekend of may 15-16)
2021-03-21 12:08:37 Ariadne: sure, can do that
2021-03-21 12:09:03 Ariadne: what's the proposed structure?
2021-03-21 12:09:14 proposed structure for what?
2021-03-21 12:09:20 the org?
2021-03-21 12:09:22 yes
2021-03-21 12:09:31 i'm happy to manage it for now
2021-03-21 12:09:43 So @teams/alpineconf?
2021-03-21 12:09:46 but yes, would be good to plan something less ad-hoc next time
2021-03-21 12:09:47 yes
2021-03-21 12:09:49 sounds good
2021-03-21 12:10:03 And the repo itself?
2021-03-21 12:10:12 alpine/conf?
2021-03-21 12:10:15 or something else
2021-03-21 12:10:19 alpine/alpineconf-cfp
2021-03-21 12:10:21 ok
2021-03-21 12:10:22 or something
2021-03-21 12:14:48 Created the group and repo, internal at first until we're ready to make it public
2021-03-21 12:22:16 go ahead and make it public i guess
2021-03-21 12:24:04 actually, i guess i will add directions first
2021-03-21 12:24:05 ;)
2021-03-21 12:24:59 Yeah, that was the idea
2021-03-21 12:39:34 done
2021-03-21 12:41:09 Public now
2021-03-21 12:45:09 hopefully we can get 2 days worth of content (i think we can)
2021-03-25 13:22:28 we are running out of disk space and i will likely make a release later today
2021-03-25 13:30:14 i don't think we have space for alpine 3.14 either
2021-03-25 13:32:08 I saw a recent package that's about 800M
2021-03-25 13:34:16 We need some plan, each release grows
2021-03-25 13:37:01 yeah
2021-03-25 13:37:20 with more rust and go stuff things will grow faster
2021-03-25 13:39:04 We could see how much using -w -s for go would help
2021-03-25 14:11:25 not much
2021-03-25 14:11:56 it is a little bit, so it might be worth it
2021-03-25 14:23:19 usa4 '14:22:58 up 97 days, 14:53, 0 users, load average: 331.99, 324.73, 320.05'
2021-03-25 14:23:29 331
2021-03-25 14:29:01 nice :)
2021-03-25 14:29:18 i suppose we'd make good use of those new arm machines
2021-03-25 14:29:56 heh, now load average: 468.59, 412.32, 363.13
2021-03-25 14:31:05 building rpi kernel and openssl for every stable branch
2021-03-25 14:31:21 oh wait, load average: 545.77, 450.12, 380.49
2021-03-25 14:31:34 waiting for 1000 :)
2021-03-25 14:34:57 i intend to push 3.13.3 today. i hope it does not blow up the nld5-dev1 disk space
2021-03-25 14:35:06 oh, gnome 40 is coming to edge as well
2021-03-25 14:36:03 ncopa: note it's a dedicated mirror volume that is full
2021-03-25 14:36:24 i hope it's not important....
2021-03-25 16:54:01 nl3.alpinelinux.org is backed by it, and it's not updated atm
2021-03-27 10:23:21 clandmeter: We have an issue atm building gitlab images due to a gem (mimemagic) having pulled all old versions and put out a relicensed version (GPL-2.0)
2021-03-27 10:24:09 https://gitlab.com/gitlab-org/gitlab/-/merge_requests/57487/diffs
2021-03-27 14:47:55 managed to install usa9, but network is not working (it worked in the rescue os, but after boot, ping gives invalid address)
2021-03-27 18:09:36 Is Alpine's GitLab configured in some highly-available manner, or is it just one instance? (apologies to Carlo for the redundant message)
2021-03-27 19:00:26 Thalheim: just a single instance, nothing HA
2021-03-27 22:41:26 thanks. I'm looking into ways to (avoiding their "reference architecture") set up a simpler implementation
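On the Go binary size point from March 25 above: -s and -w are linker flags that strip the symbol table and DWARF debug info. A minimal sketch (the package path and output name are illustrative):

    # -s drops the symbol table, -w drops DWARF debug info
    go build -ldflags="-s -w" -o example ./cmd/example
    # in an APKBUILD the same flags would typically be passed in build(), e.g.
    #   go build -ldflags "-s -w" -o "$pkgname" .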
2021-03-28 06:01:52 Thalheim: we run gitlab in docker
2021-03-28 06:01:57 works well for us
2021-03-28 06:02:09 we build our own docker images
2021-03-28 07:32:59 ikke: I do that as well (GL inside docker) and the only issue I had was a failed upgrade that I managed to recover from. in theory, /var/log/gitlab, /etc/gitlab, and /var/opt/gitlab are the only important directories.
2021-03-28 07:34:44 However, if there were a catastrophic failure, it's not clear how to recover gracefully, even if the risk of actually /losing/ data is low
2021-03-28 07:40:52 We store the actual data outside of docker
2021-03-28 07:41:20 We regularly recreate all the containers
2021-03-28 07:41:35 but bind mount the data / config folders
2021-03-28 07:42:45 The host itself is backed up every night
2021-03-28 08:26:25 I may play with https://criu.org/Docker sometime soon to try live migration, snapshot/restore, etc. and report results.
2021-03-30 10:43:57 seems that our lxc containers do not run with alpine 3.13
2021-03-30 10:44:11 it could be kernel 5.10 that triggers it or it could be lxc-4.0.6
2021-03-30 10:44:47 the workaround is to comment out or remove `lxc.cap.drop = sys_admin` from the config
2021-03-30 10:44:49 ACTION is glad he did not attempt to upgrade infra hosts yet
2021-03-30 10:44:57 oh _remove_ it
2021-03-30 10:45:14 but isn't that a security issue?
2021-03-30 10:45:20 possibly yes
2021-03-30 10:45:36 i wonder if we should investigate moving to lxd or something
2021-03-30 10:46:01 interesting, I have lxc 4.0.6 on arm64 and x86_64 on 3.13, no problems
2021-03-30 10:46:19 do you have lxc.cap.drop = sys_admin ?
2021-03-30 10:46:19 hmm
2021-03-30 10:46:25 no
2021-03-30 10:46:33 that's why
2021-03-30 10:46:40 we restrict the sys_admin
2021-03-30 10:46:46 remove the restriction and it works
2021-03-30 10:47:21 i think with sys_admin capabilities you basically have access to the host
2021-03-30 10:50:02 so i wonder if we might want to create a VM for the developer containers
2021-03-30 10:50:29 or maybe investigate if we could use lxd
2021-03-30 10:50:51 lxc has support until 2025 as i understand
2021-03-30 10:51:13 lxd sounds a lot more complex
2021-03-30 10:52:03 But why would lxd work if lxc already is not working?
2021-03-30 10:52:19 "LXD isn't a rewrite of LXC, in fact it's building on top of LXC to provide a new, better user experience."
2021-03-30 10:53:44 2025? I have the impression the world will end before that
2021-03-30 10:54:04 because lxc is currently broken
2021-03-30 10:54:23 basically, my question is: do we try to fix lxc or do we look for something new
2021-03-30 10:54:28 as I read, lxd is more 'complicated' than lxc, though I didn't test it
2021-03-30 10:54:38 it is also implemented in go
2021-03-30 10:54:55 does that mean good or bad?
2021-03-30 10:54:57 which would have been a problem when mips64 did not support go
2021-03-30 10:54:59 bad
2021-03-30 10:55:15 it means we need a go compiler for every new arch
2021-03-30 10:55:20 so, the proper solution is to try to fix lxc
2021-03-30 10:55:39 or provide vms as dev machines
2021-03-30 10:55:56 but vms are heavier
2021-03-30 10:56:00 vms? qemu you mean?
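A minimal sketch of the bind-mount layout ikke describes above. Alpine builds its own images and runs more services, so the upstream omnibus image and the host paths here are only for illustration:

    docker run -d --name gitlab \
        --publish 80:80 --publish 443:443 --publish 2222:22 \
        --volume /srv/gitlab/config:/etc/gitlab \
        --volume /srv/gitlab/logs:/var/log/gitlab \
        --volume /srv/gitlab/data:/var/opt/gitlab \
        gitlab/gitlab-ce:latest
    # the container can be removed and recreated at will; all state lives in the
    # bind-mounted host directories, which are what the nightly host backup covers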
2021-03-30 10:56:03 yes
2021-03-30 10:56:04 yeah
2021-03-30 10:56:16 right now we have dev containers mixed with builder containers on a single host
2021-03-30 10:56:22 for arches other than x86_64
2021-03-30 10:56:28 yes
2021-03-30 10:56:44 So if we cannot drop sys_admin, this is a huge security issue
2021-03-30 10:56:45 we could also provide a single vm with all the dev containers
2021-03-30 10:56:47 then we should try to fix lxc, imo
2021-03-30 10:57:20 A vm would make our network setup more complicated
2021-03-30 10:57:23 wait, 'single vm with all the dev containers'
2021-03-30 10:57:33 what does this mean?
2021-03-30 10:57:58 it means that instead of running lxc on the iron, we run lxc in a vm
2021-03-30 10:58:08 qemu -> devs lxcs
2021-03-30 10:58:14 correct
2021-03-30 10:58:25 it has more overhead
2021-03-30 10:58:34 yes
2021-03-30 10:59:23 hmmm, not a 'one day' decision
2021-03-30 10:59:57 well, we need to return the current arm machines in a couple of weeks, so we can not think too long about it
2021-03-30 11:00:52 i guess the first step is to figure out if the problem was introduced with lxc 4.0.6 or the 5.10 kernel
2021-03-30 11:00:59 We could try different kernels and lxc versions to narrow it down
2021-03-30 11:01:04 ha!
2021-03-30 11:01:11 yeah :)
2021-03-30 11:01:21 let me try lxc-4.0.5 on my desktop
2021-03-30 11:01:36 We can install linux-lts@3.12-main
2021-03-30 11:02:37 I can test lxc with cap_sys_admin later today on my local machine
2021-03-30 11:04:26 ikke: on this nld box where my x86_64 lxc is, the kernel is 4.19.x
2021-03-30 11:04:36 yes, that one is not upgraded yet
2021-03-30 11:04:47 4.19.80-0-vanilla
2021-03-30 11:05:08 It's AL 3.10, so we really need to upgrade it
2021-03-30 11:05:13 can you test lxc 4.0.6 there
2021-03-30 11:05:15 mknod: dev/zero: File exists
2021-03-30 11:05:16 lxc-create: a2: lxccontainer.c: create_run_template: 1616 Failed to create container from template
2021-03-30 11:05:33 ncopa: yes, ran into that as well
2021-03-30 11:06:04 ncopa-desktop:/var/lib/lxc# lxc-start -n a2 -F
2021-03-30 11:06:04 lxc-start: a2: conf.c: lxc_mount_auto_mounts: 728 Cross-device link - Failed to mount "/sys/fs/cgroup"
2021-03-30 11:06:10 ok, so it's related to the kernel
2021-03-30 11:06:44 should I install the 3.12 kernel?
2021-03-30 11:06:47 to test
2021-03-30 11:07:06 would be good
2021-03-30 11:07:11 where?
2021-03-30 11:07:14 i was thinking in a vm
2021-03-30 11:07:20 usa9
2021-03-30 11:07:27 it's fast enough to switch
2021-03-30 11:07:28 lets do it in a vm
2021-03-30 11:07:30 ok
2021-03-30 11:15:19 i have created a vm with alpine 3.12
2021-03-30 11:15:30 ok
2021-03-30 11:15:30 created an lxc container
2021-03-30 11:15:35 it works as expected
2021-03-30 11:15:46 upgraded lxc and lxc-libs to the v3.13 repo
2021-03-30 11:15:51 and it still works
2021-03-30 11:16:07 Ok, so definitely kernel?
2021-03-30 11:16:20 i'm pretty sure it is
2021-03-30 11:16:39 fun
2021-03-30 11:16:43 can you also try linux-edge?
2021-03-30 11:20:12 same problem
2021-03-30 11:20:25 ok
2021-03-30 11:23:56 ok. we need to report this upstream
2021-03-30 11:24:03 i will be kinda busy the rest of the day
2021-03-30 11:35:25 ikke: i think we need to enable the firewall for lxcbr0. the containers do not get an ip
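As an aside on the kernel test above: installing the 3.12 kernel next to 3.13 (the linux-lts@3.12-main idea) is done with a tagged repository. A sketch; the tag name is arbitrary:

    # /etc/apk/repositories -- add a tagged entry for the older release
    @v312 http://dl-cdn.alpinelinux.org/alpine/v3.12/main

    # then pin the kernel package to that tag
    apk update
    apk add linux-lts@v312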
2021-03-30 11:36:05 I thought I did, but I'll check later
2021-03-30 11:41:25 Mar 30 11:40:21 usa9-dev1 daemon.warn dnsmasq-dhcp[7022]: Error sending DHCP packet to 172.16.23.108: Operation not permitted
2021-03-30 16:24:40 ikke: i think i fixed the firewall issue on usa9
2021-03-30 16:45:43 Yes, I noticed
2021-03-30 16:46:02 what was it?
2021-03-30 18:44:02 still lxc issues?
2021-03-30 18:48:13 ikke: so the disk space is 2x 800G?
2021-03-30 18:48:23 close to 900
2021-03-30 18:48:49 2x960GB
2021-03-30 18:49:16 clandmeter: yes, on the 5.10 kernel, containers only start with CAP_SYS_ADMIN
2021-03-30 18:49:44 so space is kind of limited on such a performance box
2021-03-30 18:51:01 would be nice to do lvm raid0 to double the size
2021-03-30 18:51:17 just lvm, right?
2021-03-30 18:51:22 I just add the pv to the vg
2021-03-30 18:51:25 only nvme0 is used
2021-03-30 18:51:36 yes, I wanted to wait to hear what ncopa thought about it
2021-03-30 18:51:49 but he's ok to just extend the space
2021-03-30 18:52:07 yes, you can create an lv with raid0 with two members
2021-03-30 18:52:11 VG #PV #LV #SN Attr VSize VFree
2021-03-30 18:52:14 vg0 2 2 0 wz--n- <1.75t 894.25g
2021-03-30 18:52:46 Not sure if raid0 is necessary?
2021-03-30 18:52:47 under the hood it would just use regular linux raid
2021-03-30 18:52:54 nvme is already quite fast
2021-03-30 18:53:11 it's not for speed
2021-03-30 18:53:15 but?
2021-03-30 18:53:38 well ok, yes, the advantage is speed compared to a regular lv :)
2021-03-30 18:55:00 but you lose redundancy with a regular lv, just like with raid0, but i think the performance would be a bit better.
2021-03-30 18:55:30 yes, both options are not redundant
2021-03-30 18:55:38 so we are running debian now :)
2021-03-30 18:55:54 heh, yes
2021-03-30 18:56:04 least bad option :P
2021-03-30 18:56:58 so raid0?
2021-03-30 18:57:16 i don't mind, you can also keep it simple.
2021-03-30 18:57:33 probably depends on the use case of the lv
2021-03-30 18:57:33 Can I change the current LVs to raid0?
2021-03-30 18:57:50 for the os, i don't think it matters at all.
2021-03-30 18:58:17 i think you can convert it
2021-03-30 18:58:22 man lvmraid iirc
2021-03-30 18:58:30 apparently there are 2 ways
2021-03-30 18:58:35 VG #PV #LV #SN Attr VSize VFree
2021-03-30 18:58:40 https://serverfault.com/a/1018353/1615
2021-03-30 18:59:56 so you have mdadm raid or device mapper raid
2021-03-30 19:00:52 https://man7.org/linux/man-pages/man7/lvmraid.7.html
2021-03-30 19:00:58 this is more accurate and has more details
2021-03-30 19:02:57 it was still in active development when i last checked, but that's more than a year ago.
2021-03-30 19:04:10 hmm, why does apk add man not work anymore?
2021-03-30 19:04:28 man-pages?
2021-03-30 19:04:31 mandoc
2021-03-30 19:04:52 aha, and mandoc
2021-03-30 19:05:18 but why not just have man as a virtual?
2021-03-30 19:05:49 confusing, I think
2021-03-30 19:06:19 mandoc or man-db
2021-03-30 19:06:28 apk add man is confusing?
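The actual firewall fix on usa9 never made it into the log (the "what was it?" above goes unanswered). The dnsmasq "Operation not permitted" error is typically the host firewall dropping DHCP/DNS on the bridge; the usual rules look roughly like this (iptables shown for illustration, interface name taken from the log):

    iptables -A INPUT  -i lxcbr0 -p udp --dport 67 -j ACCEPT   # DHCP requests from containers
    iptables -A INPUT  -i lxcbr0 -p udp --dport 53 -j ACCEPT   # DNS queries to dnsmasq
    iptables -A INPUT  -i lxcbr0 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -o lxcbr0 -p udp --dport 68 -j ACCEPT   # DHCP replies back to the containers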
2021-03-30 19:06:50 yes
2021-03-30 19:07:07 because we have mandoc and man-db
2021-03-30 19:08:05 it's confusing that somebody removed the feature of being able to `apk add man` :p
2021-03-30 19:08:34 well, it is confusing whatever we do
2021-03-30 19:09:18 clandmeter: Logical volume vg0/lv_root is already of requested type linear
2021-03-30 19:09:42 lvconvert --type raid0 -m1 /dev/vg0/lv_root
2021-03-30 19:11:10 I need striped
2021-03-30 19:13:24 lvconvert --type striped vg/lv
2021-03-30 19:14:38 clandmeter: if you want/like I will add 'provides="man"' to mandoc (and will not care about confusing other users)
2021-03-30 19:14:39 --stripes not allowed for LV vg0/lv_root when converting from linear to raid1. :(
2021-03-30 19:14:58 (everything for friends)
2021-03-30 19:15:23 mps: nah, i don't want to open another can of worms :)
2021-03-30 19:15:43 ikke: raid1?
2021-03-30 19:16:06 I guess because I specified -m1 earlier?
2021-03-30 19:32:03 https://www.linuxquestions.org/questions/showthread.php?p=6108054
2021-03-30 19:38:32 I cannot get it to convert to raid0 / striped
2021-03-31 06:11:56 good morning
2021-03-31 07:50:40 morning
2021-03-31 08:04:27 o/
2021-03-31 10:13:46 ikke, clandmeter: i'd like to test libvirt on the new arm machine, to manage the vms
2021-03-31 10:14:16 why?
2021-03-31 10:15:34 because i believe it makes it easy/convenient to create/destroy virtual machines
2021-03-31 10:15:37 also temp vms
2021-03-31 10:15:48 i tested a terraform plugin locally
2021-03-31 10:16:16 and it's kinda convenient to create vms with virt-manager
2021-03-31 10:16:25 which works over ssh as well
2021-03-31 10:18:18 so a builder will technically depend on it in the future?
2021-03-31 10:18:37 potentially
2021-03-31 10:18:50 not necessarily
2021-03-31 10:19:07 i think for now i'd only like to test it a bit
2021-03-31 10:19:18 and i'd like to test if we can do 32bit vms
2021-03-31 10:19:47 why would 32bit vms not work?
2021-03-31 10:20:21 maybe the CPU doesn't support 32bit mode
2021-03-31 10:20:26 that's what qemu is about?
2021-03-31 10:20:50 yes, but without 32bit mode on the host qemu is slow
2021-03-31 10:21:01 i think they work, but i want to test that they actually do
2021-03-31 10:21:13 and it might be that some CPUs require a 64bit kernel
2021-03-31 10:21:34 i actually believe it works, i just want to verify it
2021-03-31 10:21:35 I don't follow
2021-03-31 10:21:37 https://marcin.juszkiewicz.com.pl/2016/01/17/running-32-bit-arm-virtual-machine-on-aarch64-hardware/
2021-03-31 10:22:09 did you try it in lxc?
2021-03-31 10:22:52 lxc is different, as you know better than me
2021-03-31 10:23:21 if you want to know what the cpu can do, you need to run it on bare metal
2021-03-31 10:23:23 I think ncopa means a true VM
2021-03-31 10:23:37 32 bit works in lxc
2021-03-31 10:24:01 i tested it
2021-03-31 10:24:18 so what else do you want to test?
2021-03-31 10:24:23 if userspace works
2021-03-31 10:24:27 32 bit kernel in a vm
2021-03-31 10:25:45 lscpu => CPU op-mode(s): 32-bit, 64-bit
2021-03-31 10:26:08 on bare metal
2021-03-31 10:29:20 I guess we did not find out yet why LXC containers don't run without CAP_SYS_ADMIN on 5.10?
2021-03-31 10:30:08 CONFIG_COMPAT and other things in the make menuconfig submenus
2021-03-31 10:30:24 ^ not about lxc
2021-03-31 10:30:44 ikke: we didn't figure that out yet, indeed
2021-03-31 10:31:21 I found a patch on debian but didn't test it because the mails there are vague
2021-03-31 10:36:43 i'm still confused why qemu-system-arm would not work on aarch64. i do wonder what the performance will be (but that's another question) :)
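On the LVM conversion attempts above: as far as I know that LVM version cannot convert an existing linear LV to raid0/striped in place, which matches the errors in the log. The two straightforward routes are roughly the following (lv_root is the name from the log; the scratch LV name, sizes, and the filesystem are assumptions):

    # 1) plain linear growth: with the second NVMe already added as a PV,
    #    just extend the LV and the filesystem on it
    lvextend -l +100%FREE vg0/lv_root
    resize2fs /dev/vg0/lv_root            # assuming ext4

    # 2) striping only works cleanly for a newly created LV
    lvcreate --type raid0 --stripes 2 -L 800G -n lv_scratch vg0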
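The provides idea mps offers above (and clandmeter declines) would have been a one-line change in mandoc's APKBUILD, making `apk add man` resolve to mandoc again. Hypothetical, never applied:

    # in mandoc's APKBUILD (hypothetical)
    provides="man"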
2021-03-31 10:38:09 and personally i'm not a huge fan of libvirt. it's an xml mess around qemu
2021-03-31 10:38:22 clandmeter: qemu-system-arm works on aarch64 (and x86_64) but it could be slow if the host doesn't support 32 bit EL0
2021-03-31 10:38:45 I agree about libvirt, it is a mess
2021-03-31 10:38:47 mps: yes, but that was not the question. the question was, will it work.
2021-03-31 10:38:55 EL0?
2021-03-31 10:39:09 i remember when we used libvirt last time, and had to patch it for our iptables rules
2021-03-31 10:39:15 Emulation Level, iirc
2021-03-31 10:40:02 i have used libvirt on my desktop for a while, and it's kinda nice. lots of tooling available around it
2021-03-31 10:40:37 yes, you take away the fun of qemu and plug and pray
2021-03-31 10:41:02 well, i guess you hide qemu in a few layers of xml :)
2021-03-31 10:41:10 but if you like to start using some tools/cli/interfaces, it probably is the only way.
2021-03-31 10:41:22 yes, that's what i don't like
2021-03-31 10:41:44 you don't know exactly what's going on.
2021-03-31 10:41:53 i guess if we are fine with only running 2 vms for CI, libvirt is overkill
2021-03-31 10:42:08 sorry, Execution Level, not emulation
2021-03-31 10:42:13 i'm ok with libvirt btw, even if i don't like it :)
2021-03-31 10:42:19 if we want to run more, i think libvirt might be an option
2021-03-31 10:42:27 and at this stage, i'd just like to test it
2021-03-31 10:42:46 Even with 2 vms, it might be worth it (compared to our current solution) :)
2021-03-31 10:42:56 if it turns out that everything "just works", we can consider it
2021-03-31 10:43:07 if it's painful, we can drop it
2021-03-31 10:44:07 let's say if libvirt is alpine's new standard, i'll build a few more houses :)
2021-03-31 10:44:12 haha :D
2021-03-31 10:44:26 and i don't mind using a script to start a vm.
2021-03-31 10:44:47 it can be polished though
2021-03-31 10:44:52 like what jirutka did
2021-03-31 10:45:04 Would be nice if it could be an actual service
2021-03-31 10:45:21 i think jirutka did some openrc script for it
2021-03-31 10:45:28 right
2021-03-31 10:45:34 that's what jirutka did with qemu-openrc
2021-03-31 10:45:42 that one, yes
2021-03-31 10:46:06 but i like to know what qemu is doing, qemu has a lot of changes over time.
2021-03-31 10:46:29 I made (as an experiment) a bash script where you could declare qemu parameters declaratively
2021-03-31 10:46:46 clandmeter: my vote is for a script (though I know we are not voting)
2021-03-31 10:47:50 sooner or later a script will not be enough.
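For context on the script-vs-libvirt debate above, the kind of invocation such a wrapper (qemu-openrc, or ikke's declarative bash experiment) ends up managing looks roughly like this. All values are illustrative, and on aarch64 a UEFI firmware image (edk2/AAVMF) usually has to be passed as well:

    qemu-system-aarch64 -machine virt -cpu host -enable-kvm \
        -m 4096 -smp 4 \
        -drive file=/var/lib/vm/ci-runner.qcow2,if=virtio,format=qcow2 \
        -netdev bridge,id=net0,br=br0 -device virtio-net-pci,netdev=net0 \
        -nographic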
2021-03-31 10:47:59 for me, reading and understanding a script is not easy, but a lot easier than xml
2021-03-31 10:48:01 but we are still in 'sooner'
2021-03-31 10:48:34 At least as a service it would be an improvement
2021-03-31 10:50:01 actually I run lxcs with init.d openrc scripts
2021-03-31 15:17:47 ncopa: forgot to tell you, you can check if the cpu supports EL0 by looking at 'dmesg | grep EL0'
2021-03-31 15:18:46 '[ 0.004441] CPU features: detected: 32-bit EL0 Support'
2021-03-31 15:19:16 and/or '[ 0.004459] CPU features: detected: 32-bit EL1 Support'
2021-03-31 15:19:17 mps: Ah, it does
2021-03-31 15:19:39 good news :)
2021-03-31 15:20:07 It just mentions EL0, not EL1
2021-03-31 15:20:28 well, you check it also ;)
2021-03-31 15:20:46 grep EL1 returns nothing
2021-03-31 15:22:21 iirc EL0 is enough
2021-03-31 15:26:05 ikke: https://medium.com/@om.nara/aarch64-exception-levels-60d3a74280e6
2021-03-31 15:26:14 a short read about this
2021-03-31 15:32:42 i have been messing with docker in lxc on ubuntu.... banging my head against docker not working
2021-03-31 15:33:22 turns out to be that faccessat2 libseccomp issue, which is not fixed in the ubuntu docker package
2021-03-31 15:34:20 classic, security and ease of use don't play well together
2021-03-31 15:36:25 indeed
2021-03-31 15:38:25 as they say, "you can't have a virgin wife and a lot of children" :)
2021-03-31 15:39:35 or as Steve Bellovin said, "security is complex, isn't it"
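On the EL0/EL1 question above: 32-bit EL0 support is enough for 32-bit userspace under a 64-bit kernel (the existing lxc setup), but as far as I know a full 32-bit guest kernel under KVM also needs 32-bit EL1 support. Where the CPU does support it, the test ncopa wanted looks roughly like this; the options and file names are illustrative and untested:

    # run a 32-bit ARM guest on an aarch64 KVM host (needs 32-bit EL1 support)
    qemu-system-aarch64 -machine virt -enable-kvm -cpu host,aarch64=off \
        -m 2048 -kernel vmlinuz-lts-armv7 -initrd initramfs-lts-armv7 \
        -append "console=ttyAMA0" -nographic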