2026-02-09 16:20:25 tomalok: https://gitlab.alpinelinux.org/alpine/aports/-/issues/17949
2026-02-09 16:20:57 The files mentioned start with `/usr/local/`. Can you confirm our cloud images do not install anything there?
2026-02-09 17:31:59 It's a Docker image, not a cloud image
2026-02-20 23:57:18 hey, I think something is wrong with the EC2 images for at least 3.22.3, alpine-3.22.3-x86_64-uefi-cloudinit-r0
2026-02-20 23:57:54 when I boot a VM in e.g. eu-central-1, using ami-08ffdcdf2ab81054f, I get:
2026-02-20 23:59:28 /home/alpine # apk info -v | grep linux-virt
2026-02-20 23:59:28 /home/alpine # uname -r
2026-02-20 23:59:28 linux-virt-6.12.74-r0
2026-02-20 23:59:29 6.12.67-0-virt
2026-02-20 23:59:36 is it supposed to be like that?
2026-02-21 00:00:10 the main reason I'm asking is that when I create a golden image on top of that AMI and try to boot a VM from it, init falls back to the busybox shell, failing to mount the root fs
2026-02-21 00:01:20 the VM runs on NVMe (exactly the same as the official AMI it was based on), but fails to identify the attached NVMe volume; nothing in `blkid`
2026-02-21 00:02:08 and it seems the initramfs contains /lib/modules/* for an older kernel version, which is the one running before switching roots
2026-02-21 00:02:53 oh, the interleaved lines above were: # apk info -v | grep linux-virt
2026-02-21 00:02:56 linux-virt-6.12.74-r0
2026-02-21 00:03:02 and then # uname -r
2026-02-21 00:03:03 6.12.67-0-virt
2026-02-21 00:47:55 yeah, things get better when I install mkinitfs from 3.23
2026-02-21 00:48:40 at least now I can see that when the kernel is upgraded, there is a trigger call to update the initramfs
2026-02-21 00:48:53 and the VM will actually boot as intended
2026-02-21 00:57:06 I see the same issue on 3.21 as well; 3.20 and 3.23 do not seem to be affected
2026-02-21 01:04:34 (or maybe I'm just lucky that no kernel upgrades are happening in my builds for those targets?)
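The symptom above (linux-virt 6.12.74-r0 installed, but the VM still running a 6.12.67 kernel from a stale initramfs) can be sketched as a quick sanity check. The version strings below are hard-coded from the log output; on a live image you would derive them from `apk info -v | grep linux-virt` and `uname -r`, and the comparison logic is illustrative, not part of any Alpine tooling:

```shell
#!/bin/sh
# Sketch: detect an installed-kernel / running-kernel mismatch like the
# one reported in the log. Values are taken from the log, not queried live.
installed="6.12.74"   # from: apk info -v | grep linux-virt -> linux-virt-6.12.74-r0
running="6.12.67"     # from: uname -r -> 6.12.67-0-virt

# The initramfs ships /lib/modules/<version> only for the kernel it was
# built against; if that lags behind the installed kernel, module loading
# (e.g. nvme) fails early and init drops to the busybox shell.
if [ "$installed" != "$running" ]; then
    echo "stale initramfs: running $running but $installed is installed"
fi
```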
2026-02-21 01:30:19 ah, even `apk fix mkinitfs` in a pristine image fixes the problem
2026-02-23 23:09:34 ^^^ would appreciate someone savvy in the image build process taking a look at why the trigger does not fire on the pristine images (as it probably should)
2026-02-24 01:51:42 thresh: the image build process _should_ be updating the initramfs when installing/updating APKs -- if that fails, it should not be able to proceed to later steps. I have another report of some strangeness going on with EC2 bare metal instances that I will also need to take a look at... when I have time :/
2026-02-24 02:08:52 2026-02-24T02:07:20Z: alpine.qemu.3.22-x86_64-uefi-cloudinit-vm-aws: (131/146) Installing linux-virt (6.12.74-r0)
2026-02-24 02:09:13 2026-02-24T02:07:28Z: alpine.qemu.3.22-x86_64-uefi-cloudinit-vm-aws: Executing mkinitfs-3.12.0-r0.trigger
2026-02-24 02:09:13 2026-02-24T02:07:28Z: alpine.qemu.3.22-x86_64-uefi-cloudinit-vm-aws: * ==> initramfs: creating /boot/initramfs-virt for 6.12.74-0-virt
2026-02-24 02:10:35 thanks for looking at it
2026-02-24 02:11:33 and shortly after that, it gets done again
2026-02-24 02:11:37 2026-02-24T02:07:42Z: ==> alpine.qemu.3.22-x86_64-uefi-cloudinit-vm-aws: > Installing Bootloader <
2026-02-24 02:11:37 2026-02-24T02:07:43Z: alpine.qemu.3.22-x86_64-uefi-cloudinit-vm-aws: ==> initramfs: creating /boot/initramfs-virt for 6.12.74-0-virt
2026-02-24 02:12:10 (taking into account some kernel command-line / module stuff)
2026-02-24 02:14:00 that's just a fresh build -- I don't have time at the moment to launch instances from the current images and dig deeper
2026-02-24 02:14:29 np, shall I open a ticket so it won't slip through?
2026-02-24 02:16:04 an issue on the alpine-cloud-images repo would be right next to the one for the bare metal stuff I need to poke at too
2026-02-24 02:18:05 will do