Unlock LUKS non-root filesystem partition on boot

I have a system with an unencrypted / partition on Ubuntu 16.04, but with a LUKS-encrypted ZFS zpool spread over three partitions. For the system to boot properly, I want the LUKS-encrypted volumes to be unlocked before ZFS and other services (database, web, email, etc.) start, and this needs to be possible remotely, over SSH.



With the three partitions added to /etc/crypttab, the system boots and, just after the initramfs stage, waits for the unlock (prompting for passwords). The usual way of accomplishing remote LUKS unlocking at boot is through dropbear in the initramfs. However, because the three partitions are not in fstab, the system 'falls through' the initramfs, so to speak, and continues to systemd. That is undesirable in this case, as systemd prioritizes crypttab over OpenSSH or dropbear, meaning remote unlocking is impossible.
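(One avenue worth checking, sketched here as an assumption rather than something from the original post: Debian-derived cryptsetup packages document an `initramfs` crypttab option that forces non-root volumes to be processed by the initramfs hooks, where dropbear runs. Whether the cryptsetup version on Ubuntu 16.04 supports it would need verifying. The device names and UUID placeholders below are hypothetical.)

    # /etc/crypttab — hypothetical entries; the Debian/Ubuntu-specific
    # 'initramfs' option asks the cryptsetup hooks to unlock these
    # volumes from the initramfs even though they are not the root device.
    zpool0_crypt UUID=<uuid-of-partition-1> none luks,initramfs
    zpool1_crypt UUID=<uuid-of-partition-2> none luks,initramfs
    zpool2_crypt UUID=<uuid-of-partition-3> none luks,initramfs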



A dirty hack that works is simply adding a sleep 300 to the initramfs, giving you time to log in through dropbear and unlock, but this too is undesirable. I foresee two options to fix this, but am not sure which would be best, and I do not know how to implement either:



  • Changing the systemd boot order, to make sure something like networking and OpenSSH are up before crypttab is processed, enabling remote or local unlocking.


  • Having the initramfs wait for the non-root partitions to be unlocked before proceeding to systemd.
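For the first option, one possible shape (a sketch, not a tested recipe) is a systemd drop-in applied to the units that the crypttab generator creates. The drop-in path and the assumption that this ordering does not deadlock against local-fs.target are mine, not from the original post:

    # /etc/systemd/system/systemd-cryptsetup@.service.d/remote-unlock.conf
    # Hypothetical drop-in: order every crypttab unlock after networking
    # and OpenSSH, so the passphrase prompt can be answered over SSH.
    [Unit]
    Wants=network-online.target
    After=network-online.target ssh.service

Note this can only help for volumes that nothing in the early boot path depends on; for a root filesystem, the initramfs route remains the only option.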

  • Just add/modify unit files so that the decryption happens before the rest.
    – Ignacio Vazquez-Abrams
    Apr 2 at 20:38
asked Apr 2 at 20:36 by brickmasterj
1 Answer

I am currently setting up such a system, but on Debian Stretch, and I am doing my experimentation in a VM before I set up the physical computer itself. I have a very similar setup working in the VM.



Two disks in a zpool mirror for /, but not /boot; /boot is on md0. The zpool sits on top of LUKS, and the machine boots via EFI.



root@zstaging:~# cat /proc/1/comm
systemd

root@zstaging:~# lsblk
NAME              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0   20G  0 disk
├─sda1              8:1    0  256M  0 part
│ └─md0             9:0    0  256M  0 raid1 /boot
├─sda2              8:2    0  256M  0 part  /boot/efi
├─sda3              8:3    0 19.5G  0 part
│ └─disk0_crypt   253:0    0 19.5G  0 crypt
└─sda9              8:9    0    9M  0 part
sdb                 8:16   0   20G  0 disk
├─sdb1              8:17   0  256M  0 part
│ └─md0             9:0    0  256M  0 raid1 /boot
├─sdb2              8:18   0  256M  0 part
├─sdb3              8:19   0 19.5G  0 part
│ └─disk1_crypt   253:1    0 19.5G  0 crypt
└─sdb9              8:25   0    9M  0 part
sdc                 8:32   0   20G  0 disk
sr0                11:0    1  1.8G  0 rom

root@zstaging:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        rpool            ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            disk0_crypt  ONLINE       0     0     0
            disk1_crypt  ONLINE       0     0     0

errors: No known data errors


Relevant files and setup:



root@zstaging:~# cat /etc/fstab
UUID=648bfa4b-1b5f-480a-bb26-b3abffb4a6de /boot auto defaults 0 0
PARTUUID=1673f966-173b-4128-84d5-4e8d5810efef /boot/efi vfat defaults 0 1

root@zstaging:~# cat /etc/crypttab
disk0_crypt UUID=26194846-ba49-4e53-ab0b-857b0dad2021 none luks
disk1_crypt UUID=ef44b66a-8706-4be2-bd12-a30d40de9669 none luks

root@zstaging:~# cat /etc/initramfs-tools/conf.d/cryptroot
target=disk0_crypt,source=UUID=26194846-ba49-4e53-ab0b-857b0dad2021,key=none,rootdev
target=disk1_crypt,source=UUID=ef44b66a-8706-4be2-bd12-a30d40de9669,key=none,rootdev

set CRYPTSETUP=y in /etc/cryptsetup-initramfs/conf-hook

# vi /etc/default/grub
replace GRUB_CMDLINE_LINUX="" with GRUB_CMDLINE_LINUX="boot=zfs"
Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console

# update-initramfs -u -k all
# update-grub
# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian --recheck --no-floppy


Everything works great: I get a prompt for unlocking the two disks, one at a time, and off we go. It is NOT necessary to list any ZFS datasets in /etc/fstab. Yes, the initramfs etc. WILL complain about /etc/fstab, but it all works fine, so far.



I can even unlock it remotely:



# apt-get install dropbear-initramfs busybox
# vi /etc/dropbear-initramfs/authorized_keys
Paste my SSH pubkey

# chmod 400 /etc/dropbear-initramfs/authorized_keys
# update-initramfs -u

On another machine that has my SSH keys, add a custom section to ~/.ssh/config:
Host zstaging_unlock
HostName <ip_of_zstaging>
User root
HostKeyAlias zstaging_unlock

To remotely unlock zstaging, from the remote machine:
$ ssh zstaging_unlock
and follow the prompts.
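As an aside (an assumption on my part, not part of the original answer): cryptsetup on Debian Stretch ships a `cryptroot-unlock` helper inside the busybox initramfs, so if it is present in your image, the whole step can be collapsed into a single command:

    # Hypothetical one-liner: run the unlock helper over SSH and answer
    # the passphrase prompt it presents for each crypttab entry.
    $ ssh zstaging_unlock cryptroot-unlock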


HTH

answered Apr 4 at 4:43 by deyab