Importing ZFS pool takes forever on boot

I recently moved my Gentoo (OpenRC) root to ZFS, and so far everything works great, except for one thing: importing the zpool takes a long time (one to two minutes) on boot.

After the import finishes, it says:



Import of gentoo succeeded...
[other unimportant stuff]
/newroot is a mountpoint
chroot: can't execute '/usr/bin/test': No such file or directory
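
The chroot error makes me suspect that test simply isn't at the path that whatever script runs the chroot expects inside the new root. This is a quick check I can run once the system is up; the exact paths are just my guess at what the script probes:

# Does the hardcoded path exist, and where does test actually live?
ls -l /usr/bin/test /bin/test
type -a test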


Also, zpool status shows "sda" instead of a /dev/disk/by-id entry. I can't figure out how to make it use /dev/disk/by-id instead, since this is the root filesystem and I can't re-import it while the system is running. Re-importing the pool from a LiveUSB doesn't change anything.
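
In case it matters, this is roughly the sequence I used from the LiveUSB (the altroot path is just what I happened to mount under, so treat it as an example):

zpool export gentoo
# -d limits the device scan to /dev/disk/by-id,
# -R keeps the pool's mountpoints under a temporary altroot
zpool import -d /dev/disk/by-id -R /mnt/gentoo gentoo
# export again so the pool can be imported cleanly at next boot
zpool export gentoo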



Here's what my zpool status looks like:



  pool: gentoo
 state: ONLINE
  scan: none requested
config:

        NAME     STATE     READ WRITE CKSUM
        gentoo   ONLINE       0     0     0
          sda    ONLINE       0     0     0

errors: No known data errors

linux filesystems gentoo zfs openrc

asked Dec 13 at 22:12

Wolfgang

  • Importing can be slow if it has to read metadata for many filesystems, since import has to scan all the filesystems to decide which ones to mount / where to mount them. Do you have lots (i.e. thousands) of filesystems / zvols on the pool? I don't think snapshots contribute to that, but I suppose it's possible.
    – Dan
    Dec 14 at 5:48

  • @Dan I have 8 sub-volumes and one snapshot currently
    – Wolfgang
    Dec 14 at 9:01
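
For reference, counts like those come from something along these lines with the stock ZFS userland (-H strips the header line so wc -l counts only datasets):

# note: the first count includes the pool's root dataset itself
zfs list -H -t filesystem,volume -r gentoo | wc -l
zfs list -H -t snapshot -r gentoo | wc -l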