What is causing my EFI partition to become corrupt when booting CentOS after installation?

I am attempting to install the latest CentOS 7.5 x64 on a small-form-factor PC, the ASUS Eee Box EB1037. It has an Intel Celeron J1900 (Bay Trail) CPU and an onboard NVIDIA GeForce GT 820M. The installation media locks up unless Nouveau is first disabled. This is fine. But after installation and subsequent reboots, the EFI partition seems to become corrupt.



This question is NOT about troubleshooting how to boot but rather understanding why exactly this boot failure is corrupting the EFI partition and causing GRUB to fail.



Here is the installation procedure:



  • Burned CentOS 7.5 to USB

  • Booted to the USB installer (GRUB bootloader)

  • Edited the GRUB options to add nouveau.modeset=0

[screenshot]



  • Set time zone

  • Software selection: Minimal install (no changes)

  • Network & Hostname: Set hostname

  • Chose manual partitioning with "standard partitions" (no LVM) and the automatic partition layout

[screenshot]



  • Installation continues

  • Set root password and user account (as an administrator)

[screenshot]



  • Installation completes

  • Reboot

  • Hard disk GRUB appears

[screenshot]



I did not change any of the GRUB settings (such as disabling Nouveau). See the default settings here:



[screenshot]



I attempted to boot CentOS with these defaults and it hung as expected (since I did not disable Nouveau). All I could see was a black screen. The monitor was on, but the keyboard indicators and backlight, as well as the optical mouse LED, were all off. The keyboard was unresponsive to Ctrl-Alt-Del.



I performed a hard reset by holding the power button. The system booted to the hard disk GRUB menu a second time with no problems. I tried to boot using the defaults again, and it locked up the same as before (as expected, since I still hadn't disabled Nouveau).



Note that I still had the CentOS USB installer inserted. Upon this THIRD reboot (after the previous two post-install reboots), the system took me to the USB GRUB instead of the hard disk one. Odd. I popped out the CentOS USB and rebooted with Ctrl-Alt-Del.



Now I see a message from GRUB flash briefly on the screen, indicating that it cannot read the EFI partition:



[screenshot]



After a moment it disappears and I see this:



[screenshot]



The system is now no longer able to boot from the EFI partition.



Why is this happening? How is the EFI partition corrupting?



Additional Information



Secure Boot is enabled in the BIOS and cannot be disabled, but it is set to "Other OS".



[screenshot]



There is only ONE SATA port inside the unit, and it is populated by a Samsung 850 Pro 500GB SSD. Despite the controller being set to AHCI and the SSD being visible as SATA1, the only disk connected to the system, CentOS identifies it as sdb instead of sda, possibly because it treats the USB install media as sda. The installer does not present the USB drive as a second disk, however, and displays the Samsung SSD as the only visible drive.



[screenshot]



GRUB sees the attached CentOS install USB media as (hd0) and the onboard SATA disk as (hd1) when both are inserted. The onboard SATA disk is seen as (hd0) when the USB media is removed. Interestingly, the onboard SATA disk is referred to as sd by the CentOS installer but hd by GRUB.



[screenshot]



[screenshot]



Highlights



  • System has an NVIDIA graphics processor (Optimus?)

  • Secure Boot is ENABLED (cannot be disabled)

  • BIOS presents USB disks as attached SATA disks? (sda during installation, hd0 in GRUB)


PLEASE NOTE



I can already get the system to boot by removing the USB stick after installation, setting nouveau.modeset=0, and then updating GRUB at /boot/efi/EFI/centos/grub.cfg.
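For reference, the workaround can be made persistent without hand-editing the generated grub.cfg. This is a hedged sketch: the sed pattern and the grub2-mkconfig output path follow the usual CentOS 7 EFI defaults and should be verified on the actual system.

```shell
# Append nouveau.modeset=0 to the kernel command line in a GRUB
# defaults file (normally /etc/default/grub on CentOS 7).
add_nouveau_off() {   # $1 = path to the grub defaults file
  sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 nouveau.modeset=0"/' "$1"
}

# Typical use (as root), then regenerate the EFI grub.cfg:
#   add_nouveau_off /etc/default/grub
#   grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
```

Regenerating with grub2-mkconfig keeps the change across future kernel updates, unlike editing grub.cfg directly.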



The question is to understand what is corrupting the EFI partition!



Photo of the system booted:



[screenshot]










  • I'm installing to the internal SATA disk. There is no recovery partition.
    – Zhro
    Nov 23 at 1:02










  • Then why is it referring to the disk you're installing to as sdb, not sda?
    – Michael Prokopec
    Nov 23 at 1:03











  • This doesn't explain why the EFI partition is corrupting or why it boots successfully two times prior. I reinstalled CentOS and confirmed that the system boots whether root=hd1,gpt2 or root=hd0,gpt2 (though I'm not certain why). Possibly because it's using the UUID later on the line.
    – Zhro
    Nov 23 at 2:00










  • OK, after you make those changes, will it continue to boot normally?
    – Michael Prokopec
    Nov 23 at 2:55














centos system-installation grub uefi nouveau






edited Nov 23 at 8:51

asked Nov 22 at 20:50

Zhro
1 Answer
accepted










The name \EFI\BOOT\grubx64.efi tells me the system is not using the CentOS default UEFI boot path, but the fallback one. But the fallback boot path is \EFI\BOOT\bootx64.efi, which would be occupied by the SecureBoot shim. So it would seem the shim is loaded, but it is failing to perform the next step: loading the actual GRUB bootloader from the fallback directory.



My theory:



  • The installation set up the bootloader in the usual fashion: \EFI\CentOS\shimx64.efi is the SecureBoot shim bootloader, and \EFI\CentOS\grubx64.efi is the actual GRUB bootloader. The path \EFI\CentOS\shimx64.efi was registered in the UEFI NVRAM boot variables. The installer also (attempted to) set up a second copy, with the shim at the default fallback/removable-media boot path \EFI\BOOT\bootx64.efi and GRUB as \EFI\BOOT\grubx64.efi.

  • In the first reboot, which was triggered by the installer, the NVRAM boot variables were intact and the firmware executed a "warm reboot", booting the kernel successfully using \EFI\CentOS\shimx64.efi and \EFI\CentOS\grubx64.efi. This boot attempt then resulted in a hang because Nouveau was not disabled.

  • Then something caused the firmware to forget the NVRAM boot variables, causing the system to attempt a boot from the fallback path \EFI\BOOT\bootx64.efi instead. That is what happens when you tell UEFI to boot from a specific disk but don't specify a bootloader path. For one reason or another, this allows the fallback copy of the SecureBoot shim to be loaded, but it then fails to load \EFI\BOOT\grubx64.efi. Note that it doesn't say the file is corrupted: it is saying that the file simply does not exist.

Now, you should probably use efibootmgr -v to view your UEFI boot variables as they exist now, and write down the current set-up, or at least the CentOS boot entry, so that you will be able to reproduce it if it is lost ever again. In that situation, you might either boot into rescue mode from CentOS installation media and use the efibootmgr command to fix the NVRAM variables, or perhaps just type in the correct settings using the UEFI "boot settings" menu, if it allows that. (Sadly, most UEFI implementations I've seen won't.)
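As a sketch of the record-keeping step (this assumes efibootmgr is installed and the commands run as root), the boot variables can be saved now and the CentOS entry filtered back out of the backup later:

```shell
# Save the verbose UEFI boot-variable listing so the CentOS entry can be
# recreated if the NVRAM is ever wiped again (run as root).
backup_boot_vars() {   # $1 = backup file
  efibootmgr -v > "$1"
}

# Pull the CentOS entry (and the active BootOrder) back out of a backup:
centos_entry() {       # $1 = backup file
  grep -Ei 'centos|^BootOrder' "$1"
}
```

Recreating a lost entry from rescue media would then look something like efibootmgr -c -d /dev/sda -p 1 -L CentOS -l '\EFI\centos\shimx64.efi', where the disk, partition, and loader path are placeholders to be taken from the backup.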



You should also verify that the fallback GRUB bootloader is intact. The file should be accessible as /boot/efi/EFI/BOOT/grubx64.efi in Linux. Verify that the file exists and is identical to /boot/efi/EFI/CentOS/grubx64.efi.
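A minimal check along those lines, assuming the paths above are correct for this install; the helper itself just compares any two files byte for byte:

```shell
# Report whether the fallback GRUB copy exists and is byte-identical to
# the primary one (run as root so the ESP is readable).
verify_fallback_grub() {   # $1 = primary grubx64.efi, $2 = fallback copy
  if [ -f "$2" ] && cmp -s "$1" "$2"; then
    echo "fallback GRUB OK"
  else
    echo "fallback GRUB missing or differs"
  fi
}

# verify_fallback_grub /boot/efi/EFI/CentOS/grubx64.efi /boot/efi/EFI/BOOT/grubx64.efi
```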



I don't really know what caused the UEFI NVRAM boot variables to be lost between the first reboot and the third one. There are various buggy UEFI implementations out there. Or did you perhaps reset the "BIOS settings" as part of troubleshooting the hang that turned out to be caused by Nouveau? Resetting the UEFI "BIOS settings" may or may not reset the NVRAM boot variables too, depending on UEFI implementation.



If it turns out the occasional loss of UEFI NVRAM boot variables is a firmware bug, you might check for a BIOS upgrade: run dmidecode -s bios-version to see the current version. According to the ASUS support pages, the most up-to-date UEFI BIOS for your system is version 1301. ASUS typically includes an update feature in the UEFI BIOS itself; if that's true on your system, you just need to save the update file onto the EFI system partition (= anywhere under /boot/efi in CentOS), go to the BIOS settings, activate the update tool from there, and tell it where the update file is.



One possible reason for NVRAM corruption is the efi-pstore kernel module. If it is enabled (or built into the CentOS standard kernel) and the feature to store kernel log into pstore on a kernel panic is active, this may have filled the NVRAM to 100% with a series of variables containing the kernel log. This might have caused the firmware to detect the variable storage as corrupt and reinitialized the NVRAM boot variables automatically.
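One way to check for, and clear, such dumps is via the pstore filesystem. A hedged sketch follows; the /sys/fs/pstore mount point and the dmesg-efi-* file prefix are the usual defaults, so treat them as assumptions:

```shell
# efi-pstore exposes saved panic logs as dmesg-efi-* files under the
# pstore mount; deleting them releases the UEFI variable space they use.
clear_pstore_dumps() {   # $1 = pstore mount point (normally /sys/fs/pstore)
  ls "$1"/dmesg-efi-* 2>/dev/null   # show any stored dumps first
  rm -f "$1"/dmesg-efi-*
}

# clear_pstore_dumps /sys/fs/pstore   # run as root
```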



If the fallback /boot/efi/EFI/BOOT/grubx64.efi was actually undamaged, the failure to boot from the fallback path might have been caused by a bug in the SecureBoot shim, or by over-zealous enforcement of Secure Boot in the HDD fallback boot path (technically a UEFI firmware bug, or rather an undocumented "feature", that makes it incompatible with the SecureBoot shim). An update to the SecureBoot shim might help in that case.






  • Interesting. Confirmed that Secure Boot is enabled (with no option to disable) in the BIOS but set to "Other OS" instead of "Windows UEFI mode". I am already on the latest BIOS for this board (v1301).
    – Zhro
    Nov 23 at 8:31










  • If BIOS allows you to manage Secure Boot keys, deleting the default PK key should switch Secure Boot into "Setup Mode", which has essentially the same effect as disabling Secure Boot. It should also allow you full access to Secure Boot NVRAM variables from the OS, so you could create and install your own Secure Boot key if you wish.
    – telcoM
    Nov 23 at 8:48










  • I've updated my question with more information and attached an image of the Secure Boot screen from my BIOS. It appears (by default) that the Secure Boot state is "Enabled" and the PK state is "Unloaded". I switched to the Windows Secure Boot mode, cleared all keys, and now Secure Boot says "Disabled". Interesting. But is this even related?
    – Zhro
    Nov 23 at 8:53











  • That means the "Other OS" selection is apparently equivalent to switching to Secure Boot Setup Mode, so it might be possible to simplify your set-up by eliminating the shim altogether. If you switched from "Other OS" to "Windows UEFI mode", I'd guess it would initially enable the default PK, and you might or might not be able to further configure Secure Boot keys then. But that's not important right now; did you get the efibootmgr -v output? A basic understanding of UEFI NVRAM variables is an important part of UEFI boot troubleshooting.
    – telcoM
    Nov 23 at 9:07










  • I seem to have made the mistake of clearing the Secure Boot state and this seems to have resolved the issue entirely. I can no longer replicate the EFI issue from my question and cannot continue debugging. I don't know how to restore to the state the system was in previously. I am accepting your answer as you seem to have been on the right track with Secure Boot. I would have liked to have debugged this further.
    – Zhro
    Nov 23 at 10:16











    – telcoM
    Nov 23 at 9:07










  • I seem to have made the mistake of clearing the Secure Boot state and this seems to have resolved the issue entirely. I can no longer replicate the EFI issue from my question and cannot continue debugging. I don't know how to restore to the state the system was in previously. I am accepting your answer as you seem to have been on the right track with Secure Boot. I would have liked to have debugged this further.
    – Zhro
    Nov 23 at 10:16















up vote
0
down vote



accepted










The name EFI\BOOT\grubx64.efi tells me the system is not using the CentOS default UEFI boot path, but the fallback one. The fallback boot path, however, is EFI\BOOT\bootx64.efi, which would be occupied by the SecureBoot shim. So it would seem the shim is loaded, but it is failing at the next step: loading the actual GRUB bootloader from the fallback directory.



My theory:



  • the installation set up the bootloader in the usual fashion: EFI\CentOS\shimx64.efi is the SecureBoot shim bootloader, and EFI\CentOS\grubx64.efi is the actual GRUB bootloader. The path EFI\CentOS\shimx64.efi was registered into the UEFI NVRAM boot variables. The installer also (attempted to) set up a second copy, with the shim in the default fallback/removable-media boot path EFI\BOOT\bootx64.efi and GRUB as EFI\BOOT\grubx64.efi.

  • in the first reboot that was triggered by the installer, the NVRAM boot variables were intact and the firmware executed a "warm reboot", booting the kernel successfully using EFI\CentOS\shimx64.efi and EFI\CentOS\grubx64.efi. This boot attempt then resulted in a hang because Nouveau was not disabled.

  • then something caused the firmware to forget the NVRAM boot variables, so the system attempted to boot from the fallback path EFI\BOOT\bootx64.efi instead. That is what happens when you tell UEFI to boot from a specific disk but don't specify a bootloader path. For some reason or another, this allows the fallback copy of the SecureBoot shim to load, but it then fails to load EFI\BOOT\grubx64.efi. Note that the error does not say the file is corrupted: it says the file simply does not exist.

Now, you should probably use efibootmgr -v to view your UEFI boot variables as they exist now, and write down the current set-up, or at least the CentOS boot entry, so that you can reproduce it if it is ever lost again. In that situation, you could either boot into rescue mode from the CentOS installation media and use the efibootmgr command to fix the NVRAM variables, or perhaps type the correct settings into the UEFI "boot settings" menu, if it allows that. (Sadly, most UEFI implementations I've seen won't.)
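If the entry is ever lost, recreating it from rescue mode boils down to a single efibootmgr invocation. As a minimal sketch (the disk, partition number, and label below are assumptions, not your actual layout; the helper just prints the command so you can inspect it before running it as root):

```shell
# make_boot_entry_cmd DISK PART LABEL LOADER
# Prints the efibootmgr command that would recreate a UEFI boot entry.
make_boot_entry_cmd() {
    printf "efibootmgr -c -d %s -p %s -L %s -l '%s'\n" "$1" "$2" "$3" "$4"
}

# Assumed layout: the EFI system partition is partition 1 of /dev/sda.
make_boot_entry_cmd /dev/sda 1 CentOS '\EFI\CentOS\shimx64.efi'
```

Note that the loader path uses backslashes because it is an EFI path as stored in the NVRAM variable, not a Linux filesystem path.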



You should also verify that the fallback GRUB bootloader is intact. The file should be accessible as /boot/efi/EFI/BOOT/grubx64.efi in Linux. Verify that the file exists and is identical to /boot/efi/EFI/CentOS/grubx64.efi.
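One way to do that comparison, as a sketch (run as root; `same_file` is just an illustrative wrapper around `cmp`):

```shell
# same_file A B -- exit status 0 if the two files are byte-for-byte identical
same_file() {
    cmp -s -- "$1" "$2"
}

# Paths from the answer above; both should exist on the installed system.
fallback=/boot/efi/EFI/BOOT/grubx64.efi
primary=/boot/efi/EFI/CentOS/grubx64.efi
if [ -e "$fallback" ] && [ -e "$primary" ]; then
    if same_file "$fallback" "$primary"; then
        echo "fallback GRUB matches the CentOS copy"
    else
        echo "fallback GRUB differs from the CentOS copy (possibly damaged)"
    fi
else
    echo "one or both GRUB copies are missing"
fi
```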



I don't really know what caused the UEFI NVRAM boot variables to be lost between the first reboot and the third one. There are various buggy UEFI implementations out there. Or did you perhaps reset the "BIOS settings" as part of troubleshooting the hang that turned out to be caused by Nouveau? Resetting the UEFI "BIOS settings" may or may not reset the NVRAM boot variables too, depending on UEFI implementation.



If it turns out that the occasional loss of UEFI NVRAM boot variables is a firmware bug, you might check for a BIOS upgrade: run dmidecode -s bios-version to see the current version. According to the ASUS support pages, the most up-to-date UEFI BIOS for your system is version 1301. ASUS typically builds an update feature into the UEFI BIOS itself; if that's true on your system, you just need to save the update file onto the EFI system partition (= anywhere under /boot/efi in CentOS), go into the BIOS settings, activate the update tool from there, and tell it where the update file is.
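As a quick sketch of that version check (this assumes ASUS's version strings are purely numeric, like 1301, which holds for this board but is worth verifying):

```shell
# needs_update CURRENT LATEST -- exit 0 if CURRENT is numerically older than LATEST
needs_update() {
    [ "$1" -lt "$2" ] 2>/dev/null
}

# dmidecode needs root; 1301 is the latest version per the ASUS support pages.
if command -v dmidecode >/dev/null 2>&1; then
    ver=$(dmidecode -s bios-version 2>/dev/null || true)
    if needs_update "$ver" 1301; then
        echo "BIOS update available: have $ver, latest is 1301"
    fi
fi
```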



One possible reason for NVRAM corruption is the efi-pstore kernel module. If it is enabled (or built into the CentOS standard kernel) and the feature to store the kernel log in pstore on a kernel panic is active, this may have filled the NVRAM to 100% with a series of variables containing the kernel log, which might have caused the firmware to detect the variable storage as corrupt and reinitialize the NVRAM boot variables automatically.
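To see whether efi-pstore has left such variables behind, you could look in efivarfs for the dump variables it creates (the dump-type name prefix is what current kernels use; verify on yours):

```shell
# pstore_dump_vars DIR -- list efi-pstore crash-dump variables in an efivars dir.
# The kernel's efi-pstore names them dump-type<N>-...
pstore_dump_vars() {
    ls "$1" 2>/dev/null | grep '^dump-type' || true
}

# On a UEFI-booted system, efivarfs is normally mounted here:
pstore_dump_vars /sys/firmware/efi/efivars
```

If this prints a long list, the stored crash logs may be what is eating the variable storage; they also appear under /sys/fs/pstore once mounted.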



If the fallback /boot/efi/EFI/BOOT/grubx64.efi was actually undamaged, the failure to boot from the fallback path might have been caused by a bug in the SecureBoot shim, or by over-zealous enforcement of Secure Boot in the HDD fallback boot path (technically a UEFI firmware bug, or an undocumented feature, that makes it incompatible with the SecureBoot shim). An update to the SecureBoot shim might help in that case.






edited Nov 23 at 8:54

answered Nov 23 at 7:10

telcoM
14.6k11842

  • Interesting. Confirmed that Secure Boot is enabled (with no option to disable) in the BIOS but set to "Other OS" instead of "Windows UEFI mode". I am already on the latest BIOS for this board (v1301).
    – Zhro
    Nov 23 at 8:31










  • If BIOS allows you to manage Secure Boot keys, deleting the default PK key should switch Secure Boot into "Setup Mode", which has essentially the same effect as disabling Secure Boot. It should also allow you full access to Secure Boot NVRAM variables from the OS, so you could create and install your own Secure Boot key if you wish.
    – telcoM
    Nov 23 at 8:48










  • I've updated my question with more information and attached an image of the Secure Boot screen from my BIOS. It appears (by default) that the Secure Boot state is "Enabled" and the PK state is "Unloaded". I switched to the Windows Secure Boot mode, cleared all keys, and now Secure Boot says "disabled". Interesting. But is this even related?
    – Zhro
    Nov 23 at 8:53











  • That means the "Other OS" selection is apparently equivalent to switching to Secure Boot Setup Mode, so it might be possible to simplify your set-up by eliminating the shim altogether. If you switched from "Other OS" to "Windows UEFI mode", I'd guess it would initially enable the default PK, and you might or might not be able to further configure Secure Boot keys then. But that's not important right now; did you get the efibootmgr -v output? A basic understanding of UEFI NVRAM variables is an important part of UEFI boot troubleshooting.
    – telcoM
    Nov 23 at 9:07










  • I seem to have made the mistake of clearing the Secure Boot state and this seems to have resolved the issue entirely. I can no longer replicate the EFI issue from my question and cannot continue debugging. I don't know how to restore to the state the system was in previously. I am accepting your answer as you seem to have been on the right track with Secure Boot. I would have liked to have debugged this further.
    – Zhro
    Nov 23 at 10:16
















