What causes this? pcieport 0000:00:03.0: PCIe Bus Error: AER / Bad TLP
I'm seeing error messages like these below:
Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: AER: Multiple Corrected error received: id=0018
Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0018(Receiver ID)
Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: device [8086:6f08] error status/mask=00000040/00002000
Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: [ 6] Bad TLP
These errors degrade performance even though they have (so far) been corrected, so the issue obviously needs to be resolved. However, I cannot find much about it on the Internet (maybe I'm looking in the wrong places); I found only a few links, which I will post below.
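The status/mask pair in the log can be decoded by hand: each set bit in the hex value corresponds to one correctable-error type in the PCIe AER capability, and bit 6 is Bad TLP, which is exactly what the kernel prints as "[ 6] Bad TLP". A minimal POSIX-shell sketch of that decoding (bit names follow the AER correctable-error register layout):

```shell
# Decode a correctable AER status or mask value such as the
# "error status/mask=00000040/00002000" pair in the log above.
# Bit positions follow the PCIe AER correctable-error register.
decode_aer() {
  val=$(printf '%d' "0x$1")
  while read -r bit name; do
    if [ $(( val & (1 << bit) )) -ne 0 ]; then
      printf '[%2d] %s\n' "$bit" "$name"
    fi
  done <<EOF
0 Receiver Error
6 Bad TLP
7 Bad DLLP
8 REPLAY_NUM Rollover
12 Replay Timer Timeout
13 Advisory Non-Fatal Error
EOF
}

decode_aer 00000040   # the status: prints "[ 6] Bad TLP"
decode_aer 00002000   # the mask: bit 13, Advisory Non-Fatal Error
```

Here the status 00000040 decodes to Bad TLP, while the mask 00002000 only suppresses reporting of Advisory Non-Fatal errors.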
Does anyone know more about these errors?
Is it the motherboard, the Samsung 950 Pro, or the GPU (or some combination of these)?
The hardware is:
- Asus X99 Deluxe II motherboard
- Samsung 950 Pro NVMe in the M.2 slot on the motherboard (which shares PCIe port 3); nothing else is plugged into PCIe port 3
- GeForce GTX 1070 in PCIe slot 1
- Core i7-6850K CPU
A couple of the links I found mention the same hardware (X99 Deluxe II motherboard & Samsung 950 Pro). I'm running Arch Linux.
I do not find the string "8086:6f08" in journalctl or anywhere else I have thought to search so far.
odd error message with nvme ssd (Bad TLP) : linuxquestions https://www.reddit.com/r/linuxquestions/comments/4walnu/odd_error_message_with_nvme_ssd_bad_tlp/
PCIe: Is your card silently struggling with TLP retransmits? http://billauer.co.il/blog/2011/07/pcie-tlp-dllp-retransmit-data-link-layer-error/
GTX 1080 Throwing Bad TLP PCIe Bus Errors - GeForce Forums https://forums.geforce.com/default/topic/957456/gtx-1080-throwing-bad-tlp-pcie-bus-errors/
drivers - PCIe error in dmesg log - Ask Ubuntu https://askubuntu.com/questions/643952/pcie-error-in-dmesg-log
780Ti X99 hard lock - PCIE errors - NVIDIA Developer Forums
https://devtalk.nvidia.com/default/topic/779994/linux/780ti-x99-hard-lock-pcie-errors/
Tags: hardware pci
asked Dec 3 '16 at 8:00 by MountainX (edited Apr 13 '17 at 12:22)
6 Answers
Accepted answer (score 13):
I can give at least a few details, even though I cannot fully explain what happens.
As described for example here, the CPU communicates with the PCIe bus controller by transaction layer packets (TLPs). The hardware detects faulty ones, and the Linux kernel reports them as messages.
The kernel option pci=nommconf disables Memory-Mapped PCI Configuration Space, which has been available in Linux since kernel 2.6. Very roughly, every PCI device has an area that describes the device (which you see with lspci -vv). The original method of accessing this area goes through I/O ports, while PCIe additionally allows this space to be mapped into memory for simpler access.
That means that in this particular case, something goes wrong when the PCIe controller uses the memory-mapped method to access the configuration space of a particular device. It may be a hardware bug in the device, in the PCIe root controller on the motherboard, in the specific interaction of the two, or something else.
With pci=nommconf, the configuration space of all devices is accessed in the original way, and changing the access method works around the problem. So, if you want, it's both resolving and suppressing it.
– dirkt, answered Jun 4 '17 at 5:34
Can I know if it is my motherboard problem? Or my CPU problem? Should I change them? – user10024395, Jun 14 '17 at 13:52
@user2675516: It's not CPU related. It's a problem of the PCIe root controller (which is often in the Southbridge) and/or the PCIe controller of the device, or their interaction. Yes, changing the motherboard for one with different hardware usually gets rid of it. – dirkt, Jun 14 '17 at 15:45
I changed from an Asus E-WS to an Asus Deluxe, but the problem still persists. That's why I suspect it is the CPU. Or is it because both are X99 chipset? – user10024395, Jun 14 '17 at 15:47
@user2675516: If the chipset is the same, especially the PCIe controller, then changing the motherboard of course won't help. That's why I wrote "motherboard with different hardware". – dirkt, Jun 14 '17 at 16:02
The common factor for me seems to be a motherboard with the X99 chipset. – MountainX, Jul 4 '17 at 3:18
Answer (score 3):
Try these steps:
1. Back up the current config: cp /etc/default/grub ~/Desktop
2. Edit the copy of grub. Add pci=noaer at the end of GRUB_CMDLINE_LINUX_DEFAULT. The line will look like this: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
3. Copy it back: sudo cp ~/Desktop/grub /etc/default/
4. sudo update-grub
5. Reboot now.
– Ehtesham, answered May 28 at 2:51
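As an aside, the copy-and-edit steps above can be collapsed into a single sed command. This is only a sketch: it is demonstrated on a sample file and assumes the stock "quiet splash" value; point it at a backed-up /etc/default/grub (and then run sudo update-grub) to apply it for real.

```shell
# Create a sample file standing in for /etc/default/grub.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > grub.sample

# Append pci=noaer inside the quotes of GRUB_CMDLINE_LINUX_DEFAULT.
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 pci=noaer"/' grub.sample

cat grub.sample   # GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
```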
I applied your solution, but instead of pci=noaer I used pci=nommconf as suggested by @dirkt. – user3405291, Jun 11 at 5:39
Thanks, pci=noaer fixed my Slackware 14.2 x64 problem on an HP laptop (a desktop install didn't exhibit this problem at all). – John Forkosh, Jun 26 at 21:38
Answer (score 2):
Adding the kernel command line option pci=nommconf resolved the issue for me. Therefore, I assume the issue is motherboard-related: it happens on all my X99-equipped computers, and it does not happen on Z170 systems or any other hardware I own.
– MountainX, answered Apr 19 '17 at 4:43
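Whichever option you add, it only helps if it actually reaches the booted kernel, so it is worth checking /proc/cmdline after the reboot. A small helper sketch (the example command-line strings below are made up for illustration):

```shell
# Does a kernel command line contain pci=nommconf? After rebooting,
# call it as:  has_nommconf "$(cat /proc/cmdline)"
has_nommconf() {
  case " $1 " in
    *" pci=nommconf "*) echo yes ;;
    *)                  echo no ;;
  esac
}

has_nommconf "BOOT_IMAGE=/vmlinuz-linux root=/dev/sda2 rw pci=nommconf"  # yes
has_nommconf "BOOT_IMAGE=/vmlinuz-linux root=/dev/sda2 rw quiet"         # no
```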
Hi, I am also facing this problem. Can I know what pci=nommconf does? Is it just suppressing the problem or resolving the problem? – user10024395, Jun 2 '17 at 10:02
Can't confirm: I'm getting the error on a Z170I, running Arch 4.13.12. – sitilge, Nov 24 '17 at 21:35
@sitilge: thanks for your comment. Which brand/model Z170I? My motherboards are Asus; one is an X99 Deluxe II. – MountainX, Nov 25 '17 at 0:36
It is an Asus Z170I Pro Gaming. – sitilge, Nov 25 '17 at 11:04
Answer (score 1):
In the BIOS on my X99-E, I changed the PCIE16_3 slot configuration from Auto (the default, which is there for M.2 device support) to a static x8 mode. It now works fine, without TLP errors, on both of my GTX 1070 cards connected via PCIe 1x-to-16x riser boards.
I did not use port 16_3 at first; I moved a card to that slot to test, but still had issues before the BIOS change. I also changed the bsleep setting for all cards to 30 in the miner config.
Before the change, the kernel log was spammed with faults. I also tried power-cycling the system before and after the change; the errors were pretty persistent until the BIOS change.
– Nic, answered Apr 3 at 17:24
Answer (score 0):
I get the same errors (Bad TLP associated with device 8086:6f08). I have an X99 Deluxe II, a Samsung 960 Pro, and an Nvidia 1080 Ti. These problems seem to be associated with the X99 chipset and an M.2 device such as the Samsung Pro.
The X99 Deluxe II motherboard shares bandwidth between the PCIE16_3 slot and M.2/U.2. Following the comment from @Nic, in the BIOS I changed Onboard Devices Configuration | U.2_2 Bandwidth from Auto to U.2_2. This fixed the problem for me.
Answer (score 0):
I know this post is old, but it's one of the first to appear when searching for these error messages, so for posterity I will post my findings: I have completed a root-cause analysis and have a real solution.
Search your motherboard manual for "AER". You can kill the source of the problem by either correcting the specific incompatibility or disabling AER altogether. Only do this if all the error spam concerns corrected errors; otherwise you could be covering up an actual issue.
New contributor
add a comment |Â
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
13
down vote
accepted
I can give at least a few details, even though I cannot fully explain what happens.
As described for example here, the CPU communicates with the PCIe bus controller by transaction layer packets (TLPs). The hardware detects when there are faulty ones, and the Linux kernel reports that as messages.
The kernel option pci=nommconf
disables Memory-Mapped PCI Configuration Space, which is available in Linux since kernel 2.6. Very roughly, all PCI devices have an area that describe this device (which you see with lspci -vv
), and the originally method to access this area involves going through I/O ports, while PCIe allows this space to be mapped to memory for simpler access.
That means in this particular case, something goes wrong when the PCIe controller uses this method to access the configuraton space of a particular device. It may be a hardware bug in the device, in the PCIe root controller on the motherboard, in the specific interaction of those two, or something else.
By using pci=nommconf
, the configuration space of all devices will be accessed in the original way, and changing the access methods works around this problem. So if you want, it's both resolving and suppressing it.
Can I know if it is my motherboard problem? Or my CPU problem. Should I change them?
â user10024395
Jun 14 '17 at 13:52
@user2675516: It's not CPU related. It's a problem of the PCIe root controller (which often is in the Southbridge) and/or the PCIe controller of the device, or their interaction. Yes, changing the motherboard for one with different hardware usually gets rid of it.
â dirkt
Jun 14 '17 at 15:45
I changed from asus e-ws to asus deluxe, but problem still persists. That's why i suspect it is the cpu. Or is it because both are X99 chipset?
â user10024395
Jun 14 '17 at 15:47
@user2675516: If the chipset is the same, esp. the PCIe controller, then changing the motherboard of course won't help. That's why I wrote "motherboard with different hardware".
â dirkt
Jun 14 '17 at 16:02
the common factor for me seems to be a motherboard with the X99 chipset
â MountainX
Jul 4 '17 at 3:18
 |Â
show 1 more comment
up vote
13
down vote
accepted
I can give at least a few details, even though I cannot fully explain what happens.
As described for example here, the CPU communicates with the PCIe bus controller by transaction layer packets (TLPs). The hardware detects when there are faulty ones, and the Linux kernel reports that as messages.
The kernel option pci=nommconf
disables Memory-Mapped PCI Configuration Space, which is available in Linux since kernel 2.6. Very roughly, all PCI devices have an area that describe this device (which you see with lspci -vv
), and the originally method to access this area involves going through I/O ports, while PCIe allows this space to be mapped to memory for simpler access.
That means in this particular case, something goes wrong when the PCIe controller uses this method to access the configuraton space of a particular device. It may be a hardware bug in the device, in the PCIe root controller on the motherboard, in the specific interaction of those two, or something else.
By using pci=nommconf
, the configuration space of all devices will be accessed in the original way, and changing the access methods works around this problem. So if you want, it's both resolving and suppressing it.
Can I know if it is my motherboard problem? Or my CPU problem. Should I change them?
â user10024395
Jun 14 '17 at 13:52
@user2675516: It's not CPU related. It's a problem of the PCIe root controller (which often is in the Southbridge) and/or the PCIe controller of the device, or their interaction. Yes, changing the motherboard for one with different hardware usually gets rid of it.
â dirkt
Jun 14 '17 at 15:45
I changed from asus e-ws to asus deluxe, but problem still persists. That's why i suspect it is the cpu. Or is it because both are X99 chipset?
â user10024395
Jun 14 '17 at 15:47
@user2675516: If the chipset is the same, esp. the PCIe controller, then changing the motherboard of course won't help. That's why I wrote "motherboard with different hardware".
â dirkt
Jun 14 '17 at 16:02
the common factor for me seems to be a motherboard with the X99 chipset
â MountainX
Jul 4 '17 at 3:18
 |Â
show 1 more comment
up vote
13
down vote
accepted
up vote
13
down vote
accepted
I can give at least a few details, even though I cannot fully explain what happens.
As described for example here, the CPU communicates with the PCIe bus controller by transaction layer packets (TLPs). The hardware detects when there are faulty ones, and the Linux kernel reports that as messages.
The kernel option pci=nommconf
disables Memory-Mapped PCI Configuration Space, which is available in Linux since kernel 2.6. Very roughly, all PCI devices have an area that describe this device (which you see with lspci -vv
), and the originally method to access this area involves going through I/O ports, while PCIe allows this space to be mapped to memory for simpler access.
That means in this particular case, something goes wrong when the PCIe controller uses this method to access the configuraton space of a particular device. It may be a hardware bug in the device, in the PCIe root controller on the motherboard, in the specific interaction of those two, or something else.
By using pci=nommconf
, the configuration space of all devices will be accessed in the original way, and changing the access methods works around this problem. So if you want, it's both resolving and suppressing it.
I can give at least a few details, even though I cannot fully explain what happens.
As described for example here, the CPU communicates with the PCIe bus controller by transaction layer packets (TLPs). The hardware detects when there are faulty ones, and the Linux kernel reports that as messages.
The kernel option pci=nommconf
disables Memory-Mapped PCI Configuration Space, which is available in Linux since kernel 2.6. Very roughly, all PCI devices have an area that describe this device (which you see with lspci -vv
), and the originally method to access this area involves going through I/O ports, while PCIe allows this space to be mapped to memory for simpler access.
That means in this particular case, something goes wrong when the PCIe controller uses this method to access the configuraton space of a particular device. It may be a hardware bug in the device, in the PCIe root controller on the motherboard, in the specific interaction of those two, or something else.
By using pci=nommconf
, the configuration space of all devices will be accessed in the original way, and changing the access methods works around this problem. So if you want, it's both resolving and suppressing it.
answered Jun 4 '17 at 5:34
dirkt
15.2k21032
15.2k21032
Can I know if it is my motherboard problem? Or my CPU problem. Should I change them?
â user10024395
Jun 14 '17 at 13:52
@user2675516: It's not CPU related. It's a problem of the PCIe root controller (which often is in the Southbridge) and/or the PCIe controller of the device, or their interaction. Yes, changing the motherboard for one with different hardware usually gets rid of it.
â dirkt
Jun 14 '17 at 15:45
I changed from asus e-ws to asus deluxe, but problem still persists. That's why i suspect it is the cpu. Or is it because both are X99 chipset?
â user10024395
Jun 14 '17 at 15:47
@user2675516: If the chipset is the same, esp. the PCIe controller, then changing the motherboard of course won't help. That's why I wrote "motherboard with different hardware".
â dirkt
Jun 14 '17 at 16:02
the common factor for me seems to be a motherboard with the X99 chipset
â MountainX
Jul 4 '17 at 3:18
 |Â
show 1 more comment
Can I know if it is my motherboard problem? Or my CPU problem. Should I change them?
â user10024395
Jun 14 '17 at 13:52
@user2675516: It's not CPU related. It's a problem of the PCIe root controller (which often is in the Southbridge) and/or the PCIe controller of the device, or their interaction. Yes, changing the motherboard for one with different hardware usually gets rid of it.
â dirkt
Jun 14 '17 at 15:45
I changed from asus e-ws to asus deluxe, but problem still persists. That's why i suspect it is the cpu. Or is it because both are X99 chipset?
â user10024395
Jun 14 '17 at 15:47
@user2675516: If the chipset is the same, esp. the PCIe controller, then changing the motherboard of course won't help. That's why I wrote "motherboard with different hardware".
â dirkt
Jun 14 '17 at 16:02
the common factor for me seems to be a motherboard with the X99 chipset
â MountainX
Jul 4 '17 at 3:18
Can I know if it is my motherboard problem? Or my CPU problem. Should I change them?
â user10024395
Jun 14 '17 at 13:52
Can I know if it is my motherboard problem? Or my CPU problem. Should I change them?
â user10024395
Jun 14 '17 at 13:52
@user2675516: It's not CPU related. It's a problem of the PCIe root controller (which often is in the Southbridge) and/or the PCIe controller of the device, or their interaction. Yes, changing the motherboard for one with different hardware usually gets rid of it.
â dirkt
Jun 14 '17 at 15:45
@user2675516: It's not CPU related. It's a problem of the PCIe root controller (which often is in the Southbridge) and/or the PCIe controller of the device, or their interaction. Yes, changing the motherboard for one with different hardware usually gets rid of it.
â dirkt
Jun 14 '17 at 15:45
I changed from asus e-ws to asus deluxe, but problem still persists. That's why i suspect it is the cpu. Or is it because both are X99 chipset?
â user10024395
Jun 14 '17 at 15:47
I changed from asus e-ws to asus deluxe, but problem still persists. That's why i suspect it is the cpu. Or is it because both are X99 chipset?
â user10024395
Jun 14 '17 at 15:47
@user2675516: If the chipset is the same, esp. the PCIe controller, then changing the motherboard of course won't help. That's why I wrote "motherboard with different hardware".
â dirkt
Jun 14 '17 at 16:02
@user2675516: If the chipset is the same, esp. the PCIe controller, then changing the motherboard of course won't help. That's why I wrote "motherboard with different hardware".
â dirkt
Jun 14 '17 at 16:02
the common factor for me seems to be a motherboard with the X99 chipset
â MountainX
Jul 4 '17 at 3:18
the common factor for me seems to be a motherboard with the X99 chipset
â MountainX
Jul 4 '17 at 3:18
 |Â
show 1 more comment
up vote
3
down vote
Try this steps:
cp /etc/default/grub ~/Desktop
Edit grub. Add
pci=noaer
at the end ofGRUB_CMDLINE_LINUX_DEFAULT
. Line will be like this:GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
sudo cp ~/Desktop/grub /etc/default/
sudo update-grub
- Reboot now
I applied your solution but instead ofpci=noaer
I usedpci=nommconf
as suggested by @dirkt
â user3405291
Jun 11 at 5:39
Thanks, pci=noaer fixed my slackware 14.2x64 problem installed on an hp laptop (desktop install didn't exhibit this problem at all)
â John Forkosh
Jun 26 at 21:38
add a comment |Â
up vote
3
down vote
Try this steps:
cp /etc/default/grub ~/Desktop
Edit grub. Add
pci=noaer
at the end ofGRUB_CMDLINE_LINUX_DEFAULT
. Line will be like this:GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
sudo cp ~/Desktop/grub /etc/default/
sudo update-grub
- Reboot now
I applied your solution but instead ofpci=noaer
I usedpci=nommconf
as suggested by @dirkt
â user3405291
Jun 11 at 5:39
Thanks, pci=noaer fixed my slackware 14.2x64 problem installed on an hp laptop (desktop install didn't exhibit this problem at all)
â John Forkosh
Jun 26 at 21:38
add a comment |Â
up vote
3
down vote
up vote
3
down vote
Try this steps:
cp /etc/default/grub ~/Desktop
Edit grub. Add
pci=noaer
at the end ofGRUB_CMDLINE_LINUX_DEFAULT
. Line will be like this:GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
sudo cp ~/Desktop/grub /etc/default/
sudo update-grub
- Reboot now
Try this steps:
cp /etc/default/grub ~/Desktop
Edit grub. Add
pci=noaer
at the end ofGRUB_CMDLINE_LINUX_DEFAULT
. Line will be like this:GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
sudo cp ~/Desktop/grub /etc/default/
sudo update-grub
- Reboot now
edited Jul 2 at 3:20
slmâ¦
241k66501669
241k66501669
answered May 28 at 2:51
Ehtesham
726
726
I applied your solution but instead ofpci=noaer
I usedpci=nommconf
as suggested by @dirkt
â user3405291
Jun 11 at 5:39
Thanks, pci=noaer fixed my slackware 14.2x64 problem installed on an hp laptop (desktop install didn't exhibit this problem at all)
â John Forkosh
Jun 26 at 21:38
add a comment |Â
I applied your solution but instead ofpci=noaer
I usedpci=nommconf
as suggested by @dirkt
â user3405291
Jun 11 at 5:39
Thanks, pci=noaer fixed my slackware 14.2x64 problem installed on an hp laptop (desktop install didn't exhibit this problem at all)
â John Forkosh
Jun 26 at 21:38
I applied your solution but instead of
pci=noaer
I used pci=nommconf
as suggested by @dirktâ user3405291
Jun 11 at 5:39
I applied your solution but instead of
pci=noaer
I used pci=nommconf
as suggested by @dirktâ user3405291
Jun 11 at 5:39
Thanks, pci=noaer fixed my slackware 14.2x64 problem installed on an hp laptop (desktop install didn't exhibit this problem at all)
â John Forkosh
Jun 26 at 21:38
Thanks, pci=noaer fixed my slackware 14.2x64 problem installed on an hp laptop (desktop install didn't exhibit this problem at all)
â John Forkosh
Jun 26 at 21:38
add a comment |Â
up vote
2
down vote
Adding the kernel command line option pci=nommconf
resolved the issue for me. Therefore, I'm assume the issue is motherboard-related. It happens on all my X99 motherboard-equipped computers. It does not happen on Z170 systems or any other hardware I own.
1
Hi I am also facing this problem. Can I know what pci-nommconf do? Is it just suppressing the problem or resolving the problem?
â user10024395
Jun 2 '17 at 10:02
Can't confirm - getting the error on z170i, running arch 4.13.12
â sitilge
Nov 24 '17 at 21:35
@sitilge - thanks for your comment. Which brand/model z170i? My motherboards are Asus. One is X99 Deluxe II
â MountainX
Nov 25 '17 at 0:36
It is asus z170i pro gaming.
â sitilge
Nov 25 '17 at 11:04
add a comment |Â
up vote
2
down vote
Adding the kernel command line option pci=nommconf
resolved the issue for me. Therefore, I'm assume the issue is motherboard-related. It happens on all my X99 motherboard-equipped computers. It does not happen on Z170 systems or any other hardware I own.
1
Hi I am also facing this problem. Can I know what pci-nommconf do? Is it just suppressing the problem or resolving the problem?
â user10024395
Jun 2 '17 at 10:02
Can't confirm - getting the error on z170i, running arch 4.13.12
â sitilge
Nov 24 '17 at 21:35
@sitilge - thanks for your comment. Which brand/model z170i? My motherboards are Asus. One is X99 Deluxe II
â MountainX
Nov 25 '17 at 0:36
It is asus z170i pro gaming.
â sitilge
Nov 25 '17 at 11:04
add a comment |Â
up vote
2
down vote
up vote
2
down vote
Adding the kernel command line option pci=nommconf
resolved the issue for me. Therefore, I'm assume the issue is motherboard-related. It happens on all my X99 motherboard-equipped computers. It does not happen on Z170 systems or any other hardware I own.
Adding the kernel command line option pci=nommconf
resolved the issue for me. Therefore, I'm assume the issue is motherboard-related. It happens on all my X99 motherboard-equipped computers. It does not happen on Z170 systems or any other hardware I own.
answered Apr 19 '17 at 4:43
MountainX
4,7682469122
4,7682469122
1
Hi I am also facing this problem. Can I know what pci-nommconf do? Is it just suppressing the problem or resolving the problem?
â user10024395
Jun 2 '17 at 10:02
Can't confirm - getting the error on z170i, running arch 4.13.12
â sitilge
Nov 24 '17 at 21:35
@sitilge - thanks for your comment. Which brand/model z170i? My motherboards are Asus. One is X99 Deluxe II
â MountainX
Nov 25 '17 at 0:36
It is asus z170i pro gaming.
â sitilge
Nov 25 '17 at 11:04
add a comment |Â
1
Hi I am also facing this problem. Can I know what pci-nommconf do? Is it just suppressing the problem or resolving the problem?
â user10024395
Jun 2 '17 at 10:02
Can't confirm - getting the error on z170i, running arch 4.13.12
â sitilge
Nov 24 '17 at 21:35
@sitilge - thanks for your comment. Which brand/model z170i? My motherboards are Asus. One is X99 Deluxe II
â MountainX
Nov 25 '17 at 0:36
It is asus z170i pro gaming.
â sitilge
Nov 25 '17 at 11:04
1
1
Hi I am also facing this problem. Can I know what pci-nommconf do? Is it just suppressing the problem or resolving the problem?
â user10024395
Jun 2 '17 at 10:02
Hi I am also facing this problem. Can I know what pci-nommconf do? Is it just suppressing the problem or resolving the problem?
â user10024395
Jun 2 '17 at 10:02
Can't confirm - getting the error on z170i, running arch 4.13.12
â sitilge
Nov 24 '17 at 21:35
Can't confirm - getting the error on z170i, running arch 4.13.12
â sitilge
Nov 24 '17 at 21:35
@sitilge - thanks for your comment. Which brand/model z170i? My motherboards are Asus. One is X99 Deluxe II
â MountainX
Nov 25 '17 at 0:36
@sitilge - thanks for your comment. Which brand/model z170i? My motherboards are Asus. One is X99 Deluxe II
â MountainX
Nov 25 '17 at 0:36
It is asus z170i pro gaming.
â sitilge
Nov 25 '17 at 11:04
It is asus z170i pro gaming.
â sitilge
Nov 25 '17 at 11:04
add a comment |Â
up vote
1
down vote
I changed the PCIE16_3 slot Config in Bios on my x99-E to be static set to x8 mode instead of auto that is default for M.2 device support. Works fine now without TLP errors on both of my 1070GTX cards connected via PCIe 1x to 16x extension boards.
I did not use port 16_3 first, moved to that slot to test but still had issues before change in bios. Also changed bsleep setting for all cards to 30 in the miner config.
Before change I had the kernel log spammed with faults.
Also tried to powercycle system before and after change. Seems to be pretty persistent.
edited Jul 2 at 3:18
slm
241k66501669
answered Apr 3 at 17:24
Nic
112
up vote
0
down vote
I get the same errors (Bad TLP associated with device 8086:6f08). I have an X99 Deluxe II, a Samsung 960 Pro, and an Nvidia 1080 Ti. These problems seem to be associated with the X99 chipset and M.2 devices such as the Samsung Pro.
The X99 Deluxe II motherboard shares bandwidth between the PCIE16_3 slot and the M.2/U.2 connectors. Following the answer from @Nic, in the BIOS I changed Onboard Devices Configuration | U.2_2 Bandwidth from Auto to U.2_2. This fixed the problem for me.
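To gauge whether a BIOS change like this actually stopped the errors, it helps to count the AER "Bad TLP" events before and after. A minimal sketch, using illustrative sample lines in place of real output (on a live system you would pipe in `journalctl -k` instead):

```shell
# Sample journalctl-style kernel log lines (illustrative, not real output).
log='Nov 15 15:49:52 x99 kernel: pcieport 0000:00:03.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0018(Receiver ID)
Nov 15 15:49:53 x99 kernel: pcieport 0000:00:03.0: [ 6] Bad TLP
Nov 15 15:49:54 x99 kernel: pcieport 0000:00:03.0: [ 6] Bad TLP'

# Count corrected Bad TLP events; compare counts before/after the BIOS change.
printf '%s\n' "$log" | grep -c 'Bad TLP'
# On a real system: journalctl -k | grep -c 'Bad TLP'
```

If the count stops growing after the change (and a reboot), the slot reconfiguration took effect.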
answered May 10 at 1:40
user1759557
1
up vote
0
down vote
I know this post is old, but it is one of the first results when searching for these error messages, so for posterity I will post my findings: I completed a root cause analysis, and the real solution is this:
Search your motherboard manual for "AER". You can kill the source of the problem either by correcting the specific incompatibility or by disabling AER altogether. Only do this if all of the error spam concerns Corrected errors; otherwise you could be covering up an actual issue.
New contributor
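If the BIOS offers no AER setting, reporting can also be silenced from the kernel side with the `pci=noaer` boot parameter (again, only advisable when every reported error is Corrected). A sketch of adding it on a GRUB-based system; the edit is shown on a temporary sample copy of the config, and the file path and existing cmdline contents are assumptions:

```shell
# Sketch: append pci=noaer to the kernel command line in a GRUB config.
# Real systems keep this in /etc/default/grub; we edit a sample copy here.
cfg=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$cfg"

# Insert pci=noaer just before the closing quote of the default cmdline.
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT=".*\)"/\1 pci=noaer"/' "$cfg"

grep GRUB_CMDLINE_LINUX_DEFAULT "$cfg"
# prints: GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=noaer"

# On a real system, regenerate the config and reboot afterwards, e.g.:
#   grub-mkconfig -o /boot/grub/grub.cfg
rm -f "$cfg"
```

`pci=nommconf` (mentioned in the comments above) is a different lever: it disables memory-mapped PCI configuration access entirely, which can also make the messages go away but changes how the kernel talks to config space rather than just muting AER reports.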
answered 2 mins ago
N3V3N
1