How do I find all interfaces that have been configured in Linux, including those of containers?

I know that you can display interfaces with ip a show, but that only shows the interfaces the host can see; virtual interfaces configured inside containers don't appear in the list. I've tried ip netns as well, and they don't show up there either. Should I recompile another version of iproute2? In /proc/net/fib_trie you can see the local/broadcast addresses which are, I assume, used for the forwarding database.



Where can I find this information, or what command lists all interfaces, including those of containers?



To test this out, start up a container. In my case it is an LXC container installed via snap. Run ip a or ip l: it shows the host machine's view, but not the interface configured inside the container. I've been grepping through procfs, since containers are just cgrouped processes, but I find nothing beyond the fib_trie and the ARP entry. I thought it might be hidden by a network namespace, but ip netns also shows nothing.



You can use conntrack -L to display all established incoming and outgoing connections, because LXD needs to connection-track the forwarded packets, but I'd like to list all IP addresses configured on the system, the way I'd be able to tell with netstat or lsof.
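For a single container whose init PID is already known on the host, its addresses can be shown by entering that process' network namespace; a minimal sketch (the PID 4242 below is hypothetical):

    # 4242 stands for a container's init PID as seen from the host
    sudo nsenter --target 4242 --net ip -br address show

What I'm after, though, is a way to enumerate every namespace without having to know each PID in advance.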





























Tags: linux, lxc, iproute, namespace, container














asked Mar 8 at 11:36 – munchkin (edited Mar 24 at 13:47 – Jeff Schaller)







  • Doesn't /proc/self/net/dev show them?
    – Raman Sailopal, Mar 8 at 11:45

  • No, /proc/self/net/dev shows only the host machine's interfaces, not the container's. Specifically, I'm looking for the configured interfaces with their IP addresses. Once a packet is sent there is an ARP table entry, and that's about it.
    – munchkin, Mar 8 at 11:50

  • On a Debian system, ip a gives me all interfaces, including the virtual ones used for Docker containers. In one specific instance I've got 39 entries listed, and most of these do not have an IP address configured on the host.
    – roaima, Mar 8 at 13:45

  • I've added veth interfaces by hand and they both appear in ip a, and I've tried this with Docker as well: after docker pull/run ubuntu, I ran ip a inside Ubuntu and tried to find the interface's address from the host. From the host I can only see one end; the peer virtual interface isn't there. For example, I have veth589f348@if17 but not if17@veth589f348. It's the same with both Docker and LXC.
    – munchkin, Mar 8 at 15:51

  • The other interface, inside the container, would typically have @if16 or @if18 appended (not the name reversed). More on this in my answer there. I've also provided an answer to this question, trying to explain how it works; using the shell isn't optimal for any automation.
    – A.B, Mar 13 at 0:22
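A minimal sketch illustrating the @ifN pairing mentioned in the last comment, using a throwaway namespace and a veth pair (the names demo, h0 and c0 are arbitrary; run as root):

    ip netns add demo                        # empty, named network namespace
    ip link add h0 type veth peer name c0    # veth pair, both ends on the host for now
    ip link set c0 netns demo                # move one end into the namespace
    ip -o link show h0                       # prints e.g. "h0@if4 ..." -> peer ifindex 4 inside demo
    ip netns exec demo ip -o link show c0    # prints e.g. "c0@if5 ..." -> peer ifindex 5 on the host
    ip netns del demo                        # clean up; this also destroys the veth pair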












1 Answer
































An interface, at any given time, belongs to one and only one network namespace. The init (initial) network namespace, apart from inheriting the physical interfaces of destroyed network namespaces, has no special ability over the other network namespaces: it cannot see their interfaces directly. As long as you are still in init's PID and mount namespaces, you can find the network namespaces through the various references available in /proc, and then display their interfaces by entering those namespaces.
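As a minimal illustration of that (the PID 4242 below is hypothetical, standing for a container's init process), comparing namespace inodes shows whether two processes share a network namespace:

    # one inode per line; identical inodes mean the same network namespace
    stat -L -c %i /proc/1/ns/net /proc/4242/ns/net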



I'll provide examples in shell.




  • enumerate the network namespaces



    For this you have to know how those namespaces stay in existence: a namespace lives as long as some resource keeps it up. A resource here can be a process (actually a process' thread), a mount point or an open file descriptor (fd). All those resources are referenced in /proc/ and point to an abstract pseudo-file in the nsfs pseudo-filesystem, which enumerates all namespaces. The only meaningful information in that file is its inode, which identifies the network namespace, but the inode can't be manipulated alone: it has to stay a file. That's why later we can't keep only the inode value (given by stat -c %i /proc/some/file): we keep the inode to be able to remove duplicates, and a filename to still have a usable reference for nsenter later.




    • process (actually thread)



      The most common case: ordinary containers. Each thread's network namespace can be found via the reference /proc/pid/ns/net: just stat them and enumerate all the unique namespaces. The 2>/dev/null hides stat's errors for ephemeral processes that have already disappeared.



      find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do
          stat -L -c '%20i %n' "$procpid/ns/net"
      done 2>/dev/null


      This can be done faster with the specialized lsns command, which deals with namespaces but appears to handle only processes (not mount points or open fds, covered below):



      lsns -n -u -t net -o NS,PATH


      (which would have to be reformatted for later use as lsns -n -u -t net -o NS,PATH | while read -r inode path; do printf '%20u %s\n' "$inode" "$path"; done)




    • mount point



      These are mostly created by the ip netns add command, which makes network namespaces permanent by mounting them, preventing them from disappearing when no process or fd keeps them up, and thereby allowing, for example, a router, firewall or bridge to run in a network namespace without any process tied to it.



      Mounted namespaces (handling mount and perhaps PID namespaces is probably more complex, but we're only interested in network namespaces anyway) appear like any other mount point in /proc/mounts, with the filesystem type nsfs. There's no easy way in shell to distinguish a network namespace from another type of namespace, but since two pseudo-files from the same filesystem (here nsfs) never share the same inode, just pick them all and ignore the errors later, in the interface step, when a non-network namespace reference is used as a network namespace. Sorry: below I won't correctly handle mount points with special characters in them, including spaces, because they appear escaped in /proc/mounts's output (it would be easier in almost any other language), so I won't bother with null-terminated lines either.



      awk '$3 == "nsfs" { print $2 }' /proc/mounts | while read -r mount; do
          stat -c '%20i %n' "$mount"
      done



    • open file descriptor



      These are probably even rarer than mount points, except temporarily at namespace creation, but they might be held and used by some specialized application handling multiple namespaces, possibly including some containerization technology.



      I couldn't devise a better method than searching every fd available in each /proc/pid/fd/, using stat to verify that it points into the nsfs filesystem, again without caring for now whether it's really a network namespace. I'm sure there's a more optimized loop, but at least this one won't wander everywhere or assume any maximum process limit.



      find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do
          find "$procpid/fd" -mindepth 1 | while read -r procfd; do
              if [ "$(stat -f -c %T "$procfd")" = nsfs ]; then
                  stat -L -c '%20i %n' "$procfd"
              fi
          done
      done 2>/dev/null


    Now remove all duplicate network namespace references from the previous results, e.g. by applying this filter to the combined output of the three steps above (duplicates come especially from the open file descriptor part):



    sort -k 1n | uniq -w 20



  • in each namespace enumerate the interfaces



    Now that we have references to all the existing network namespaces (plus a few non-network namespaces, which we'll simply ignore), just enter each of them using its reference and display the interfaces.



    Feed the previous commands' output into this loop to enumerate the interfaces (and, as the question asks, display their addresses), ignoring the errors caused by non-network namespaces as explained above:



    while read -r inode reference; do
        if nsenter --net="$reference" ip -br address show 2>/dev/null; then
            printf 'end of network %d\n\n' "$inode"
        fi
    done


The init network namespace's inode can be printed with PID 1 as the reference:



echo -n 'INIT NETWORK: ' ; stat -L -c %i /proc/1/ns/net
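Putting the pieces together, the whole procedure can be wrapped into a single script; this is a minimal sketch using only the commands shown above:

    #!/bin/sh
    # List the interfaces of every network namespace reachable from /proc (run as root).
    {
        # namespaces held by processes
        find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do
            stat -L -c '%20i %n' "$procpid/ns/net"
        done 2>/dev/null

        # namespaces held by mount points (ip netns add ...)
        awk '$3 == "nsfs" { print $2 }' /proc/mounts | while read -r mount; do
            stat -c '%20i %n' "$mount"
        done

        # namespaces held by open file descriptors
        find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do
            find "$procpid/fd" -mindepth 1 | while read -r procfd; do
                [ "$(stat -f -c %T "$procfd")" = nsfs ] && stat -L -c '%20i %n' "$procfd"
            done
        done 2>/dev/null
    } |
    sort -k 1n | uniq -w 20 |
    while read -r inode reference; do
        if nsenter --net="$reference" ip -br address show 2>/dev/null; then
            printf 'end of network %d\n\n' "$inode"
        fi
    done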


Example (real but redacted) output with: a running LXC container; an initially empty "mounted" network namespace created with ip netns add ..., now holding an unconnected bridge interface; a network namespace with another dummy0 interface, kept alive by a process that is not in this network namespace but keeps an open fd on it, created with:



unshare --net sh -c 'ip link add dummy0 type dummy; ip address add dev dummy0 10.11.12.13/24; sleep 3' & sleep 1; sleep 999 < /proc/$!/ns/net &


and a running Firefox, which isolates each of its "Web Content" threads in an unconnected network namespace (all those entries with only a down lo interface):




lo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 192.0.2.2/24 2001:db8:0:1:bc5c:95c7:4ea6:f94f/64 fe80::b4f0:7aff:fe76:76a8/64
wlan0 DOWN
dummy0 UNKNOWN 198.51.100.2/24 fe80::108a:83ff:fe05:e0da/64
lxcbr0 UP 10.0.3.1/24 2001:db8:0:4::1/64 fe80::216:3eff:fe00:0/64
virbr0 DOWN 192.168.122.1/24
virbr0-nic DOWN
vethSOEPSH@if9 UP fe80::fc8e:ff:fe85:476f/64
end of network 4026531992

lo DOWN
end of network 4026532418

lo DOWN
end of network 4026532518

lo DOWN
end of network 4026532618

lo DOWN
end of network 4026532718

lo UNKNOWN 127.0.0.1/8 ::1/128
eth0@if10 UP 10.0.3.66/24 fe80::216:3eff:fe6a:c1e9/64
end of network 4026532822

lo DOWN
bridge0 UNKNOWN fe80::b884:44ff:feaf:dca3/64
end of network 4026532923

lo DOWN
dummy0 DOWN 10.11.12.13/24
end of network 4026533021

INIT NETWORK: 4026531992





answered Mar 12 at 23:53 – A.B












  • This is a great answer! Also, I find that Linux documentation on the internet seems to have skipped the namespace and cgroup implications as part of any sysadmin guide, with little mention of this in tools like iproute2. One basically has to come head to head with an issue and piece together threads from different tools.
    – munchkin, Mar 15 at 9:37

  • Some prior knowledge is indeed needed, but much of the information is available from man 7 namespaces and the links to other man pages at its end.
    – A.B, Mar 15 at 13:17

















