Problem with NFS mount when the FS is shared to all clients, but it works when a specific client network is specified [closed]
I have shared a file system over NFS with ro permissions for all clients, using *. But the client is unable to mount it; when the same FS is instead shared to the subnetwork the client is in, mounting works. What could cause the issue when it is shared to all clients?
Listed all the shares of the nas1 NFS server from the client:
root #: showmount -e nas1
export list for nas1:
/vx/test-omss *
Tried to mount test-omss on /mnt, but got an error:
root #: mount -F nfs nas1:/vx/test-omss /mnt
nfs mount: mount: /mnt: Stale NFS file handle
root #:
Went back to the NFS server and specified the network/mask for the client.
Listed again what is shared on nas1 after the change:
root #: showmount -e nas1
export list for nas1:
/vx/test-omss 172.26.244.0/24
Tried to mount test-omss on /mnt again; no errors, it works:
root #: mount -F nfs nas1:/vx/test-omss /mnt
root #: df -kh | grep -i mnt
mnttab 0K 0K 0K 0% /etc/mnttab
nas1:/vx/test-omss 2.0G 165M 1.8G 9% /mnt
root #: umount /mnt
root #:
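The only difference between the failing and the working attempt above is how the export restricts clients. The address check the server conceptually performs for the subnet form can be sketched as a small shell helper (illustrative only — a hypothetical function, not the actual NFS server or Veritas Access code):

```shell
#!/bin/sh
# Sketch: how an NFS server conceptually matches a client address
# against a subnet export such as 172.26.244.0/24.

ip_to_int() {
    # Convert a dotted-quad IPv4 address to a 32-bit integer.
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

ip_in_subnet() {
    # Succeed (exit 0) if address $1 falls inside CIDR $2.
    net=${2%/*}                                  # e.g. 172.26.244.0
    bits=${2#*/}                                 # e.g. 24
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

ip_in_subnet 172.26.244.17 172.26.244.0/24 && echo inside   # prints "inside"
ip_in_subnet 172.26.245.17 172.26.244.0/24 || echo outside  # prints "outside"
```

Since the client at 172.26.244.x matches both * and 172.26.244.0/24, the address match itself cannot explain the failure — which points at cached client-side state (the stale handle) rather than the export rule.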
Tags: mount, nfs
closed as too broad by Rui F Ribeiro, G-Man, telcoM, Fabby, αғsнιη Dec 9 at 13:06
Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
Probably networking/firewalls/ISP issues, who knows. Some protocols also do not play well with NAT, much less with NAT+VPNs. I had to think a bit hard to understand what you are asking, and there is no technical or debugging data here whatsoever that allows a meaningful and on-topic answer. I advise investing a lot more time in questions about your problems - for instance, you do not even show how you share that filesystem. The question is overly broad; you are asking people to guess things and debug your infrastructure, which is also not on topic here. Any specific Unix question?
– Rui F Ribeiro
Dec 9 at 4:37
I appreciate your taking the time to comment; I just updated the question.
– Bharat
Dec 9 at 13:02
It is the default convention. From the Veritas Access NFS documentation: if a client was not specified when the NFS> share add command was used, then * is displayed as the system to be exported to, indicating that all clients can access the directory. Output from the server: nfs share show → /vx/test-omss * (ro,no_root_squash)
– Bharat
Dec 9 at 13:48
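For comparison with the nfs share add syntax quoted above: on a generic Linux NFS server (not Veritas Access — this is only an analogy, not the poster's actual configuration), the two share states from the question would correspond to /etc/exports entries like:

```
# /etc/exports — generic Linux analogy, not the Veritas Access config
/vx/test-omss *(ro,no_root_squash)                 # export to all clients
/vx/test-omss 172.26.244.0/24(ro,no_root_squash)   # export to one subnet only
```

On such a server, exportfs -ra re-reads the file and applies the change without restarting the NFS service.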
check this first and this.
– αғsнιη
Dec 9 at 17:32
The problem was fixed after rebooting the NFS client system; I wish I could have resolved it without rebooting.
– Bharat
Dec 11 at 17:48
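Regarding the reboot fix: a stale handle is cached client-side state, so it can sometimes be cleared without a full reboot by force-unmounting and remounting. A hedged sketch using standard umount flags (whether this works depends on why the handle went stale; the mount -F nfs syntax in the question suggests a Solaris client):

```shell
# Possible non-reboot recovery for a stale NFS handle (sketch, not guaranteed):

# Solaris client (matches the mount -F nfs syntax above):
umount -f /mnt                      # force-unmount the stale mount point

# Linux client equivalent:
umount -f /mnt || umount -l /mnt    # force, then lazy unmount as a fallback

# Then simply remount:
mount -F nfs nas1:/vx/test-omss /mnt
```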
edited Dec 9 at 13:50
asked Dec 9 at 3:34
Bharat
38111