Can I see the amount of memory which is allocated as GEM buffers?
My /proc/meminfo shows about 500 MB is allocated as Shmem. I want to get more specific figures. I found an explanation here:
https://lists.kernelnewbies.org/pipermail/kernelnewbies/2013-July/008628.html
It includes tmpfs memory, SysV shared memory (from ipc/shm.c), POSIX shared memory (under /dev/shm [which is a tmpfs]), and shared anonymous mappings (from mmap of /dev/zero with MAP_SHARED: see the call to shmem_zero_setup() from drivers/char/mem.c): whatever allocates pages through mm/shmem.c.
2 -> As per the developer comments, NR_SHMEM includes tmpfs and GEM pages. What are GEM pages?
Ah yes, and the Graphics Execution Manager uses shmem for objects shared with the GPU: see the use of shmem_read_mapping_page*() in drivers/gpu/drm/.
I have about
- 50 MB in user-visible tmpfs, found with df -h -t tmpfs.
- 40 MB (10,000 pages of 4096 bytes) in SysV IPC shared memory, found with ipcs -mu.
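(As a quick sanity check on the second figure, in shell arithmetic:

echo $(( 10000 * 4096 / 1024 / 1024 ))   # 10,000 pages × 4096 bytes => 39 MiB, i.e. the ~40 MB above

)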
I would like some more positive accounting for what uses the 500 MB! Is there a way to show total GEM allocations? (Or any other likely contributor?)
I expect I have some GEM allocations, since I am running a graphical desktop on Intel graphics hardware. My kernel version is 4.18.16-200.fc28.x86_64 (Fedora Workstation 28).
linux memory graphics drm
asked Nov 19 at 16:15 by sourcejedi
2 Answers
These appear in process maps as “drm mm object” or “i915”. You can see this in /proc/<pid>/maps; given the PID of a process using GEM/DRM:
awk '/(drm mm object)|i915/ { hypidx = index($1, "-"); from = substr($1, 1, hypidx - 1); to = substr($1, hypidx + 1); sum += strtonum("0x" to) - strtonum("0x" from) } END { print sum }' /proc/$PID/maps
will show the total size of the allocated GEM buffers for that process. The system-wide total can be calculated by feeding in all the maps files which contain at least one occurrence of “drm mm object” or “i915”; as root:
find /proc -maxdepth 2 -name maps |
xargs grep -E -l "(drm mm object)|i915" |
xargs awk '/(drm mm object)|i915/ { hypidx = index($1, "-"); sum += strtonum("0x" substr($1, hypidx + 1)) - strtonum("0x" substr($1, 1, hypidx - 1)) } END { print sum }'
(-maxdepth 2 is necessary to avoid looking at thread maps.) Some additional inode-based de-duplication might be necessary, along the lines of the sketch below.
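A minimal sketch of that de-duplication, assuming the device and inode columns of maps (fields 4 and 5) identify each object uniquely (GNU awk is required anyway, since strtonum is a gawk extension). It counts only the first mapping seen per object, so partially-mapped objects make the figure approximate:

find /proc -maxdepth 2 -name maps |
xargs grep -E -l "(drm mm object)|i915" |
xargs awk '/(drm mm object)|i915/ && !seen[$4 ":" $5]++ {
    # count each device:inode pair once, summing end - start of its address range
    hypidx = index($1, "-");
    sum += strtonum("0x" substr($1, hypidx + 1)) - strtonum("0x" substr($1, 1, hypidx - 1))
} END { print sum }'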
edited Nov 19 at 17:20, answered Nov 19 at 16:52 by Stephen Kitt

Yeah. maps shows device and inode number, so you can de-duplicate as well. I don't see "drm mm object"s; I see some "/i915 (deleted)" though.
– sourcejedi Nov 19 at 17:09
Indeed. There’s also “ttm swap” but that’s something different.
– Stephen Kitt Nov 19 at 17:20
This strategy did not suffice. I added an answer to explain why.
– sourcejedi Nov 20 at 16:35
You can find some of the shmem files by looking through all open files, in /proc/*/fd/ and /proc/*/map_files/ (or /proc/*/maps).
With the right hacks, it appears possible to reliably identify which files belong to the hidden shmem filesystem(s).
Each shared memory object is a file with a name. And the names can be used to identify which kernel subsystem created the file.
- SYSV00000000
- i915 (i.e. the Intel GPU)
- memfd:gdk-wayland
- /dev/zero (for any "anonymous" shared mapping)
- ...
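A rough way to eyeball those names, as root (a sketch only: unlike the script further down, it also catches ordinary deleted files that are not on a shmem filesystem):

# list the link targets of every process's map_files entries;
# the shmem objects show up with names like "/i915 (deleted)"
find /proc/[0-9]*/map_files -type l 2>/dev/null |
xargs -r readlink | grep ' (deleted)$' | sort | uniq -c | sort -rn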
However this does not show all DRM / GEM allocations. DRM buffers can exist without being mapped, simply as a numeric handle. These are tied to the open DRM file they were created on. When the program crashes or is killed, the DRM file will be closed, and all its DRM handles will be cleaned up automatically. (Unless some other software keeps a copy of the file descriptor open, like this old bug.)
https://www.systutorials.com/docs/linux/man/7-drm-gem/
You can find open DRM files in /proc/*/fd/, but they show as a zero-size file with zero blocks allocated.
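For instance, a sketch (run as root) to see which processes hold a DRM device node open, and to confirm the zero size and zero block count:

# fd symlinks that point at a DRM device node
find /proc/[0-9]*/fd -lname '/dev/dri/*' 2>/dev/null |
xargs -r stat -L -c '%n: size=%s blocks=%b'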
For example, right now I am still unable to account for over 50% / 300 MB of my Shmem.
$ grep Shmem: /proc/meminfo
Shmem: 612732 kB
$ df -h -t tmpfs
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 59M 3.8G 2% /dev/shm
tmpfs 3.9G 2.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 9.0M 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 8.0M 778M 2% /run/user/1000
tmpfs 786M 5.7M 781M 1% /run/user/1001
$ sudo ipcs -mu
------ Shared Memory Status --------
segments allocated 20
pages allocated 4226
pages resident 3990
pages swapped 0
Swap performance: 0 attempts 0 successes
All open files on hidden shmem filesystem(s):
$ sudo python3 ~/shm -s
15960 /SYSV*
79140 /i915
7912 /memfd:gdk-wayland
1164 /memfd:pulseaudio
104176
Here is a "before and after", logging out one of my two logged-in GNOME users. It might be explained if gnome-shell had over 100 MB of unmapped DRM buffers.
$ grep Shmem: /proc/meminfo
Shmem: 478780 kB
$ df -t tmpfs -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 2.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 276K 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 8.0M 778M 2% /run/user/1000
tmpfs 786M 5.7M 781M 1% /run/user/1001
$ sudo ./shm -s
80 /SYSV*
114716 /i915
1692 /memfd:gdk-wayland
1156 /memfd:pulseaudio
117644
$ grep Shmem: /proc/meminfo
Shmem: 313008 kB
$ df -t tmpfs -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 2.1M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 204K 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 6.8M 780M 1% /run/user/1000
$ sudo ./shm -s
40 /SYSV*
88496 /i915
1692 /memfd:gdk-wayland
624 /memfd:pulseaudio
90852
Python script to generate the above output:
#!/bin/python3
# Reads Linux /proc. No str, all bytes.

import sys
import os
import stat
import glob
import collections
import math

# File.
# 'name' is first name encountered, we don't track hardlinks.
Inode = collections.namedtuple('Inode', ['name', 'bytes', 'pids'])

# inode number -> Inode object
inodes = dict()

# pid -> program name
pids = dict()

# filename -> set() of inode numbers
filenames = dict()

def add_file(pid, proclink):
    try:
        vfs = os.statvfs(proclink)

        # The tmpfs which reports 0 blocks is an internal shm mount
        # python doesn't admit f_fsid ...
        if vfs.f_blocks != 0:
            return
        filename = os.readlink(proclink)

        # ... but all the shm files are deleted (hack :)
        if not filename.endswith(b' (deleted)'):
            return
        filename = filename[:-10]

        # I tried a consistency check that all our st_dev are the same
        # but actually there can be more than one internal shm mount!
        # i915 added a dedicated "gemfs" so they could control mount options.

        st = os.stat(proclink)

        # hack the second: ignore deleted character devices from devpts
        if stat.S_ISCHR(st.st_mode):
            return

        # Read process name successfully,
        # before we record file owned by process.
        if pid not in pids:
            pids[pid] = open(b'/proc/' + pid + b'/comm', 'rb').read()[:-1]

        if st.st_ino not in inodes:
            inode_pids = set()
            inode_pids.add(pid)
            inode = Inode(name=filename,
                          bytes=st.st_blocks * 512,
                          pids=inode_pids)
            inodes[st.st_ino] = inode
        else:
            inode = inodes[st.st_ino]
            inode.pids.add(pid)

        # Group SYSV shared memory objects.
        # There could be many, and the rest of the name is just a numeric ID
        if filename.startswith(b'/SYSV'):
            filename = b'/SYSV*'

        filename_inodes = filenames.setdefault(filename, set())
        filename_inodes.add(st.st_ino)

    except FileNotFoundError:
        # File disappeared (race condition).
        # Don't bother to distinguish "file closed" from "process exited".
        pass

summary = False
if sys.argv[1:]:
    if sys.argv[1:] == ['-s']:
        summary = True
    else:
        print("Usage: {0} [-s]".format(sys.argv[0]))
        sys.exit(2)

os.chdir(b'/proc')
for pid in glob.iglob(b'[0-9]*'):
    for f in glob.iglob(pid + b'/fd/*'):
        add_file(pid, f)
    for f in glob.iglob(pid + b'/map_files/*'):
        add_file(pid, f)

def pid_name(pid):
    return pid + b'/' + pids[pid]

def kB(b):
    return str(math.ceil(b / 1024)).encode('US-ASCII')

out = sys.stdout.buffer
total = 0
for (filename, filename_inodes) in sorted(filenames.items(), key=lambda p: p[0]):
    filename_bytes = 0
    for ino in filename_inodes:
        inode = inodes[ino]
        filename_bytes += inode.bytes
        if not summary:
            out.write(kB(inode.bytes))
            out.write(b'\t')
            #out.write(str(ino).encode('US-ASCII'))
            #out.write(b'\t')
            out.write(inode.name)
            out.write(b'\t')
            out.write(b' '.join(map(pid_name, inode.pids)))
            out.write(b'\n')
    total += filename_bytes
    out.write(kB(filename_bytes))
    out.write(b'\t')
    out.write(filename)
    out.write(b'\n')
out.write(kB(total))
out.write(b'\n')
edited Nov 20 at 21:06, answered Nov 20 at 16:34 by sourcejedi (accepted)