NFSv4 does not require interaction with the rpcbind service, supports ACLs, and utilizes stateful operations. Red Hat Enterprise Linux 6 supports NFSv2, NFSv3, and NFSv4 clients. When mounting a file system via NFS, Red Hat Enterprise Linux uses NFSv4 by default, if the server supports it. NFSv2 and NFSv3 rely on the rpcbind [3] service, the lockd kernel thread, and the rpc.statd daemon. The rpc.mountd daemon is required on the NFS server to set up the exports.
Note
These daemons provide a '-p' command line option that can set the port, making firewall configuration easier.
The NFS server consults the /etc/exports configuration file to determine whether the client is allowed to access any exported file systems. Once verified, all file and directory operations are available to the user.
Important
The rpc.nfsd process now allows binding to any specified port during system start up. However, this can be error-prone if the port is unavailable, or if it conflicts with another daemon.
RPC services under Red Hat Enterprise Linux 6 are controlled by the rpcbind service. To share or mount NFS file systems, the following services work together, depending on which version of NFS is implemented:
Note
The portmap service was used to map RPC program numbers to IP address port number combinations in earlier versions of Red Hat Enterprise Linux. This service has been replaced by rpcbind in Red Hat Enterprise Linux 6 to enable IPv6 support. For more information about this change, refer to the following links:
service nfs start
starts the NFS server and the appropriate RPC processes to service requests for shared NFS file systems.
service nfslock start
activates a mandatory service that starts the appropriate RPC processes allowing NFS clients to lock files on the server.
rpcbind
Accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them. rpcbind responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4.
rpc.nfsd
Allows explicit NFS versions and protocols the server advertises to be defined. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs service.
Note
The NFSv4 server uses rpc.idmapd. The NFSv4 client uses the keyring-based idmapper nfsidmap. nfsidmap is a stand-alone program that is called by the kernel on demand to perform ID mapping; it is not a daemon. Only if there is a problem with nfsidmap does the client fall back to using rpc.idmapd. More information regarding nfsidmap can be found on the nfsidmap man page.
rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv2 and NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the rpc.mountd server replies with a Success status and provides the File-Handle for this NFS share back to the NFS client.
lockd
A kernel thread which runs on both clients and servers. It implements the Network Lock Manager (NLM) protocol, which allows NFSv2 and NFSv3 clients to lock files on the server. It is started automatically whenever the NFS server is run and whenever an NFS file system is mounted.
rpc.statd
Started automatically by the nfslock service; it does not require user configuration. This is not used with NFSv4.
rpc.rquotad
Started automatically by the nfs service; it does not require user configuration.
rpc.idmapd
Provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with NFSv4, the /etc/idmapd.conf file must be configured. At a minimum, the "Domain" parameter should be specified, which defines the NFSv4 mapping domain. If the NFSv4 mapping domain is the same as the DNS domain name, this parameter can be skipped. The client and server must agree on the NFSv4 mapping domain for ID mapping to function properly. Refer to the knowledge base article https://access.redhat.com/site/solutions/130783 when using a local domain name.
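As an illustration, a minimal /etc/idmapd.conf might set only the "Domain" parameter; the domain name below is a hypothetical placeholder, not a value from this guide:

```
[General]
# NFSv4 mapping domain; must match on both client and server
Domain = example.com

[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody
```

The [Mapping] entries shown are the usual fallback accounts for unmappable names; adjust them only if your site uses different accounts.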
To enable pNFS functionality, use one of the following mount options on mounts from a pNFS-enabled server:
-o minorversion=1
-o v4.1
After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically loaded on the first mount. Use the following command to verify the module was loaded:
$ lsmod | grep nfs_layout_nfsv41_files
Another way to verify a successful NFSv4.1 mount is with the mount command. The mount entry in the output should contain minorversion=1.
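For example, a client might mount a pNFS-enabled export and then check the mount entry as follows (the server name and paths are hypothetical placeholders):

```
# mount -t nfs -o v4.1 server.example.com:/exports/data /mnt/data
# mount | grep /mnt/data
```

Per the note above, the second command's output should contain minorversion=1 if the NFSv4.1 mount succeeded.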
Important
The mount command mounts NFS shares on the client side. Its format is as follows:
# mount -t nfs -o options host:/remote/export /local/directory
The NFS protocol version is specified with the mount options nfsvers or vers. By default, mount will use NFSv4 with mount -t nfs. If the server does not support NFSv4, the client will automatically step down to a version supported by the server. If the nfsvers/vers option is used to pass a particular version not supported by the server, the mount will fail. The file system type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o nfsvers=4 host:/remote/export /local/directory.
Refer to man mount for more details.
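For instance, to force an NFSv3 mount the client could run the following (the hostname and paths are hypothetical placeholders):

```
# mount -t nfs -o nfsvers=3 server.example.com:/remote/export /local/directory
```

If the server does not support NFSv3, this mount fails rather than negotiating another version, as described above.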
There are two methods for mounting remote file systems automatically at boot time: the /etc/fstab file and the autofs service. Refer to Section 9.3.1, “Mounting NFS File Systems using /etc/fstab” and Section 9.4, “autofs” for more information.
Mounting NFS File Systems using /etc/fstab
An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file.
Example 9.1. Syntax example
The general syntax for the line in /etc/fstab is as follows:
server:/usr/local/pub /pub nfs defaults 0 0
The mount point /pub must exist on the client machine before this command can be executed. After adding this line to /etc/fstab on the client system, use the command mount /pub, and the mount point /pub is mounted from the server.
The /etc/fstab file is referenced by the netfs service at boot time, so lines referencing NFS shares have the same effect as manually typing the mount command during the boot process.
A valid /etc/fstab entry to mount an NFS export should contain the following information:
server:/remote/export /local/directory nfs options 0 0
Note
The mount point /local/directory must exist on the client before /etc/fstab is read. Otherwise, the mount will fail.
For more information about /etc/fstab, refer to man fstab.
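A concrete entry might look like the following; the server name, paths, and option choices here are illustrative placeholders rather than values from this guide:

```
server.example.com:/exports/home  /mnt/home  nfs  rw,hard,nfsvers=3  0 0
```

At boot, the netfs service mounts this export at /mnt/home, exactly as if mount /mnt/home had been typed manually.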
autofs
One drawback to using /etc/fstab is that, regardless of how infrequently a user accesses the NFS mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at one time, overall system performance can be affected. An alternative to /etc/fstab is to use the kernel-based automount utility. An automounter consists of two components: a kernel module that implements a file system, and a user-space daemon that performs all of the other functions.
The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), thereby saving system resources. It can be used to mount other file systems including AFS, SMBFS, CIFS, and local file systems.
Important
autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the autofs configuration (in /etc/sysconfig/autofs) in conjunction with the Name Service Switch (NSS) mechanism. An instance of the autofs version 4 daemon was run for each mount point configured in the master map, so it could be run manually from the command line for any given mount point. This is not possible with autofs version 5, because it uses a single daemon to manage all configured mount points; as such, all automounts must be configured in the master map. This is in line with the usual requirements of other industry standard automounters. Mount point, hostname, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host.
autofs version 5 features the following enhancements over version 4:
Direct map support
Direct maps in autofs provide a mechanism to automatically mount file systems at arbitrary points in the file system hierarchy. A direct map is denoted by a mount point of /- in the master map. Entries in a direct map contain an absolute path name as a key (instead of the relative path names used in indirect maps).
Lazy mount and unmount support
Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of this is the -hosts map, commonly used for automounting all exports from a host under /net/host as a multi-mount map entry. When using the -hosts map, an ls of /net/host will mount autofs trigger mounts for each export from host. These will then be mounted and expired as they are accessed. This can greatly reduce the number of active mounts needed when accessing a server with a large number of exports.
Enhanced LDAP support
The autofs configuration file (/etc/sysconfig/autofs) provides a mechanism to specify the autofs schema that a site implements, thus precluding the need to determine this via trial and error in the application itself. In addition, authenticated binds to the LDAP server are now supported, using most mechanisms supported by the common LDAP server implementations. A new configuration file has been added for this support: /etc/autofs_ldap_auth.conf. The default configuration file is self-documenting, and uses an XML format.
Proper use of the Name Service Switch (nsswitch) configuration
Refer to man nsswitch.conf for more information on the supported syntax of this file. Not all NSS databases are valid map sources and the parser will reject ones that are invalid. Valid sources are files, yp, nis, nisplus, ldap, and hesiod.
Also supported is the handling of multiple master map entries for the direct mount point /-. The map keys for each entry are merged and behave as one map.
Example 9.2. Multiple master map entries per autofs mount point
/- /tmp/auto_dcthon
/- /tmp/auto_test3_direct
/- /tmp/auto_test4_direct
autofs Configuration
The primary configuration file for the automounter is /etc/auto.master, also referred to as the master map, which may be changed as described in Section 9.4.1, “Improvements in autofs Version 5 over Version 4”. The master map lists autofs-controlled mount points on the system, and their corresponding configuration files or network sources known as automount maps. The format of the master map is as follows:
mount-point map-name options
mount-point: the autofs mount point, /home, for example.
map-name: the name of a map source which contains a list of mount points and the file system location from which those mount points should be mounted.
options: if supplied, these apply to all entries in the given map, provided they do not themselves have options specified. This behavior differs from autofs version 4, where options were cumulative. This has been changed to implement mixed environment compatibility.
Example 9.3. /etc/auto.master file
The following is a sample line from the /etc/auto.master file (displayed with cat /etc/auto.master):
/home /etc/auto.misc
mount-point [options] location
mount-point refers to the autofs mount point. This can be a single directory name for an indirect mount or the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-point above) may be followed by a space-separated list of offset directories (subdirectory names each beginning with a "/"), making them what is known as a multi-mount entry.
The following is a sample map file (/etc/auto.misc):
payroll -fstype=nfs personnel:/exports/payroll
sales -fstype=ext3 :/dev/hda4
The first column in an autofs map file indicates the autofs mount point (sales and payroll from the server called personnel). The second column indicates the options for the autofs mount, while the third column indicates the source of the mount. Following the above configuration, the autofs mount points will be /home/payroll and /home/sales. The -fstype= option is often omitted and is generally not needed for correct operation.
To start or restart the automount daemon, issue one of the following commands:
service autofs start (if the automount daemon has stopped)
service autofs restart
Using the above configuration, if a process requires access to an autofs unmounted directory such as /home/payroll/2006/July.sxc, the automount daemon automatically mounts the directory. If a timeout is specified, the directory will automatically be unmounted if it is not accessed for the timeout period.
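A timeout can be specified per entry in the master map. For example, the following variant of the sample master map line unmounts idle directories after 60 seconds of inactivity (the 60-second value is an arbitrary illustration):

```
/home /etc/auto.misc --timeout=60
```

Shorter timeouts release unused mounts sooner at the cost of more frequent remounting.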
To view the status of the automount daemon, use the following command:
# service autofs status
Assume the /etc/nsswitch.conf file has the following directive:
automount: files nis
The /etc/auto.master file contains the following:
+auto.master
The NIS auto.master map file contains the following:
/home auto.home
The NIS auto.home map contains the following:
beth fileserver.example.com:/export/home/beth
joe fileserver.example.com:/export/home/joe
* fileserver.example.com:/export/home/&
The file map /etc/auto.home does not exist.
For this example, assume that the NFS client needs to override the NIS map auto.home and mount home directories from a different server. In this case, the client will need to use the following /etc/auto.master map:
/home /etc/auto.home
+auto.master
And the /etc/auto.home map contains the entry:
* labserver.example.com:/export/home/&
Because the automounter only processes the first occurrence of a mount point, /home will contain the contents of /etc/auto.home instead of the NIS auto.home map.
Alternatively, to augment the site-wide auto.home map with just a few entries, create an /etc/auto.home file map, and in it put the new entries. At the end, include the NIS auto.home map. Then the /etc/auto.home file map will look similar to:
mydir someserver:/export/mydir
+auto.home
With the NIS auto.home map listed above, ls /home would now output:
beth joe mydir
This last example works as expected because autofs does not include the contents of a file map of the same name as the one it is reading. As such, autofs moves on to the next map source in the nsswitch configuration.
The openldap package should be installed automatically as a dependency of the automounter. To configure LDAP access, modify /etc/openldap/ldap.conf. Ensure that BASE, URI, and schema are set appropriately for your site.
The most recently established schema for storing automount maps in LDAP is described by rfc2307bis. To use this schema, it is necessary to set it in the autofs configuration (/etc/autofs.conf) by removing the comment characters from the schema definition.
Example 9.4. Setting autofs configuration
map_object_class = automountMap
entry_object_class = automount
map_attribute = automountMapName
entry_attribute = automountKey
value_attribute = automountInformation
Note
The autofs configuration is now set in the /etc/autofs.conf file instead of the /etc/sysconfig/autofs file, as was the case in previous releases.
In this configuration, automountKey replaces the cn attribute in the rfc2307bis schema. An LDIF of a sample configuration is described below:
Example 9.5. LDIF configuration
# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.master))
# requesting: ALL
#

# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# /home, auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: automount
cn: /home
automountKey: /home
automountInformation: auto.home

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.home))
# requesting: ALL
#

# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# foo, auto.home, example.com
dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: foo
automountInformation: filer.example.com:/export/foo

# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&
These options can be used with manual mount commands, /etc/fstab settings, and autofs.
Valid arguments are all, none, or pos/positive.
If no version is specified, NFS uses the highest version supported by the kernel and the mount command.
The option vers is identical to nfsvers, and is included in this release for compatibility reasons.
nosuid: disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program.
port=num: specifies the numeric value of the NFS server port. If num is 0 (the default), then mount queries the remote host's rpcbind service for the port number to use. If the remote host's NFS daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is used instead.
rsize=num and wsize=num: these settings speed up NFS communication for reads (rsize) and writes (wsize) by setting a larger data block size (num, in bytes) to be transferred at one time. Be careful when changing these values; some older Linux kernels and network cards do not work well with larger block sizes.
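For example, a client could request 32 KB read and write block sizes like this; the server name, paths, and sizes are illustrative placeholders, and the server may negotiate the values down:

```
# mount -t nfs -o rsize=32768,wsize=32768 server.example.com:/remote/export /local/directory
```

Check the effective values in /proc/mounts after mounting, since the negotiated sizes may differ from those requested.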
Note
The default setting is sec=sys, which uses local UNIX UIDs and GIDs, by means of AUTH_SYS, to authenticate NFS operations.
sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering.
sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead.
For more details, refer to man mount and man nfs.
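As an illustration, mounting with Kerberos authentication might look like this; the hostnames and paths are hypothetical, and the server must export the share with a matching sec= option:

```
# mount -t nfs -o sec=krb5 server.example.com:/remote/export /local/directory
```

Substitute sec=krb5i or sec=krb5p to add integrity checking or encryption as described above.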
To run an NFS server, the rpcbind [3] service must be running. To verify that rpcbind is active, use the following command:
# service rpcbind status
If the rpcbind service is running, then the nfs service can be started. To start an NFS server, use the following command:
# service nfs start
nfslock must also be started for both the NFS client and server to function properly. To start NFS locking, use the following command:
# service nfslock start
Check whether nfslock also starts at boot by running chkconfig --list nfslock. If nfslock is not set to on, you will need to manually run service nfslock start each time the computer starts. To set nfslock to start automatically on boot, use chkconfig nfslock on.
Note
nfslock is only needed for NFSv2 and NFSv3.
To stop the server, use the following command:
# service nfs stop
The restart option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. To restart the server, type:
# service nfs restart
The condrestart (conditional restart) option only starts nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. To conditionally restart the server, type:
# service nfs condrestart
To reload the NFS server configuration file without restarting the service, type:
# service nfs reload
There are two ways to configure exports on an NFS server: manually editing the NFS configuration file, /etc/exports, and using the exportfs command.
The /etc/exports Configuration File
The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows these syntax rules:
To add a comment, start a line with the hash mark (#).
You can wrap long lines with a backslash (\).
Each exported file system should be on its own individual line.
Each entry for an exported file system has the following structure:
export host(options)
It is also possible to specify multiple hosts, along with specific options for each host:
export host1(options1) host2(options2) host3(options3)
In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it, as in the following example:
Example 9.6. The /etc/exports file
/exported/directory bob.example.com
Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no options are specified in this example, NFS will use default settings.
The exported file system is read-only by default. To allow hosts to make changes to the file system (read and write), specify the rw option.
By default, the NFS server does not reply to requests before changes made by previous requests are written to disk (sync). To enable asynchronous writes instead, specify the option async.
The NFS server normally delays committing a write request to disk if it suspects another related write request is imminent. To disable this behavior, specify no_wdelay; no_wdelay is only available if the default sync option is also specified.
By default, the NFS server assigns remote root users the user ID nfsnobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable root squashing, specify no_root_squash.
To squash every remote user (including root), use all_squash. To specify the user and group IDs that the NFS server should assign to remote users from a particular host, use the anonuid and anongid options, respectively, as in:
export host(anonuid=uid,anongid=gid)
The anonuid and anongid options allow you to create a special user and group account for remote NFS users to share.
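Putting these options together, a hypothetical export that squashes all remote users on a subnet to a shared account might look like this (the network address and ID values are illustrative only):

```
/exports/shared 192.168.0.0/24(rw,all_squash,anonuid=5000,anongid=5000)
```

Every file created through this export is then owned by the local account with UID and GID 5000, which must exist on the server.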
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the no_acl option when exporting the file system.
Each default for every exported file system must be explicitly overridden. For example, if the rw option is not specified, then the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options:
/another/exported/directory 192.168.0.3(rw,async)
In this example, 192.168.0.3 can mount /another/exported/directory/ read and write, and all writes to disk are asynchronous. For more information on exporting options, refer to man exportfs.
Refer to man exports for details on these less-used options.
Important
The format of the /etc/exports file is very precise, particularly in regards to use of the space character. Remember to always separate exported file systems from hosts and hosts from one another with a space character. However, there should be no other space characters in the file except on comment lines. For example, the following two lines do not mean the same thing:
/home bob.example.com(rw)
/home bob.example.com (rw)
The first line allows only users from bob.example.com read/write access to the /home directory. The second line allows users from bob.example.com to mount the directory as read-only (the default), while the rest of the world can mount it read/write.
The exportfs Command
Every file system being exported to remote users with NFS, as well as the access level for those file systems, is listed in the /etc/exports file. When the nfs service starts, the /usr/sbin/exportfs command launches and reads this file, passes control to rpc.mountd (if NFSv2 or NFSv3) for the actual mounting process, then to rpc.nfsd, where the file systems are then available to remote users.
When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the /usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/etab. Since rpc.mountd refers to the etab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately.
The following is a list of commonly-used options available for /usr/sbin/exportfs:
-r: causes all directories listed in /etc/exports to be exported by constructing a new export list in /var/lib/nfs/etab. This option effectively refreshes the export list with any changes made to /etc/exports.
-a: causes all directories to be exported or unexported, depending on what other options are passed to /usr/sbin/exportfs. If no other options are specified, /usr/sbin/exportfs exports all file systems specified in /etc/exports.
-o file-systems: specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with additional file systems to be exported. These file systems must be formatted in the same way they are specified in /etc/exports. This option is often used to test an exported file system before adding it permanently to the list of file systems to be exported. Refer to Section 9.7.1, “The /etc/exports Configuration File” for more information on /etc/exports syntax.
-i: ignores /etc/exports; only options given from the command line are used to define exported file systems.
-u: unexports all shared directories. The command /usr/sbin/exportfs -ua suspends NFS file sharing while keeping all NFS daemons up. To re-enable NFS sharing, use exportfs -r.
-v: verbose operation, where the file systems being exported or unexported are displayed in greater detail when the exportfs command is executed.
If no options are passed to the exportfs command, it displays a list of currently exported file systems. For more information about the exportfs command, refer to man exportfs.
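For instance, to temporarily export a directory read-only to a single host without editing /etc/exports (the host and path here are hypothetical placeholders):

```
# exportfs -v -o ro bob.example.com:/exports/test
```

The share remains active until it is unexported with exportfs -u bob.example.com:/exports/test, or until exportfs -r refreshes the export list from /etc/exports.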
Using exportfs with NFSv4
To prevent clients from using NFSv4, turn it off by setting RPCNFSDARGS= -N 4 in /etc/sysconfig/nfs.
NFS requires rpcbind, which dynamically assigns ports for RPC services and can cause problems for configuring firewall rules. To allow clients to access NFS shares behind a firewall, edit the /etc/sysconfig/nfs configuration file to control which ports the required RPC services run on.
The /etc/sysconfig/nfs file may not exist by default on all systems. If it does not exist, create it and add the following variables, replacing port with an unused port number (alternatively, if the file exists, un-comment and change the default entries as required):
MOUNTD_PORT=port: controls which TCP and UDP port mountd (rpc.mountd) uses.
STATD_PORT=port: controls which TCP and UDP port status (rpc.statd) uses.
LOCKD_TCPPORT=port: controls which TCP port nlockmgr (lockd) uses.
LOCKD_UDPPORT=port: controls which UDP port nlockmgr (lockd) uses.
If NFS fails to start, check /var/log/messages. Normally, NFS will fail to start if you specify a port number that is already in use. After editing /etc/sysconfig/nfs, restart the NFS service using service nfs restart. Run the rpcinfo -p command to confirm the changes.
Procedure 9.1. Configure a firewall to allow NFS
1. Allow TCP and UDP port 2049 for NFS.
2. Allow TCP and UDP port 111 (rpcbind/sunrpc).
3. Allow the TCP and UDP port specified with MOUNTD_PORT="port".
4. Allow the TCP and UDP port specified with STATD_PORT="port".
5. Allow the TCP port specified with LOCKD_TCPPORT="port".
6. Allow the UDP port specified with LOCKD_UDPPORT="port".
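On an iptables-based firewall, the steps above might translate into rules like the following. The 892, 662, 32803, and 32769 port numbers are hypothetical values chosen for MOUNTD_PORT, STATD_PORT, LOCKD_TCPPORT, and LOCKD_UDPPORT; substitute the ports set in your /etc/sysconfig/nfs:

```
# iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
# iptables -A INPUT -p udp --dport 2049 -j ACCEPT
# iptables -A INPUT -p tcp --dport 111 -j ACCEPT
# iptables -A INPUT -p udp --dport 111 -j ACCEPT
# iptables -A INPUT -p tcp -m multiport --dports 892,662,32803 -j ACCEPT
# iptables -A INPUT -p udp -m multiport --dports 892,662,32769 -j ACCEPT
```

Remember to save the rules (for example, with service iptables save) so they persist across reboots.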
Note
To allow NFSv4.0 callbacks to pass through firewalls, set /proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that port on the client.
mountd, statd, and lockd are not required in a pure NFSv4 environment.
The file systems an NFS server exports can be discovered using the showmount command:
$ showmount -e myserver
Export list for myserver
/exports/foo
/exports/bar
With a server that supports NFSv4, a client can mount the root of the export tree (/) and look around:
# mount myserver:/ /mnt/
# ls /mnt/
exports
# ls /mnt/exports/
foo
bar
Note
A hostname can include the * or ? character to specify a string match. Wildcards are not to be used with IP addresses; however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully qualified domain names, dots (.) are not included in the wildcard. For example, *.example.com includes one.example.com but does not include one.two.example.com.
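For example, an /etc/exports entry using a wildcard host might look like this (the export path is a hypothetical placeholder):

```
/exports/data *.example.com(ro)
```

Per the note above, this entry matches one.example.com but not one.two.example.com, because the dot is not covered by the wildcard.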
Procedure 9.2. Enable RDMA from server
# yum install rdma; chkconfig --level 2345 rdma on
# yum install rdma; chkconfig --level 345 nfs-rdma on
Edit the /etc/rdma/rdma.conf file to set NFSoRDMA_LOAD=yes and NFSoRDMA_PORT to the desired port.
Procedure 9.3. Enable RDMA from client
# yum install rdma; chkconfig --level 2345 rdma on
# mount -t nfs -o rdma,port=port_number
NFS traditionally uses AUTH_SYS (also called AUTH_UNIX), which relies on the client to state the UID and GIDs of the user. Be aware that this means a malicious or misconfigured client can easily get this wrong and allow a user access to files that it should not.
To limit access, restrict the rpcbind [3] service with TCP wrappers. Creating rules with iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd.
For more information on restricting access to rpcbind, refer to man iptables.
AUTH_GSS
Note
Procedure 9.4. Set up RPCSEC_GSS
1. Create nfs/client.mydomain@MYREALM and nfs/server.mydomain@MYREALM principals.
2. On the server side, add sec=krb5,krb5i,krb5p to the export. To continue allowing AUTH_SYS, add sec=sys,krb5,krb5i,krb5p instead.
3. On the client side, add sec=krb5 (or sec=krb5i, or sec=krb5p, depending on the setup) to the mount options.
For more information on the differences between krb5, krb5i, and krb5p, refer to the exports and nfs man pages or to Section 9.5, “Common NFS Mount Options”.
For more information on the RPCSEC_GSS framework, including how rpc.svcgssd and rpc.gssd inter-operate, refer to http://www.citi.umich.edu/projects/nfsv4/gssd/.
Older NFS versions used the MOUNT protocol for mounting file systems. This protocol presented possible security holes because of the way that it processed file handles.
an untrusted user could then use the su - command to access any files via the NFS share.
By default, the remote root user is mapped to the unprivileged user nobody. Root squashing is controlled by the default option root_squash; for more information about this option, refer to Section 9.7.1, “The /etc/exports Configuration File”. If possible, never disable root squashing.
When exporting an NFS share as read-only, consider using the all_squash option. This option makes every user accessing the exported file system take the user ID of the nfsnobody user.
NFS and rpcbind
Note
The following section only applies to NFSv2 or NFSv3 implementations that require the rpcbind service for backward compatibility.
The rpcbind [3] utility maps RPC services to the ports on which they listen. RPC processes notify rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts rpcbind on the server with a particular RPC program number. The rpcbind service redirects the client to the proper port number so it can communicate with the requested service.
Because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start.
The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the precise syntax for these rules.
Troubleshooting NFS and rpcbind
Because rpcbind [3] provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP).
To make sure the proper NFS RPC-based services are enabled for rpcbind, issue the following command:
# rpcinfo -p
Example 9.7. rpcinfo -p command output
   program vers proto   port  service
    100021    1   udp  32774  nlockmgr
    100021    3   udp  32774  nlockmgr
    100021    4   udp  32774  nlockmgr
    100021    1   tcp  34437  nlockmgr
    100021    3   tcp  34437  nlockmgr
    100021    4   tcp  34437  nlockmgr
    100011    1   udp    819  rquotad
    100011    2   udp    819  rquotad
    100011    1   tcp    822  rquotad
    100011    2   tcp    822  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100005    1   udp    836  mountd
    100005    1   tcp    839  mountd
    100005    2   udp    836  mountd
    100005    2   tcp    839  mountd
    100005    3   udp    836  mountd
    100005    3   tcp    839  mountd
If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with rpcbind and begin working.
For more information about rpcinfo, refer to its man page.
man mount: contains a comprehensive look at mount options for both NFS server and client configurations.
man fstab: gives details for the format of the /etc/fstab file used to mount file systems at boot time.
man nfs: provides details on NFS-specific file system export and mount options.
man exports: shows common options used in the /etc/exports file when exporting NFS file systems.
man 8 nfsidmap: explains the nfsidmap command and lists common options.
[3] The rpcbind service replaces portmap, which was used in previous versions of Red Hat Enterprise Linux to map RPC program numbers to IP address port number combinations. For more information, refer to Section 9.1.1, “Required Services”.