
Virtual File Systems, Swap Space, and Core Dumps

Physical memory is supplemented by specially configured space on disk known as swap. Swap is configured either on a dedicated disk partition, known as a swap partition, or on a swap file system. In addition to swap partitions, special files called swap files can be created in existing UFS file systems to provide additional swap space when needed. The Solaris virtual memory system provides transparent access to physical memory, swap space, and memory-mapped objects.

Swap Space

The swap command is used to add, delete, and monitor swap files. The options for swap are shown in Table 26.

Table 26. swap Command Options

  Option    Description
  -a        Adds a specified swap area. You can also use the script /sbin/swapadd to add a new swap file.
  -d        Deletes a specified swap area.
  -l        Displays the location of your system's swap areas.
  -s        Displays a summary of the system's swap space.


The Solaris installation program automatically allocates 512MB of swap space if you do not specify a size.
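
For example, the following commands create a 512MB swap file on an existing UFS file system, activate it, and then list the active swap areas. The path /export/data/swapfile is a hypothetical example:

  # mkfile 512m /export/data/swapfile
  # swap -a /export/data/swapfile
  # swap -l

To make the swap file persist across reboots, add an entry for it to the /etc/vfstab file.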

Core File and Crash Dump Configuration

Core files are created when a program or application terminates abnormally. By default, a core file is written to the current working directory of the process.

Core files are managed using the coreadm command. When entered with no options, coreadm displays the current configuration, as specified by /etc/coreadm.conf. The options are shown in Table 27.

Table 27. coreadm Syntax

  Option        Description
  -g pattern    Sets the global core file name pattern.
  -G content    Sets the global core file content using one of the description tokens.
  -i pattern    Sets the per-process core file name pattern.
  -I content    Sets the per-process core file content using one of the description tokens.
  -d option     Disables the specified core file option.
  -e option     Enables the specified core file option.
  -p pattern    Sets the per-process core file name pattern for each of the specified PIDs.
  -P content    Sets the per-process core file content for each of the specified PIDs.
  -u            Updates the systemwide core file options from the configuration file /etc/coreadm.conf.


Core file names can be customized using a number of embedded variables. Table 28 lists the possible patterns:

Table 28. coreadm Patterns

  Pattern    Description
  %p         Process ID (PID)
  %u         Effective user ID
  %g         Effective group ID
  %d         Executable file directory name
  %f         Executable file name
  %n         System node name (same as running uname -n)
  %m         Machine hardware name (same as running uname -m)
  %t         Decimal value of time (number of seconds since 00:00:00 January 1, 1970)
  %z         Name of the zone in which the process executed (zonename)
  %%         A literal % character

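As an illustration, the following commands set a global core file pattern that embeds the executable name and PID, enable the global core file option, and then display the resulting configuration. The repository directory /var/core is a hypothetical example:

  # mkdir -p /var/core
  # coreadm -g /var/core/core.%f.%p -e global
  # coreadm

With this configuration, a crash of a process running /usr/bin/ls with PID 2345 would produce a global core file named /var/core/core.ls.2345, in addition to any per-process core file.
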

A crash dump is a snapshot of kernel memory, saved to disk at the time a fatal system error occurs. When a serious error is encountered, the system displays an error message on the console, dumps the contents of kernel memory to the dump device, and then reboots.

Normally, crash dumps are configured to use the swap partition to write the contents of memory. The savecore program runs when the system reboots and saves the image in a predefined location, usually /var/crash/<hostname> where <hostname> represents the name of your system.

Configuration of crash dump files is carried out with the dumpadm command. Running this command with no options will display the current configuration by reading the file /etc/dumpadm.conf.

dumpadm options are shown in Table 29.

Table 29. dumpadm Options

  Option                   Description
  -c content-type          Modifies the crash dump content; valid values are kernel (kernel pages only), all (all memory pages), and curproc (kernel pages plus the pages of the currently executing process).
  -d dump-device           Modifies the dump device, specified either as an absolute pathname (such as /dev/dsk/c0t0d0s3) or the word swap, in which case the system identifies the best swap area to use.
  -m mink | minm | min%    Maintains minimum free space in the current savecore directory, specified in kilobytes, megabytes, or a percentage of the total current size of the directory.
  -n                       Disables savecore from running on reboot. This is not recommended, because any crash dumps would be lost.
  -r root-dir              Specifies an alternative root directory. If this option is not used, the default "/" is used.
  -s savecore-dir          Specifies an alternative savecore directory, instead of the default /var/crash/hostname.
  -y                       Enables savecore to run on the next reboot. This is the default setting.

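For example, the following command directs crash dumps to a dedicated disk slice, stores saved images in an alternative savecore directory, and limits the dump content to kernel pages. The device and directory names are hypothetical:

  # dumpadm -d /dev/dsk/c0t0d0s3 -s /var/crash/sysA -c kernel

Running dumpadm again with no options confirms the new settings.
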

The gcore command can be used to create a core image of a specified running process. By default, the resulting file will be named core.<pid>, where <pid> is the pid of the running process.

gcore options are shown in Table 30.

Table 30. gcore Options

  Option          Description
  -c content      Produces image files with the specified content. This uses the same content tokens as coreadm, but cannot be used with the -p or -g options.
  -F              Force. This option grabs the specified process even if another process has control.
  -g              Produces core image files in the global core file repository, using the global content that was configured with coreadm.
  -o filename     Uses filename instead of core as the first part of the name of the core image files.
  -p              Produces process-specific core image files, with process-specific content, as specified by coreadm.

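For example, to take a core image of a running process with a hypothetical PID of 1234, writing the image to /tmp/snapshot.1234 instead of the default core.1234:

  # gcore -o /tmp/snapshot 1234

The process continues running; gcore simply captures a snapshot of its address space for later examination with a debugger such as mdb.
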

Network File System (NFS)

The NFS service allows computers of different architectures, running different operating systems, to share file systems across a network. Just as the mount command lets you mount a file system on a local disk, NFS lets you mount a file system that is located on another system anywhere on the network. The NFS service provides the following benefits (a brief share and mount example follows the list):

  • Lets multiple computers use the same files so that everyone on the network can access the same data. This eliminates the need to have redundant data on several systems.

  • Reduces storage costs by having computers share applications and data.

  • Provides data consistency and reliability because all users can read the same set of files.

  • Makes mounting of file systems transparent to users.

  • Makes accessing remote files transparent to users.

  • Supports heterogeneous environments.

  • Reduces system administration overhead.
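
As a simple sketch, a server can share a file system read-only and a client can then mount it. The host name sysA and the path /export/docs are hypothetical examples:

  On the NFS server:
  # share -F nfs -o ro /export/docs

  On the NFS client:
  # mount -F nfs sysA:/export/docs /mnt

To share the file system automatically whenever the NFS server service starts, place an equivalent share command in the /etc/dfs/dfstab file.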

Solaris 10 introduced NFS version 4, which has the following features:

  • The UID and GID are represented as strings, and a new daemon, nfsmapid, provides the mapping to numeric IDs (a sample domain setting is shown after this list).

  • The default transport for NFS version 4 is the Remote Direct Memory Access (RDMA) protocol, a technology for memory-to-memory transfer over high speed data networks.

  • All state and lock information is destroyed when a file system is unshared. In previous versions of NFS, this information was retained.

  • NFS4 provides a pseudo file system to give clients access to exported objects on the NFS server.

  • NFS4 is a stateful protocol, where both the client and server hold information about current locks and open files. When a failure occurs, the two work together to re-establish the open or locked files.

  • NFS4 no longer uses the mountd, statd, or nfslogd daemons.

  • NFS4 supports delegation, which allows the management responsibility of a file to be delegated to the client. Both the server and client support delegation. A client can be granted a read delegation, which can be granted to multiple clients, or a write delegation, providing exclusive access to a file.
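
For the string-based UID and GID mapping to work, the client and server must agree on an NFS version 4 domain. As a hedged sketch, one way to set this is the NFSMAPID_DOMAIN parameter in /etc/default/nfs, shown here with a hypothetical domain name:

  NFSMAPID_DOMAIN=example.com

If this parameter is not set, nfsmapid attempts to derive the domain from the system's DNS configuration.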

NFS uses a number of daemons to handle its services. These daemons are started at boot time by the svc:/network/nfs/server:default and svc:/network/nfs/client:default service identifiers. The most important NFS daemons are outlined in Table 31.

Table 31. NFS Daemons

  Daemon      Description
  nfsd        Handles file system exporting and file access requests from remote systems. An NFS server runs multiple instances of this daemon. Usually invoked at the multi-user-server milestone and started by the svc:/network/nfs/server:default service identifier.
  mountd      Handles mount requests from NFS clients and provides information about which file systems are mounted by which clients; use the showmount command to view this information. Usually invoked at the multi-user-server milestone and started by the svc:/network/nfs/server:default service identifier. Not used in NFS version 4.
  lockd       Runs on the NFS server and NFS client, and provides file-locking services in NFS. Started by the svc:/network/nfs/client service identifier at the multi-user milestone.
  statd       Runs on the NFS server and NFS client, and interacts with lockd to provide the crash and recovery functions for the NFS locking services. Started by the svc:/network/nfs/client service identifier at the multi-user milestone. Not used in NFS version 4.
  rpcbind     Facilitates the initial connection between the client and the server.
  nfsmapid    A new daemon that maps NFS version 4 owner and group identification strings to and from UID and GID numbers. It uses entries in the passwd and group files to carry out the mapping, and references /etc/nsswitch.conf to determine the order of access.
  nfs4cbd     A new client-side daemon that listens on each transport and manages the callback functions to the NFS server.
  nfslogd     Provides operational logging for the Solaris NFS server. NFS logging uses the configuration file /etc/nfs/nfslog.conf. Not used in NFS version 4.


Autofs

When a network contains even a moderate number of systems, all trying to mount file systems from each other, managing NFS can quickly become a nightmare. The Autofs facility, also called the automounter, is designed to handle such situations by providing a method in which remote directories are mounted only when they are being used.

When a request is made to access a file system at an Autofs mount point, the system goes through the following steps (a sample map configuration follows the list):

1. Autofs intercepts the request.

2. Autofs sends a message to the automountd daemon for the requested file system to be mounted.

3. automountd locates the file system information in a map and performs the mount.

4. Autofs allows the intercepted request to proceed.

5. Autofs unmounts the file system after a period of inactivity.

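For example, the automounter is driven by maps. The following hedged sketch, using a hypothetical server sysA, causes users' home directories to be mounted on demand under /home. The master map /etc/auto_master points /home at an indirect map:

  /home   auto_home

The indirect map /etc/auto_home then uses the * wildcard and & substitution characters to match any user name:

  *   sysA:/export/home/&

After editing the maps, run the automount command so that the changes take effect.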
