Monday, June 25, 2007

 

Tuning the Linux kernel for better network throughput

By Vincent Danen, Special to ZDNet Asia
18 June 2007

The Linux kernel, and the distributions that package it, typically ship with very conservative defaults for a number of network-related settings. These settings can be tuned via the /proc filesystem or with the sysctl program. The latter is often more convenient, since it reads /etc/sysctl.conf, which lets you keep settings across reboots.
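Every sysctl key maps to a file under /proc/sys, with the dots replaced by slashes, so the same value can be read or changed by hand; changes made this way are likewise lost at the next reboot:

# cat /proc/sys/net/ipv4/tcp_window_scaling
# echo 1 > /proc/sys/net/ipv4/tcp_window_scaling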

The following is a snippet from /etc/sysctl.conf that may improve network performance:

net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

The above isn't meant to replace what may already exist in /etc/sysctl.conf, but to supplement it. The first setting enables TCP window scaling, which allows clients to transfer data at a higher rate by using a TCP option to advertise receive windows larger than the 64 KB permitted by the basic TCP header.
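If you want to confirm that window scaling is actually being negotiated, watching the SYN packets of a new connection with tcpdump should show a wscale option when both ends support it (substitute your own interface for eth0):

# tcpdump -ni eth0 'tcp[tcpflags] & (tcp-syn) != 0'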

The second setting enables TCP SYN cookies. This is often on by default and is very effective at mitigating SYN floods, which can otherwise exhaust the resources the server uses to track incoming connections.

The last four options raise the maximum TCP send and receive buffer sizes, which lets an application hand its data off to the kernel more quickly and move on to serving other requests. It also gives clients more room to keep sending data when the server gets busy.
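The three numbers in tcp_rmem and tcp_wmem are the minimum, default and maximum buffer sizes in bytes; the kernel automatically tunes each connection's buffer between the minimum and maximum, while rmem_max and wmem_max cap what an application may request explicitly. Annotated, the buffer lines above read:

#                   min    default   max (bytes)
net.ipv4.tcp_rmem = 4096   87380     16777216
net.ipv4.tcp_wmem = 4096   65536     16777216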

By adding these settings to /etc/sysctl.conf, you ensure they are applied at every boot. To apply them immediately without rebooting, use:

# sysctl -p /etc/sysctl.conf

To see all of the currently configured sysctl options, use:

# sysctl -a

This lists every configuration key and its current value. The sysctl.conf file lets you record new defaults; the output of sysctl -a shows the values that are actually in effect at the moment, whether they are the kernel's built-in defaults or values you have changed. To see the value of one particular item, use:

# sysctl net.ipv4.tcp_window_scaling

Likewise, to set the value of a single item without adding it to sysctl.conf (keeping in mind that the change will not survive a reboot), use:

# sysctl -w net.ipv4.tcp_window_scaling=1

This can be useful for testing the effect of a setting before committing it as a default.
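As a rough sketch, such a test might simply toggle one setting around a repeatable transfer of your own choosing (a large file copy to or from the server, for instance):

# sysctl -w net.ipv4.tcp_window_scaling=0
(run the transfer test and note the throughput)
# sysctl -w net.ipv4.tcp_window_scaling=1
(run the same test again and compare)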



 

Tuning the Network File System for better performance

By Vincent Danen, TechRepublic
The Network File System (NFS) is still very popular on Linux systems, but its performance can often be improved by adjusting the relatively conservative defaults most Linux distributions ship with. Tuning can be done on both NFS servers and clients.

On the server side, you must ensure there are enough NFS kernel threads to handle the load generated by the clients. You can determine whether the default is sufficient by looking at the RPC statistics with nfsstat on an NFS client:

# nfsstat -rc
Client rpc stats:
calls        retrans      authrefrsh
3409166      330          0

Here you can see that the retrans value is quite high, meaning the client has frequently had to retransmit requests since the last reboot. That is a clear indication that the server does not have enough NFS kernel threads available to handle this client's requests. The default number of threads started by rpc.nfsd is typically eight.
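You can check how many nfsd threads the server is currently running by looking at the RPC statistics the kernel exports; the first number on the th line is the current thread count:

# grep th /proc/net/rpc/nfsd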

To tell rpc.nfsd to use more kernel threads, pass the desired number as an argument to it. Most distributions provide a file such as /etc/sysconfig/nfs for this; on a Mandriva Linux system, the RPCNFSDCOUNT setting in /etc/sysconfig/nfs determines the number of threads passed to rpc.nfsd. Increase this number to perhaps 16 on a moderately busy server, or to 32 or 64 on a heavily used one. Then re-check with nfsstat: if retrans stays at 0, the thread count is sufficient; if the client still needs to retransmit, increase the number of threads further.
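As a sketch, on a distribution that uses /etc/sysconfig/nfs the change is a single line:

RPCNFSDCOUNT=16

The new value takes effect the next time the NFS service is restarted. To adjust the thread count immediately on a running server, you can also pass the number to rpc.nfsd directly:

# rpc.nfsd 16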

On the client side, remote NFS file systems should be mounted with the following options:

rsize=32768,wsize=32768,intr,noatime

By default, most clients mount remote NFS file systems with an 8 KB read/write block size; the options above increase that to 32 KB. They also allow NFS operations to be interrupted if the server hangs (intr), and stop the access time from being updated every time a file on the remote file system is read (noatime).
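For a file system mounted through /etc/fstab, the entry would then look something like this (the server name, export and mount point are only placeholders):

fileserver:/export/data   /mnt/data   nfs   rsize=32768,wsize=32768,intr,noatime   0 0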

If your NFS file systems are mounted via /etc/fstab, make the changes there, as shown above; otherwise, make them in the configuration files of whichever automounter you use. In the case of amd, the /etc/amd.net file would look like this:

/defaults fs:=${autodir}/${rhost}/root/${rfs};opts:=nosuid,nodev,rsize=32768,wsize=32768,intr,noatime
* rhost:=${key};type:=host;rfs:=/

By adjusting these defaults on both NFS servers and clients, you can make NFS noticeably faster and more responsive, particularly if you make heavy use of NFS file systems.


