Varnish Monitoring and Debugging

Varnish settings

For further descriptions of these settings, see param.show -l in the Varnish management interface.

  • -p thread_pool_max=4000 (default 1000)

This number should be as low as possible, but with some headroom. Do not set it much higher than you need; that only leads to thread pile-ups. The "correct" number is roughly the 90th percentile of concurrent requests under peak load. Since that is an incredibly tricky number to measure, set it about 10% above the highest number of threads you see during normal operation.
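The sizing rule above can be sketched as a small calculation (the function name and the sample value are illustrative, not from the original page):

```python
def suggested_thread_pool_max(peak_threads_observed: int) -> int:
    """Peak concurrent worker threads seen during normal operation,
    plus a 10% safety margin (integer ceiling of peak * 1.1)."""
    return (peak_threads_observed * 11 + 9) // 10

print(suggested_thread_pool_max(3600))  # 3960
```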

  • -p thread_pools=4 (default 1)

To reduce lock contention, you might want to increase this number a little. But just a little.

  • -p listen_depth=4096 (default 1024)

You may want to increase this, but there is little advantage in setting it very high. Set it to your peak connections-per-second rate, so that you get a full second of buffer if the acceptor gets busy. More than that will not do any good.

Running with many objects

If you have many objects (more than 100000), you may need to set the following command line options:

  • -p lru_interval=3600 (default: 2 seconds)

If your cache servers cache most/all objects for a longer time, it makes sense to increase the period before an object is moved to the LRU list. This reduces the amount of lock operations necessary for LRU list access.

 

  • -h classic,500009 (default: 16383)

To keep hash lookups fast, you should not have more than about 10 objects per hash bucket. If you have 3 million objects, the number of hash buckets should be at least 300000. The number should be a prime number; you can generate one at http://www.prime-numbers.org/.
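If you prefer to generate the prime locally rather than using a web service, a simple trial-division sketch does the job for numbers in this range (the function names are mine):

```python
def is_prime(n: int) -> bool:
    """Trial division; fast enough for bucket counts of a few million."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def next_prime(n: int) -> int:
    """Smallest prime >= n, e.g. for the -h classic bucket count."""
    while not is_prime(n):
        n += 1
    return n

# 3 million objects / ~10 objects per bucket -> at least 300000 buckets
print(next_prime(300000))
```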

  • -p obj_workspace=4096 (default: 8192)

For every object, this amount of memory is allocated for HTTP protocol header information. Try decreasing this setting; it reduces the VM space needed to fit all your objects. Be aware that Varnish currently crashes if an object is too big for this limit (see ticket #214).

  • -s malloc,50G

Try running with malloc storage if you experience VM hangs. Use it instead of setting up data files; you may have to increase the amount of swap space available. You can set a limit on how much to allocate, which should be smaller than the swap space available on the machine. A possible side benefit is not needing any swap space on the OS/system disk.
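Putting the command-line options from this page together, a varnishd invocation might look like the following. The listen address, backend address, admin port, and sizes are placeholders for illustration, not recommendations:

```
varnishd -a :80 -b 127.0.0.1:8080 \
    -T 127.0.0.1:6082 \
    -s malloc,50G \
    -h classic,500009 \
    -p thread_pool_max=4000 \
    -p thread_pools=4 \
    -p listen_depth=4096 \
    -p lru_interval=3600
```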

VCL Settings

Enable the grace period (Varnish serves stale (but cacheable) objects while retrieving a fresh object from the backend):

in vcl_recv:

set req.grace = 30s;

in vcl_fetch:

set obj.grace = 30s;
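In context, the two settings above might sit in your VCL file like this (Varnish 2.x syntax, matching the lines above; the 30s values are the ones from this page):

```
sub vcl_recv {
    # Accept objects up to 30s past their TTL while a fresh
    # copy is being fetched from the backend.
    set req.grace = 30s;
}

sub vcl_fetch {
    # Keep objects around for 30s past their TTL so they can
    # be served during the grace period.
    set obj.grace = 30s;
}
```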

FreeBSD

  • If using FreeBSD 7.0 or newer, try using SCHED_ULE instead of SCHED_4BSD in your kernel config.
  • Turn off soft-updates on the filesystems where you keep your Varnish data files. It will not help Varnish.
  • sysctl.conf settings (see tuning(7) manpage and http://www.freebsd.org/doc/en/books/handbook/configtuning-kernel-limits.html):

kern.ipc.nmbclusters=65536
kern.ipc.somaxconn=16384
kern.maxfiles=131072
kern.maxfilesperproc=104856
kern.threads.max_threads_per_proc=4096

  • loader.conf settings:

kern.ipc.maxsockets="131072"
kern.ipc.maxpipekva="104857600"

(Set kern.ipc.maxpipekva only if you get "kern.ipc.maxpipekva exceeded" messages in your logs; Varnish no longer uses pipes for worker pool synchronization.)

  • If you run 32-bit FreeBSD and want to cache more than 512 MB of objects (the default limit), you will need to raise kern.maxdsiz (maximum data segment size per process, in bytes) in loader.conf.
  • If you use the malloc storage type, and your system hangs with "swap zone exhausted, increase kern.maxswzone" on the console, try increasing kern.maxswzone (default is 32 MB in FreeBSD 7.0) in loader.conf.

Linux

Edit /etc/sysctl.conf

These numbers come from a highly loaded Varnish server handling about 4000-8000 requests/s

(details: http://projects.linpro.no/pipermail/varnish-misc/2008-April/001769.html)

net.ipv4.ip_local_port_range = 1024 65536
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
net.ipv4.tcp_fin_timeout = 3
net.ipv4.tcp_tw_recycle = 1
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_no_metrics_save=1
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2

All UNIX platforms

  • Set the mount options noatime and nodiratime on the filesystems where you keep your Varnish data files. There is no point in tracking how often they are accessed; it only wastes cycles and causes unnecessary disk activity.

Monitoring

  • Make sure you monitor your cache hit ratio, the percentage of requests actually served from cache. This should be a high number if Varnish is to take load off the backends. Use varnishstat (see "hitrate avg"), and if possible also monitor and graph it over time. Useful tools are Nagios (http://www.nagios.org/) and Munin (http://munin.projects.linpro.no/); see also Muninexchange (http://muninexchange.projects.linpro.no/) and http://anders.fupp.net/plugins/ for plugins.
  • Monitor the number of Varnish threads. It should never be as high as the Varnish thread_pool_max setting.
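As a sketch of computing the hit ratio yourself, the following parses the one-shot text output of varnishstat -1. It assumes the counter names cache_hit and cache_miss as used in Varnish 2.x, and the sample output is made up for illustration:

```python
def hit_ratio(varnishstat_output: str) -> float:
    """Cache hit ratio in percent from `varnishstat -1` text.

    Each line starts with a counter name followed by its value.
    """
    counters = {}
    for line in varnishstat_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1].isdigit():
            counters[parts[0]] = int(parts[1])
    hits = counters.get("cache_hit", 0)
    misses = counters.get("cache_miss", 0)
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

sample = """\
client_conn          9000         1.00 Client connections accepted
cache_hit            9500         0.95 Cache hits
cache_miss            500         0.05 Cache misses
"""
print(hit_ratio(sample))  # 95.0
```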

 

To read the counters for a specific Varnish instance, point varnishstat at its working directory:

/usr/local/varnish/bin/varnishstat -n /var/vcache/
