max_connections / shared_buffers / effective_cache_size

Hello, I'm a Sun Solaris sysadmin for a start-up company. I've got the UNIX background, but now I'm having to learn PostgreSQL to support it on our servers :)



Server Background:

Solaris 10 x86
PostgreSQL 8.0.3
Dell PowerEdge 2650 w/4GB RAM

This is running JBoss/Apache as well (I KNOW the bad juju of running it all on one box, but it's all we have currently for this project). I'm dedicating 1GB for PostgreSQL alone.



So far, I LOVE it; compared to MySQL it's solid.



The only things I'm kind of confused about (I've been searching for answers in a lot of good perf docs, but they're still not too clear to me) are the following:



1.) shared_buffers: I see a lot of references to making this the size of available RAM (for the DB). However, I've also read to make it the size of the pgdata directory.

I notice when I load Postgres that each daemon is using the full amount of shared memory (shared_buffers). Our current dataset (pgdata) is 85MB in size. So I'm curious: should this setting reflect the size of pgdata, or the 'actual' memory given?

I currently have this at 128MB.





You generally want shared_buffers to be no more than 10% of available RAM; Postgres expects the OS to do its own caching. 128MB / 4GB = 3% seems reasonable to me. I would certainly never set it to 100% of RAM.
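
As a sketch, in postgresql.conf that would look like this (note: 8.0 takes a raw number of 8kB buffers here; unit suffixes like 'MB' aren't accepted until 8.2):

    shared_buffers = 16384    # 16384 * 8kB buffers = 128MB, about 3% of 4GB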



2.) effective_cache_size: from what I read, this is the 'total' allowed memory for PostgreSQL to use, correct? So if I am willing to allow 1GB of memory, should I make this 1GB?





This is the effective amount of caching between the actual Postgres buffers and the OS buffers. If you are dedicating this machine to Postgres, I would set it to something like 3.5GB. If it is a mixed machine, then you have to think about it.



This does not change how Postgres uses RAM; it changes how Postgres estimates whether an index scan will be cheaper than a sequential scan, based on the likelihood that the data you want will already be cached in RAM.



If your dataset is only 85MB, and you don't think it will grow, you really don't have to worry about this much. You have a very small database.
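
For the record, a minimal sketch of that setting (in 8.0 this is a count of 8kB disk pages, not a byte size):

    effective_cache_size = 458752   # 458752 * 8kB pages = 3.5GB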



3.) max_connections: I've been trying to figure out 'how' to determine this number. I've read it is buffer_size + 500kB per connection.

i.e. 128MB (buffer) + 500kB = 128.5MB per connection?





max_connections is just how many concurrent connections you want to allow. If you can get away with lower, do so. Mostly this is to prevent connections * work_mem from getting bigger than your real working memory and causing you to swap.
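
A rough worked example (numbers are illustrative; each backend can allocate up to work_mem per sort or hash step, so the true worst case can be somewhat higher):

    max_connections = 100
    work_mem = 4096           # 4096kB = 4MB per sort/hash
    # worst case: roughly 100 * 4MB = 400MB of sort memory on top of the
    # 128MB of shared_buffers, still comfortably inside your 1GB budget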



I was curious about 'sort_mem': I can't find any reference to it in the 8.0.3 documentation. Has it been removed?





sort_mem was renamed to work_mem in 8.0; same thing with vacuum_mem -> maintenance_work_mem.
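
You can confirm the new names from psql:

    SHOW work_mem;              -- replaces sort_mem
    SHOW maintenance_work_mem;  -- replaces vacuum_mem
    SET work_mem = 8192;        -- per-session override, value in kB (8MB here)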



work_mem and max_stack_depth are set to 4096;
maintenance_work_mem is set to 64MB.





It depends how much space you want to give per connection. 4MB is pretty small for a machine with 4GB of RAM, but if your DB is only 85MB it might be plenty.

work_mem is how much memory a sort/hash/etc. will use before it spills to disk. So look at your queries. If you tend to sort most of your 85MB DB in a single query, you might want to make it a little bit more. But if all of your queries are very selective, 4MB could be plenty.
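
A quick way to eyeball this (table and column names here are hypothetical):

    EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;
    -- if temp files show up under $PGDATA/base/<dboid>/pgsql_tmp while the
    -- Sort step runs, the sort spilled to disk; try raising work_mem for
    -- that session and re-running:
    SET work_mem = 16384;   -- 16MB, in kB, session-local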



I would make maintenance_work_mem more like 512MB. It is only used for CREATE INDEX, VACUUM, etc., things that are not generally done by more than one process at a time, and it's nice for them to have plenty of room to run fast.
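
In 8.0 syntax (value in kB, no unit suffix):

    maintenance_work_mem = 524288   # 524288kB = 512MB, for VACUUM, CREATE INDEX, etc.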



Thanks for any help on this. I'm sure the bombardment from newbies gets old :)



-William





Good luck,

John
