Mailing List Quotes

Jeff Darcy:

The GlusterFS philosophy has generally been to handle performance issues via horizontal scaling - which works even for data sizes greater than any cache - and be conservative about the other issues.

If you can buy more performance but you can’t buy more of those other things, you’d be a fool to buy the system that’s built for speed and speed alone.

If the measured network performance is 100MB/s (800Mb/s), then with two replicas one would expect no better than 50MB/s, and with four replicas no better than 25MB/s.
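The arithmetic behind that estimate can be sketched directly: with client-side replication, every write is sent to each replica over the same client link, so usable write throughput is roughly the link speed divided by the replica count. A minimal sketch, using the 100MB/s measured figure from the quote:

```shell
# Client-side replication: each write goes to every replica over the
# same client link, so throughput ~ link speed / replica count.
NET_MBS=100   # measured network throughput in MB/s (from the quote)
for replicas in 1 2 4; do
    echo "replicas=$replicas -> ~$((NET_MBS / replicas)) MB/s"
done
```

This is an upper bound: protocol overhead and contention push real numbers lower still.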

Joe Landman:

Ext4 isn't designed for parallel I/O workloads, while XFS is.

Gilad:

Later, I terminated fuse and repeated the tests on the same directory using the original mount point. My benchmarks showed fuse can reach about 30% of the original file system's performance (cached reads were about 2000MB/s, whereas w/o fuse I got about 6000MB/s).
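The kind of cached-read comparison Gilad describes can be reproduced with a simple dd pass. This is a sketch only: the temp file here is local, whereas in the real test the same file would be read once through the FUSE mount and once through the underlying filesystem's own mount point.

```shell
# Sketch of a cached-read throughput measurement (local file stands in
# for the hypothetical FUSE-mounted vs. natively-mounted copies).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=16 2>/dev/null
cat "$f" > /dev/null                      # warm the page cache
result=$(dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1)
echo "$result"                            # final dd line reports MB/s
rm -f "$f"
```

Running the same read against both mount points, with a warm cache, isolates the FUSE context-switch overhead from the disk itself.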

Stephan:

Ok, now you can see why I am talking about dropping the long-gone unix
versions (BSD/Solaris/name-one) and concentrating on a linux-kernel module
for glusterfs without the fuse overhead. It is the _only_ way to make this
project a really successful one.

someone:

"you must increase the inode size of XFS to 512 bytes from the default 256 bytes"
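That recommendation maps to a single mkfs flag. The device name below is hypothetical; the larger inode leaves room for GlusterFS's extended attributes inside the inode itself, sparing an extra seek per file. Shown here as a setup fragment, not run output:

```shell
# Hypothetical device: format a brick with 512-byte inodes so Gluster's
# xattrs fit in-inode (older mkfs.xfs defaulted to 256-byte inodes).
mkfs.xfs -i size=512 /dev/sdb1
```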

That can be solved by having an option in 'glusterd.vol', 'option rpc-auth-allow-insecure off'. That way, glusterd doesn't allow any connection from user programs. Please check if that is enough.
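In context, that option lives in the management volume stanza of glusterd.vol. A sketch of what the file would then look like — the working-directory path is the common default, not taken from the quote:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option rpc-auth-allow-insecure off
end-volume
```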

Brian Candler:

However gluster 3.3.x is still not ideal as a VM backing store, because of the performance issues of going via the kernel and back out through the FUSE layer. There are bleeding-edge patches to KVM which allow it to use libglusterfs to talk directly to the storage bricks, staying in userland: http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg01745.html
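Those patches later landed in QEMU as the native gluster block driver, which addresses a disk image on a Gluster volume by URL instead of going through the FUSE mount. A usage sketch with hypothetical host and volume names (requires a running Gluster server, so shown as illustration only):

```shell
# Hypothetical host/volume: the image is accessed via libglusterfs
# in userland, bypassing the kernel/FUSE round trip entirely.
qemu-img create -f qcow2 gluster://storage1/vmvol/disk0.qcow2 20G
qemu-system-x86_64 -drive file=gluster://storage1/vmvol/disk0.qcow2,if=virtio
```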

Daniel Mons:

Gluster's single biggest bottleneck (and this is common to many clustered file systems) is file lookup over the network for uncached content, and especially negative lookup. These are several orders of magnitude slower than the storage, and increasing the storage IOPS won't help things much at all.
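A negative lookup is simply a path resolution that ends in ENOENT. Locally it is cheap, but on a distributed volume a miss can mean a round trip to the bricks before the filesystem can say the file does not exist — no node can answer "found" early. A toy local illustration of the two cases (the network cost itself is not reproduced here):

```shell
# Toy illustration: positive vs. negative lookup. On GlusterFS the
# negative case is the expensive one, since the bricks must be
# consulted before ENOENT can be returned.
d=$(mktemp -d)
touch "$d/present"
stat "$d/present" > /dev/null 2>&1 && echo "positive lookup: found"
stat "$d/absent"  > /dev/null 2>&1 || echo "negative lookup: ENOENT"
rm -rf "$d"
```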

Robert van Leeuwen:

Copying a few files and looking at the speed usually does not say a lot about real-life performance
(unless real life is also just one person copying big files around).
Most workloads I see hit IOPS limits a long time before throughput.
