GlusterFS Source Code Analysis: Common GlusterFS Command-Line Errors

Original post: http://write.blog.csdn.net/postedit/25927643


Problem 1

[root@localhost ~]# gluster peer status
Connection failed. Please check if gluster daemon is operational.

Cause: the glusterd service is not running.

Solution: start the glusterd service:

/etc/init.d/glusterd start
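
If peer status still fails after this, it may help to confirm the daemon is actually running and to enable it at boot. A minimal sketch, assuming a SysV-init system such as CentOS 6 (which the /etc/init.d paths in this post suggest):

/etc/init.d/glusterd status    # confirm that glusterd is running
chkconfig glusterd on          # optional: start glusterd automatically at boot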


Problem 2

[root@localhost ~]# gluster peer probe 192.168.230.130

peer probe: failed: Probe returned with unknown errno 107

Cause: the glusterd log shows

[2014-05-15 15:55:25.929461] I [glusterd-handler.c:2836:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 192.168.230.130 (24007)

The firewall is blocking port 24007.

Solution: open port 24007, or stop the firewall entirely:

/sbin/iptables -I INPUT -p tcp --dport 24007 -j ACCEPT   # open port 24007

/etc/init.d/iptables stop     # stop the firewall
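
If you keep the firewall running and only open the port, the rule can be saved so that it survives a reboot (again assuming the iptables service on a CentOS/RHEL-style system):

/etc/init.d/iptables save     # persist the rule across reboots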


Note:

A hostname can also be used in place of the IP address by adding a mapping to /etc/hosts on each node (an example entry is shown after the command below):

gluster peer probe server-130
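
A minimal /etc/hosts entry matching the probe command above might look like the following; the name server-130 is only an example and must match whatever hostname you choose:

192.168.230.130    server-130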


Problem 3

volume create volume1 192.168.230.135:/tmp/brick1
volume create: volume1: failed

A volume cannot be created using only a brick on that single remote server; either use at least two bricks, or create the volume with a single brick on the client host where the command is run (here 192.168.230.134), as the examples below show.

gluster> volume create volume1 192.168.230.134:/tmp/brick1 force
volume create: volume1: success: please start the volume to access data
gluster> volume info

Volume Name: volume1
Type: Distribute
Volume ID: b01a2c29-09a6-41fd-a94e-ea834173a6a3
Status: Created
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.230.134:/tmp/brick1

gluster>

gluster> volume create volume2 192.168.230.134:/tmp/brick2  192.168.230.135:/tmp/brick2 force
volume create: volume2: success: please start the volume to access data
gluster> volume info
 
Volume Name: volume1
Type: Distribute
Volume ID: b01a2c29-09a6-41fd-a94e-ea834173a6a3
Status: Created
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.230.134:/tmp/brick1
 
Volume Name: volume2
Type: Distribute
Volume ID: 4af2e260-70ce-49f5-9663-9c831c5cf831
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.230.134:/tmp/brick2
Brick2: 192.168.230.135:/tmp/brick2
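
Note that a newly created volume is not yet usable; as the success message says, it has to be started and then mounted. A minimal sketch, assuming the native FUSE client and a mount point of /mnt/glusterfs (the mount point is only an example):

gluster volume start volume1
mount -t glusterfs 192.168.230.134:/volume1 /mnt/glusterfs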

Problem 4

After a volume is created and then deleted, creating a new volume with the same brick fails.

gluster> volume create test 192.168.230.134:/tmp/brick1 force
volume create: test: success: please start the volume to access data
gluster> volume info 
 
Volume Name: test
Type: Distribute
Volume ID: c29f75d2-c9f5-4d6f-90c5-c562139ab9cd
Status: Created
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.230.134:/tmp/brick1
gluster> volume delete test force
Usage: volume delete <VOLNAME>
gluster> volume delete test
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: test: success
gluster> volume create test 192.168.230.134:/tmp/brick1 force
volume create: test: failed: /tmp/brick1 or a prefix of it is already part of a volume

This is because volume delete does not remove the brick directory created at volume create time; it must be deleted manually (or have its GlusterFS metadata cleared) before the brick can be reused, as sketched below.
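
A minimal cleanup sketch, assuming the data left in /tmp/brick1 is no longer needed (the setfattr variant keeps the files and only strips the GlusterFS markers; it requires the attr package):

rm -rf /tmp/brick1    # simplest: remove the leftover brick directory entirely

# or keep the files and clear only the GlusterFS metadata on the brick:
setfattr -x trusted.glusterfs.volume-id /tmp/brick1
setfattr -x trusted.gfid /tmp/brick1
rm -rf /tmp/brick1/.glusterfs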



