Proxmox VE 2.2 Virtualization Installation and Configuration Notes (Part 2)

And that's it; the cluster has been created. You can check which hosts are running:

server2:

pveca -l

The output should show information for both hosts:

server2:~# pveca -l
CID----IPADDRESS----ROLE-STATE--------UPTIME---LOAD----MEM---ROOT---DATA
1 : 172.16.1.200   M     S           00:15   0.00     5%     1%     0%
2 : 172.16.1.201   N     S           00:04   0.08    15%     1%     0%
server2:~#

2. Under version 2.2 the cluster is created with the following commands (the 1.x-era pveca tool has been replaced by pvecm):

Run the following commands on Server1:

Server1:

1. Create the cluster and give it a name:

root@server1:~# pvecm create korolev

Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1b:a7:9e:b7:c3:7c:ad:ca:62:58:38:cf:15:62:73:24 root@server1
The key's randomart image is:
[ RSA 2048 randomart image ]
Restarting pve cluster filesystem: pve-cluster[dcdb] notice: wrote new cluster config '/etc/cluster/cluster.conf'
.
Starting cluster:
   Checking if cluster has been disabled at boot... [ OK ]
   Checking Network Manager... [ OK ]
   Global setup... [ OK ]
   Loading kernel modules... [ OK ]
   Mounting configfs... [ OK ]
   Starting cman... [ OK ]
   Waiting for quorum... [ OK ]
   Starting fenced... [ OK ]
   Starting dlm_controld... [ OK ]
   Tuning DLM kernel config... [ OK ]
   Unfencing self... [ OK ]
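One point worth verifying before creating the cluster (my own addition, not from the original run): each node must be able to resolve the other nodes' names to the right addresses, otherwise cman has trouble forming the cluster. A minimal /etc/hosts sketch matching the names and IPs used in these notes:

root@server1:~# cat /etc/hosts
127.0.0.1       localhost
172.16.1.200    server1.proxmox.com server1
172.16.1.201    server2.proxmox.com server2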

2. Check the cluster status:

root@server1:~# pvecm status

Version: 6.2.0
Config Version: 1
Cluster Name: korolev
Cluster Id: 13864
Cluster Member: Yes
Cluster Generation: 4
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: server1
Node ID: 1
Multicast addresses: 239.192.54.94
Node addresses: 172.16.1.200
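A quick note on the vote fields above (my reading, not part of the output): each node contributes one vote, and the cluster only has quorum while the total votes reach the Quorum value, so once the second node joins, both nodes must stay up. If one of the two is down for maintenance, the expected-vote count can be lowered temporarily with the pvecm expected subcommand (it appears in the usage listing further below), for example:

root@server1:~# pvecm expected 1 //temporarily expect a single vote so the remaining node keeps quorum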

Run the following command on server2:

Server2:

Add the node to the cluster:

root@server2:~# pvecm add 172.16.1.200 //use the cluster master's IP address

Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
0e:ba:bf:41:52:a7:69:ca:24:ba:1b:d3:01:7d:7e:a5 root@server2
The key's randomart image is:
[ RSA 2048 randomart image ]
The authenticity of host '172.16.1.200 (172.16.1.200)' can't be established.
RSA key fingerprint is e9:7a:1b:12:f6:6b:54:82:cb:8b:fe:40:3f:a7:00:27.
Are you sure you want to continue connecting (yes/no)? yes //type yes and press Enter
[email protected]'s password:**** //enter the root password of the cluster master (172.16.1.200)
copy corosync auth key
stopping pve-cluster service
Stopping pve cluster filesystem: pve-cluster.
backup old database
Starting pve cluster filesystem : pve-clusterfuse: failed to access mountpoint /etc/pve: Transport endpoint is not connected
[main] crit: fuse_mount error: Transport endpoint is not connected
[main] notice: exit proxmox configuration filesystem (-1)
 (warning).
cannot stat initial working directory for /etc/pve: Transport endpoint is not connected at /usr/bin/pvecm line 478
starting pve-cluster failed //the join clearly failed here

root@server2:/etc/pve# pvecm add 172.16.1.200 //retrying the add right away produces the following error
pve configuration filesystem not mounted
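Before jumping to a reboot (which is what I try next), it can be worth checking by hand whether the pve-cluster fuse filesystem is actually mounted and restarting the services; these commands are my own suggestion using the standard PVE 2.x init scripts, not part of the failed run above:

root@server2:~# mount | grep /etc/pve //no output means the cluster filesystem is not mounted
root@server2:~# /etc/init.d/pve-cluster restart
root@server2:~# /etc/init.d/cman restart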

This error in particular comes up the most often. Advice found online suggests rebooting and then retrying the add. In my own test, even though the command failed with this error, after rebooting both the cluster master and this slave host, logging back into the slave being added and retrying gives:

root@server2:~# pvecm add 172.16.1.200

authentication key already exists //the key is already there, which means the earlier join had in fact succeeded!

root@server2:~# pvecm status //check the cluster status; Nodes is now 2

Version: 6.2.0
Config Version: 4
Cluster Name: korolev
Cluster Id: 13864
Cluster Member: Yes
Cluster Generation: 12
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: server2
Node ID: 2
Multicast addresses: 239.192.54.94
Node addresses: 172.16.1.201

root@server2:~# pvecm nodes //list the cluster nodes

Node  Sts   Inc   Joined               Name
   1   M     12   2012-12-11 21:16:52  server1
   2   M      4   2012-12-11 21:16:17  server2

***To summarize:*** I believe the cause of the errors above is that after the cluster is created on the master host server1, a reboot is needed before it fully takes effect; otherwise the join run on other slave hosts frequently fails, or a retry reports the "pve configuration filesystem not mounted" error.

Back on the cluster master host, server1:

Server1:

root@server1:~# pvecm nodes //list node information

Node  Sts   Inc   Joined               Name
   1   M     12   2012-12-11 21:17:51  server1
   2   M     12   2012-12-11 21:17:51  server2

root@server1:~# pvecm delnodes server2 //mistyped subcommand; pvecm prints its usage
ERROR: unknown command 'delnodes'
USAGE: pvecm <COMMAND> [ARGS] [OPTIONS]
       pvecm help [<cmd>] [OPTIONS]
       pvecm add <hostname> [OPTIONS]
       pvecm addnode <node> [OPTIONS]
       pvecm create <clustername> [OPTIONS]
       pvecm delnode <node>
       pvecm expected <expected>
       pvecm keygen <filename>
       pvecm nodes
       pvecm status
       pvecm updatecerts [OPTIONS]

root@server1:~# pvecm delnode server2 //delete the node
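A caveat worth repeating from the Proxmox documentation (I did not test it in this run): a node removed with delnode should not come back online with its old cluster configuration; reinstall it before re-adding. You can confirm the removal right away:

root@server1:~# pvecm nodes //server2 should no longer be listed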

Now return to the Proxmox control panel at http://172.16.1.200/ (server2.proxmox.com does not need its own panel: since the two hosts now form one cluster, logging in with server2's IP shows the same interface, though it is best to manage the cluster through the master's IP.) In the cluster view, both servers are listed:

[Screenshot: cluster view listing both servers]

4. Adding appliance templates

Before we create an OpenVZ container, we need to add at least one operating system template to the system (for KVM guests, you can not only add ISO files but also install directly from an OS CD or DVD). Proxmox 2.2 changed template handling considerably compared with older versions, so the following walks through the steps in detail:

[Screenshot: the node's storage in the web interface]

Click the node host and choose the local storage, then Content, then Templates. You will find a long list of OpenVZ templates that can be downloaded and used:

[Screenshot: list of downloadable templates]

Here you will see the list of templates provided by the Proxmox project, grouped into categories such as admin, mail, system, and www; you can download them straight to local storage and use them.

[Screenshot: template categories]

The download may well fail: my node hosts sit on an isolated test LAN with no Internet access, so it could not succeed here. The approach below works around this:

In theory some of the listed links are rather old and no longer a good fit; instead, you can go to http://download.proxmox.com/appliances/, browse the admin, mail, system, or www folders, and download the templates you need to your local disk.

[Screenshots: browsing the folders at http://download.proxmox.com/appliances/]
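If a node does have Internet access, the same files can be fetched from the command line directly into the local template directory instead of going through a browser; /var/lib/vz/template/cache is the standard location for OpenVZ templates on local storage, and the filename below is only an example of the naming used in those folders:

root@server1:~# wget -P /var/lib/vz/template/cache http://download.proxmox.com/appliances/system/debian-6.0-standard_6.0-4_i386.tar.gz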

Once the templates are downloaded, you can upload them to the Proxmox master host from the Local storage tab. (Given my test conditions, the templates can only live on local storage; if the hardware has proper high-performance shared storage, it is better to put them on a LUN there so that every node can read them conveniently.)
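If the web upload is inconvenient, an equivalent shortcut is to copy a template tarball over SSH straight into the master's template cache, which is where the Local storage keeps OpenVZ templates; the filename here is again just an illustration, not a file from this article:

scp debian-6.0-standard_6.0-4_i386.tar.gz root@172.16.1.200:/var/lib/vz/template/cache/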

你可能感兴趣的:(p,主机)