Today's enterprise depends on the availability of mail and web services. Failure is never far away, whether it is a hardware failure or a human error. We have to make our infrastructure as highly available as possible.
When building highly available clusters, people often add one extra physical machine per service, creating an A-B failover scheme. With static websites there is no problem making the application highly available; you can simply store the data in two places. However, the moment you add a database to your environment, things become more difficult. The easy way out is to move the database to a separate machine and declare that server a SEP (Somebody Else's Problem) field.
That's not how we do it. In the old days, when sites became too heavily loaded, we used MySQL replication to create multiple read-only copies of the database, which were load balanced by LVS (Linux Virtual Server). This, however, meant that we had to modify the application itself so that it wrote only to the master node.
Later, many people tried to build replication environments that implemented high availability. All of them struggled with the same problem: they couldn't determine exactly at which point a node had failed, so it was possible to lose records. Recovering from a failover also proved to be a difficult task.
The MySQL NDB storage engine consists of several parts:

ndb_mgmd
is the NDB management daemon. This daemon manages the cluster. It should be started first so that it can monitor the state of the other parts. The management daemon arbitrates which node becomes master and which nodes have to be disconnected from the cluster. It is also capable of (re)starting individual nodes and initiating backups. The other nodes ask the management node for their configuration details, but from then on they no longer depend on it: you can stop and restart the management node without disturbing the cluster, as long as no other fault happens during the restart. The management node listens on port tcp/1186 (tcp/2200 in older versions).

ndb_mgm
is the management client. It sends commands to ndb_mgmd.

ndbd
is the actual network database engine. You need at least as many nodes as the number of replicas you want; to spread data over more machines, increase the number of nodes.

mysqld
is the standard SQL node that connects to ndbd for NDB engine type storage. It can still serve MyISAM and InnoDB tables as well.
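The parts above come together in the management node's configuration file, conventionally called config.ini. The following is a minimal sketch; the host names and data directory are placeholders you would replace with your own:

```ini
# Minimal config.ini read by ndb_mgmd (example host names and paths).
[ndbd default]
NoOfReplicas=2                   # keep two copies of every table fragment

[ndb_mgmd]
HostName=mgm.example.com         # the management node
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=ndb1.example.com        # first data node
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=ndb2.example.com        # second data node
DataDir=/var/lib/mysql-cluster

[mysqld]
HostName=sql1.example.com        # the SQL node clients connect to
```

With NoOfReplicas=2 and two [ndbd] sections, every fragment of data exists on both data nodes, which is what makes losing a single data node survivable.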
Standard MySQL clients connect to the SQL node and won't notice any difference from a MyISAM or InnoDB query, so there is no need to change the application's API.
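To illustrate, the only visible difference on the SQL node is the ENGINE clause when the table is created (the table and column names below are made up for the example):

```sql
-- Stored on the ndbd data nodes instead of in local MyISAM/InnoDB files.
CREATE TABLE accounts (
    id INT NOT NULL PRIMARY KEY,
    balance DECIMAL(10,2)
) ENGINE=NDBCLUSTER;

-- Queries are completely unchanged; a client cannot tell this
-- from a MyISAM or InnoDB table.
SELECT id, balance FROM accounts WHERE id = 42;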
In order to achieve high availability, you need at least three nodes: one management node and two replica nodes running ndbd.
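Bringing such a three-node cluster up follows the order described above: the management node first, then the data nodes, then the SQL node. A sketch of the commands, assuming config.ini lives in /var/lib/mysql-cluster and each node's my.cnf points its connect string at the management host (paths and options are examples, not a definitive recipe):

```
# 1. On the management node: start ndb_mgmd with the cluster config.
ndb_mgmd -f /var/lib/mysql-cluster/config.ini

# 2. On each data node: start the storage engine daemon.
ndbd

# 3. On the SQL node: start mysqld as usual.
mysqld_safe &

# Verify from the management client that all nodes have connected:
ndb_mgm -e SHOW
```

If ndb_mgm's SHOW output lists both ndbd nodes as started and in the same node group, the cluster can survive the loss of either one.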
The configuration details below apply to the older version, so I won't explain them further here.