d. Install apache+tomcat on node4
Install jdk1.7.0_67 (Tomcat depends on a JDK to run). Note: jdk1.7.0_67 and apache-tomcat-7.0.55.tar.gz reportedly have a compatibility issue; apache-tomcat-7.0.42.tar.gz can be used instead.
# rpm -ivh jdk-7u67-linux-x64.rpm
# tar xf apache-tomcat-7.0.55.tar.gz -C /usr/local/
Export the environment variables:
# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk1.7.0_67    # this path is fixed by the JDK RPM
export PATH=$JAVA_HOME/bin:$PATH
# . /etc/profile.d/java.sh
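Note the leading `. ` (dot plus space): the file must be sourced, not executed, so the exported variables land in the current shell. A quick standalone sketch of the difference (the temp file and variable name are illustrative):

```shell
# Write a throwaway profile script (illustrative stand-in for java.sh)
tmpsh=$(mktemp)
echo 'export DEMO_JAVA_HOME=/usr/java/jdk1.7.0_67' > "$tmpsh"

# Executing it runs a child shell: the export does NOT reach this shell
bash "$tmpsh"

# Sourcing it runs in the current shell: the export persists
. "$tmpsh"
echo "$DEMO_JAVA_HOME"
rm -f "$tmpsh"
```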
# cd /usr/local && ln -sv apache-tomcat-7.0.55 tomcat
# cat /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export PATH=$CATALINA_HOME/bin:$PATH
Create a startup script for Tomcat at /etc/rc.d/init.d/tomcat with the following content:
#!/bin/sh
# Tomcat init script for Linux.
#
# chkconfig: 2345 96 14
# description: The Apache Tomcat servlet/JSP container.
# JAVA_OPTS='-Xms64m -Xmx128m'
JAVA_HOME=/usr/java/latest
CATALINA_HOME=/usr/local/tomcat
export JAVA_HOME CATALINA_HOME
case $1 in
start)
    exec $CATALINA_HOME/bin/catalina.sh start ;;
stop)
    exec $CATALINA_HOME/bin/catalina.sh stop ;;
restart)
    $CATALINA_HOME/bin/catalina.sh stop
    sleep 2
    exec $CATALINA_HOME/bin/catalina.sh start ;;
*)
    echo "Usage: `basename $0` {start|stop|restart}"
    exit 1
    ;;
esac
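The dispatcher logic of the script above can be exercised on its own; a minimal stand-in (function name and messages are placeholders, not the real script):

```shell
# Minimal stand-in for the init script's case/esac argument dispatcher
tomcat_ctl() {
    case "$1" in
    start)   echo "starting" ;;
    stop)    echo "stopping" ;;
    restart) echo "stopping"; echo "starting" ;;
    *)       echo "Usage: tomcat {start|stop|restart}"; return 1 ;;
    esac
}

tomcat_ctl start            # prints "starting"
tomcat_ctl badarg || rc=$?  # prints the usage line; rc=1
```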
Grant it execute permission:
# chmod +x /etc/rc.d/init.d/tomcat
Add it to the service list:
# chkconfig --add tomcat
Start tomcat:
# service tomcat start
# ss -tnlp | grep 8080
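Instead of eyeballing the ss output, a small helper can poll until the port accepts connections. A sketch using bash's built-in /dev/tcp (the timeout default is arbitrary):

```shell
# Poll host:port once per second; return 0 when it accepts a TCP
# connection, 1 if the deadline (seconds) expires first.
wait_for_port() {
    local host=$1 port=$2 deadline=${3:-10} i=0
    while [ "$i" -lt "$deadline" ]; do
        # /dev/tcp/... is a bash pseudo-device; the subshell closes the socket
        if (exec > "/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Usage after 'service tomcat start':
#   wait_for_port 127.0.0.1 8080 30 && echo "tomcat is up"
```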
Deploy a virtual host by editing the configuration file server.xml.
Add after the existing <Host> block:
<Host name="www.tree.com" appBase="/tomcat"
      unpackWARs="true" autoDeploy="true">
    <Context path="" docBase="webapps" reloadable="true" />
</Host>
Or alternatively:
<Host name="www.tree.com" appBase="/tomcat/webapps"
      unpackWARs="true" autoDeploy="true">
</Host>
but in that case the resources must live under /tomcat/webapps/ROOT.
Change
<Engine name="Catalina" defaultHost="localhost"> to <Engine name="Catalina" defaultHost="www.tree.com">
Change the listening port from the default 8080 to 80:
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"/>
Create the directory structure:
# tree /tomcat/
/tomcat/
└── webapps
    └── index.jsp
The test page /tomcat/webapps/index.jsp contains:
# cat /tomcat/webapps/index.jsp
<%@ page language="java" %>
<%@ page import="java.util.*" %>
<html>
<head>
<title> JSP Test Page</title>
</head>
<body>
<%
out.println("Hello How are you.");
out.println("Hello there.");
%>
</body>
</html>
# catalina.sh start
Check whether the port is listening:
# ss -tnl
Access test:
http://192.168.21.166
When running multiple applications on Tomcat, run one Tomcat per application: neither extra Contexts nor virtual hosts are recommended. Run multiple instances on a single machine instead.
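The single-machine multi-instance approach can be sketched with CATALINA_BASE: each instance gets its own directory tree and its own ports in server.xml, while sharing the binaries under CATALINA_HOME. The paths below are illustrative:

```shell
# Each instance needs its own CATALINA_BASE with this layout; conf/server.xml
# (with unique shutdown/HTTP ports) and conf/web.xml are then copied from
# $CATALINA_HOME/conf.
make_instance() {
    local base=$1
    mkdir -p "$base"/conf "$base"/logs "$base"/temp "$base"/webapps "$base"/work
}

make_instance /tmp/tomcat-instance1

# Start that instance with:
#   CATALINA_BASE=/tmp/tomcat-instance1 $CATALINA_HOME/bin/catalina.sh start
```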
Compile and install httpd
Prepare the build environment:
# yum install -y gcc gcc-c++ pcre-devel openssl-devel
# tar xf apr-1.4.6.tar.bz2 -C /usr/local
# cd /usr/local/apr-1.4.6
# ./configure --prefix=/usr/local/apr
# make && make install
apr-util is APR's utility library; it gives programmers easier access to APR's functionality. APR sources can be obtained from http://apr.apache.org/; the latest version at the time was 1.4.1.
# tar xf apr-util-1.4.1.tar.bz2 -C /usr/local
# cd /usr/local/apr-util-1.4.1
# ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr
# make && make install
Install apache
The latest httpd 2.4 series introduces the event MPM, which offers a notable performance improvement over the other MPMs.
# tar xf httpd-2.4.2.tar.bz2 -C /usr/local
# cd /usr/local/httpd-2.4.2
# ./configure --prefix=/usr/local/apache --sysconfdir=/etc/httpd --enable-so --enable-ssl --enable-cgi --enable-rewrite --with-zlib --with-pcre --with-apr=/usr/local/apr --with-apr-util=/usr/local/apr-util --enable-mpms-shared=all --with-mpm=event --enable-proxy --enable-proxy-http --enable-proxy-ajp --enable-proxy-balancer --enable-lbmethod-heartbeat --enable-heartbeat --enable-slotmem-shm --enable-slotmem-plain --enable-watchdog
# make && make install
Provide an init script for apache to control the service: create /etc/rc.d/init.d/httpd with the content below.
Since it is a script it needs execute permission, and for httpd to start at boot it must also be added to the service list.
# cat /etc/rc.d/init.d/httpd
#!/bin/bash
#
# httpd        Startup script for the Apache HTTP Server
#
# chkconfig: - 85 15
# description: Apache is a World Wide Web server. It is used to serve \
#              HTML files and CGI.
# processname: httpd

# Source function library.
. /etc/rc.d/init.d/functions

if [ -f /etc/sysconfig/httpd ]; then
    . /etc/sysconfig/httpd
fi

# Start httpd in the C locale by default.
HTTPD_LANG=${HTTPD_LANG-"C"}

# This will prevent initlog from swallowing up a pass-phrase prompt if
# mod_ssl needs a pass-phrase from the user.
INITLOG_ARGS=""

# Set HTTPD=/usr/sbin/httpd.worker in /etc/sysconfig/httpd to use a server
# with the thread-based "worker" MPM; BE WARNED that some modules may not
# work correctly with a thread-based MPM; notably PHP will refuse to start.

# Path to the apachectl script, server binary, and short-form for messages.
apachectl=/usr/local/apache/bin/apachectl
httpd=${HTTPD-/usr/local/apache/bin/httpd}
prog=httpd
pidfile=${PIDFILE-/var/run/httpd.pid}
lockfile=${LOCKFILE-/var/lock/subsys/httpd}
RETVAL=0
start() {
    echo -n $"Starting $prog: "
    LANG=$HTTPD_LANG daemon --pidfile=${pidfile} $httpd $OPTIONS
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch ${lockfile}
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p ${pidfile} -d 10 $httpd
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f ${lockfile} ${pidfile}
}

reload() {
    echo -n $"Reloading $prog: "
    if ! LANG=$HTTPD_LANG $httpd $OPTIONS -t >&/dev/null; then
        RETVAL=$?
        echo $"not reloading due to configuration syntax error"
        failure $"not reloading $httpd due to configuration syntax error"
    else
        killproc -p ${pidfile} $httpd -HUP
        RETVAL=$?
    fi
    echo
}
# See how we were called.
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status -p ${pidfile} $httpd
        RETVAL=$?
        ;;
  restart)
        stop
        start
        ;;
  condrestart)
        if [ -f ${pidfile} ] ; then
            stop
            start
        fi
        ;;
  reload)
        reload
        ;;
  graceful|help|configtest|fullstatus)
        $apachectl $@
        RETVAL=$?
        ;;
  *)
        echo $"Usage: $prog {start|stop|restart|condrestart|reload|status|fullstatus|graceful|help|configtest}"
        exit 1
        ;;
esac

exit $RETVAL
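The pidfile/lockfile lines in the script rely on the shell's use-default expansion `${var-default}`: the fallback applies only when the variable is unset, which is what lets /etc/sysconfig/httpd override them. A standalone sketch:

```shell
# ${var-default}: the default applies only when var is UNSET
unset PIDFILE
p1=${PIDFILE-/var/run/httpd.pid}          # -> /var/run/httpd.pid

PIDFILE=/usr/local/apache/logs/httpd.pid
p2=${PIDFILE-/var/run/httpd.pid}          # -> /usr/local/apache/logs/httpd.pid

# Contrast: ${var:-default} also applies when var is set but EMPTY
PIDFILE=
p3=${PIDFILE:-/var/run/httpd.pid}         # -> /var/run/httpd.pid
echo "$p1 $p2 $p3"
```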
# chmod +x /etc/rc.d/init.d/httpd
# chkconfig --add httpd
Before starting httpd, first change the port Tomcat's HTTP connector listens on back to 8080, since httpd will take port 80.
# service httpd start
Starting reports OK, but no listening port shows up and status says the service is not running; check the error log /usr/local/apache/logs/error_log:
[Thu Aug 06 08:01:25.328782 2015] [proxy_balancer:emerg] [pid 3966:tid 139927044081408] AH01177: Failed to lookup provider 'shm' for 'slotmem': is mod_slotmem_shm loaded??
[Thu Aug 06 08:01:25.336850 2015] [:emerg] [pid 3966:tid 139927044081408] AH00020: Configuration Failed, exiting
Enable the module in /etc/httpd/httpd.conf:
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
After that httpd starts normally, but the error log still shows:
[Thu Aug 06 08:04:39.837549 2015] [lbmethod_heartbeat:notice] [pid 4005:tid 139928518989568] AH02282: No slotmem from mod_heartmonitor
Comment out that module:
#LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so
Now httpd starts with no error messages in the log.
However, stopping httpd then fails with an error and the service cannot be shut down. Investigation shows that after startup the pid file is not /var/run/httpd.pid at all, but /usr/local/apache/logs/httpd.pid, so the pidfile entry in the init script /etc/rc.d/init.d/httpd must be changed as follows:
Original:
pidfile=${PIDFILE-/var/run/httpd.pid}
Changed:
pidfile=${PIDFILE-/usr/local/apache/logs/httpd.pid}
Alternatively, set PidFile "/var/run/httpd.pid" in the /etc/httpd/httpd.conf configuration file.
Also uncomment and set ServerName.
Now httpd starts and stops normally.
Integrating Apache with Tomcat via mod_proxy
Append the following to the main configuration file /etc/httpd/httpd.conf:
ProxyVia Off
ProxyRequests Off
ProxyPass / http://192.168.21.166:8080/
ProxyPassReverse / http://192.168.21.166:8080/
<Proxy *>
    Require all granted
</Proxy>
<Location />
    Require all granted
</Location>
The same content can instead go into a virtual host; in that case comment out DocumentRoot in the main configuration file, then add the following to the vhost configuration:
<VirtualHost *:80>
    ProxyVia Off
    ProxyRequests Off
    ProxyPass / http://192.168.21.166:8080/
    ProxyPassReverse / http://192.168.21.166:8080/
    <Proxy *>
        Require all granted
    </Proxy>
    <Location />
        Require all granted
    </Location>
</VirtualHost>
The browser now reaches the backend Tomcat through the proxy.
e. Deploy DRBD and highly available MySQL on node8 and node88. In practice MySQL is usually set up as master/slave with multiple slaves. If read pressure is still too high with multiple slaves, add a read cache layer in front of them: read requests that hit the cache are answered from it as long as the entry is unexpired (or expired but unchanged); misses go to a slave, and the result is cached before being returned.
Prepare the MySQL source package:
# ls /usr/local/src/
mariadb-10.0.13.tar.gz
Installation is automated with the following script:
#!/bin/bash
useradd -r -s /sbin/nologin mysql > /dev/null
#vgextend $(vgs | awk '{if(NR==2) {print $1}}') /dev/sdb > /dev/null
lvcreate -L 18G -n data vg_lvm > /dev/null
mkdir /mysql
mkfs.ext4 /dev/vg_lvm/data > /dev/null
mount /dev/vg_lvm/data /mysql
mkdir /mysql/data
chown -R mysql.mysql /mysql/data
tar -xf /usr/local/src/mariadb-10.0.13.tar.gz -C /usr/local/
yum groupinstall -y "Development tools" "Server Platform Development" > /dev/null
echo -e "\033[42mGroupinstall is OK.\033[0m"
yum install -y libxml2-devel cmake > /dev/null
echo -e "\033[42mInstall is OK.\033[0m"
cd /usr/local/mariadb-10.0.13/
cmake . -DMYSQL_DATADIR=/mysql/data -DWITH_SSL=system -DWITH_SPHINX_STORAGE_ENGINE=1 > /dev/null
echo -e "\033[42mCmake is OK.\033[0m"
make && make install > /dev/null
echo -e "\033[42mMake and Make install is OK.\033[0m"
cd /usr/local/mysql
echo 'export PATH=/usr/local/mysql/bin:$PATH' > /etc/profile.d/mysql.sh
source /etc/profile.d/mysql.sh
cp -f support-files/my-large.cnf /etc/my.cnf
sed -i '/^\[mysqld\]/a datadir=/mysql/data' /etc/my.cnf
cp /usr/local/mysql/support-files/mysql.server /etc/rc.d/init.d/mysqld
chmod +x /etc/rc.d/init.d/mysqld
chkconfig --add mysqld
chkconfig mysqld on
chown -R root.mysql /usr/local/mysql/*
/usr/local/mysql/scripts/mysql_install_db --user=mysql --datadir=/mysql/data > /dev/null
echo -e "\033[42mMysql initial is ok.\033[0m"
service mysqld start
ss -tnlp | grep 3306
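One quoting subtlety when the script writes /etc/profile.d/mysql.sh: inside double quotes `$PATH` expands immediately, baking the installer's current PATH into the file, while single quotes keep it literal so it expands each time the profile is sourced. A sketch (files under /tmp are illustrative):

```shell
# Double quotes: $PATH expands NOW, at install time
echo "export PATH=/usr/local/mysql/bin:$PATH" > /tmp/mysql-now.sh

# Single quotes: $PATH stays literal and expands at login time
echo 'export PATH=/usr/local/mysql/bin:$PATH' > /tmp/mysql-later.sh

cat /tmp/mysql-later.sh    # -> export PATH=/usr/local/mysql/bin:$PATH
```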
High availability with corosync+pacemaker
Prerequisites for HA:
time synchronization, hostname-based mutual communication, SSH mutual trust
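Hostname-based communication means both nodes must resolve each other's names identically; a sketch of the entries to append to /etc/hosts on both nodes (node88's address is an assumption, and the demo writes to a scratch file rather than the real /etc/hosts):

```shell
# On a real node these lines go into /etc/hosts; a scratch file is used here.
# node8's IP comes from the corosync log later in these notes; node88's IP
# below is hypothetical -- substitute the real one.
hosts_file=/tmp/hosts.demo
cat >> "$hosts_file" <<'EOF'
192.168.21.159 node8
192.168.21.160 node88
EOF
grep node8 "$hosts_file"
```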
Install the corosync and pacemaker packages.
For MySQL high availability the required resources are: vip, mysqld, and a filesystem [rsync+inotify, NFS] or block store [DRBD, iSCSI].
Block storage is implemented here with DRBD.
Since multiple resources are involved, there are two ways to organize them:
group        define a resource group
constraint   define constraints:
    location    location constraint: which node the service's resources prefer
    order       order constraint: the start/stop ordering of the service's resources
    colocation  colocation constraint: how strongly the resources want to stay together
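With crmsh the two approaches look roughly like this; resource and constraint names are placeholders, a sketch rather than a tested configuration:

```
# Option 1: a group (implies colocation and start order)
crm configure group mysql_service vip drbd_fs mysqld

# Option 2: explicit constraints
crm configure colocation mysqld_with_fs inf: mysqld drbd_fs
crm configure order fs_before_mysqld inf: drbd_fs mysqld
crm configure location mysql_prefers_node8 mysql_service 100: node8
```

A group is simpler; explicit constraints give finer control, e.g. different preference scores per resource.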
Installing pacemaker pulls in corosync as a dependency.
# yum install -y pacemaker
Generate the authentication key:
# corosync-keygen
Key generation reads random data from the entropy pool by default; if there is not enough entropy it blocks, and keystrokes are needed to generate more. Installing an ordinary package with yum also produces entropy faster.
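Whether corosync-keygen will block can be predicted by checking the kernel entropy pool (Linux-specific):

```shell
# How many bits of entropy are currently available; corosync-keygen blocks
# while this is too low, which is why keystrokes (or yum activity) help.
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
echo "available entropy: $entropy bits"
```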
/etc/corosync/authkey
-r-------- 1 rootroot 128 Aug 8 15:38/etc/corosync/authkey
Copy the sample configuration /etc/corosync/corosync.conf.example to /etc/corosync/corosync.conf:
# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
Edit /etc/corosync/corosync.conf:
secauth: off      whether every node is authenticated securely; change to on
threads: 0        number of threads to start; tune to the CPU core count
bindnetaddr: 192.168.1.0   network address to bind to; change to 192.168.21.0
mcastaddr: 239.255.1.1     multicast address; change to 226.194.25.36 [must be within the multicast range]
Both logging methods are enabled here; change
to_syslog: yes    to no
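Pulled together, the modified totem section would look roughly like this (values from the notes above; mcastport and the remaining lines keep the shipped example defaults):

```
totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.21.0
                mcastaddr: 226.194.25.36
                mcastport: 5405
        }
}
```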
amf controls whether corosync enables the OpenAIS AMF mechanism, i.e. whether it stays compatible with that API. Here corosync must run pacemaker as a plugin: before version 2.0 pacemaker ran as a corosync plugin; from 2.0 on it is an independent service. A plugin declaration therefore has to be added.
Add the following:
# declare the pacemaker plugin
service {
        ver: 0
        name: pacemaker
        # use_mgmtd: yes
}
# user the AIS executive runs as
aisexec {
        user: root
        group: root
}
amf {
        mode: disabled
}
Establish mutual SSH trust between the two HA nodes:
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i /root/.ssh/id_rsa.pub nodeXX
Copy the authentication key and configuration file to the other HA node:
# scp /etc/corosync/corosync.conf node88:/etc/corosync/corosync.conf
# scp /etc/corosync/authkey node88:/etc/corosync/authkey
Start corosync:
# service corosync start
Check that the corosync engine started normally:
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Aug 08 15:56:29 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Aug 08 15:56:29 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check that the initial membership notifications went out normally:
# grep TOTEM /var/log/cluster/corosync.log
Aug 08 15:56:29 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Aug 08 15:56:29 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Aug 08 15:56:29 corosync [TOTEM ] The network interface [192.168.21.159] is now up.
Aug 08 15:56:29 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Aug 08 15:56:49 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether any errors occurred during startup. The errors below mean that pacemaker will soon no longer run as a corosync plugin and that cman is the recommended cluster infrastructure; they can be safely ignored here.
# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
Aug 08 15:56:29 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Aug 08 15:56:29 corosync [pcmk  ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
Aug 08 15:56:32 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process cib terminated with signal 6 (pid=32326, core=true)
...
Aug 08 15:59:32 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process crmd exited (pid=1050, rc=201)
Check that pacemaker started normally:
# grep pcmk_startup /var/log/cluster/corosync.log
Aug 08 15:56:29 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Aug 08 15:56:29 corosync [pcmk  ] Logging: Initialized pcmk_startup
Aug 08 15:56:29 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Aug 08 15:56:29 corosync [pcmk  ] info: pcmk_startup: Service: 9
Aug 08 15:56:29 corosync [pcmk  ] info: pcmk_startup: Local hostname: node8
Pacemaker configuration interfaces:
crmsh   used before CentOS 6.4, provided by SUSE
pcs     used from CentOS 6.4 on, provided by Red Hat
crmsh is used here; it depends on the pssh package.
# wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
# cp network\:ha-clustering\:Stable.repo /etc/yum.repos.d/
# yum -y install crmsh
Copy the repo file to the other HA node:
# scp /root/network\:ha-clustering\:Stable.repo node88:/etc/yum.repos.d
In an HA scenario, pay attention to the VIP.