Server list (8C64G):
10.104.142.129
10.104.142.131
10.104.142.132
10.104.142.161
10.104.142.162
10.104.142.163
10.104.142.164
10.104.142.165
10.104.142.167
10.104.142.168
10.104.142.169
10.104.142.170
10.104.142.171
10.104.142.172
10.104.142.173
10.104.142.174
10.104.142.175
10.104.142.176
Deployment steps:
Upload the tools to be deployed to the /opt/soft directory
Upload the startup scripts to the /opt/start directory
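For example, the uploads can be done with scp from whichever machine holds the packages (the source paths below are only illustrative; repeat per target server):
# hypothetical staging paths; adjust to where the packages actually live
scp /path/to/zookeeper-3.4.5.tar.gz root@10.104.142.129:/opt/soft/
scp /path/to/start-zookeeper.sh /path/to/stop-zookeeper.sh root@10.104.142.129:/opt/start/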
Part 1: zookeeper
10.104.142.129 zookeeper-3.4.5.tar.gz
10.104.142.131 zookeeper-3.4.5.tar.gz + apache-tomcat-7.0.42.tar.gz + dubbo-admin-2.5.4-SNAPSHOT.war
10.104.142.132 zookeeper-3.4.5.tar.gz
Install zookeeper
1. Extract zookeeper to the target directory
tar zxvf zookeeper-3.4.5.tar.gz -C /opt/zookeeper
2. Create a zookeeperData directory under /opt/zookeeper
3. cd /opt/zookeeper/zookeeper-3.4.5/conf
cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg:
dataDir=/opt/zookeeper/zookeeperData
server.1=10.104.142.129:2888:3888
server.2=10.104.142.131:2888:3888
server.3=10.104.142.132:2888:3888
4. Enable automatic purging of old snapshots and logs
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
autopurge.purgeInterval sets the purge interval in hours; it must be an integer of 1 or greater. The default is 0, which leaves automatic purging disabled.
autopurge.snapRetainCount works together with the parameter above and sets how many snapshots/logs to retain; the default is 3.
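Putting steps 3 and 4 together, zoo.cfg on each of the three nodes ends up looking roughly like this (tickTime/initLimit/syncLimit/clientPort are the zoo_sample.cfg defaults, kept unchanged):
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/opt/zookeeper/zookeeperData
server.1=10.104.142.129:2888:3888
server.2=10.104.142.131:2888:3888
server.3=10.104.142.132:2888:3888
autopurge.snapRetainCount=3
autopurge.purgeInterval=1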
5. Create a symlink
cd /opt/zookeeper
ln -s zookeeper-3.4.5/ default-zookeeper
6.cd /opt/zookeeper/zookeeperData
vi myid
Write the number 1 into it.
On the other machines write 2 and 3 respectively, matching the server.N entries in zoo.cfg from step 3.
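The myid files can also be written without vi; run the matching line on each node:
echo 1 > /opt/zookeeper/zookeeperData/myid    # on 10.104.142.129
echo 2 > /opt/zookeeper/zookeeperData/myid    # on 10.104.142.131
echo 3 > /opt/zookeeper/zookeeperData/myid    # on 10.104.142.132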
7. Start
./bin/zkServer.sh start
or
cd /opt/start
./start-zookeeper.sh
8. Stop
./bin/zkServer.sh stop
or
cd /opt/start
./stop-zookeeper.sh
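The contents of start-zookeeper.sh / stop-zookeeper.sh are not included in this document; a minimal sketch of what such wrappers could look like, assuming the /opt/zookeeper/default-zookeeper symlink from step 5:
#!/bin/sh
# start-zookeeper.sh (sketch): start the local zookeeper instance
/opt/zookeeper/default-zookeeper/bin/zkServer.sh start
#!/bin/sh
# stop-zookeeper.sh (sketch): stop the local zookeeper instance
/opt/zookeeper/default-zookeeper/bin/zkServer.sh stop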
Install the dubbo admin console
Copy the tomcat tarball to the /opt directory and extract it:
tar zxvf apache-tomcat-7.0.42.tar.gz
Copy dubbo-admin-2.5.4-SNAPSHOT.war into the webapps directory of apache-tomcat-7.0.42 and extract the war into webapps/ROOT.
Go into the WEB-INF directory and edit dubbo.properties:
dubbo.registry.address=zookeeper://10.104.142.131:2181?backup=10.104.142.129:2181,10.104.142.132:2181
dubbo.admin.root.password=root
dubbo.admin.guest.password=root
Start tomcat
To start it, go into the bin directory and run ./startup.sh
View the dubbo admin console:
10.104.142.131:8080/governance/services
Username/password: root/root
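Once tomcat is up, the console can also be sanity-checked from the command line (a sketch, assuming curl is available; dubbo-admin uses HTTP basic auth):
curl -u root:root -s -o /dev/null -w "%{http_code}\n" http://10.104.142.131:8080/governance/services
# a 200 (or a redirect code) means the admin console is serving requests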
Part 2: cassandra
10.104.142.167 apache-cassandra-2.0.6-bin.tar.gz
10.104.142.168 apache-cassandra-2.0.6-bin.tar.gz
10.104.142.169 apache-cassandra-2.0.6-bin.tar.gz
1. Extract cassandra to the /opt/cassandra directory
tar xzvf apache-cassandra-2.0.6-bin.tar.gz -C /opt/cassandra
2. Create a symlink
cd /opt/cassandra
ln -s apache-cassandra-2.0.6/ default-cassandra
3. Edit the configuration files
cd /opt/cassandra/apache-cassandra-2.0.6/conf
Edit cassandra.yaml:
cluster_name: 'LIBRA'
data_file_directories:
- /opt/cassandra/default-cassandra/data
(The entries above must be written in exactly this format.)
commitlog_directory: /opt/cassandra/default-cassandra/commitlog
saved_caches_directory: /opt/cassandra/default-cassandra/saved_caches
seeds: "10.104.142.167,10.104.142.168,10.104.142.169"
listen_address: 10.104.142.167
rpc_address: 0.0.0.0
Copy the configuration to the other two nodes and change listen_address to each node's own IP.
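If the edited file is pushed out with scp, the per-node listen_address change can be scripted; a sketch run from 10.104.142.167 (assumes root ssh access between the nodes):
for ip in 10.104.142.168 10.104.142.169; do
  scp /opt/cassandra/apache-cassandra-2.0.6/conf/cassandra.yaml root@$ip:/opt/cassandra/apache-cassandra-2.0.6/conf/
  ssh root@$ip "sed -i 's/^listen_address:.*/listen_address: $ip/' /opt/cassandra/apache-cassandra-2.0.6/conf/cassandra.yaml"
done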
Create the cassandra keyspace and tables (on 167):
Go into the cassandra bin directory:
$bin/cassandra-cli -host 10.104.142.167 -port 9160
Create the keyspace: create keyspace libra_sec;
Create the tables: use libra_sec;
create column family statements;
create column family new_middleware_config;
create column family new_config_data;
create column family statistics_data;
// create column family statistics_data_history;
// statistics_data_history is an SCF (super column family)
create column family statistics_data_history with column_type=Super;
create column family window_data;
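The same schema can also be applied non-interactively: put the statements above (the create keyspace line, use libra_sec;, and the create column family lines) into a file, e.g. a hypothetical schema.cli, and run:
bin/cassandra-cli -host 10.104.142.167 -port 9160 -f schema.cli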
=============================================================================================
Update 2016-03-22
Reason: the table had been created incorrectly; statistics_data_history is an SCF (super column family).
The exception message was:
why:supercolumn parameter is invalid for standard CF
Go into the cassandra bin directory:
$bin/cassandra-cli -host 10.104.142.168 -port 9160
use libra_sec;
First drop it: drop column family statistics_data_history;
Then recreate it:
create column family statistics_data_history with column_type=Super;
=============================================================================================
Use the ASSUME command to set which data format the client uses for the rest of the session:
ASSUME window_data KEYS AS utf8;
ASSUME window_data COMPARATOR AS utf8;
ASSUME window_data VALIDATOR AS utf8;
4. Change the log4j log output directory
cd /opt/cassandra/apache-cassandra-2.0.6/conf
Edit log4j-server.properties:
log4j.appender.R.File=/opt/cassandra/default-cassandra/system.log
5. Start
$ bin/cassandra -f
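Note that -f keeps cassandra in the foreground; a sketch of starting it in the background and verifying the ring once all three nodes are up:
bin/cassandra                          # without -f, the process detaches into the background
bin/nodetool -h 10.104.142.167 status  # all three nodes should be reported as UN (Up/Normal)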
Part 3: storm
10.104.142.161 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.162 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.163 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.164 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.165 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.170 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.171 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.172 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.173 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.174 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.175 (supervisor) apache-storm-0.9.1-incubating.zip
10.104.142.176 (nimbus) apache-storm-0.9.1-incubating.zip
Install storm
1. Extract to the target directory
unzip apache-storm-0.9.1-incubating.zip -d /opt/storm
2. Create a symlink
cd /opt/storm
ln -s apache-storm-0.9.1-incubating/ default-storm
3. Configure the storm cluster
cd /opt/storm/default-storm/conf
Edit storm.yaml:
storm.zookeeper.servers:
  - "10.104.142.129"
  - "10.104.142.131"
  - "10.104.142.132"
nimbus.host: "10.104.142.176"
ui.port: 8081
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
worker.childopts: "-Xmx4096m"
4. Start (a sketch of these scripts follows below)
cd /opt/start
./start-Storm-Nimbus.sh (only needed on the 176 management node)
./start-Storm-UI.sh (only needed on the 176 management node)
./start-Storm-Supervisor.sh
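The start-Storm-*.sh scripts are not reproduced in this document; a minimal sketch of what they might contain, assuming the /opt/storm/default-storm symlink from step 2:
#!/bin/sh
# start-Storm-Nimbus.sh (sketch): run only on 10.104.142.176
nohup /opt/storm/default-storm/bin/storm nimbus > /dev/null 2>&1 &
#!/bin/sh
# start-Storm-UI.sh (sketch): run only on 10.104.142.176
nohup /opt/storm/default-storm/bin/storm ui > /dev/null 2>&1 &
#!/bin/sh
# start-Storm-Supervisor.sh (sketch): run on every supervisor node
nohup /opt/storm/default-storm/bin/storm supervisor > /dev/null 2>&1 &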
5. View the storm UI:
10.104.142.176:8081
Part 4: Start the libra project
Upload libra-sec-core-0.0.1.jar to the /opt/soft directory on 10.104.142.176.
Start command: /opt/storm/default-storm/bin/storm jar /opt/soft/libra-sec-core-0.0.1.jar com.suning.libra.Main LIBRA-SEC -c nimbus.host=localhost
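After submission, the topology can be checked from the nimbus node (LIBRA-SEC is the topology name passed on the command line above):
/opt/storm/default-storm/bin/storm list    # LIBRA-SEC should be listed with status ACTIVE
It should also show up in the storm UI at 10.104.142.176:8081.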
dubbo admin console: http://10.104.255.231:8080/
storm UI: http://10.104.255.240:8081/
/***************************************************************************************************************************/
cassandra configuration format issue
zk reconnect / session expired exception:
The zk server's disk was full; configure zk to clean up its logs automatically (the autopurge settings in the zookeeper section).
Exception:
java.nio.channels.UnresolvedAddressException
We suspected the address was wrong. It turned out that the netty connection was not using the server's IP but its hostname, because a server name was configured in /etc/sysconfig/network while /etc/hosts had no entry mapping that hostname to an IP, so the program could not connect. After adding the mapping to /etc/hosts and rerunning the program, everything worked and the problem was solved.
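In other words, each server's hostname (as configured in /etc/sysconfig/network) needs a matching entry in /etc/hosts, for example (the hostname below is only a placeholder):
10.104.142.176   libra-node-176    # map the configured hostname to the node's own IP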
Startup exception:
java.io.FileNotFoundException: File 'storm-local/supervisor/stormdist/LIBRA-SEC-2-1453428394/stormconf.ser' does not exist
Delete the files under the /opt/storm/default-storm/conf/storm-local directory.
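A sketch of the cleanup on an affected supervisor node (stop the supervisor process first, so that it re-downloads the topology code from nimbus when it comes back up):
rm -rf /opt/storm/default-storm/conf/storm-local/*   # clear the stale local state
cd /opt/start && ./start-Storm-Supervisor.sh         # restart the supervisor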