MySQL notes
--------------
mydumper filtered backup
mydumper -u root --regex '^(?!(new_account\.Edm))' -S /usr/local/mysql/tmp/3306/mysql.sock -o /home/mysql/mysqltmp/ -p 'dbPwd1)(*'
--------------
Initialization
5.6:
mysql_install_db --defaults-file=/usr/local/mysql/data/3308/my.cnf --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data/3308 --skip-name-resolve --user=mysql > /dev/null
5.7:
mysqld --initialize --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data/3308 --defaults-file=/usr/local/mysql/data/3308/my.cnf --user=mysql
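After a 5.7 --initialize, mysqld writes a random temporary root password into the error log. A minimal sketch, assuming the error log sits under the instance datadir (check log-error in that my.cnf if not) and that the socket follows the layout used elsewhere in these notes; the new password is only an example:
grep 'temporary password' /usr/local/mysql/data/3308/*.err
mysql -uroot -p -S /usr/local/mysql/tmp/3308/mysql.sock
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassword1!';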
--------------
JOIN and exclusion filters
select a.name,b.name from app a join user b on a.creator_id = b.id where b.name!='SYSTEM';
select id,db,user,host,command,time,state,info from information_schema.processlist where command!='Sleep' order by time desc;
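A hedged helper built on the processlist query above (the 60-second threshold is illustrative): generate KILL statements for long-running non-Sleep sessions, review them, then paste them back into the client.
select concat('KILL ',id,';') from information_schema.processlist where command!='Sleep' and time>60 order by time desc;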
--------------
Different versions:---------
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/mysql-5.7.28-linux-glibc2.12-x86_64.tar.gz
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/mysql-5.5.62-linux-glibc2.12-x86_64.tar.gz
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/mysql-5.6.46-linux-glibc2.12-x86_64.tar.gz
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/mysql-5.7.29-linux-glibc2.12-x86_64.tar.gz
Official mysql.com version:
wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.29-linux-glibc2.12-x86_64.tar.gz
Config file backup:------
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/my_bak.sh;mv my_bak.sh /root/scripts;echo '15 1 * * * (source /etc/profile;cd /root/scripts; /bin/sh my_bak.sh >/dev/null 2>&1 &)' >>/var/spool/cron/root;mkdir /home/mysql/backup/conf/
Plugin installation:------
INSTALL PLUGIN validate_password SONAME 'validate_password.so';
INSTALL PLUGIN CONNECTION_CONTROL SONAME 'connection_control.so';
INSTALL PLUGIN CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS SONAME 'connection_control.so';
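A hedged follow-up (the threshold values are illustrative, not taken from these notes): confirm the plugins are active and tune the failed-login throttling and password policy.
SHOW PLUGINS;
SET GLOBAL connection_control_failed_connections_threshold = 5;
SET GLOBAL connection_control_min_connection_delay = 1000;   -- milliseconds
SET GLOBAL validate_password_policy = MEDIUM;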
-----------
Switching a database back across IDCs (cross-IDC switch-back)
11.3.109.171:3308(db780) udc_info
select id, business_name,cluster_switch_flag from failover_business_t where business_name like '%账号API%';
Both the target-side proxy and the source-side proxy need to be updated
-----------
If there is no master.info file, the replication password can be looked up in the mysql schema:
select * from mysql.slave_master_info\G
-----------
Database upgrade:
wget --dns-timeout=2 --connect-timeout=2 --user=guest --password=gFkh21ezY1oFVcG http://dbinfo.uc.local:1221/weijl/binary_mysql_upgrade_new.sh
https://www.percona.com/downloads/Percona-Server-5.7/Percona-Server-5.7.29-32/binary/redhat/6/x86_64/Percona-Server-5.7.29-32-r56bce88-el6-x86_64-bundle.tar
./mysql_upgrade -uroot -p -S /usr/local/mysql/tmp/3303/mysql.sock --skip-write-binlog
Because of the PATH environment variable, run it with a leading "./", otherwise the mysql_upgrade from the original installation directory gets picked up
-----------
Cross-IDC switch command dispatch flow: ucmha_deployer platform on Guangzhou db_pf01/02 -> controller in each IDC
-----------
Cross-IDC switchover:
1. Log in to the master/slave instances on both the source and target sides; confirm the slave relationships and that the data is consistent;
2. In ucmha, confirm the candidate node for the sub-business has been selected and that the configuration on both sides matches;
3. On the sub-business, click "cross-IDC switch";
4. In "Monitoring → publish/switch history", find the slave's binlog pos at the moment of the switch and build the CHANGE MASTER statement from it;
5. Stop the slave on the new master and rebuild the slave relationships on the other three slaves;
6. In ucmha, choose "reset" first, then "role reset"; do the source IDC first, then the target IDC.
Lessons learned:
When choosing the candidate node, ticking ucmha's connection pool made the role reset fail; unticking the connection pool and re-publishing fixed it.
-----------
How to route reads/writes from services inside the Group network (弹内) back to the UC IDC: pick two machines each in the UC transit-zone IDC and the Group-side IDC, deploy tcpha on them, and chain them together like pipes;
tcpha documentation:
UC transit-zone machines:
dc058 : 11.156.10.248
dc061 : 11.156.10.215
dc062 : 11.156.10.249
Group side (弹内): corona-su18
11.180.67.180
11.180.67.114
11.180.67.85
Configure the details in tcpha.conf, then start the proxy port with: ./tcpha.sh start proxy10350
-----------
To register a database that is not yet on TDDL: the target application must first be associated with an appgroup in dbcontrol, otherwise it cannot be selected during TDDL registration; one app can belong to multiple appgroups
导航在线页服务 导航在线折叠栏国内版
la4导航在线折叠栏国际版[mysql] 导航在线折叠栏国际版 olps(java)
-----------
Moving data into the Group network with x-gate:
x-gate:http://walmart.alibaba-inc.com/create_job
# Filling in the template: the Group-side DB version can only be 5.6 or above, and no triggers, functions or stored procedures can be created. If one database instance contains several schemas, filter out the unused ones, otherwise clicking "Start" in step 3 shows "mapping not complete".
# All quotas should preferably be integers, and names must not be too long or they become invalid; the longest so far is: wdj_account_db610
# The source database must not contain functions, foreign keys, triggers, stored procedures, views, etc.
# Source cluster name|number of replicas|QPS|TPS|IP:PORT|reverse-flow IP:PORT (use the forward IP:PORT if there is no reverse task)|disk size|DB version|storage product|dba|DTS blacklist (exact schema names, ';'-separated)|TDDL blacklist (regex, ';'-separated)|standardized schema names (0: no, 1: yes)|instance spec
wdj_api_db624|1|1000|500|11.5.64.212:3303|11.5.64.180:3303|400|5.6|X-Cluster|tangjia.ln|db_monitor||1
wdj_api_new_db624 UCBU_NEW_ACCOUNT_LOG_RTWF9_APP kuka_2c405e5cc1cb4135bcaff6d22fa71e76
wdj_db610|1|2000|1000|11.5.64.205:3306|11.5.64.173:3306|500|5.6|X-Cluster|tangjia.ln|db_monitor;box;percona||1
wdj_db610 UCBU_NEW_ACCOUNT_7WQHF_APP kuka_01b0e9422f1a4955817e4d37297dc6ad
wdj_db610
account[mysql]
db609:11.5.64.173:3306
db610:11.5.64.205:3306
11.180.67.180:10351
11.180.67.114:10351
11.180.67.85:10351
TDDL: UCBU_NEW_ACCOUNT_7WQHF_APP
instance_id:kuka_01b0e9422f1a4955817e4d37297dc6ad
wdj_api_db624
wdj_account_api_17118046 账号API[mysql]
db623:11.5.64.180:3303
db624:11.5.64.212:3303
11.180.67.180:10350
11.180.67.114:10350
11.180.67.85:10350
TDDL: UCBU_NEW_ACCOUNT_LOG_RTWF9_APP
instance_id:kuka_2c405e5cc1cb4135bcaff6d22fa71e76
# DTS sync account; run on the source master
GRANT ALL PRIVILEGES ON *.* TO 'idb_dts_sync'@'%' IDENTIFIED BY 'dXdWC1SceTrCd3Cc';
# The slave's binlog format must be changed to ROW, then the slave threads restarted so the new format takes effect.
-----------
Preventing online DDL from locking the table and blocking commits:
Use the openark-kit-196 tool,
on root@usa-db138, path: /root/scripts/openark-kit-196/alter.sh
Mirror table:
/usr/bin/python scripts/oak-online-alter-table -u root -p 'dbPwd1)(*' -S /usr/local/mysql/tmp/3301/mysql.sock -d 9apps -t na_package -g na_package_20191231_create -a "ADD INDEX create_time_index(create_time)" --sleep=500 --skip-delete-pass > na_package_create.log
# The step above copies the data into a mirror table; afterwards rename it back over the original table:
rename table na_package to na_package_bak_20191231, na_package_20191231_create to na_package;
Verification method (the two queries below can be combined; see the sketch after them):
select sum(crc32(concat(ifnull(LM_WH_TRANS_ID,'NULL'),ifnull(DASOURCE_ID,'NULL'),ifnull(DATA_SOURCES,'NULL'),ifnull(TRANSACTION_NO,'NULL')))) as sum from lm_wh_trans union all select sum(crc32(concat(ifnull(LM_WH_TRANS_ID,'NULL'),ifnull(DASOURCE_ID,'NULL'),ifnull(DATA_SOURCES,'NULL'),ifnull(TRANSACTION_NO,'NULL')))) as sum from lm_wh_trans_online_20150409;
select group_concat('ifnull(',column_name,',\'NULL\')') from information_schema.columns where table_schema='pp_activity' and table_name ='lucky_record' ;
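A minimal sketch of combining the two queries above, using the na_package example from this section (schema, mirror-table name, socket and credentials are taken from the commands above; run it before the rename, and raise group_concat_max_len first for very wide tables):
COLS=$(mysql -uroot -p'dbPwd1)(*' -S /usr/local/mysql/tmp/3301/mysql.sock -Nse "select group_concat('ifnull(',column_name,',\'NULL\')') from information_schema.columns where table_schema='9apps' and table_name='na_package'")
mysql -uroot -p'dbPwd1)(*' -S /usr/local/mysql/tmp/3301/mysql.sock -e "select sum(crc32(concat($COLS))) as s from 9apps.na_package union all select sum(crc32(concat($COLS))) as s from 9apps.na_package_20191231_create;"
# the two sums should match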
# Trigger-related
show triggers; # list all triggers
drop trigger trigger_name; # drop a trigger
select * from information_schema.`triggers`\G # inspect triggers
# Filter out Sleep
pager grep -v Sleep
-----------
Add columns:
alter table app_config_stat add column `flowlimit_lower` bigint(20) NOT NULL DEFAULT '0',add column `flowlimit_upper` bigint(20) NOT NULL DEFAULT '0';
-----------
Schema export
USE information_schema;
SELECT
T.TABLE_SCHEMA AS '数据库名称',
T.TABLE_NAME AS '表名',
T.TABLE_TYPE AS '表类型',
T. ENGINE AS '数据库引擎',
C.ORDINAL_POSITION AS '字段编号',
C.COLUMN_NAME AS '字段名',
C.COLUMN_TYPE AS '数据类型',
C.IS_NULLABLE AS '允许为空',
C.COLUMN_KEY AS '键类型',
C.EXTRA AS '自增属性',
C.CHARACTER_SET_NAME AS '编码名称',
C.COLUMN_COMMENT AS '字段说明'
FROM
COLUMNS C
INNER JOIN TABLES T ON C.TABLE_SCHEMA = T.TABLE_SCHEMA
AND C.TABLE_NAME = T.TABLE_NAME
WHERE T.TABLE_SCHEMA = '9apps' and T.TABLE_NAME = 'na_language' into outfile '/tmp/na_language.txt';
my3306 -e "USE information_schema; SELECT T.TABLE_SCHEMA AS '数据库名称',T.TABLE_NAME AS '表名',T.TABLE_TYPE AS '表类型',T. ENGINE AS '数据库引擎',C.ORDINAL_POSITION AS '字段编号',C.COLUMN_NAME AS '字段名',C.COLUMN_TYPE AS '数据类型',C.IS_NULLABLE AS '允许为空',C.COLUMN_KEY AS '键类型',C.EXTRA AS '自增属性',C.CHARACTER_SET_NAME AS '编码名称',C.COLUMN_COMMENT AS '字段说明' FROM COLUMNS C INNER JOIN TABLES T ON C.TABLE_SCHEMA = T.TABLE_SCHEMA AND C.TABLE_NAME = T.TABLE_NAME WHERE T.TABLE_SCHEMA = '9apps' and T.TABLE_NAME = 'na_package_tag'" > na_package_tag.txt
na_language
na_country
na_package_tag
-----------
Main counterpart businesses: international app distribution, domestic app distribution, international Toutiao (国际头条)
-----------
Add a column: ALTER TABLE t_metaq_stats_log ADD tps bigint(20) DEFAULT NULL
Drop a column: ALTER TABLE <table> DROP <column>
Modify a column (e.g. change its length/type): ALTER TABLE <table> MODIFY COLUMN <column> <type> (concrete examples below)
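Hedged, concrete examples of the drop and modify forms, reusing the t_metaq_stats_log table above (type/length and default are illustrative, and the two statements are alternatives, not a sequence):
ALTER TABLE t_metaq_stats_log DROP COLUMN tps;
ALTER TABLE t_metaq_stats_log MODIFY COLUMN tps bigint(20) NOT NULL DEFAULT 0;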
-----------
For an application that already has a TDDL instance, to create an instance in another IDC: file a ticket on dbplay, then edit the newly created business name in dbcontrol; the precondition is that the schemas, tables and grants are identical in both places. Then add it on TDDL.
The business names in dbcontrol can differ; sub-businesses are linked through the appgroup. As long as the appgroup matches, TDDL associates them automatically once the ucmha proxy has been created.
The association chain is: ucmha proxy → dbcontrol business name; dbcontrol appgroup → TDDL business.
If the app was deleted in dbcontrol, then:
1. Dump the table structure from the database in another region: mysqldump -u root -p --socket=/usr/local/mysql/tmp/3302/mysql.sock -d mvq|gzip > mvq.sql.gz ;
2. Sync over the schema/table grants from mysql: "show grants for mvq@'11.%';" ;
3. Finally, in the other IDC's ucmha under "database instance → user list", find the corresponding user and password and sync them to the new instance, otherwise the TDDL registration will fail validation.
4. If there are logical tables, click "generate rules from logical tables" in the logical-table panel; if there are none, copy the rules over from the old cluster. Publish again before validation can run;
-----------
When setting up cross-IDC master-slave replication, the OS mysql user password must be unified: MzMQ6oSxl1L2yY0iINxZ
otherwise TDDL creation fails
-----------
Archiving and decommissioning: confirm the backup data has been uploaded
漫画支付[mysql] decommission
sz1.backup.uc.local
dump_11.5.68.235_3304_20191024_31928.sql.tar
-----------
When dbplay deploys a cluster, you can log in and check the UDC publish logs to pin down the cause of errors
db_pf01 11.3.76.107
db_pf02 11.3.78.246
Deployment directory
/home/udc/local/uae_agent/apps/dbplay
-----------
Filter keywords
pager egrep -v 'Sleep|coordinator|replicator'
show processlist;
Check database status: status
-----------
System initialization
1. Reinstall the cluster; make sure the two machines are not in the same rack
2. v42 machines need LVM set up on their disks
3. On a new cluster, run check_dbserver.sh to install the common tools
4. Install the ucmha client
5. Add the bastion-host mysql password so UDC can connect: MzMQ6oSxl1L2yY0iINxZ
6. Install the database instances with install_mysql_binary_muti_new.sh
7. Install agproxy so the server can be discovered by ucmha
8. In ucmha, go to Automated Deployment → Server Management, find the IDC and click Sync to pull in the new machines
9. In ucmha, go to Automated Deployment → Distribution Management, find the proxy group and distribute
10. Publish under ucmha's Business Management
v42 model, special disk handling: http://ok.ucweb.local/pages/viewpage.action?pageId=38928620
# create
sudo pvcreate /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1
sudo vgcreate vg1 /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1
sudo lvcreate -l 3662104 vg1 -n lvm1
sudo mkfs.ext4 -L /data1 /dev/vg1/lvm1
# remove
lvdisplay
echo -e "y\n"|lvremove /dev/vgdata/volume1
vgdisplay
echo -e "y\n"|vgremove vgdata
pvdisplay
echo -e "y\n"|pvremove /dev/nvme3n1p1 /dev/nvme1n1p1
# Label the disks (this formats them outright)
mkfs.ext4 -L /data2 /dev/nvme1n1p1
mkfs.ext4 -L /data3 /dev/nvme2n1p1
# Write the mount entries to the config file
echo "LABEL=/data1 /data1 ext4 defaults,noatime,nodiratime 0 0" >> /etc/fstab
echo "LABEL=/data3 /data3 ext4 defaults,noatime,nodiratime 0 0" >> /etc/fstab
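A small follow-up sketch (assuming the /data* mount points do not exist yet): create the directories, mount everything from fstab, and confirm.
mkdir -p /data1 /data3
mount -a
df -h | grep -E '/data[0-9]'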
-----------
Creating a new slave proxy
1. Export the existing data from one of the online slaves: first stop slave monitoring in ucmha, then stop the slave, copy the data, and only restart the slave once the copy has finished
2. Copy the data from the slave to the target cluster; pigz can be used for multi-threaded remote compressed copy
3. Rebuild replication on the new slave
4. Add the new proxy in ucmha
# Example from the nineapps-advanced-search application
A:11.174.115.148
B:usa-db191 11.7.113.238
B:usa-db190 11.7.110.183
3306
If permission problems come up, the steps can be done on machine B first;
Data sync
# Decompress directly through a FIFO
# On machine A, cd into the parent directory of the files/directory to transfer; -p sets the number of processes
screen -S 9apps-3306-1126
tar -cf - 3301 | pigz -2 -p 6 -c | ssh -p 9922 [email protected] "cat > /home1/mysql/mysqldata/3306/scp_gzip_pipe"
# screen usage
screen -S yourname -> create a new session named yourname
screen -ls -> list all current sessions
screen -r yourname -> reattach to the yourname session
screen -d yourname -> detach a session remotely
screen -d -r yourname -> detach the yourname session (wherever it is attached) and reattach to it here
# Leaving screen without stopping the processes:
ctrl + a, then d
C-a d -> detach: temporarily leave the current session, pushing the whole screen session (possibly several windows) into the background and returning to the pre-screen shell. Every process in every window (foreground or background) keeps running, even if you log out.
C-a z -> suspend screen into the background; use the shell's fg command to return to it.
# On machine B
mknod scp_gzip_pipe p
mknod tar_pipe p
chmod 777 scp_gzip_pipe tar_pipe
nohup unpigz -2 -p 5 -c < scp_gzip_pipe > tar_pipe &
nohup tar -xf - < tar_pipe &
If unpigz cannot be found, just download the package and extract it
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/pigz.tar.gz
Download this package, extract it, and put the binary under /usr/bin/
# Start the database, check where the slave stopped in master.info, then reset slave, then change master
# -- Rebuild the replication relationship
CHANGE MASTER TO master_host='11.174.115.181',
master_user='myrep', master_password='rB9BFN2xM4eRdaD',
master_port=3301, master_log_file='mysql-bin.007298', master_log_pos=50071336;start slave;show slave status\G
-----------
ucmha error 1226: the monitoring account has hit its per-hour request limit; log in to the alerting master and raise the limits:
show grants for ucmha_monitor@'11.%';
GRANT REPLICATION CLIENT ON *.* TO 'ucmha_monitor'@'11.%' WITH MAX_QUERIES_PER_HOUR 1440000 MAX_UPDATES_PER_HOUR 14400 MAX_CONNECTIONS_PER_HOUR 72000 MAX_USER_CONNECTIONS 16;
Just append another zero;
-----------
Directory move procedure
Master-slave switchover
1. Notify R&D; pause ucmha and UDC monitoring
2. Stop the slave
3. Move the slave's data and log directories (mv, then ln -s)
4. Start the slave
5. In ucmha, switch the proxy's master/slave roles so traffic goes to the slave;
6. Stop the master
7. Move the master's data and log directories
8. Start the master
9. Set up replication so the old master follows the slave
10. In ucmha, switch traffic back to the master
11. Notify R&D
usa-db199 11.7.110.182
usa-db200 11.7.110.214
-----------
Changing grants for a user whose plaintext password is unknown:
CREATE USER 'ro_udcweb'@'11.5.64.237' IDENTIFIED BY PASSWORD '*C2C8444740559A4A1207F9E205D1DC6A28BFCAE5';
update user set Password='*4E543B5C7D2DFEC3103BABBC50E5175570BEAA05' where User='ro_udcweb';
grant all privileges on suggestion.* to suggestion@'10.%';
flush privileges;
-----------
Enable the general log to inspect incoming requests (remember to turn it off afterwards; see below):
SHOW GLOBAL VARIABLES like '%general_log%';
set global general_log="on";
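A small follow-up using standard variables: locate the log file before digging in, and switch the general log off once done, since it grows quickly.
SHOW GLOBAL VARIABLES like 'general_log_file';
set global general_log="off";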
-----------
Changing the character set (connectionInitSql=set names utf8mb4)
Changes are needed in three places:
1. Add to the instance config file "/usr/local/mysql/data/3304/my.cnf": character-set-server = utf8mb4
2. Change the "%char%" server variables to utf8mb4 via SET GLOBAL (see the sketch below)
3. Append to TDDL's "connection properties": connectionInitSql=set names utf8mb4 # requires a reconnect after this change
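A hedged sketch of step 2 on the running instance (my.cnf in step 1 covers restarts; the collation choice is an assumption):
SET GLOBAL character_set_server = utf8mb4;
SET GLOBAL collation_server = utf8mb4_general_ci;
SHOW GLOBAL VARIABLES like '%char%';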
-----------
Changing the binlog format online:
set global binlog_format=ROW;
The slave threads must be restarted for the new format to take effect (full sequence below)
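A minimal end-to-end sketch on the slave, using standard statements (run it in a quiet window):
STOP SLAVE;
SET GLOBAL binlog_format = 'ROW';
START SLAVE;
SHOW GLOBAL VARIABLES LIKE 'binlog_format';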
-----------
date -s "2019-09-24 21:30:00";date;hwclock -w
Set the system time and write it to the hardware clock
-----------
backup
mysqldump -u root -p --socket=/usr/local/mysql/tmp/3307/mysql.sock zsearch_console |gzip > zsearch_console.sql.gz
mydumper -u root --regex '^(?!(new_account\.Edm))' -S /usr/local/mysql/tmp/3306/mysql.sock -o /home/mysql/mysqltmp/ -p 'dbPwd1)(*'
restore
my3307 zsearch_console < zsearch_console.sql
myloader -u root -p 'dbPwd1)(*' -S /usr/local/mysql/tmp/3307/mysql.sock -o -t 12 -d ./dump_11.5.65.24_3306_20200204_27185
# -o overwrites (drops and re-creates) tables that already exist
# The backup directory contains a metadata file: the upper block records the binlog position of the dumped instance itself, the lower block records the position on that instance's master (an example of using them follows the sample output below). Example:
[root@db610 3306]# cat metadata
Started dump at: 2020-04-29 11:40:13
SHOW MASTER STATUS:
Log: mysql-bin.000949
Pos: 505244935
GTID:
SHOW SLAVE STATUS:
Host: 11.5.64.205
Log: mysql-bin.001356
Pos: 40314665
GTID:
Finished dump at: 2020-04-29 11:49:46
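A hedged example of using this metadata to seed a new replica from the dump: after myloader finishes, point the restored instance at the original master using the SHOW SLAVE STATUS block above (host/file/pos come from that sample; the port and the myrep credentials are assumptions carried over from the replication section of these notes).
CHANGE MASTER TO master_host='11.5.64.205', master_user='myrep', master_password='wFarfUFpMFt697JV',
master_port=3306, master_log_file='mysql-bin.001356', master_log_pos=40314665;
start slave;show slave status\G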
-----------
rename database
1. create database qiqu_bak
2. get the table list:
mysql -uroot -p -S /usr/local/mysql/tmp/3301/mysql.sock -Nse "select table_name from information_schema.TABLES where TABLE_SCHEMA='qiqu'" > table.list
3. rename each table in a loop:
for i in `cat table.list`;do echo $i;mysql -uroot --password='dbPwd1)(*' -S /usr/local/mysql/tmp/3301/mysql.sock -e "rename table qiqu.$i to qiqu_bak.$i;";done
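An optional check (a small sketch): confirm the old schema is empty once the loop has finished.
mysql -uroot --password='dbPwd1)(*' -S /usr/local/mysql/tmp/3301/mysql.sock -Nse "select count(*) from information_schema.tables where table_schema='qiqu'"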
-----------
dbcontrol daily-environment database: 11.3.138.78:3306 pw: mysqlserver_t
The MySQL daily environment is accessed via uae2:
http://uae-2.ep.ucweb.local/#/project/333/services?service=50553
ucmha is the automatic master-slave failover proxy for MySQL;
TDDL is the traffic router for application connections; it handles sharding (splitting databases and tables) and data-partitioning policy management;
For a master-slave pair, stop the slave first, then the master.
If corona was not set up when the TDDL was requested, add the TDDL app name on the UDC platform and publish; the corresponding username/password and other connection info will then be generated
----------------------------------------------------------------------------------------------------------------------------
Resource decommission steps: 0. log in to MySQL and check connections; 1. stop monitoring; 2. stop ucmha-agent to block new connections; 3. wait a night, then stop the mysqld processes;
----------------------------------------------------------------------------------------------------------------------------
-1. Log in to the master and check connections
select substring_index(host,':',1) as b,user,db,COMMAND,count(*) from information_schema.PROCESSLIST group by b,db,COMMAND;
0. In UDC, unpublish monitoring for the corresponding IPs
1. In ucmha's Service Management, stop the corresponding agent;
2. Delete the proxy instances
3. Delete the instance info
4. Take the proxy group config offline; the offline action is propagated to diamond and other dependent systems, while delete is a forced cleanup, so the offline operation is preferred;
5. In dbcontrol's business list, find the business and delete the sub-business. Do not delete the sub-business config manually; ucmha cleans it up automatically;
6. On TDDL, remove the backend instances and publish;
7. To decommission the TDDL itself, take the corresponding rules offline in dbcontrol;
8. Log in to the servers and stop the databases manually
----------------------------------------------------------------------------------------------------------------------------
Steps to create a new instance:
----------------------------------------------------------------------------------------------------------------------------
--
Summary:
1. Read the ticket and confirm the instance size;
2. Pick machines, adjust the pool_size setting, start the service, configure replication, create the ucmha monitoring schema/tables and grants, add the crontab entries; make sure the data and log directories are linked onto the large disk;
3. Configure ucmha and publish;
4. Configure TDDL;
5. Publish ucmha again
--
Detailed steps:
1. Claim the ticket in UDC; check in dbcontrol and ucmha whether the newly requested application name is already in use, and estimate the actual resource usage from the requirements the business submitted;
2. In UDC, set the actual shard count and instance count; after submitting, the corresponding two proxy host groups appear in ucmha;
3. Configure the proxy groups: one instance in the ticket maps to one proxy group; a group contains one master and one slave, with all traffic on the master and backups on the slave. Read/write splitting needs one more slave to share the read traffic;
4. Pick the MySQL servers and decide master/slave roles on the UDC platform; log in to the servers and check which ports are free. Masters usually go on one machine and the slaves on another, to avoid co-locating them, since slave backups affect performance;
5. Open the config for that port and adjust the instance's InnoDB buffer pool memory, sized from the expected QPS; currently 20k QPS gets 15 GB; /etc/my.cnf
6. Start the MySQL master and slave instances;
7. Enable the monitoring-collection crontab entries under root, and the backup crontab entry under the mysql user on the slave;
8. Create the replication user and configure master-slave replication;
9. On the master, create the ucmha metadata-collection objects; ucmha has its own monitoring mechanism and needs the corresponding metadata schema created on the master instance; replication copies the setup to the slave automatically;
10. Add the master/slave info of the backend MySQL instances to the proxy;
11. Configure the proxy info and annotate it with the business name;
12. Publish the configuration.
13. Back in UDC, click Implement; here you choose between TDDL client access and adding a corona proxy. TDDL provides a Java interface; corona is the access path for non-Java services that can only connect JDBC-style;
14. This step adds a new user to the ucmha proxy, so ucmha must be published one more time;
15. Back in UDC, click Verify TDDL; once everything checks out, the instance creation is complete.
----------------------------------------------------------------------------------------------------------------------------
Initialization
----------------------------------------------------------------------------------------------------------------------------
-- Add an instance
Normally there are already commented-out instance blocks; just uncomment one. If there are none, do not add one by hand; run the script to configure it automatically:
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/mysql/add_instance.sh
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/mysql/my.cnf
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/mysql/install_mysql_binary_muti_new.sh
When instances are already running:
Add a single instance
sh add_instance.sh --conf=/usr/local/mysql/data/3306/my.cnf --portlist=3308
Add multiple instances
sh add_instance.sh --conf=/usr/local/mysql/data/3306/my.cnf --portlist=3301,3302,3303,3304,3305,3308,3309,3310
Initializing an empty machine:
sh install_mysql_binary_muti_new.sh
Script directory: /usr/local/src
Quick-login aliases:
[root@db768 src]# which my3305
alias my3314='mysql -uroot -p -S /usr/local/mysql/tmp/3314/mysql.sock'
/usr/local/mysql/bin/mysql
[root@db768 src]# alias my3307='mysql -uroot -p -S /usr/local/mysql/tmp/3307/mysql.sock'
-- New machine checks
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://dbinfo.uc.local:1221/scripts/check_dbserver.sh
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/mysql/check_dbserver.sh
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://dbinfo.uc.local:1221/collect_info_dbsize.sh
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/collect_info_dbsize.sh
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/ucmha/publish_ctrl.sh
-- Upgrade the controller
sh publish_ctrl.sh xxxxxxxoriVersion v3.3-release-M6
or
cd /root/scripts && rm -f check_dbserver.sh > /dev/null 2>&1;wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://dbinfo.uc.local:1221/scripts/check_dbserver.sh > /dev/null 2>&1 && sh check_dbserver.sh
wget https://downloads.mysql.com/archives/get/file/mysql-5.6.43-linux-glibc2.12-x86_64.tar.gz
wget https://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.46-linux-glibc2.12-x86_64.tar.gz
wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.28-linux-glibc2.12-x86_64.tar.gz
----------------------------------------------------------------------------------------------------------------------------
Master-slave replication configuration
----------------------------------------------------------------------------------------------------------------------------
-- Create the replication user
GRANT REPLICATION SLAVE ON *.* TO 'myrep'@'10.%' IDENTIFIED BY 'wFarfUFpMFt697JV';
GRANT REPLICATION SLAVE ON *.* TO 'myrep'@'11.%' IDENTIFIED BY 'wFarfUFpMFt697JV';
GRANT REPLICATION SLAVE ON *.* TO 'myrep'@'172.%' IDENTIFIED BY 'wFarfUFpMFt697JV';
-- Initialize the replication relationship
CHANGE MASTER TO master_host='11.6.118.228',
master_user='myrep', master_password='wFarfUFpMFt697JV',
master_port=3301, master_log_file='mysql-bin.000004', master_log_pos=727;start slave;show slave status\G
-- Create a read-only binlog account
grant REPLICATION CLIENT,REPLICATION SLAVE,SELECT on *.* to binlog_read@'11.%' identified by 'vOl4hRAA0e7HrIf9';
-- GRANT REPLICATION SLAVE ON *.* TO 'myrep'@'47.88.212.%' IDENTIFIED BY 'wFarfUFpMFt697JV';
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000004 | 524 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
start slave until master_log_file='mysql-bin.000010',master_log_pos=210916378;
reset slave;
----------------------------------------------------------------------------------------------------------------------------
Routine operations
----------------------------------------------------------------------------------------------------------------------------
-- Create a business database
create database `live_question`;
grant select,insert,update,delete,execute on `live_question`.* to live_question@'10.%' identified by 'Ghmz93hcnY4vSOll';
grant select,insert,update,delete,execute on `ucmobile_crash_log`.* to ucmobile@'10.%' identified by 'ucweb@2014';
grant select,insert,update,delete,execute on `ActPlat_Userinfo`.* to ActPlat_Userinfo@'10.%' identified by 'Ghmz93hcnY4vSOll';
grant select,insert,update,delete,execute on `ums`.* to uccm@'10.%' identified by 'GT7CppEdY5QoWAzx';
-- Monitoring & backup
- Add crontab entries for the info-collection scripts
mkdir -p /root/scripts
cd /root/scripts
mv /usr/local/src/collect_info*.sh /root/scripts (remember to adjust the script names)
# as root
crontab -e
* * * * * (cd /root/scripts ;sh collect_info_host.sh ;)
* * * * * (cd /root/scripts ;sh collect_info_mysql.sh --socket=/usr/local/mysql/tmp/3306/mysql.sock ;)
# add the backup script under the mysql user
su - mysql
mkdir -p backup
mkdir -p scripts
cp /usr/local/src/dump_mysql_put.sh ~/scripts
crontab -e
-- only needs to be configured on the slave
10 2 * * * /bin/sh /home/mysql/scripts/dump_mysql_put.sh --socket=/usr/local/mysql/tmp/3301/mysql.sock --innodb=1 --slave=1
(remember to update the backup server hostname and verify it resolves: 10.32.38.179 wx6.backup.uc.local)
10 3 * * * /bin/sh /home/mysql/scripts/dump_mysql_put.sh --socket=/usr/local/mysql/tmp/3302/mysql.sock --innodb=1 --slave=1
10 3 * * * /bin/sh /home/mysql/scripts/dump_mysql_rsync.sh --socket=/usr/local/mysql/tmp/3302/mysql.sock --innodb=1 --slave=1
15 3 * * * /bin/sh /home/mysql/scripts/dump_mysql_put.sh -A mydumper --socket=/usr/local/mysql/tmp/3309/mysql.sock
Shantou Zhuchi FTP: st3.backup.uc.local
Wuxi FTP: wx6.backup.uc.local, 10.32.38.179
Guangzhou Shaxi FTP: gz1.backup.uc.local
----------------------------------------------------------------------------------------------------------------------------
Middleware & tooling platforms
----------------------------------------------------------------------------------------------------------------------------
As root
-- ucmha 3.0 setup
-- Install de_agent
-- Run on the intranet, without arguments (the Shantou controller is not enabled yet; manually add a hosts entry on the machine: Shantou Zhuchi ucmha3-st-ctrlr.uc.local 10.34.132.109) (if startup fails, go to the install directory and run ./safe.sh 9099 ucmha3-st-ctrlr.uc.local:9095 start)
useradd agproxy ; runuser -c "curl -s --user guest:gFkh21ezY1oFVcG http://dbinfo.uc.local:1221/ucmha/install_DeployAgent_lzj.sh | bash" agproxy
-- Run on the public network, with arguments
useradd agproxy ; runuser -c "curl -s --user guest:gFkh21ezY1oFVcG http://info.db.ucweb.com:1221/ucmha/install_DeployAgent_lzj.sh | bash -s controller=ucmha3-aliyunsz-ctrlr.uc.local port=9095" agproxy
./safe.sh 9099 ucmha-controler.alibaba-inc.com:9095 start
-- Zhangbei
useradd agproxy ; runuser -c "curl -s --user guest:gFkh21ezY1oFVcG http://info.db.ucweb.com:1221/ucmha/install_DeployAgent_lzj.sh | bash -s controller=ucmha3-zb-ctrlr.uc.local port=9095" agproxy
-- Upgrade deployagent
wget --user=guest --password=gFkh21ezY1oFVcG --dns-timeout=2 --connect-timeout=2 http://info.db.ucweb.com:1221/ucmha/publish_deployagent.sh
-- Run the upgrade:
sh publish_deployagent.sh xxxxxxxoriVersion v3.3-release-M6
-- Install mysql and the proxy on the server
-- Log in to ucmha and click Sync on the Server Management page
-- Add the business in UDC, then refresh ucmha and the business branch appears; under the branch add a proxy group, then add instances and proxies
-- Go to the Distribution page and distribute
-- On the proxy group page, open the user list and add the users
-- Back at the business level, click Publish on the right
-- Back on the proxy group page, open the proxy tab and export the proxy info for the business ops team
-- In UDC 2.0, open dbplay and refresh the DB list
-- In CMDB, add the business first and associate the machines with the new business
-- Add the new instance's databases in UDC (synced once a day; add the schemas manually if you need to query or change them right away)
-- Add the mysql account on the bastion host so UDC can access it
The ucmha load-balancing proxy group automatically rejects write requests and accepts only reads, so there is no need to configure a separate read-only account for the application
Because ucmha 2.0 has a central controller, a cross-IDC switch can be achieved by changing the master proxy group's slave to an instance in the remote IDC;
ucmha 3.0 has no central controller; each IDC runs its own, so a single proxy group cannot contain instances from different IDCs;
Cross-IDC switching in ucmha 3.0 is done through a dedicated cross-IDC proxy group; the instances configured in that group can belong to the disaster-recovery group and the load-balancing group at the same time
----------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------
--------UCMT (with ucmha 2.0, UDC and CMDB configured, UDC 1.0 generates the monitoring automatically and UCMT needs no manual setup; with ucmha 3.0, UCMT monitoring must be configured manually because ucmha 3.0 is not integrated with UDC)
--1. Master availability monitoring (writes data)
国际商业化cloudera_sf_mysql_3300_dba
UC_Insight_mysql_3306_SF_dba
check_mysql_auto_dba
-h $HOSTADDRESS$ -p 3301 -ct db_check -w 5 -c 8 -update 1
--2. Replication monitoring
国际商业化cloudera_sf_from_aflow_s27_to_aflow_s28_rep
UC_Insight_from_aflowm21_to_aflowm22_replication
--slave $HOSTADDRESS$ --slave-port 3301 --master 10.41.17.238 --master-port 3301 --warn 900 --crit 1500 --check_table db_check --backup-hour 3 --get-master 1
check_mysql_replication_auto_dba
In 3.0, before adding users you must first go to Service Management and manually refresh the proxy and agent status; only then can "auto-sync to proxy" be ticked
----------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------
--Create the database
create database xtq;
--Create the user
grant select,insert,update,delete on xtq.* to xtq@'10.%' identified by 'xmpf3jzkr3RUdmZl';
use db_monitor ;
CREATE TABLE accesslog
( thread_id int(11) DEFAULT NULL,
log_time datetime default null,
dbname varchar(50) default null,
localname varchar(50) DEFAULT NULL,
matchname varchar(50) DEFAULT NULL,
key idx_log_time(log_time)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
select concat('grant insert on db_monitor.accesslog to ',user,'@''',host,''';') from mysql.user where Super_priv <> 'Y' ;
Execute the generated grant statements, then set init_connect:
init_connect='insert into db_monitor.accesslog(thread_id,log_time,dbname,localname,matchname) values(connection_id(),now(),DATABASE(),user(),current_user());'
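A hedged example of consuming the audit table: compare the account the client logged in with (localname) against the grant entry that actually matched (matchname), e.g. to spot connections still relying on an old or overly broad grant before cleaning accounts up.
select dbname, localname, matchname, count(*) as cnt, max(log_time) as last_seen
from db_monitor.accesslog
group by dbname, localname, matchname
order by cnt desc;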
Advantages of TDDL sharding
1. Sharding (splitting databases/tables) is transparent to the application; it simplifies shard-rule configuration, and queries and changes can be done in UDC against the logical schema/tables
Advantages of connecting through TDDL even for non-sharded applications
1. Aligns the stack with the Group's technology system, and no application rework is needed if sharding is required later
2. The TDDL connection layer implements proxy high availability itself, removing the dependency on tcpha
3. TDDL connects with a single unified appname that encapsulates the username/password/IP/port, improving the security of application data
4. Reduces the dependency on business-side cutovers during database migrations; the business no longer has to cooperate with the migration
Shanghai active: mob733 (10.16.41.31)
Shanghai passive: mob731 (10.16.11.107)
Tianjin active: monitor1 (221.238.196.253 -- 10.14.12.69)
Tianjin passive: mob569 (221.238.17.240 -- 10.14.12.40)
Shantou active: monitor5 (121.14.161.150 -- 10.20.103.114)
Shantou passive: monitor4 (121.14.161.149 -- 10.20.102.117)
Chaozhou active: mob620 (113.107.71.141 -- 10.12.14.41)
Chaozhou passive: mob25 (125.91.253.187 -- 10.12.13.187)
US monitoring: 67.228.166.107
Suzhou monitoring: mob251 (58.211.21.104 -- 10.22.101.69)
Xi'an active: monitor3 (113.142.16.76 -- 10.18.67.149)
Xi'an passive: monitor2 (113.142.16.75 -- 10.18.67.148)
Ops monitoring machine: st-mob186 (125.91.4.145)
----------------------------------------------------------------------------------------------------------------------------
-- ucmha: creation statements for the related monitoring objects
----------------------------------------------------------------------------------------------------------------------------
create database db_monitor ;
use db_monitor;
create table db_check
(id tinyint not null,
check_time datetime not null,
primary key (id)
) ;
insert into db_check values (0,now()),(1,now()),(2,now()),(3,now()),(4,now()),(5,now()),(6,now()),(7,now()),(8,now()),(9,now()),(10,now()),(11,now()),(12,now()),(13,now()),(14,now()),(15,now()),(16,now()),(17,now()),(18,now()),(19,now()),(20,now()),(21,now()),(22,now()),(23,now()),(24,now()),(25,now()),(26,now()),(27,now()),(28,now()),(29,now()),(30,now()),(31,now()),(32,now()),(33,now()),(34,now()),(35,now()),(36,now()),(37,now()),(38,now()),(39,now()),(40,now()),(41,now()),(42,now()),(43,now()),(44,now()),(45,now()),(46,now()),(47,now()),(48,now()),(49,now()),(50,now()),(51,now()),(52,now()),(53,now()),(54,now()),(55,now()),(56,now()),(57,now()),(58,now()),(59,now()) ;
create table db_backup
(id tinyint not null,
is_backup tinyint not null,
begin_time datetime not null,
primary key(id)
) ;
insert into db_backup values (1,0,now()) ;
grant select,update on db_monitor.* to nagios@'10.%' identified by '},XIRD#0oR6ZClX' with max_connections_per_hour 5400 max_user_connections 15 max_queries_per_hour 3600 max_updates_per_hour 1800 ;
grant select,update on db_monitor.* to nagios@'11.%' identified by '},XIRD#0oR6ZClX' with max_connections_per_hour 5400 max_user_connections 15 max_queries_per_hour 3600 max_updates_per_hour 1800 ;
grant lock tables,reload,super,replication client,select,Show view,process,trigger,event on *.* to backup@localhost identified by 'Yakl9LyRlvZN2Md9' ;
grant update on db_monitor.* to backup@'localhost' ;
CREATE TABLE `ucmha_check` (
`id` tinyint(4) NOT NULL,
`check_time` datetime NOT NULL,
`master_server_id` int(11) DEFAULT NULL COMMENT 'master server_id',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO ucmha_check VALUES (0,NOW(),@@server_id),(1,NOW(), @@server_id),(2,NOW(), @@server_id),(3,NOW(), @@server_id),(4,NOW(), @@server_id),(5,NOW(), @@server_id),(6,NOW(), @@server_id),(7,NOW(), @@server_id),(8,NOW(), @@server_id),(9,NOW(), @@server_id),(10,NOW(), @@server_id),(11,NOW(),@@server_id),(12,NOW(),@@server_id),(13,NOW(),@@server_id),(14,NOW(),@@server_id),(15,NOW(),@@server_id),(16,NOW(),@@server_id),(17,NOW(),@@server_id),(18,NOW(),@@server_id),(19,NOW(),@@server_id),(20,NOW(),@@server_id),(21,NOW(),@@server_id),(22,NOW(),@@server_id),(23,NOW(),@@server_id),(24,NOW(),@@server_id),(25,NOW(),@@server_id),(26,NOW(),@@server_id),(27,NOW(),@@server_id),(28,NOW(),@@server_id),(29,NOW(),@@server_id),(30,NOW(),@@server_id),(31,NOW(),@@server_id),(32,NOW(),@@server_id),(33,NOW(),@@server_id),(34,NOW(),@@server_id),(35,NOW(),@@server_id),(36,NOW(),@@server_id),(37,NOW(),@@server_id),(38,NOW(),@@server_id),(39,NOW(),@@server_id),(40,NOW(),@@server_id),(41,NOW(),@@server_id),(42,NOW(),@@server_id),(43,NOW(),@@server_id),(44,NOW(),@@server_id),(45,NOW(),@@server_id),(46,NOW(),@@server_id),(47,NOW(),@@server_id),(48,NOW(),@@server_id),(49,NOW(),@@server_id),(50,NOW(),@@server_id),(51,NOW(),@@server_id),(52,NOW(),@@server_id),(53,NOW(),@@server_id),(54,NOW(),@@server_id),(55,NOW(),@@server_id),(56,NOW(),@@server_id),(57,NOW(),@@server_id),(58,NOW(),@@server_id),(59,NOW(),@@server_id) ;
grant select,update on db_monitor.* to ucmha_monitor@'10.%' identified by 'EPREsPlAnniBIlLo' with max_connections_per_hour 7200 max_user_connections 16 max_queries_per_hour 144000 max_updates_per_hour 4096 ;
grant REPLICATION CLIENT on *.* to ucmha_monitor@'10.%' ;
grant select,update on db_monitor.* to ucmha_monitor@'11.%' identified by 'EPREsPlAnniBIlLo' with max_connections_per_hour 7200 max_user_connections 16 max_queries_per_hour 144000 max_updates_per_hour 4096 ;
grant REPLICATION CLIENT on *.* to ucmha_monitor@'11.%' ;
grant select,update on db_controller.* to ucmha@'10.%' identified by 'ucmha' with max_connections_per_hour 7200 max_user_connections 16 max_queries_per_hour 14400 max_updates_per_hour 4096 ;
grant select,update on db_controller.* to ucmha@'11.%' identified by 'ucmha' with max_connections_per_hour 7200 max_user_connections 16 max_queries_per_hour 14400 max_updates_per_hour 4096 ;