FastDFS is an open-source, lightweight distributed file system. It manages files and provides file storage, file synchronization, and file access (upload and download), solving the problems of large-capacity storage and load balancing. It is particularly well suited to online services built around files, such as photo-album sites and video sites.
FastDFS was designed for the Internet: it fully accounts for redundant backup, load balancing, and linear scaling, and emphasizes high availability and high performance. With FastDFS it is easy to build a high-performance file server cluster that provides upload and download services.
Introduction
A FastDFS deployment has two server roles: the tracker and the storage node. The tracker handles scheduling and load-balances access.
Storage nodes store the files and implement all file-management functions: storage, synchronization, and the access interfaces. FastDFS also manages file metadata, i.e. a file's attributes expressed as key-value pairs, e.g. width=1024, where the key is width and the value is 1024. A file's metadata is a list of attributes and may contain multiple key-value pairs.
Both the tracker tier and the storage tier can consist of one or more servers. Servers in either tier can be added or taken offline at any time without affecting the live service. All tracker servers are peers, and they can be added or removed at any time according to load.
To support large capacity, storage nodes are organized into volumes (also called groups). The storage system consists of one or more volumes; files in different volumes are independent of one another, and the capacity of the whole system is the sum of the capacities of all volumes. A volume consists of one or more storage servers, all of which hold the same files, so the servers within a volume provide both redundant backup and load balancing.
When a server is added to a volume, the system synchronizes the existing files to it automatically; once synchronization completes, the new server is switched online automatically.
When storage space runs low or is about to be exhausted, volumes can be added dynamically: add one or more servers and configure them as a new volume, which enlarges the capacity of the storage system.
A file identifier in FastDFS has two parts: the volume (group) name and the file name; both are required.
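For example, splitting a file ID at the first slash yields its two parts. A minimal sketch (the sample ID below is made up for illustration):

```shell
# Split a FastDFS file ID into the group (volume) name and the
# remote filename. The sample ID is hypothetical.
file_id="group1/M00/00/00/wKgBEVsample.png"

group_name="${file_id%%/*}"      # text before the first "/"
remote_filename="${file_id#*/}"  # everything after the first "/"

echo "group: $group_name"
echo "file:  $remote_filename"
```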
Hostname | IP address | Role | |
tracker1 | 172.16.1.15 | tracker | nginx reverse proxy | vip 192.168.1.205
tracker2 | 172.16.1.16 | tracker | nginx reverse proxy | vip 192.168.1.205
storage1-group1 | 172.16.1.17 | storage | |
storage2-group1 | 172.16.1.18 | storage | |
storage3-group2 | 172.16.1.19 | storage | |
storage4-group2 | 172.16.1.20 | storage | |
vip 192.168.1.205 is bound to the domain image.e3mall.com
192.168.1.0/24 simulates the public network
172.16.1.0/24 simulates the internal network
1. Download the required packages
https://github.com/happyfish100/libfastcommon/archive/V1.0.36.zip
https://github.com/happyfish100/fastdfs/archive/master.zip
https://github.com/happyfish100/fastdfs-nginx-module/archive/master.zip
http://labs.frickle.com/files/ngx_cache_purge-2.3.tar.gz

wget https://github.com/happyfish100/libfastcommon/archive/V1.0.36.zip -O ./libfastcommon.zip
wget https://github.com/happyfish100/fastdfs/archive/master.zip -O ./fastdfs.zip
wget https://github.com/happyfish100/fastdfs-nginx-module/archive/master.zip -O ./fastdfs-nginx-module.zip
wget http://labs.frickle.com/files/ngx_cache_purge-2.3.tar.gz

[root@tracker1 tools]# ll
total 9928
-rw-r--r-- 1 root root 8234674 Oct 27 2013 apache-tomcat-7.0.47.tar.gz
-rw-r--r-- 1 root root   22192 May 18 12:28 fastdfs-nginx-module.zip
-rw-r--r-- 1 root root  425546 May 18 12:25 fastdfs.zip
-rw-r--r-- 1 root root  481342 May 18 12:25 libfastcommon.zip
-rw-r--r-- 1 root root  981687 Apr 11 09:56 nginx-1.12.2.tar.gz
-rw-r--r-- 1 root root   12248 Dec 24 2014 ngx_cache_purge-2.3.tar.gz
[root@tracker1 tools]#

Copy the packages from tracker1 to the other machines:
scp ./* [email protected]:/application/tools/
scp ./* [email protected]:/application/tools/
scp ./* [email protected]:/application/tools/
scp ./* [email protected]:/application/tools/
scp ./* [email protected]:/application/tools/
2. Install the required libraries (all machines)
yum install readline-devel pcre-devel openssl-devel -y
3. Install libfastcommon (on 172.16.1.15-20)
cd /application/tools/
unzip libfastcommon.zip
cd libfastcommon-1.0.36
./make.sh
./make.sh install
4. Install FastDFS (on 172.16.1.15-20)
cd /application/tools/
unzip fastdfs.zip
cd fastdfs-master/
./make.sh
./make.sh install
5. Configure tracker1 and tracker2
cd /etc/fdfs/
cp tracker.conf.sample tracker.conf
vim tracker.conf

Settings to change:
base_path=/fastdfs/tracker
store_lookup=0   # group-selection policy for uploads; 0 = round robin. Set to 0 for the tests below, change it back later.

mkdir -p /fastdfs/tracker
/etc/init.d/fdfs_trackerd start
ps -ef | grep fdfs
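The store_lookup policy can also be flipped with sed instead of editing by hand. A minimal sketch; the /tmp file and its contents are a throwaway sample, not this cluster's real config:

```shell
# Illustrative only: a disposable copy of tracker.conf. On a real
# tracker you would operate on /etc/fdfs/tracker.conf instead.
conf=/tmp/tracker.conf
printf 'base_path=/fastdfs/tracker\nstore_lookup=2\n' > "$conf"

# store_lookup values: 0 = round robin, 1 = always use one specified
# group, 2 = pick the group with the most free space (load balance)
sed -i 's/^store_lookup=.*/store_lookup=0/' "$conf"

grep '^store_lookup=' "$conf"
```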
6. Configure the storage nodes
cd /etc/fdfs/
cp storage.conf.sample storage.conf

group1 (172.16.1.17, 172.16.1.18):
vim storage.conf
base_path=/fastdfs/storage
store_path0=/fastdfs/storage
tracker_server=172.16.1.15:22122
tracker_server=172.16.1.16:22122

group2 (172.16.1.19, 172.16.1.20):
vim storage.conf
group_name=group2
base_path=/fastdfs/storage
store_path0=/fastdfs/storage
tracker_server=172.16.1.15:22122
tracker_server=172.16.1.16:22122

On all storage nodes:
mkdir -p /fastdfs/storage
/etc/init.d/fdfs_storaged start
cd /fastdfs/storage
tailf logs/storaged.log

[root@storage1-group1 storage]# tailf logs/storaged.log
mkdir data path: FF ...
data path: /fastdfs/storage/data, mkdir sub dir done.
[2018-05-18 14:44:10] INFO - file: storage_param_getter.c, line: 191, use_storage_id=0, id_type_in_filename=ip, storage_ip_changed_auto_adjust=1, store_path=0, reserved_storage_space=10.00%, use_trunk_file=0, slot_min_size=256, slot_max_size=16 MB, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, trunk_compress_binlog_min_interval=0, store_slave_file_use_link=0
[2018-05-18 14:44:10] INFO - file: storage_func.c, line: 257, tracker_client_ip: 172.16.1.17, my_server_id_str: 172.16.1.17, g_server_id_in_filename: 285321408
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 310, successfully connect to tracker server 172.16.1.16:22122, as a tracker client, my ip is 172.16.1.17
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 1947, tracker server: #0. 172.16.1.15:22122, my_report_status: -1
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 310, successfully connect to tracker server 172.16.1.15:22122, as a tracker client, my ip is 172.16.1.17
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 1947, tracker server: #0. 172.16.1.15:22122, my_report_status: -1
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 1263, tracker server 172.16.1.16:22122, set tracker leader: 172.16.1.16:22122
[2018-05-18 14:44:26] INFO - file: storage_sync.c, line: 2733, successfully connect to storage server 172.16.1.18:23000

As the log shows, 172.16.1.17 connected successfully to both trackers, and 172.16.1.16 was elected leader of the tracker cluster. 172.16.1.17 and 172.16.1.18 belong to the same group (group1).
7. Test tracker high availability
172.16.1.16 is currently the leader of the tracker cluster. Stop the tracker on 172.16.1.16:
/etc/init.d/fdfs_trackerd stop

Watch the log on storage1:
[root@storage1-group1 storage]# tailf logs/storaged.log
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 310, successfully connect to tracker server 172.16.1.16:22122, as a tracker client, my ip is 172.16.1.17
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 1947, tracker server: #0. 172.16.1.15:22122, my_report_status: -1
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 310, successfully connect to tracker server 172.16.1.15:22122, as a tracker client, my ip is 172.16.1.17
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 1947, tracker server: #0. 172.16.1.15:22122, my_report_status: -1
[2018-05-18 14:44:26] INFO - file: tracker_client_thread.c, line: 1263, tracker server 172.16.1.16:22122, set tracker leader: 172.16.1.16:22122
[2018-05-18 14:44:26] INFO - file: storage_sync.c, line: 2733, successfully connect to storage server 172.16.1.18:23000
[2018-05-18 14:49:56] ERROR - file: tracker_client_thread.c, line: 1148, tracker server 172.16.1.16:22122, recv data fail, errno: 107, error info: Transport endpoint is not connected.
[2018-05-18 14:49:57] ERROR - file: tracker_client_thread.c, line: 277, connect to tracker server 172.16.1.16:22122 fail, errno: 111, error info: Connection refused
[2018-05-18 14:49:57] INFO - file: tracker_client_thread.c, line: 1263, tracker server 172.16.1.15:22122, set tracker leader: 172.16.1.15:22122
[2018-05-18 14:49:57] ERROR - file: connection_pool.c, line: 130, connect to 172.16.1.16:22122 fail, errno: 111, error info: Connection refused

The connection to tracker2 (172.16.1.16) fails, and 172.16.1.15 is elected the new leader.

Restart tracker2:
/etc/init.d/fdfs_trackerd start

Watch the log on storage1:
[2018-05-18 14:56:57] INFO - file: tracker_client_thread.c, line: 310, successfully connect to tracker server 172.16.1.16:22122, continuous fail count: 14, as a tracker client, my ip is 172.16.1.17

The connection to tracker2 succeeds, but the leader does not change.

Once all tracker and storage nodes are up, the cluster status can be checked from any storage node:
/usr/bin/fdfs_monitor /etc/fdfs/storage.conf

server_count=2, server_index=0
tracker server is 172.16.1.15:22122
group count: 2

Group 1:
group name = group1
disk total space = 17944 MB
disk free space = 15091 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0
	Storage 1:
		id = 172.16.1.17
		ip_addr = 172.16.1.17 ACTIVE
		http domain =
		version = 5.12
		join time = 2018-05-18 14:43:14
		up time = 2018-05-18 14:43:14
		total storage = 17944 MB
		free storage = 15091 MB
		... (output truncated)
	Storage 2:
		id = 172.16.1.18
		ip_addr = 172.16.1.18 ACTIVE
		http domain =
		version = 5.12
		join time = 2018-05-18 14:43:14
		up time = 2018-05-18 14:43:14
		total storage = 17944 MB
		free storage = 15120 MB
		... (output truncated)

Group 2:
group name = group2
disk total space = 17944 MB
disk free space = 15120 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0
	Storage 1:
		id = 172.16.1.19
		ip_addr = 172.16.1.19 ACTIVE
		http domain =
		version = 5.12
		join time = 2018-05-18 14:43:14
		up time = 2018-05-18 14:43:14
		total storage = 17944 MB
		free storage = 15127 MB
	Storage 2:
		id = 172.16.1.20
		ip_addr = 172.16.1.20 ACTIVE
		http domain =
		version = 5.12
		join time = 2018-05-18 14:43:14
		up time = 2018-05-18 14:43:14
		total storage = 17944 MB
		free storage = 15120 MB

There are two tracker servers, and the one currently serving is 172.16.1.15. There are two groups: group1 contains 172.16.1.17 and 172.16.1.18, and group2 contains 172.16.1.19 and 172.16.1.20, which matches the planned cluster exactly.
8. Upload test against the tracker/storage cluster
On tracker1:
cd /etc/fdfs
cp client.conf.sample client.conf
vim client.conf
# the base path to store log files
base_path=/fastdfs/tracker
# tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
tracker_server=172.16.1.15:22122
tracker_server=172.16.1.16:22122

Upload /root/1.png:
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /root/1.png
group2/M00/00/00/wKgBE1r-fICAcd3kAAHY-4ojheI481.png

In the returned path, group2 indicates which group the file was stored in. M00 is the disk (store path) index: with a single disk there is only M00; with multiple disks there are M01, M02, and so on. 00/00 are two levels of directories on disk; each level contains 256 folders named 00 through FF, giving 256*256 directories in total. wKgBE1r-fICAcd3kAAHY-4ojheI481.png is the name the stored copy of 1.png was renamed to, which prevents filename collisions.

Checking the /fastdfs/storage/data/00/00 directory on the four storage machines shows the file on 172.16.1.19 and 172.16.1.20 but not on 172.16.1.17 and 172.16.1.18. That is because 172.16.1.17 and 172.16.1.18 belong to group1 while 172.16.1.19 and 172.16.1.20 belong to group2, and the returned path states that the file was stored under group2, so group1 does not have it.

[root@storage3-group2 /]# cd /fastdfs/storage/data/00/00/
[root@storage3-group2 00]# ls
wKgBE1r-fICAcd3kAAHY-4ojheI481.png
[root@storage3-group2 00]# pwd
/fastdfs/storage/data/00/00
[root@storage3-group2 00]#
[root@storage4-group2 storage]# cd /fastdfs/storage/data/00/00/
[root@storage4-group2 00]# ls
wKgBE1r-fICAcd3kAAHY-4ojheI481.png
[root@storage4-group2 00]# pwd
/fastdfs/storage/data/00/00
[root@storage4-group2 00]#

Since the cluster was configured with the round-robin policy, uploading the same file again should store it under group1. As shown below, the returned path does indeed point at group1 this time:

[root@tracker1 fdfs]# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /root/1.png
group1/M00/00/00/wKgBEVr-f4eAAkWHAAHY-4ojheI290.png
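The returned file ID maps directly to a path on disk under the group's store_path0. A small sketch of that mapping, using the ID returned above and assuming a single store path (so the token is always M00):

```shell
# Map a FastDFS file ID to its on-disk location on a storage node in
# that group. store_path0 matches this cluster's storage.conf.
store_path0=/fastdfs/storage
file_id="group2/M00/00/00/wKgBE1r-fICAcd3kAAHY-4ojheI481.png"

path_part="${file_id#*/}"        # drop the group name -> M00/00/00/<name>
path_part="${path_part#M00/}"    # drop the M00 store-path token
local_path="$store_path0/data/$path_part"

echo "$local_path"
```

This reproduces the /fastdfs/storage/data/00/00/... path seen in the shell sessions above.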
9. Configure Nginx (all storage nodes)
cd /application/tools/
unzip fastdfs-nginx-module.zip
useradd www -M -s /sbin/nologin -u 504
tar xf nginx-1.12.2.tar.gz
cd nginx-1.12.2
./configure --add-module=/application/tools/fastdfs-nginx-module-master/src/ --prefix=/application/nginx-1.12.2 --user=www --group=www --with-http_ssl_module --with-http_stub_status_module
make && make install
ln -s /application/nginx-1.12.2/ /application/nginx

cd /application/tools/fastdfs-nginx-module-master/src/
cp mod_fastdfs.conf /etc/fdfs/
vim /etc/fdfs/mod_fastdfs.conf

# connect timeout in seconds
# default value is 30s
connect_timeout=10
# network recv and send timeout in seconds
# default value is 30s
network_timeout=30
# the base path to store log files
base_path=/tmp
# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true
# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf
# FastDFS tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server=172.16.1.15:22122
tracker_server=172.16.1.16:22122
# the port of the local storage server
# the default value is 23000
storage_server_port=23000
# the group name of the local storage server
group_name=group1
# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
url_have_group_name = true
# path(disk or mount point) count, default value is 1
# must same as storage.conf
store_path_count=1
# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# must same as storage.conf
store_path0=/fastdfs/storage
#store_path1=/home/yuqing/fastdfs1
# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=
# response mode when the file not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this paramter used to get all ip address of the local host
# default values is empty
if_alias_prefix=
# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf
# if support flv
# default value is false
# since v1.15
flv_support = true
# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv
# set the group count
# set to none zero to support multi-group on this storage server
# set to 0 for single group only
# groups settings section as [group1], [group2], ..., [groupN]
# default value is 0
# since v1.14
group_count = 2
# group settings for group #1
# since v1.14
# when support multi-group on this storage server, uncomment following section
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/fastdfs/storage
#store_path1=/home/yuqing/fastdfs1
# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as neccessary
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/fastdfs/storage

Copy the file to the other storage nodes:
scp /etc/fdfs/mod_fastdfs.conf [email protected]:/etc/fdfs/
scp /etc/fdfs/mod_fastdfs.conf [email protected]:/etc/fdfs/
scp /etc/fdfs/mod_fastdfs.conf [email protected]:/etc/fdfs/

On 172.16.1.19 and 172.16.1.20, change the group name:
vim /etc/fdfs/mod_fastdfs.conf
# the group name of the local storage server
group_name=group2

cp /application/tools/fastdfs-master/conf/http.conf /etc/fdfs/
cp /application/tools/fastdfs-master/conf/mime.types /etc/fdfs/
ln -s /fastdfs/storage/data/ /fastdfs/storage/data/M00

cd /application/nginx/conf/
vim nginx.conf

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 8888;
        server_name localhost;
        location ~ /group([0-9])/M00 {
            root /fastdfs/storage/data;
            ngx_fastdfs_module;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Copy nginx.conf to the other storage nodes and start nginx:
scp /application/nginx/conf/nginx.conf [email protected]:/application/nginx/conf/
scp /application/nginx/conf/nginx.conf [email protected]:/application/nginx/conf/
scp /application/nginx/conf/nginx.conf [email protected]:/application/nginx/conf/
/application/nginx/sbin/nginx

Upload /root/1.png from tracker1:
[root@tracker1 ~]# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /root/1.png
group1/M00/00/00/rBABEVsOdc6AatijAAHY-4ojheI494.png
Web access test: the returned path should now be reachable over HTTP on any storage node's port 8888.
Configure the reverse proxy (tracker1 and tracker2)
cd /application/tools/
useradd www -M -s /sbin/nologin -u 504
tar xf nginx-1.12.2.tar.gz
cd nginx-1.12.2
./configure --prefix=/application/nginx-1.12.2 --user=www --group=www --with-http_ssl_module --with-http_stub_status_module
make && make install
ln -s /application/nginx-1.12.2/ /application/nginx
egrep -v "#|^$" /application/nginx/conf/nginx.conf.default > /application/nginx/conf/nginx.conf
vim /application/nginx/conf/nginx.conf

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    # upstream servers for group1
    upstream fdfs_group1 {
        server 172.16.1.17:8888 weight=1 max_fails=2 fail_timeout=30s;
        server 172.16.1.18:8888 weight=1 max_fails=2 fail_timeout=30s;
    }
    # upstream servers for group2
    upstream fdfs_group2 {
        server 172.16.1.19:8888 weight=1 max_fails=2 fail_timeout=30s;
        server 172.16.1.20:8888 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 80;
        server_name image.e3mall.com;
        # load balancing for group1
        location /group1/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            # proxied to the group1 upstream above
            proxy_pass http://fdfs_group1;
            proxy_set_header Host $http_host;
            proxy_set_header Cookie $http_cookie;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 300m;
        }
        # load balancing for group2
        location /group2/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            # proxied to the group2 upstream above
            proxy_pass http://fdfs_group2;
            proxy_set_header Host $http_host;
            proxy_set_header Cookie $http_cookie;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 300m;
        }
        location / {
            root html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
keepalived configuration (tracker1 and tracker2)
yum -y install keepalived
vim /etc/keepalived/keepalived.conf

-------------------------------------- tracker1 --------------------------------------
! Configuration File for keepalived
global_defs {
    router_id nginx1            # must differ between the two machines
}
vrrp_script chk_status {        # nginx/tracker health check
    script "/server/scripts/check_status.sh"
    interval 2                  # run the check script every 2 seconds
    weight -2                   # lower this node's priority by 2 when the check fails
}
vrrp_instance VI_1 {
    state BACKUP                # role
    interface eth1              # network interface to monitor
    virtual_router_id 51        # must be identical on both machines
    priority 100                # the node with the higher priority holds the vip
    nopreempt                   # non-preemptive: a recovered node does not take the vip back
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_status
    }
    virtual_ipaddress {
        192.168.1.205/24 dev eth0
    }
}

-------------------------------------- tracker2 --------------------------------------
! Configuration File for keepalived
global_defs {
    router_id nginx2            # must differ between the two machines
}
vrrp_script chk_status {        # nginx/tracker health check
    script "/server/scripts/check_status.sh"
    interval 2                  # run the check script every 2 seconds
    weight -2                   # lower this node's priority by 2 when the check fails
}
vrrp_instance VI_1 {
    state BACKUP                # role
    interface eth1              # network interface to monitor
    virtual_router_id 51        # must be identical on both machines
    priority 100                # the node with the higher priority holds the vip
    nopreempt                   # non-preemptive: a recovered node does not take the vip back
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_status
    }
    virtual_ipaddress {
        192.168.1.205/24 dev eth0
    }
}
/server/scripts/check_status.sh:

#!/bin/sh
ip=`ifconfig eth1 | awk -F "[ :]+" 'NR==2 {print $4}'`
name=$HOSTNAME
A=`ps -C nginx --no-header | wc -l`
B=`netstat -lntp | grep fdfs_trackerd | wc -l`
if [ $A -eq 0 ];then
    killall keepalived
    sleep 2
    /application/nginx/sbin/nginx
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        echo -e "nginx on $name:$ip is down; restart failed" | mail -s "nginx down, keepalived stopped, failover done" [email protected]
    else
        /etc/init.d/keepalived restart
        echo -e "nginx on $name:$ip went down; restart succeeded" | mail -s "nginx recovered after failover; $name:$ip is back as the backup" [email protected]
    fi
fi
if [ $B -eq 0 ];then
    killall keepalived
    sleep 2
    /etc/init.d/fdfs_trackerd stop
    /etc/init.d/fdfs_trackerd start
    sleep 5
    if [ `netstat -lntp | grep fdfs_trackerd | wc -l` -eq 0 ];then
        echo -e "fdfs_trackerd on $name:$ip is down; restart failed" | mail -s "fdfs_trackerd down, keepalived stopped, failover done" [email protected]
    else
        /etc/init.d/keepalived restart
        echo -e "fdfs_trackerd on $name:$ip went down; restart succeeded" | mail -s "fdfs_trackerd recovered after failover; $name:$ip is back as the backup" [email protected]
    fi
fi
chmod 755 /server/scripts/check_status.sh
chkconfig keepalived on
/etc/init.d/keepalived start
Mail settings
vim /etc/mail.rc

set [email protected]
set smtp=smtp.163.com
set [email protected]
set smtp-auth-password=a625013463
set smtp-auth=login