FastDFS Installation and Configuration
Prerequisites: two trackers (tracker1, tracker2) and two storage servers (storage1, storage2).
Each storage server has two disks mounted.
I. Edit the /etc/hosts file (on all four machines) and add the following:
# vi /etc/hosts
10.32.4.11 tracker1
10.32.4.12 tracker2
10.32.4.13 storage1
10.32.4.14 storage2
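The entries can be sanity-checked with a short script. This is a sketch that works on a temporary copy; on the real machines, point HOSTS_FILE at /etc/hosts:

```shell
#!/bin/sh
# Sketch: verify the four cluster hostnames are present in a hosts file.
# A temporary copy is used here for illustration.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
10.32.4.11 tracker1
10.32.4.12 tracker2
10.32.4.13 storage1
10.32.4.14 storage2
EOF

missing=0
for h in tracker1 tracker2 storage1 storage2; do
    # -w matches the whole word, so "tracker1" does not match "tracker11"
    grep -qw "$h" "$HOSTS_FILE" || { echo "missing: $h"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all hosts present"
rm -f "$HOSTS_FILE"
```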
II. Download and install libfastcommon (all four machines)
libfastcommon is a common C function library extracted from FastDFS and FastDHT. It is a base dependency; just install it.
① Download libfastcommon
# wget https://github.com/happyfish100/libfastcommon/archive/V1.0.7.tar.gz
② Extract it
# tar -zxvf V1.0.7.tar.gz
# cd libfastcommon-1.0.7
③ Compile and install
# ./make.sh
# ./make.sh install
④ libfastcommon.so is installed to /usr/lib64/libfastcommon.so, but the FastDFS main program expects its libraries under /usr/local/lib, so create symbolic links.
# ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
# ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
# ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
# ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so
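Plain ln -s fails if the link already exists, so re-running the steps errors out. A sketch of an idempotent variant with ln -sfn, demonstrated in scratch directories rather than the real /usr/lib64 and /usr/local/lib:

```shell
#!/bin/sh
# Sketch: create the library symlinks idempotently with ln -sfn.
libdir=$(mktemp -d)      # stands in for /usr/lib64
target=$(mktemp -d)      # stands in for /usr/local/lib
touch "$libdir/libfastcommon.so"

# -s symbolic, -f replace an existing link, -n do not follow an existing link
ln -sfn "$libdir/libfastcommon.so" "$target/libfastcommon.so"
ln -sfn "$libdir/libfastcommon.so" "$target/libfastcommon.so"  # safe to repeat

link_dest=$(readlink "$target/libfastcommon.so")
echo "link points to: $link_dest"
```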
III. Download and install FastDFS (all four machines)
① Download FastDFS
# wget https://github.com/happyfish100/fastdfs/archive/V5.05.tar.gz
② Extract it
# tar -zxvf V5.05.tar.gz
# cd fastdfs-5.05
③ Compile and install
# ./make.sh
# ./make.sh install
④ Files and directories created by the default installation
A. Service scripts:
/etc/init.d/fdfs_storaged
/etc/init.d/fdfs_trackerd
B. Configuration files (these three are sample configuration files shipped by the author):
/etc/fdfs/client.conf.sample
/etc/fdfs/storage.conf.sample
/etc/fdfs/tracker.conf.sample
C. Command-line tools, under /usr/bin/:
fdfs_appender_test
fdfs_appender_test1
fdfs_append_file
fdfs_crc32
fdfs_delete_file
fdfs_download_file
fdfs_file_info
fdfs_monitor
fdfs_storaged
fdfs_test
fdfs_test1
fdfs_trackerd
fdfs_upload_appender
fdfs_upload_file
stop.sh
restart.sh
⑤ The FastDFS service scripts assume the bin directory is /usr/local/bin, but the commands are actually installed under /usr/bin/. There are two ways to fix this.
Option 1: edit the command paths in the FastDFS service scripts, i.e. change /usr/local/bin to /usr/bin in both /etc/init.d/fdfs_trackerd and /etc/init.d/fdfs_storaged.
# vim /etc/init.d/fdfs_trackerd
Apply a global search-and-replace in vim: :%s+/usr/local/bin+/usr/bin+g
# vim /etc/init.d/fdfs_storaged
Apply the same replace: :%s+/usr/local/bin+/usr/bin+g
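The same replacement can be done non-interactively with sed instead of vim; a sketch run against a temporary copy of a service script:

```shell
#!/bin/sh
# Sketch: fix the bin path with sed, demonstrated on a temp file.
script=$(mktemp)
cat > "$script" <<'EOF'
PRG=/usr/local/bin/fdfs_trackerd
$PRG /etc/fdfs/tracker.conf start
EOF

# -i edits in place; '+' is the substitution delimiter so the slashes in
# the paths need no escaping, mirroring vim's :%s+/usr/local/bin+/usr/bin+g
sed -i 's+/usr/local/bin+/usr/bin+g' "$script"

fixed=$(grep -c '/usr/bin/fdfs_trackerd' "$script")
leftover=$(grep -c '/usr/local/bin' "$script")
echo "fixed=$fixed leftover=$leftover"
rm -f "$script"
```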
Option 2: create symlinks in /usr/local/bin pointing to the commands in /usr/bin; this is the approach used here (recommended).
# ln -s /usr/bin/fdfs_trackerd /usr/local/bin
# ln -s /usr/bin/fdfs_storaged /usr/local/bin
# ln -s /usr/bin/stop.sh /usr/local/bin
# ln -s /usr/bin/restart.sh /usr/local/bin
IV. Configure the FastDFS tracker (on both tracker machines)
For detailed descriptions of the configuration options, consult a FastDFS configuration file reference.
① Go to /etc/fdfs, copy the sample tracker configuration file tracker.conf.sample, and rename it tracker.conf.
# cd /etc/fdfs
# cp tracker.conf.sample tracker.conf
# vim tracker.conf
② Edit tracker.conf. Only the items below need changing; leave the rest at their defaults.
# whether this configuration file is disabled; false means it takes effect
disabled=false
# service port
port=22122
# tracker data and log directory (the root directory must already exist; subdirectories are created automatically)
base_path=/ljzsg/fastdfs/tracker
# HTTP service port
http.server_port=80
③ Create the tracker base data directory, i.e. the directory base_path points to
# mkdir -p /ljzsg/fastdfs/tracker
④ Open the tracker port (default 22122) in the firewall
# vim /etc/sysconfig/iptables
Add the following rule:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22122 -j ACCEPT
Restart the firewall:
# service iptables restart
⑤ Start the tracker
On the first successful start, data and logs directories are created under /ljzsg/fastdfs/tracker (the configured base_path).
It can be started like this:
# /etc/init.d/fdfs_trackerd start
or, provided the symlinks above were created, like this (used from here on):
# service fdfs_trackerd start
Check whether the FastDFS tracker started successfully; if port 22122 is being listened on, the tracker service is installed correctly.
# netstat -unltp | grep fdfs
Stop the tracker with:
# service fdfs_trackerd stop
⑥ Start the tracker at boot
# chkconfig fdfs_trackerd on
Or:
# vim /etc/rc.d/rc.local
and add the line:
service fdfs_trackerd start
⑦ Tracker server directory and file structure
After the tracker starts successfully, it creates the data and logs directories under base_path. The layout is:
${base_path}
|__data
|  |__storage_groups.dat: storage group information
|  |__storage_servers.dat: storage server list
|__logs
|  |__trackerd.log: tracker server log file
V. Configure FastDFS storage (on storage1 and storage2)
① Go to /etc/fdfs, copy the sample storage configuration file storage.conf.sample, and rename it storage.conf
# cd /etc/fdfs
# cp storage.conf.sample storage.conf
# vim storage.conf
② Edit storage.conf. Only the items below need changing; leave the rest at their defaults.
# whether this configuration file is disabled; false means it takes effect
disabled=false
# name of the group (volume) this storage server belongs to
group_name=group1
# storage server service port
port=23000
# heartbeat interval in seconds (heartbeats are sent actively to the tracker server)
heart_beat_interval=30
# storage data and log directory (the root directory must already exist; subdirectories are created automatically)
base_path=/ljzsg/fastdfs/storage
# a storage server supports multiple file storage paths; this sets how many
# base paths are configured (usually just one)
store_path_count=1
# configure store_path_count paths, indexed from 0
# if store_path0 is not set, it defaults to the base_path directory
store_path0=/ljzsg/fastdfs/file
# FastDFS stores files under two levels of subdirectories; this sets the number per level
# if this is N (e.g. 256), the storage server creates N * N file subdirectories under each store_path on first run
subdir_count_per_path=256
# list of tracker servers; the storage server actively connects to them
# with multiple tracker servers, write one per line
tracker_server=tracker1:22122
tracker_server=tracker2:22122
# time window during which synchronization is allowed (default: all day);
# usually used to avoid problems caused by syncing at peak hours
sync_start_time=00:00
sync_end_time=23:59
# HTTP access port
http.server_port=80
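The effect of subdir_count_per_path can be sketched: with N=256, a storage server creates 256*256 = 65536 leaf directories under each store_path on first run. The sketch below builds a miniature tree with N=4 to keep the demo fast; the two-level hex naming mirrors FastDFS's layout:

```shell
#!/bin/sh
# Sketch: how subdir_count_per_path drives directory creation.
N=4
root=$(mktemp -d)
i=0
while [ $i -lt $N ]; do
    j=0
    while [ $j -lt $N ]; do
        # hex-named two-level subdirectories, e.g. data/00/03
        mkdir -p "$root/data/$(printf '%02X' $i)/$(printf '%02X' $j)"
        j=$((j + 1))
    done
    i=$((i + 1))
done

created=$(find "$root/data" -mindepth 2 -maxdepth 2 -type d | wc -l)
echo "leaf directories: $created (expected $((N * N)))"
```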
③ Create the storage base data directory (base_path) and the configured store_path0 directory
# mkdir -p /ljzsg/fastdfs/storage
# mkdir -p /ljzsg/fastdfs/file
④ Open the storage port (default 23000) in the firewall
# vim /etc/sysconfig/iptables
Add the following rule:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 23000 -j ACCEPT
Restart the firewall:
# service iptables restart
⑤ Start the storage service
Make sure the tracker is running before starting storage. On the first successful start, data and logs directories are created under /ljzsg/fastdfs/storage.
It can be started like this:
# /etc/init.d/fdfs_storaged start
or like this (used from here on):
# service fdfs_storaged start
Check whether storage started successfully; if port 23000 is being listened on, it is up.
# netstat -unltp | grep fdfs
Stop the storage service with:
# service fdfs_storaged stop
Check whether storage and the tracker are communicating:
/usr/bin/fdfs_monitor /etc/fdfs/storage.conf
⑥ Start storage at boot
# chkconfig fdfs_storaged on
Or:
# vim /etc/rc.d/rc.local
and add the line:
/etc/init.d/fdfs_storaged start
⑦ Storage directories
As with the tracker, once storage starts successfully it creates data and logs directories under base_path, which record the storage server's state.
Under store_path0 it creates N*N subdirectories.
VI. File upload test
① Modify the client configuration file on a tracker server (either one)
# cd /etc/fdfs
# cp client.conf.sample client.conf
# vim client.conf
Change only the following; leave the rest at their defaults.
# client data and log directory
base_path=/ljzsg/fastdfs/client
# tracker server address and port
tracker_server=tracker1:22122
② Upload test
Run the following command on the server to upload the image namei.jpeg:
# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf namei.jpeg
On success it returns the file ID: group1/M00/00/00/wKgz6lnduTeAMdrcAAEoRmXZPp870.jpeg
The returned file ID is the concatenation of the group name, the storage path, two levels of subdirectories, the file id, and the file extension (supplied by the client, mainly to distinguish file types).
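The ID's structure can be taken apart with standard shell tools; a minimal sketch using the ID returned above:

```shell
#!/bin/sh
# Sketch: split a FastDFS file ID into its components.
file_id="group1/M00/00/00/wKgz6lnduTeAMdrcAAEoRmXZPp870.jpeg"

group=$(echo "$file_id" | cut -d/ -f1)        # group (volume) name
store_path=$(echo "$file_id" | cut -d/ -f2)   # M00 -> store_path0
subdir=$(echo "$file_id" | cut -d/ -f3-4)     # two levels of subdirectories
filename=$(echo "$file_id" | cut -d/ -f5)     # file id + extension
ext=${filename##*.}                           # extension chosen by the client

echo "group=$group path=$store_path subdir=$subdir ext=$ext"
```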
VII. Install Nginx
The file was uploaded successfully above, but it cannot be downloaded yet, so install Nginx as a web server to make the files accessible over HTTP. The FastDFS Nginx module installed later also needs an Nginx environment.
Nginx only needs to be installed on the storage servers, where it serves the files.
1. Install the nginx build dependencies
① gcc
# yum install gcc-c++
② PCRE
# yum install -y pcre pcre-devel
③ zlib
# yum install -y zlib zlib-devel
④ OpenSSL
# yum install -y openssl openssl-devel
2. Install Nginx
① Download nginx
# wget -c https://nginx.org/download/nginx-1.12.1.tar.gz
② Extract it
# tar -zxvf nginx-1.12.1.tar.gz
# cd nginx-1.12.1
③ Use the default configuration
# ./configure
④ Compile and install
# make
# make install
⑤ Start nginx
# cd /usr/local/nginx/sbin/
# ./nginx
Other commands:
# ./nginx -s stop
# ./nginx -s quit
# ./nginx -s reload
⑥ Start at boot
# vim /etc/rc.local
Add the line:
/usr/local/nginx/sbin/nginx
Set the execute permission:
# chmod 755 /etc/rc.local
⑦ Check the nginx version and compiled-in modules
# /usr/local/nginx/sbin/nginx -V
⑧ Open the nginx port (default 80) in the firewall
Once added, the machine can be reached on port 80.
# vim /etc/sysconfig/iptables
Add the following rule:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
Restart the firewall:
# service iptables restart
3. Access files
A simple test of file access:
① Edit nginx.conf
# vim /usr/local/nginx/conf/nginx.conf
Add the following block, mapping /group1/M00 to /ljzsg/fastdfs/file/data:
location /group1/M00 {
    alias /ljzsg/fastdfs/file/data;
}
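How the alias maps a request URI to a file on disk can be sketched in shell: nginx replaces the matched location prefix with the alias path (the URI below is illustrative):

```shell
#!/bin/sh
# Sketch: how nginx's "alias" maps a request URI to a file on disk.
uri="/group1/M00/00/00/wKgz6lnduTeAMdrcAAEoRmXZPp870.jpeg"
location="/group1/M00"
alias_path="/ljzsg/fastdfs/file/data"

# alias replaces the matched location prefix with the alias path
fs_path="$alias_path${uri#"$location"}"
echo "$fs_path"
```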
Reload nginx:
# /usr/local/nginx/sbin/nginx -s reload
② Open the previously uploaded image in a browser; it loads successfully.
http://10.32.4.14/group1/M00/00/00/CiAEDVs7DC-AAI2YAAAnompQzds934.png
http://10.32.4.13/group1/M00/00/00/CiAEDVs7DC-AAI2YAAAnompQzds934.png
VIII. Configure the FastDFS Nginx module
1. Install and configure the Nginx module (on both storage machines)
① What fastdfs-nginx-module does
FastDFS places files on storage servers via the tracker, but storage servers in the same group must replicate files to one another, which introduces synchronization delay.
Suppose the tracker uploads a file to 192.168.51.128 and the file ID has already been returned to the client.
FastDFS then replicates the file to the other group member, 192.168.51.129. If the client fetches the file by that ID from 192.168.51.129 before replication completes, the file cannot be accessed.
fastdfs-nginx-module redirects such requests to the source server, avoiding access errors caused by replication delay.
② Download and extract fastdfs-nginx-module
# the long commit hash is pinned because the latest master has compatibility problems with this nginx version
# wget https://github.com/happyfish100/fastdfs-nginx-module/archive/5e5f3566bbfa57418b5506aaefbe107a42c9fcb1.zip
# extract
# unzip 5e5f3566bbfa57418b5506aaefbe107a42c9fcb1.zip
# rename
# mv fastdfs-nginx-module-5e5f3566bbfa57418b5506aaefbe107a42c9fcb1 fastdfs-nginx-module-master
③ Configure Nginx
Add the module to nginx:
# stop the nginx service first
# /usr/local/nginx/sbin/nginx -s stop
# enter the nginx source directory
# cd /softpackages/nginx-1.12.1/
# add the module
# ./configure --add-module=../fastdfs-nginx-module-master/src
# recompile and install
# make && make install
④ Check nginx's modules
# /usr/local/nginx/sbin/nginx -V
If --add-module=../fastdfs-nginx-module-master/src appears among the configure arguments, the module was added successfully.
⑤ Copy the configuration file from the fastdfs-nginx-module source into /etc/fdfs and edit it
# cd /softpackages/fastdfs-nginx-module-master/src
# cp mod_fastdfs.conf /etc/fdfs/
Change the following; leave the rest at their defaults:
# connection timeout
connect_timeout=10
# tracker server
tracker_server=file.ljzsg.com:22122
# storage server default port
storage_server_port=23000
# set to true if the file ID URIs contain /group**
url_have_group_name = true
# store_path0 as configured on the storage server; must match storage.conf
store_path0=/ljzsg/fastdfs/file
⑥ Copy some of FastDFS's configuration files into /etc/fdfs
# cd /softpackages/fastdfs-5.05/conf/
# cp anti-steal.jpg http.conf mime.types /etc/fdfs/
⑦ Configure nginx: edit nginx.conf
# vim /usr/local/nginx/conf/nginx.conf
Change the following; leave the rest at their defaults.
Inside the server block listening on port 80, add the fastdfs-nginx module:
location ~/group([0-9])/M00 {
    ngx_fastdfs_module;
}
Notes:
The listen 80 value must match http.server_port=80 in /etc/fdfs/storage.conf (changed to 80 earlier). If you use another port, keep the two consistent and open that port in the firewall.
For the location: with multiple groups, use location ~/group([0-9])/M00; with a single group, the group part can be omitted.
⑧ Under the file storage directory /ljzsg/fastdfs/file, create a symlink to the directory that actually holds the data. This step can be skipped.
# ln -s /ljzsg/fastdfs/file/data/ /ljzsg/fastdfs/file/data/M00
⑨ Start nginx
# /usr/local/nginx/sbin/nginx
If it starts without errors, the configuration is working.
⑩ Access the file from the address bar.
If the file can be downloaded, the installation succeeded. Note that, unlike the plain nginx alias used in the previous section, the fastdfs-nginx-module can redirect the request to the source storage server.
http://file.ljzsg.com/group1/M00/00/00/wKgz6lnduTeAMdrcAAEoRmXZPp870.jpeg
Final deployment structure (diagram borrowed from elsewhere and omitted here): the environment can be built following the layout above.
IX. Tracker reverse-proxy configuration (needed on both trackers)
1. Download the required software:
ngx_cache_purge-2.3.tar.gz, nginx-1.13.12.tar.gz
2. Install the dependencies:
# yum install -y pcre pcre-devel
# yum install -y zlib zlib-devel
3. Extract nginx and ngx_cache_purge, then configure, compile, and install:
# ./configure --add-module=../ngx_cache_purge-2.3
# make && make install
4. Go to /usr/local/nginx/conf and edit nginx.conf as follows:
user root;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
    use epoll;
}
http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;
    # cache settings
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 300m;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    # cache storage path, layout, memory zone size, max disk space, expiry
    proxy_cache_path /fastdfs/cache/nginx/proxy_cache levels=1:2
                     keys_zone=http-cache:200m max_size=1g inactive=30d;
    proxy_temp_path /fastdfs/cache/nginx/proxy_cache/tmp;
    # servers for group1
    upstream fdfs_group1 {
        server 10.32.4.13 weight=1 max_fails=2 fail_timeout=30s;
        server 10.32.4.14 weight=1 max_fails=2 fail_timeout=30s;
    }
    # servers for group2
    #upstream fdfs_group2 {
    #    server 10.32.1.220 weight=1 max_fails=2 fail_timeout=30s;
    #    server 192.168.50.140:8888 weight=1 max_fails=2 fail_timeout=30s;
    #}
    server {
        listen 80;
        server_name localhost;
        #charset koi8-r;
        #access_log logs/host.access.log main;
        # load-balancing parameters for group1
        location /group1/M0 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group1;
            expires 30d;
        }
        #location /group2/M0 {
        #    proxy_next_upstream http_502 http_504 error timeout invalid_header;
        #    proxy_cache http-cache;
        #    proxy_cache_valid 200 304 12h;
        #    proxy_cache_key $uri$is_args$args;
        #    proxy_pass http://fdfs_group2;
        #    expires 30d;
        #}
        # access control for cache purging
        location ~/purge(/.*) {
            allow 127.0.0.1;
            allow 192.168.1.0/24;
            deny all;
            proxy_cache_purge http-cache $1$is_args$args;
        }
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Check that the configuration is correct:
# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
5. Create the nginx cache directories
# mkdir -p /fastdfs/cache/nginx/proxy_cache
# mkdir -p /fastdfs/cache/nginx/proxy_cache/tmp
6. Start nginx
# /usr/local/nginx/sbin/nginx
7. Test
http://10.32.4.11/group1/M00/00/00/CiAEDVs7DC-AAI2YAAAnompQzds934.png
http://10.32.4.12/group1/M00/00/00/CiAEDVs7DC-AAI2YAAAnompQzds934.png
The image loads through both trackers.
This completes the configuration; another nginx could also be placed in front as a reverse proxy.
X. Add another disk to the same group (on all storage servers)
To add a second disk to group1:
Edit /etc/fdfs/storage.conf:
store_path_count=2
store_path1=/data
Edit /etc/fdfs/mod_fastdfs.conf:
[group1]
group_name=group1
storage_server_port=23000
store_path_count=2
store_path0=/ljzsg/fastdfs/file
store_path1=/data
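Once two store paths are configured, the M-index in a file ID selects the disk: M00 is store_path0 and M01 is store_path1. A small sketch of the mapping, using the paths configured above:

```shell
#!/bin/sh
# Sketch: which store_path a file ID's M-index selects with two disks.
path_for() {
    case "$1" in
        M00) echo "/ljzsg/fastdfs/file" ;;   # store_path0
        M01) echo "/data" ;;                 # store_path1
        *)   echo "unknown" ;;
    esac
}

p0=$(path_for M00)
p1=$(path_for M01)
echo "M00 -> $p0"
echo "M01 -> $p1"
```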
Edit /usr/local/nginx/conf/nginx.conf, changing:
location ~/group([0-9])/M00 {
    ngx_fastdfs_module;
}
to:
location ~/group([0-9])/M0 {
    ngx_fastdfs_module;
}
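Why the location regex must be relaxed from M00 to M0 can be checked with grep -E, which stands in for nginx's regex matching here (for this simple pattern the syntaxes agree; the URIs are illustrative):

```shell
#!/bin/sh
# Sketch: with two store paths, file IDs may start with M00 or M01,
# and only the shorter prefix M0 matches both.
matches() { echo "$1" | grep -Eq "$2" && echo yes || echo no; }

old='/group([0-9])/M00'
new='/group([0-9])/M0'

m00_old=$(matches /group1/M00/00/00/a.png "$old")  # matched before the change
m01_old=$(matches /group1/M01/00/00/a.png "$old")  # NOT matched before
m01_new=$(matches /group1/M01/00/00/a.png "$new")  # matched after the change
echo "old vs M01: $m01_old, new vs M01: $m01_new"
```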
Restart the services (simply rebooting the machine also works). After the restart, verify with fdfs_monitor:
[root@storage1 conf]# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
[2018-07-03 15:47:48] DEBUG - base_path=/ljzsg/fastdfs/storage, connect_timeout=30, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
server_count=2, server_index=0
tracker server is 10.32.4.11:22122
group count: 1
Group 1:
group name = group1
disk total space = 2038502 MB
disk free space = 2036924 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 80
store path count = 2
subdir count per path = 256
current write server index = 0
current trunk file id = 0
Storage 1:
id = 10.32.4.13
ip_addr = 10.32.4.13 (storage1) ACTIVE
http domain =
version = 5.05
join time = 2018-07-03 10:59:42
up time = 2018-07-03 15:40:13
total storage = 2038502 MB
free storage = 2036924 MB
upload priority = 10
store_path_count = 2
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 80
current_write_path = 0
source storage id =
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 2
total_upload_count = 24
success_upload_count = 24
total_append_count = 0
success_append_count = 0
total_modify_count = 0
success_modify_count = 0
total_truncate_count = 0
Test:
Upload an image from a tracker:
# /usr/bin/fdfs_upload_file client.conf 6.png
group1/M00/00/00/CiAEDVs7KnSAAh7IAAAnompQzds035.png
# /usr/bin/fdfs_upload_file client.conf 6.png
group1/M01/00/00/CiAEDVs7KpeAVgusAAAnompQzds340.png
Access:
http://10.32.4.11/group1/M01/00/00/CiAEDls7KUOABJsiAAAnompQzds393.png
http://10.32.4.12/group1/M01/00/00/CiAEDls7KUOABJsiAAAnompQzds393.png
http://10.32.4.11/group1/M01/00/00/CiAEDVs7KpeAVgusAAAnompQzds340.png
http://10.32.4.12/group1/M01/00/00/CiAEDVs7KpeAVgusAAAnompQzds340.png
All of them open successfully.
XI. Configuration files
Tracker nginx configuration:
[root@tracker1 conf]# cat nginx.conf | grep -v '#' | tr -s '\n'
user root;
worker_processes 1;
events {
    worker_connections 1024;
    use epoll;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 300m;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    proxy_cache_path /fastdfs/cache/nginx/proxy_cache levels=1:2
                     keys_zone=http-cache:200m max_size=1g inactive=30d;
    proxy_temp_path /fastdfs/cache/nginx/proxy_cache/tmp;
    upstream fdfs_group1 {
        server 10.32.4.13 weight=1 max_fails=2 fail_timeout=30s;
        server 10.32.4.14 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 80;
        server_name localhost;
        location /group1/M0 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group1;
            expires 30d;
        }
        location ~/purge(/.*) {
            allow 127.0.0.1;
            allow 192.168.1.0/24;
            deny all;
            proxy_cache_purge http-cache $1$is_args$args;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Tracker configuration:
[root@tracker1 fdfs]# cat tracker.conf | grep -v ^# | tr -s '\n'
disabled=false
bind_addr=
port=22122
connect_timeout=30
network_timeout=60
base_path=/ljzsg/fastdfs/tracker
max_connections=256
accept_threads=1
work_threads=4
store_lookup=2
store_group=group2
store_server=0
store_path=0
download_server=0
reserved_storage_space = 10%
log_level=info
run_by_group=
run_by_user=
allow_hosts=*
sync_log_buff_interval = 10
check_active_interval = 120
thread_stack_size = 64KB
storage_ip_changed_auto_adjust = true
storage_sync_file_max_delay = 86400
storage_sync_file_max_time = 300
use_trunk_file = false
slot_min_size = 256
slot_max_size = 16MB
trunk_file_size = 64MB
trunk_create_file_advance = false
trunk_create_file_time_base = 02:00
trunk_create_file_interval = 86400
trunk_create_file_space_threshold = 20G
trunk_init_check_occupying = false
trunk_init_reload_from_binlog = false
trunk_compress_binlog_min_interval = 0
use_storage_id = false
storage_ids_filename = storage_ids.conf
id_type_in_filename = ip
store_slave_file_use_link = false
rotate_error_log = false
error_log_rotate_time=00:00
rotate_error_log_size = 0
log_file_keep_days = 0
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.server_port=80
http.check_alive_interval=30
http.check_alive_type=tcp
http.check_alive_uri=/status.html
Client configuration:
[root@tracker1 fdfs]# cat client.conf | grep -v ^# | tr -s '\n'
connect_timeout=30
network_timeout=60
base_path=/ljzsg/fastdfs/client
tracker_server=tracker1:22122
log_level=info
use_connection_pool = false
connection_pool_max_idle_time = 3600
load_fdfs_parameters_from_tracker=false
use_storage_id = false
storage_ids_filename = storage_ids.conf
http.tracker_server_port=80
Storage configuration:
# cat storage.conf |grep -v ^# |tr -s '\n'
disabled=false
group_name=group1
bind_addr=
client_bind=true
port=23000
connect_timeout=30
network_timeout=60
heart_beat_interval=30
stat_report_interval=60
base_path=/ljzsg/fastdfs/storage
max_connections=256
buff_size = 256KB
accept_threads=1
work_threads=4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
sync_wait_msec=50
sync_interval=0
sync_start_time=00:00
sync_end_time=23:59
write_mark_file_freq=500
store_path_count=2
store_path0=/ljzsg/fastdfs/file
store_path1=/data
subdir_count_per_path=256
tracker_server=tracker1:22122
tracker_server=tracker2:22122
log_level=info
run_by_group=
run_by_user=
allow_hosts=*
file_distribute_path_mode=0
file_distribute_rotate_count=100
fsync_after_written_bytes=0
sync_log_buff_interval=10
sync_binlog_buff_interval=10
sync_stat_file_interval=300
thread_stack_size=512KB
upload_priority=10
if_alias_prefix=
check_file_duplicate=0
file_signature_method=hash
key_namespace=FastDFS
keep_alive=0
use_access_log = false
rotate_access_log = false
access_log_rotate_time=00:00
rotate_error_log = false
error_log_rotate_time=00:00
rotate_access_log_size = 0
rotate_error_log_size = 0
log_file_keep_days = 0
file_sync_skip_invalid_record=false
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.domain_name=
http.server_port=80
# cat mod_fastdfs.conf |grep -v ^# |tr -s '\n'
connect_timeout=2
network_timeout=30
base_path=/tmp
load_fdfs_parameters_from_tracker=true
storage_sync_file_max_delay = 86400
use_storage_id = false
storage_ids_filename = storage_ids.conf
tracker_server=tracker1:22122
tracker_server=tracker2:22122
storage_server_port=23000
group_name=group1
url_have_group_name = true
store_path_count=1
store_path0=/ljzsg/fastdfs/file
log_level=info
log_filename=
response_mode=proxy
if_alias_prefix=
flv_support = true
flv_extension = flv
group_count = 1
[group1]
group_name=group1
storage_server_port=23000
store_path_count=2
store_path0=/ljzsg/fastdfs/file
store_path1=/data
Storage nginx configuration:
[root@storage1 conf]# cat nginx.conf | grep -v '#' | tr -s '\n'
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
location / {
root html;
index index.html index.htm;
}
location ~/group([0-9])/M0 {
ngx_fastdfs_module;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}