nginx chunked upload and the FastDFS file server

FastDFS distributed file system installation and configuration (verified working)

Configuring Nginx access for FastDFS

Installing FastDFS on CentOS 7

yum -y install wget vim gcc-c++ git pcre pcre-devel zlib zlib-devel openssl openssl-devel

mkdir -p /data/software
cd /data/software

wget -O libfastcommon-1.0.7.tar.gz https://codeload.github.com/happyfish100/libfastcommon/tar.gz/V1.0.7
wget -O fastdfs-5.05.tar.gz https://codeload.github.com/happyfish100/fastdfs/tar.gz/V5.05

tar -zxvf libfastcommon-1.0.7.tar.gz
tar -zxvf fastdfs-5.05.tar.gz

cd /data/software/libfastcommon-1.0.7
./make.sh
./make.sh install
ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so

cd /data/software/fastdfs-5.05
./make.sh
./make.sh install

cd /etc/fdfs
cp client.conf.sample client.conf
cp storage.conf.sample storage.conf
cp tracker.conf.sample tracker.conf
cp /data/software/fastdfs-5.05/conf/http.conf /etc/fdfs
cp /data/software/fastdfs-5.05/conf/mime.types /etc/fdfs

Configure and start the tracker

#Configure the tracker
mkdir -p /data/apps/fdfs/tracker
vim /etc/fdfs/tracker.conf
    disabled=false #enable this config file (enabled by default)
    port=22122 #tracker port; the default 22122 is normally used
    base_path=/data/apps/fdfs/tracker #tracker data and log directory
    http.server_port=6666 #HTTP port; the default is 8080
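The edits above can also be applied non-interactively instead of via vim. A minimal sketch with a hypothetical `set_conf` helper, demonstrated on a scratch file (the real target is /etc/fdfs/tracker.conf):

```shell
# set_conf FILE KEY VALUE -- hypothetical helper that rewrites an existing
# key=value line in a FastDFS-style conf file.
set_conf() {
    sed -i "s|^$2 *=.*|$2=$3|" "$1"
}

# Demonstrated on a scratch copy; point it at /etc/fdfs/tracker.conf for real.
conf=$(mktemp)
printf 'disabled=true\nport=22122\nbase_path=/tmp\nhttp.server_port=8080\n' > "$conf"
set_conf "$conf" disabled false
set_conf "$conf" base_path /data/apps/fdfs/tracker
set_conf "$conf" http.server_port 6666
```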

#Start the tracker service
ln -s /usr/bin/fdfs_trackerd /usr/local/bin
ln -s /usr/bin/stop.sh /usr/local/bin
ln -s /usr/bin/restart.sh /usr/local/bin

service fdfs_trackerd start  #start the service
#/usr/local/bin/fdfs_trackerd /etc/fdfs/tracker.conf start  #start with an explicit config file
If the start command succeeds, the tracker directory /data/apps/fdfs/tracker created earlier will now contain newly generated data and logs directories, and the tracker port should be listening. Verify with netstat: netstat -unltp|grep fdfs
The tracker's port 22122 should show as listening.

#Enable start on boot
echo 'service fdfs_trackerd start' >> /etc/rc.d/rc.local
#echo '/usr/local/bin/fdfs_trackerd /etc/fdfs/tracker.conf start' >> /etc/rc.d/rc.local
If the service does not start automatically after a reboot, check whether rc.local is executable with ll /etc/rc.d/rc.local; if it is not, grant execute permission with chmod +x /etc/rc.d/rc.local.
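The permission check above can be scripted. A sketch, demonstrated on a scratch file since the real target is /etc/rc.d/rc.local:

```shell
# rc.local is only run at boot if it is executable; check the bit and
# grant it only when missing (a scratch file stands in for /etc/rc.d/rc.local).
rc=$(mktemp)
printf '#!/bin/sh\nservice fdfs_trackerd start\n' > "$rc"
chmod -x "$rc"                  # simulate the usual failure: no execute bit
[ -x "$rc" ] || chmod +x "$rc"  # grant execute permission only if missing
```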

Configure and start the storage server

#Configure the storage server
mkdir -p /data/apps/fdfs/storage
mkdir -p /data/datas/fdfs
vim /etc/fdfs/storage.conf

    disabled=false #enable this config file (enabled by default)
    group_name=group1 #group name; adjust to your deployment
    port=23000 #storage port, 23000 by default; every storage in the same group must use the same port
    base_path=/data/apps/fdfs/storage #storage data and log directory
    store_path_count=2 #number of store paths; must match the number of store_pathN entries
    store_path0=/data/datas/fdfs/path0 #actual file storage path
    store_path1=/data/datas/fdfs/path1 #actual file storage path
    tracker_server=192.168.111.11:22122 #tracker server IP and port; on a single-machine setup do not use 127.0.0.1, or the storage will fail to start
    http.server_port=8888 #HTTP port
    
#Start the storage service
ln -s /usr/bin/fdfs_storaged /usr/local/bin
#startup takes a moment
mkdir -p /data/datas/fdfs/path0
mkdir -p /data/datas/fdfs/path1
service fdfs_storaged start  #start the service
#/usr/local/bin/fdfs_storaged /etc/fdfs/storage.conf start  #start with an explicit config file
If startup succeeds, /data/apps/fdfs/storage will contain newly generated data and logs directories, port 23000 should be listening, and a multi-level directory tree will have been created under each store path.
netstat -unltp|grep fdfs
If startup fails and port 23000 is not listening, open storaged.log under /data/apps/fdfs/storage/logs to check the error.

Under the data directory of /data/datas/fdfs/path0 and /data/datas/fdfs/path1 there are 256 first-level directories, each containing 256 second-level subdirectories, for 65536 leaf directories in total. A newly written file is routed by hash to one of these subdirectories and stored there directly as a local file.
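The fan-out can be illustrated at reduced scale. A sketch that builds a 4 x 4 tree instead of the real 256 x 256 (directory naming follows FastDFS's two-hex-digit convention):

```shell
# Reduced-scale sketch of the storage data directory fan-out:
# 4 x 4 here instead of FastDFS's 256 x 256 (= 65536 leaf directories).
base=$(mktemp -d)
n=4
i=0
while [ "$i" -lt "$n" ]; do
    j=0
    while [ "$j" -lt "$n" ]; do
        # each level is named with two hex digits: 00, 01, ..., FF
        mkdir -p "$base/$(printf '%02X' "$i")/$(printf '%02X' "$j")"
        j=$((j + 1))
    done
    i=$((i + 1))
done
total=$(find "$base" -mindepth 2 -maxdepth 2 -type d | wc -l)
echo "$total leaf directories"   # 16 at this scale; 65536 on a real store path
```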

#Enable start on boot
echo 'service fdfs_storaged start' >> /etc/rc.d/rc.local
#echo '/usr/local/bin/fdfs_storaged /etc/fdfs/storage.conf start' >> /etc/rc.d/rc.local
If the service does not start automatically after a reboot, check whether rc.local is executable with ll /etc/rc.d/rc.local; if not, grant execute permission with chmod +x /etc/rc.d/rc.local.

One remaining check: verify that the storage server has registered with the tracker (that is, that tracker and storage are wired together):
/usr/bin/fdfs_monitor /etc/fdfs/storage.conf
If the monitor output lists the storage server (e.g. with status ACTIVE), it has registered with the tracker successfully.

With that, the FastDFS configuration is complete, and we can test file upload and download with the client tools.

Initial test

For testing you need to set up the client configuration; edit /etc/fdfs/client.conf:

vim /etc/fdfs/client.conf
base_path=/data/apps/fdfs/tracker #client log directory (this setup reuses the tracker path)
tracker_server=192.168.229.181:22122 #tracker server IP and port
http.tracker_server_port=6666 #tracker HTTP port; must match the tracker's setting

With the client configured, simulate an upload. First place an image 1.jpg under /data/apps/,
then run the client upload command:
/usr/bin/fdfs_upload_file  /etc/fdfs/client.conf  /data/apps/1.jpg

It returns a path: group1/M00/00/00/CgLltVkgBHKAb6G7AABt6g_HL_o790.jpg
This means the upload succeeded. Once the file lands in a subdirectory it is considered stored,
and a file name is generated for it, composed of the group, the store path, the two-level subdirectories, the fileid, and the file extension (supplied by the client, mainly to distinguish file types).
The actual file can also be found under the storage server's configured store path via the returned path:
ll /data/datas/fdfs/path0/data/00/00/
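The returned file ID can be pulled apart with plain shell. A sketch (the field labels are my own, not FastDFS terminology):

```shell
# Split a FastDFS file ID into its parts (labels here are informal).
fid='group1/M00/00/00/CgLltVkgBHKAb6G7AABt6g_HL_o790.jpg'

IFS=/ read -r group store l1 l2 name <<EOF
$fid
EOF

echo "group:      $group"    # group1 -> storage group
echo "store path: $store"    # M00    -> virtual disk, i.e. store_path0
echo "subdirs:    $l1/$l2"   # 00/00  -> two-level hash directories
echo "file name:  $name"     # fileid plus client-supplied extension
```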

Next, try requesting the file over HTTP in a browser:
http://192.168.229.181:9999/group1/M00/00/00/CgLltVkgBHKAb6G7AABt6g_HL_o790.jpg
This fails, because FastDFS no longer supports HTTP itself. The FastDFS 4.0.5 release notes contain the relevant entry:

Built-in HTTP support was removed as of 4.0.5 (the embedded HTTP server was simple and could not provide load balancing or other production-grade features), so Yu Qing (the FastDFS author and a senior architect at Taobao) provides fastdfs-nginx-module, an nginx module for serving FastDFS files.

It is available from https://github.com/happyfish100/fastdfs-nginx-module. The biggest benefits are HTTP service plus a workaround for the replication delay between storage servers within a group.

Installing fastdfs-nginx-module to serve downloads with load balancing

See also: HttpServerNginx安装-withSSL

Install the nginx module for the storage server

1. Install build tools and libraries
(CentOS)
yum -y install make zlib zlib-devel gcc-c++ libtool  openssl openssl-devel

(Ubuntu)
apt-get install make zlib1g zlib1g-dev build-essential libtool openssl libssl-dev

2. Install PCRE
wget http://downloads.sourceforge.net/project/pcre/pcre/8.35/pcre-8.35.tar.gz
tar zxvf pcre-8.35.tar.gz

cd pcre-8.35
./configure
make && make install

pcre-config --version
3. Install Nginx
cd /data/software
wget http://nginx.org/download/nginx-1.10.2.tar.gz
git clone https://github.com/happyfish100/fastdfs-nginx-module.git
tar -xzvf nginx-1.10.2.tar.gz
cd nginx-1.10.2

./configure --prefix=/data/apps/nginx-storaged \
    --pid-path=/data/logs/nginx-storaged/nginx.pid \
    --lock-path=/data/apps/nginx-storaged/nginx.lock \
    --error-log-path=/data/logs/nginx-storaged/error.log \
    --http-log-path=/data/logs/nginx-storaged/access.log \
    --http-client-body-temp-path=/data/temps/nginx-storaged/client_body_temp \
    --http-proxy-temp-path=/data/temps/nginx-storaged/proxy_temp \
    --http-fastcgi-temp-path=/data/temps/nginx-storaged/fastcgi_temp \
    --http-uwsgi-temp-path=/data/temps/nginx-storaged/uwsgi_temp \
    --http-scgi-temp-path=/data/temps/nginx-storaged/scgi_temp \
    --with-http_stub_status_module \
    --add-module=/data/software/fastdfs-nginx-module/src

make
make install
Configure Nginx
mkdir -p /data/datas/fdfs/path0/data
mkdir -p /data/datas/fdfs/path1/data

Create the M00 and M01 symbolic links to the store directories:
ln -s /data/datas/fdfs/path0/data  /data/datas/fdfs/path0/data/M00
ln -s /data/datas/fdfs/path1/data  /data/datas/fdfs/path0/data/M01
Check with ll:
M00 -> /data/datas/fdfs/path0/data
M01 -> /data/datas/fdfs/path1/data
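The link layout can be rehearsed in a scratch directory before touching the real store paths. A sketch:

```shell
# Sketch of the M00/M01 symlink layout using scratch directories.
root=$(mktemp -d)
mkdir -p "$root/path0/data" "$root/path1/data"
# M00 points at store_path0's data dir, M01 at store_path1's; both links
# live under path0/data so one nginx root can serve both store paths.
ln -s "$root/path0/data" "$root/path0/data/M00"
ln -s "$root/path1/data" "$root/path0/data/M01"
# A file written to path1 is now reachable through path0/data/M01:
echo hello > "$root/path1/data/f.txt"
cat "$root/path0/data/M01/f.txt"    # prints "hello"
```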

Edit the nginx configuration: open nginx.conf in the conf directory and add the following:
vim /data/apps/nginx-storaged/conf/nginx.conf
        user  root;
        #scale to the number of CPU cores
        worker_processes  2;

        listen       9999;

        access_log  /data/logs/nginx-storaged/access.log;

        location ~/group1 {
            root /data/datas/fdfs/path0/data;
            ngx_fastdfs_module;
        }


Then copy http.conf and mime.types from the conf directory of the FastDFS source tree /data/software/fastdfs-5.05/conf into /etc/fdfs:
cp -r /data/software/fastdfs-5.05/conf/http.conf /etc/fdfs/
cp -r /data/software/fastdfs-5.05/conf/mime.types /etc/fdfs/
Also copy mod_fastdfs.conf from the src directory of fastdfs-nginx-module into /etc/fdfs:
cp -r /data/software/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/

vim /etc/fdfs/mod_fastdfs.conf

    base_path=/data/logs/fdstdfs-ng-module #log directory
    tracker_server=192.168.229.181:22122 #tracker server IP and port
    #tracker_server=192.168.10.82:22122
    #tracker_server=192.168.10.83:22122
    #tracker_server=192.168.10.84:22122
    #tracker_server=192.168.10.85:22122
    storage_server_port=23000 #storage server port
    url_have_group_name = true #whether the file URL contains the group name

    store_path_count=2 #number of store paths; must match the number of store_pathN entries
    store_path0=/data/datas/fdfs/path0 #actual file storage path
    store_path1=/data/datas/fdfs/path1 #actual file storage path


    group_count = 2 #number of groups (in fact only group1 is used here); the two group sections below must be appended at the end of the file

    [group1]
    group_name=group1
    storage_server_port=23000
    store_path_count=2
    store_path0=/data/datas/fdfs/path0
    store_path1=/data/datas/fdfs/path1

    [group2]
    group_name=group2
    storage_server_port=23000
    store_path_count=2
    store_path0=/data/datas/fdfs/path0
    store_path1=/data/datas/fdfs/path1
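A mismatch between store_path_count and the store_pathN entries is a common cause of startup failures and is easy to check. A sketch with a hypothetical `check_store_paths` helper (run it against /etc/fdfs/mod_fastdfs.conf or storage.conf in practice):

```shell
# Sanity check: store_path_count must equal the number of store_pathN lines.
check_store_paths() {
    declared=$(sed -n 's/^store_path_count *= *//p' "$1" | head -n1)
    actual=$(grep -c '^store_path[0-9]' "$1")
    [ "$declared" = "$actual" ]
}

# Demonstrated on a scratch file mirroring the settings above.
conf=$(mktemp)
cat > "$conf" <<'EOF'
store_path_count=2
store_path0=/data/datas/fdfs/path0
store_path1=/data/datas/fdfs/path1
EOF
check_store_paths "$conf" && echo "store_path settings consistent"
```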

#Start nginx
mkdir -pv /data/temps/nginx-storaged/client_body_temp
/data/apps/nginx-storaged/sbin/nginx
ps aux|grep nginx

#Enable start on boot
echo '/data/apps/nginx-storaged/sbin/nginx' >> /etc/rc.d/rc.local

The image is now accessible: http://192.168.229.181:9999/group1/M00/00/00/CgLltVkgBHKAb6G7AABt6g_HL_o790.jpg
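The mapping from download URL to on-disk file, which the nginx module performs internally, can be reproduced with shell parameter expansion. A sketch assuming the store paths configured above:

```shell
# Sketch: rebuild the on-disk path that a download URL resolves to,
# assuming the store paths configured earlier in this setup.
url='http://192.168.229.181:9999/group1/M00/00/00/CgLltVkgBHKAb6G7AABt6g_HL_o790.jpg'

rel=${url#*/group1/}    # M00/00/00/CgLltVkgBHKAb6G7AABt6g_HL_o790.jpg
store=${rel%%/*}        # M00 or M01
rest=${rel#*/}          # 00/00/CgLltVkgBHKAb6G7AABt6g_HL_o790.jpg
case $store in
    M00) base=/data/datas/fdfs/path0 ;;  # store_path0
    M01) base=/data/datas/fdfs/path1 ;;  # store_path1
esac
local_path="$base/data/$rest"
echo "$local_path"
# -> /data/datas/fdfs/path0/data/00/00/CgLltVkgBHKAb6G7AABt6g_HL_o790.jpg
```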

Verify the installation

mkdir -p /data/apps/fdfs/client
vim /etc/fdfs/client.conf

base_path=/data/apps/fdfs/client
tracker_server=192.168.229.181:22122

#tracker_server=192.168.10.82:22122 
#tracker_server=192.168.10.83:22122 
#tracker_server=192.168.10.84:22122
#tracker_server=192.168.10.85:22122

http.tracker_server_port=9999

Upload test:
fdfs_test1 /etc/fdfs/client.conf upload /data/apps/a.txt
or
#/usr/bin/fdfs_upload_file  /etc/fdfs/client.conf  /data/apps/b.txt

Access paths are returned:
http://192.168.229.181:9999/group1/M00/00/00/CgLltVkgDpmAODTUAAAAB35s_Lg232.txt
http://192.168.229.181:9999/group1/M00/00/00/CgLltVkgDpmAODTUAAAAB35s_Lg232_big.txt
and both can be fetched successfully.

Installing nginx on the tracker server

Likewise, install another nginx, named nginx-main, under /data/apps/nginx-main to act as the front-end load balancer.
Since the procedure is the same as above, it is not repeated here; install it the same way.

Edit nginx-main's configuration: open nginx.conf in the conf directory and add the following. The listen port can stay at the default 80. Point the upstream at the storage server's nginx address:

upstream fdfs_group1 {
     server 192.168.229.181:9999;
}

location /group1/M00 {
     proxy_pass http://fdfs_group1;
}

/data/apps/nginx-main/sbin/nginx
ps aux|grep nginx

Development environment configuration

192.168.10.181 NginxDownload_NginxFDFSModel_FDFS
192.168.10.182 NginxUpload_NginxFDFSModel_FDFS
192.168.10.191 NginxUploadModel_HttpServerUpload_HttpServerAuth_FFMPEG

Roles (flattened from the original topology sketch):
  192.168.10.181: nginx-download (download load balancer), nginx-storaged, FDFS
  192.168.10.182: nginx-upload-main (upload front end), nginx-storaged, FDFS
  192.168.10.191: authServer (HttpServerAuth, side-channel auth: validates the user),
                  checkServer (HttpServerUpload, checks whether the file's md5 already exists),
                  upload-nginx (HttpServerUpload, nginx upload module, chunked upload)

192.168.10.181, one of the two nginx-storaged instances:
cat /data/apps/nginx-storaged/conf/nginx.conf

user  root;
worker_processes  2;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    server {
        listen       9999;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;
        access_log  /data/logs/nginx-storaged/access.log;

        location ~/group1 {
            root /data/datas/fdfs/path0/data;
            ngx_fastdfs_module;
        }

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

192.168.10.182, the other of the two nginx-storaged instances:
cat /data/apps/nginx-storaged/conf/nginx.conf

user  root;
worker_processes  2;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    server {
        listen       9999;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;
        access_log  /data/logs/nginx-storaged/access.log;

        location ~/group1 {
            root /data/datas/fdfs/path0/data;
            ngx_fastdfs_module;
        }

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

192.168.10.181 also runs a front-end nginx-download that load-balances across the nginx-storaged instances on 181 and 182:
cat /data/apps/nginx-download/conf/nginx.conf

user  root;
worker_processes  2;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    upstream fdfs_group1 {
        server 192.168.10.181:9999;
        server 192.168.10.182:9999;
    }

    server {
        listen       80;
        server_name  localhost;

        location /group1 {
            proxy_pass http://fdfs_group1;
        }

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

    server {
        listen       443 ssl;
        server_name  localhost;
        ssl_certificate      /data/apps/nginx-download/xxxchat.crt;
        ssl_certificate_key  /data/apps/nginx-download/xxxchat.key;

        ssl_verify_depth 3;
        ssl_session_cache    shared:SSL:10m;
        ssl_session_timeout  10m;

        ssl_protocols SSLv2 SSLv3 TLSv1;
        ssl_ciphers ALL:!kEDH!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        ssl_prefer_server_ciphers  on;

        location /group1 {
            proxy_pass http://fdfs_group1;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

192.168.10.182, nginx-upload-main (upload front end):
cat /data/apps/nginx-upload-main/conf/nginx.conf

user  root;
worker_processes  2;

error_log  /data/logs/nginx-upload-main/error.log;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    access_log  /data/logs/nginx-upload-main/access.log;

    sendfile        on;

    keepalive_timeout  65;

    underscores_in_headers on;


    upstream auth-server {
        server 192.168.10.191:8080;
    }

    upstream upload-server {
        # route all requests of one user to the same upload node;
        # nginx exposes request headers as lowercase $http_ variables
        hash $http_x_liang_uid;
        server 192.168.10.191:8090;
    }

    upstream check-server {
        server 192.168.10.191:8980;
    }

    server {
        listen       80;
        server_name  localhost;

        client_max_body_size 100m;
        location ~/media/file/\S+/checksum {
                auth_request /auth;
                auth_request_set $auth_status $upstream_status;
                proxy_pass http://check-server;
        }

        location ~/media/file/\S+/upload {
                auth_request /auth;
                auth_request_set $auth_status $upstream_status;
                proxy_pass http://upload-server;
        }

        location = /auth {
                internal;
                proxy_pass http://auth-server;
                proxy_set_header Content-Length "";
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

192.168.10.191, nginx-upload, chunked file upload via the nginx upload module (the third-party nginx-upload-module directives below):
cat /data/apps/nginx-upload/conf/nginx.conf

user  root;
worker_processes  2;
error_log  /data/logs/nginx-upload/error.log;
pid        /data/logs/nginx-upload/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;


    server {
        client_max_body_size 100m;
        listen       8090;
        upload_resumable on;

        access_log  /data/logs/nginx-upload/access.log;

        location ~/media/file/\S+/upload {
                upload_pass @fileUpload;
                upload_state_store /data/temps/upload_temp/upload_state;
                upload_store /data/temps/upload_temp/upload_tempfile;
                upload_set_form_field "name" "$upload_file_name";
                upload_set_form_field "content_type" "$upload_content_type";
                upload_set_form_field "path" "$upload_tmp_path";
                upload_aggregate_form_field "md5" "$upload_file_md5";
                upload_aggregate_form_field "size" "$upload_file_size";
                upload_pass_form_field "^submit$|^description$";
                upload_cleanup 400 404 499 500-505;
                upload_pass_args on;
        }

        location @fileUpload {
                proxy_pass http://127.0.0.1:8980;
        }


        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Headers sent when uploading a file:
X-Liang-Token: "23sd2ji9f8ewry87"
X-Liang-Uid: "1001"
X-Liang-Client: "1.0.1;Android;4.1;zh-CN;China/Hunan;3G"
X-Liang-Md5: "md5"  (md5 of the file)
Liang: "320"  (image width)
X-Session-ID: 111231  (with several upload nginx instances, requests may land on different nodes; this ID must stay the same across all chunks of one upload)
X-Content-Range: bytes 0-7310/7311  (chunk range)
Content-Type: application/octet-stream
Content-Disposition: attachment;filename="test.jpg"
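The X-Content-Range values for each chunk follow directly from the file size. A sketch (the 7311-byte total comes from the example above; the chunk size is an arbitrary choice):

```shell
# Sketch: compute X-Content-Range headers for a resumable chunked upload.
# content_ranges TOTAL_SIZE CHUNK_SIZE -- hypothetical helper.
content_ranges() {
    size=$1 chunk=$2 off=0
    while [ "$off" -lt "$size" ]; do
        end=$((off + chunk - 1))
        [ "$end" -ge "$size" ] && end=$((size - 1))
        echo "X-Content-Range: bytes $off-$end/$size"
        off=$((end + 1))
    done
}

# 7311 bytes (the example above) in 4096-byte chunks:
content_ranges 7311 4096
# X-Content-Range: bytes 0-4095/7311
# X-Content-Range: bytes 4096-7310/7311
```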

Reference:
http://blog.csdn.net/wlwlwlwl015/article/details/52619851


Reposted from blog.csdn.net/yinjl123456/article/details/128807237