Installing the nginx-kafka plugin

Copyright notice: this is an original post by the author; when reposting, please include the link to this article: https://blog.csdn.net/qq_37933685/article/details/82284930

With this plugin, nginx can write request data directly to Kafka.

1. Install git

    yum install -y git

2. Change to the /usr/local/src directory and clone the Kafka C client (librdkafka) source

    cd /usr/local/src
    git clone https://github.com/edenhill/librdkafka

3. Enter the librdkafka directory and build it

    cd librdkafka
    yum install -y gcc gcc-c++ pcre-devel zlib-devel
    ./configure
    make && make install
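
If the build went through, the librdkafka shared libraries should now be under /usr/local/lib (the default install prefix); a quick sanity check:

    ls /usr/local/lib/librdkafka*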

4. Install the nginx-Kafka integration plugin: change to /usr/local/src and clone the ngx_kafka_module source

    cd /usr/local/src
    git clone https://github.com/brg-liuwei/ngx_kafka_module

5. Enter the nginx source package directory (compile nginx and build the plugin into it at the same time)

    cd /usr/local/src/nginx-1.12.2
    ./configure --add-module=/usr/local/src/ngx_kafka_module/
    make
    make install
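
Since no --prefix was given, nginx is installed to the default prefix /usr/local/nginx. To confirm the plugin was compiled in, you can print the configure arguments (the --add-module path should appear in the output):

    /usr/local/nginx/sbin/nginx -V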

6. Modify nginx's configuration file; the full nginx.conf is listed below


    #user  nobody;
    worker_processes  1;

    #error_log  logs/error.log;
    #error_log  logs/error.log  notice;
    #error_log  logs/error.log  info;

    #pid        logs/nginx.pid;


    events {
        worker_connections  1024;
    }


    http {
        include       mime.types;
        default_type  application/octet-stream;

        #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
        #                  '$status $body_bytes_sent "$http_referer" '
        #                  '"$http_user_agent" "$http_x_forwarded_for"';
        #access_log  logs/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        #keepalive_timeout  0;
        keepalive_timeout  65;
        #gzip  on;

        kafka;
        kafka_broker_list node-1.xiaoniu.com:9092 node-2.xiaoniu.com:9092 node-3.xiaoniu.com:9092;

        server {
            listen       80;
            server_name  node-6.xiaoniu.com;
            #charset koi8-r;
            #access_log  logs/host.access.log  main;

            location = /kafka/track {
                    kafka_topic track;
            }

            location = /kafka/user {
                    kafka_topic user;
            }

            #error_page  404              /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   html;
            }

        }

    }

The key additions are the kafka/kafka_broker_list directives in the http block and the location blocks with kafka_topic; the usage notes in brg-liuwei's ngx_kafka_module repository describe them.

7. Start the ZooKeeper and Kafka cluster, and create the topics (a topic-creation sketch follows the commands below)

    /bigdata/zookeeper-3.4.9/bin/zkServer.sh start
    /bigdata/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh -daemon /bigdata/kafka_2.11-0.10.2.1/config/server.properties
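
The commands above only start the services; the track and user topics referenced in nginx.conf still have to exist. A minimal sketch of creating them, assuming ZooKeeper listens on node-1.xiaoniu.com:2181 and that one partition with one replica is enough for testing (adjust to your cluster):

    # ZooKeeper address, partition and replica counts below are assumptions for a test setup
    /bigdata/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --create --zookeeper node-1.xiaoniu.com:2181 --replication-factor 1 --partitions 1 --topic track
    /bigdata/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --create --zookeeper node-1.xiaoniu.com:2181 --replication-factor 1 --partitions 1 --topic user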

8. Start nginx; it reports an error: the librdkafka.so.1 file cannot be found

    error while loading shared libraries: librdkafka.so.1: cannot open shared object file: No such file or directory

The cause is that librdkafka was installed into /usr/local/lib, which is not in the dynamic linker's default search path, so the shared library cannot be loaded.
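
You can confirm this by checking how the nginx binary resolves its shared libraries (the path assumes the default install prefix):

    # before the fix, librdkafka.so.1 should show up as "not found"
    ldd /usr/local/nginx/sbin/nginx | grep librdkafka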

9. Register the .so library with the dynamic linker

    echo "/usr/local/lib" >> /etc/ld.so.conf
    ldconfig
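
After running ldconfig, you can verify that the library is registered and then start nginx again (binary path assumes the default install prefix):

    ldconfig -p | grep librdkafka    # should now list librdkafka.so.1
    /usr/local/nginx/sbin/nginx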

10. Before testing, make sure nginx is running, the hosts can ping each other, and the required ports are open. Then run the test: write data to nginx and check whether a Kafka consumer can consume it (a console-consumer sketch follows the curl command below).

    curl localhost/kafka/track -d "message send to kafka topic"
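
To see whether the message actually reached Kafka, attach a console consumer to the track topic; a sketch using the same Kafka install path and one of the brokers from nginx.conf:

    /bigdata/kafka_2.11-0.10.2.1/bin/kafka-console-consumer.sh --bootstrap-server node-1.xiaoniu.com:9092 --topic track --from-beginning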
