Automatically scanning Nginx POST/GET request logs into Kafka



        For an introduction to Kafka and its installation, see http://blog.csdn.net/hmsiwtv/article/details/46960053

It covers the basics in enough detail to follow along. This post focuses on logging requests with Nginx and scanning those logs into Kafka.

    1. Main Nginx configuration

log_format ht-video '$request|$msec|$http_x_forwarded_for|$http_user_agent|$request_body';
	server {
                server_name 192.168.50.42;

                location / {
                        root  html;
                        index index.html;
                }
                location /ht {
                        #root   html;
                        #index  index.html index.htm;
                        if ( $request ~ "GET" ) {
                                # must reference the log_format name defined above
                                access_log /htlog/ht-video.log ht-video;
                        }

                        client_max_body_size 8000M;
                        #error_page 405 =200 $1;
                }
        }

Then a GET request can be sent:  http://192.168.50.42:8080/ht/p?data=11211111111

Note: an ht directory must exist under Nginx's html directory, and it must contain a file named p with some arbitrary content, so that the request resolves to an actual file.
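With the configuration above in place, sample requests can be generated with curl (the payloads here are just illustrative; note that the if block above only attaches the access_log to GET requests, so an analogous condition would be needed to log POST bodies):

```shell
# GET request -- matched by the if ($request ~ "GET") block and written to the log
curl "http://192.168.50.42:8080/ht/p?data=11211111111"
# POST request -- its body would appear in $request_body if logging were enabled for POST
curl -X POST -d 'data=11211111111' "http://192.168.50.42:8080/ht/p"
```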

2. Edit the Kafka configuration file


vi /usr/local/kafka_2.11-0.9.0.1/config/server.properties
   Change the following five settings in this file:
broker.id=0
port=9092
host.name=192.168.50.42

log.dirs=/log
zookeeper.connect=192.168.50.42:2181
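Before starting the broker, it can help to confirm that all five settings were edited as intended (path as installed above):

```shell
# Print only the five lines changed in server.properties
grep -E '^(broker\.id|port|host\.name|log\.dirs|zookeeper\.connect)=' \
    /usr/local/kafka_2.11-0.9.0.1/config/server.properties
```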

3. Start ZooKeeper and Kafka


/usr/local/kafka_2.11-0.9.0.1/bin/zookeeper-server-start.sh -daemon /usr/local/kafka_2.11-0.9.0.1/config/zookeeper.properties
/usr/local/kafka_2.11-0.9.0.1/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.11-0.9.0.1/config/server.properties
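A quick sanity check that both daemons came up (assuming the default ports configured above, and a JDK that provides jps):

```shell
# ZooKeeper runs as QuorumPeerMain, the broker as Kafka
jps | grep -E 'QuorumPeerMain|Kafka'
# ZooKeeper listens on 2181, the Kafka broker on 9092
netstat -tln | grep -E ':(2181|9092)'
```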

4. Create a topic

Create a topic named test. With Kafka 0.9's defaults (auto.create.topics.enable=true), the topic is created automatically the first time it is produced to; it can also be created explicitly: bin/kafka-topics.sh --create --zookeeper 192.168.50.42:2181 --replication-factor 1 --partitions 1 --topic test

Produce messages to the topic: bin/kafka-console-producer.sh --broker-list 192.168.50.42:9092 --topic test

Consume the topic: bin/kafka-console-consumer.sh --zookeeper 192.168.50.42:2181 --topic test --from-beginning
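To verify the topic works before wiring up Nginx, a single test message can be pushed non-interactively by piping into the console producer (path as installed above); the consumer command above should then print it back:

```shell
# Push one test message into the topic without an interactive session
echo "hello kafka" | /usr/local/kafka_2.11-0.9.0.1/bin/kafka-console-producer.sh \
    --broker-list 192.168.50.42:9092 --topic test
```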


5. Scan the Nginx log and send it to Kafka

tail -n 0 -f /htlog/ht-video.log | /usr/local/kafka_2.11-0.9.0.1/bin/kafka-console-producer.sh --broker-list 192.168.50.42:9092 --topic test
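The foreground pipeline above dies with the terminal session. A minimal sketch for keeping it alive in the background (nohup is one option; a supervisor such as systemd would be more robust, and -F instead of -f follows the file across log rotation):

```shell
# Run the tail -> console-producer pipeline detached from the terminal
nohup tail -n 0 -F /htlog/ht-video.log | \
    /usr/local/kafka_2.11-0.9.0.1/bin/kafka-console-producer.sh \
        --broker-list 192.168.50.42:9092 --topic test > /dev/null 2>&1 &
```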

6. Consume the logs from a Java client

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public static List<String> getKafkaData() {
    List<String> list = new ArrayList<String>();
    Properties props = new Properties();
    props.put("zookeeper.connect", "192.168.50.42:2181");
    props.put("group.id", "group");
    props.put("zookeeper.session.timeout.ms", "40000");
    props.put("zookeeper.sync.time.ms", "200");
    props.put("auto.commit.interval.ms", "1000");
    // Without a timeout, it.hasNext() blocks forever and the method never returns;
    // give up after 5s of silence so the collected messages can be returned.
    props.put("consumer.timeout.ms", "5000");
    ConsumerConnector consumer =
            kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
    topicCountMap.put("test", 1); // one consumer stream for topic "test"
    Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
            consumer.createMessageStreams(topicCountMap);
    KafkaStream<byte[], byte[]> stream = consumerMap.get("test").get(0);
    ConsumerIterator<byte[], byte[]> it = stream.iterator();
    try {
        while (it.hasNext()) {
            String message = new String(it.next().message());
            System.out.println("receive:" + message);
            list.add(message);
        }
    } catch (ConsumerTimeoutException e) {
        // no new messages within consumer.timeout.ms -- stop consuming
    } finally {
        consumer.shutdown();
    }
    return list;
}
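Each message received this way is one pipe-delimited line in the log_format defined in section 1. A small helper can split a line back into its five fields (LogLineParser and parseLogLine are illustrative names, not part of the original code):

```java
import java.util.Arrays;

public class LogLineParser {

    // Splits one Nginx log line produced by the pipe-delimited format
    // '$request|$msec|$http_x_forwarded_for|$http_user_agent|$request_body'
    // into its five fields; the -1 limit keeps trailing empty fields.
    public static String[] parseLogLine(String line) {
        return line.split("\\|", -1);
    }

    public static void main(String[] args) {
        String sample = "GET /ht/p?data=11211111111 HTTP/1.0|1461900000.123|-|curl/7.29.0|-";
        String[] fields = parseLogLine(sample);
        System.out.println("request:    " + fields[0]);
        System.out.println("timestamp:  " + fields[1]);
        System.out.println("user agent: " + fields[3]);
        System.out.println(Arrays.toString(fields));
    }
}
```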

 

 


Reposted from 840327220.iteye.com/blog/2291270