Streaming live V4L2 camera data with live555

    In this project, the basic approach to streaming live V4L2 camera data with live555 is: capture frames from the camera with V4L2, encode them into H.264 with the x264 library, and write the encoded data into a named pipe (FIFO). live555 then reads the data back out of the pipe and sends it over the network, implementing live video streaming.
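    The two ends of the named pipe form a simple producer/consumer pair. Below is a minimal sketch of the producer side, assuming hypothetical helpers capture_yuyv_frame() and encode_to_h264() in place of the V4L2 and x264 calls that the actual project wraps in its Device class:

/* Minimal producer sketch (not the project's actual code). The helpers
 * capture_yuyv_frame() and encode_to_h264() are hypothetical stand-ins
 * for the V4L2 capture and x264 encoding done by the Device class. */
#include <stdio.h>
#include <sys/stat.h>

extern int capture_yuyv_frame(char *yuyv);                  /* V4L2 side */
extern int encode_to_h264(char *yuyv, int len, char *out);  /* x264 side */

int run_producer(void)
{
    static char yuyv[640 * 480 * 2];   /* YUYV: 2 bytes per pixel */
    static char h264[640 * 480 * 2];   /* generous output buffer  */

    mkfifo("/tmp/fifo", 0777);              /* harmless if it already exists */
    FILE *fifo = fopen("/tmp/fifo", "w");   /* blocks until a reader opens   */
    if (fifo == NULL)
        return -1;

    for (;;) {
        int raw = capture_yuyv_frame(yuyv);
        int enc = encode_to_h264(yuyv, raw, h264);
        if (enc > 0) {
            fwrite(h264, 1, enc, fifo);     /* live555 reads this end */
            fflush(fifo);
        }
    }
}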

    During development I used a Logitech C270 camera, which outputs frames in YUYV format. The latest versions of the x264 and live555 libraries are used. The project tree is as follows:

├── H264FramedLiveSource.cpp
├── H264VideoStreamer.cpp
├── include
│   ├── encoder
│   │   ├── encoder_define.hh
│   │   ├── H264FramedLiveSource.hh
│   │   └── stdint.h
│   ├── live555
│   │   ├── basicUsageEnvironment
│   │   ├── groupsock
│   │   ├── liveMedia
│   │   └── usageEnvironment
│   └── x264
│       ├── x264_config.h
│       └── x264.h
├── lib
│   ├── livelib
│   │   ├── libBasicUsageEnvironment.a
│   │   ├── libgroupsock.a
│   │   ├── libliveMedia.a
│   │   └── libUsageEnvironment.a
│   └── x264lib
│       ├── libx264.a
│       └── libx264.so.148
└── Makefile
    H264FramedLiveSource.cpp defines a Device class that implements V4L2 capture and x264 encoding. Its declaration (H264FramedLiveSource.hh) is as follows:

/*=============================================================================
 * #     FileName: H264FramedLiveSource.hh
 * #         Desc: 
 * #               
 * #       Author: licaibiao
 * #      Version: 
 * #   LastChange: 2017-02-24 
 * =============================================================================*/
#ifndef _H264FRAMEDLIVESOURCE_HH
#define _H264FRAMEDLIVESOURCE_HH
#include <FramedSource.hh>
#include <UsageEnvironment.hh>
#include "encoder_define.hh"

class Device
{
public:
    void init_mmap(void);
    void init_camera(void);
    void init_encoder(void);
    void open_camera(void);  
    void close_camera(void);
    void read_one_frame(void);
    void getnextframe(void);
    void start_capture(void);
    void stop_capture(void);
    void close_encoder(); 
    int  camera_able_read(void);
    void compress_begin(Encoder *en, int width, int height);
    int  compress_frame(Encoder *en, int type, char *in, int len, char *out);
    void compress_end(Encoder *en);
    void Init();
    void intoloop();
    void Destory();
public:
    int fd;
    FILE *save_fd;
    int n_nal;
    int frame_len;
    char *h264_buf;
    unsigned int n_buffer;
    Encoder en;
    FILE *h264_fp;
    BUFTYPE *usr_buf;
    FILE *pipe_fd;
};
#endif
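    The implementation file H264FramedLiveSource.cpp is not reproduced here. As a rough idea of how the pieces fit together, here is a plausible sketch of getnextframe() using the member names declared above; the real implementation may differ, and the BUFTYPE .start field is an assumption:

// Plausible sketch of Device::getnextframe(), NOT the project's actual code:
// dequeue one filled V4L2 buffer, encode it with x264 via compress_frame(),
// write the H.264 bytes into the FIFO, then requeue the buffer.
// Assumed headers: <string.h>, <sys/ioctl.h>, <linux/videodev2.h>.
void Device::getnextframe(void)
{
    if (!camera_able_read())        // e.g. select() on fd with a timeout
        return;

    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)   // take a filled buffer
        return;

    frame_len = compress_frame(&en, -1, (char *)usr_buf[buf.index].start,
                               buf.bytesused, h264_buf);
    if (frame_len > 0)
        fwrite(h264_buf, 1, frame_len, pipe_fd);  // feed the FIFO

    ioctl(fd, VIDIOC_QBUF, &buf);            // give the buffer back
}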
    H264VideoStreamer.cpp creates the RTSP server. Its implementation is as follows:

/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 3 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for
more details.

You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
**********/
// Copyright (c) 1996-2017, Live Networks, Inc.  All rights reserved
// A test program that reads a H.264 Elementary Stream video file
// and streams it using RTP
// main program
//
// NOTE: For this application to work, the H.264 Elementary Stream video file *must* contain SPS and PPS NAL units,
// ideally at or near the start of the file.  These SPS and PPS NAL units are used to specify 'configuration' information
// that is set in the output stream's SDP description (by the RTSP server that is built in to this application).
// Note also that - unlike some other "*Streamer" demo applications - the resulting stream can be received only using a
// RTSP client (such as "openRTSP")

#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include <H264FramedLiveSource.hh>
#include <sys/types.h>  
#include <sys/stat.h> 

UsageEnvironment* env;
char const* inputFileName = "/tmp/fifo";
char *ptr;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;
Device Camera; // global capture/encoder device; the forked child uses its copy to feed the FIFO

void play(); // forward

EventTriggerId DeviceSource::eventTriggerId = 0;

int main(int argc, char** argv) {
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Create 'groupsocks' for RTP and RTCP:
  struct in_addr destinationAddress;
  destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
  // Note: This is a multicast address.  If you wish instead to stream
  // using unicast, then you should use the "testOnDemandRTSPServer"
  // test program - not this test program - as a model.

  const unsigned short rtpPortNum = 18888;
  const unsigned short rtcpPortNum = rtpPortNum+1;
  const unsigned char ttl = 255;

  const Port rtpPort(rtpPortNum);
  const Port rtcpPort(rtcpPortNum);

  Camera.Init();

  // Create the FIFO that carries the H.264 elementary stream from the
  // capture process to live555 (harmless if it already exists):
  mkfifo(inputFileName, 0777);

  if (0 == fork()) {
    // Child process: open the write end of the FIFO (this blocks until the
    // parent opens the read end in play()), then keep feeding encoded frames.
    Camera.pipe_fd = fopen(inputFileName, "w");
    if (NULL == Camera.pipe_fd) {
      printf("=============== child process open pipe err =======\n");
      exit(1); // nothing useful the child can do without the pipe
    }
    while (1) {
      usleep(15000);
      Camera.getnextframe();
    }
  }

  Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
  rtpGroupsock.multicastSendOnly(); // we're a SSM source
  Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
  rtcpGroupsock.multicastSendOnly(); // we're a SSM source

  // Create a 'H264 Video RTP' sink from the RTP 'groupsock':
  OutPacketBuffer::maxSize = 600000;
  videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

  // Create (and start) a 'RTCP instance' for this RTP sink:
  const unsigned estimatedSessionBandwidth = 10000; // in kbps; for RTCP b/w share
  const unsigned maxCNAMElen = 100;
  unsigned char CNAME[maxCNAMElen+1];
  gethostname((char*)CNAME, maxCNAMElen);
  CNAME[maxCNAMElen] = '\0'; // just in case
  RTCPInstance* rtcp
  = RTCPInstance::createNew(*env, &rtcpGroupsock,
			    estimatedSessionBandwidth, CNAME,
			    videoSink, NULL /* we're a server */,
			    True /* we're a SSM source */);
  // Note: This starts RTCP running automatically

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, "testStream", inputFileName,
		   "Session streamed by \"testH264VideoStreamer\"",
					   True /*SSM*/);
  sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
  rtspServer->addServerMediaSession(sms);

  char* url = rtspServer->rtspURL(sms);
  *env << "Play this stream using the URL \"" << url << "\"\n";
  delete[] url;

  // Start the streaming:
  *env << "Beginning streaming...\n";
  play();

  env->taskScheduler().doEventLoop(); // does not return

  return 0; // only to prevent compiler warning
}

void afterPlaying(void* /*clientData*/) {
  *env << "...done reading from file\n";
  videoSink->stopPlaying();
  Medium::close(videoSource);
  Camera.Destory();
  // Note that this also closes the input file that this source read from.

  // Start playing once again:
  play();
}

void play() {
  // Open the input file as a 'byte-stream file source':
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(*env, inputFileName);
  if (fileSource == NULL) {
    *env << "Unable to open file \"" << inputFileName
         << "\" as a byte-stream file source\n";
    exit(1);
  }

  FramedSource* videoES = fileSource;

  // Create a framer for the Video Elementary Stream:
  videoSource = H264VideoStreamFramer::createNew(*env, videoES);

  // Finally, start playing:
  *env << "Beginning to read from file...\n";
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}
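    One practical detail from the NOTE at the top of H264VideoStreamer.cpp: H264VideoStreamFramer can only build the stream's SDP description if SPS and PPS NAL units actually appear in the byte stream. With x264 this is commonly guaranteed by repeating the headers at every keyframe. The following is only a sketch of such encoder setup; the project's init_encoder() may configure things differently:

#include <x264.h>

// Sketch of x264 setup that keeps SPS/PPS in the elementary stream so the
// live555 framer can pick them up. Not necessarily what init_encoder() does.
x264_t *open_encoder(int width, int height)
{
    x264_param_t param;
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_width          = width;
    param.i_height         = height;
    param.i_csp            = X264_CSP_I422; // 4:2:2 input, matching the
                                            // "profile High 4:2:2" run log
    param.b_repeat_headers = 1;   // emit SPS/PPS before every keyframe
    param.b_annexb         = 1;   // Annex-B start codes, as the framer expects
    return x264_encoder_open(&param);
}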
   Downloading, building, and installing the x264 and live555 libraries were covered in earlier posts, so they are not repeated here. Building and running the whole project looks like this:

[root@redhat pipelive]# ls
H264FramedLiveSource.cpp  H264FramedLiveSource.o  H264VideoStreamer  H264VideoStreamer.cpp  H264VideoStreamer.o  include  lib  Makefile
[root@redhat pipelive]# ./H264VideoStreamer

camera driver name is : uvcvideo
camera device name is : UVC Camera (046d:0825)
camera bus information: usb-0000:00:1a.0-1.1
n_buffer = 4
x264 [warning]: lookaheadless mb-tree requires intra refresh or infinite keyint
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
x264 [info]: profile High 4:2:2, level 3.0, 4:2:2 8-bit
Play this stream using the URL "rtsp://192.168.0.127:8554/testStream"
Beginning streaming...
Beginning to read from file...
    Opening the network stream rtsp://192.168.0.127:8554/testStream in the VLC player shows the camera feed:

     In actual testing, pushing the camera data through a pipe introduces some latency. With the picture size set to 320*240 the delay is small, but at 640*480 it becomes very noticeable. This is also related to the camera I used: a UVC camera reads data relatively slowly.
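    For reference, the capture resolution is negotiated with the driver through the VIDIOC_S_FMT ioctl; a minimal sketch is shown below (the project does this inside init_camera(), possibly with different values). The driver is free to adjust the requested fields, so they should be read back after the call:

#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>

/* Minimal sketch: request a given YUYV capture size from the driver. */
static int set_capture_size(int fd, int width, int height)
{
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = width;    /* e.g. 320 for lower latency */
    fmt.fmt.pix.height      = height;   /* e.g. 240                   */
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field       = V4L2_FIELD_ANY;
    return ioctl(fd, VIDIOC_S_FMT, &fmt);   /* 0 on success, -1 on error */
}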

    The complete project can be downloaded here: live555 streaming of live V4L2 camera data



   ================ Update 2018-03-16 ==================

   Because the project includes the x264 shared library file, one reader's antivirus software flagged the downloaded archive as a virus... and after it was reported, the download link was taken down...

   If you need the project code for reference, leave an email address in the comments and I will send it to you by email.

Reposted from blog.csdn.net/li_wen01/article/details/59523963