FFmpeg Audio/Video Player Series (Part 2: Audio/Video Playback Synchronization)


In the previous installment we got basic audio and video playback working, but the two were not synchronized at all; it was just a fixed delay, painful to watch! To do playback synchronization properly, this article starts the analysis from the source and works out, step by step, how to achieve sync.

Basics of audio/video synchronization

Before solving playback synchronization, a few basic concepts need to be laid out.

Audio sampling, encoding, and playback

  • Sampling: the human hearing range is roughly 20 Hz to 20 kHz. By the Nyquist sampling theorem, to capture sound without distortion the sampling frequency must be at least twice the highest audible frequency, so 40 kHz would already suffice; to keep perceived quality from degrading, the industry generally samples at 44.1 kHz, i.e. 44,100 samples per second, and 48 kHz is also widely used.
  • Encoding: sampling is really the conversion of an analog signal into a digital one, and the digital values need a range; a sample can be stored in 1, 2, or 4 bytes. The industry generally uses 2 bytes (16 bits) per sample, a signed 16-bit integer covering -32768 to 32767, 65536 values in total.
    Sound also has channels, most commonly left and right; FFmpeg calls this the channel layout. Common layouts are:
    AV_CH_LAYOUT_STEREO: ordinary stereo, i.e. left/right
    AV_CH_LAYOUT_2POINT1: stereo plus a subwoofer, i.e. left/right + LFE
    AV_CH_LAYOUT_SURROUND: surround, i.e. left/right/front-center
    AV_CH_LAYOUT_5POINT1: surround + side left + side right + subwoofer
    For ordinary stereo CD audio (44,100 samples per second, 16 bits per sample, 2 channels), one second produces 44100 x 16 x 2 bits of data. This raw sound data is called PCM (Pulse Code Modulation); when stored, the channels are normally interleaved (left, right, left, right, ...).
    Describing PCM data takes these parameters: sample format (bit depth), sample rate, and channel count.
    PCM can also be stored in little-endian or big-endian byte order; little-endian is the common case.
    Storing raw PCM directly is wasteful: at CD quality one minute already produces roughly 10 MB of data, so PCM is encoded, and the purpose of encoding is compression.
    Here, briefly, are the key figures for the two common codecs, MP3 and AAC.
    MP3: one encoded frame normally covers 1152 samples, so one decoded frame is 1152 x 2 x 2 = 4608 bytes
    AAC: one encoded frame normally covers 1024 samples, so one decoded frame is 1024 x 2 x 2 = 4096 bytes
  • Playback: in theory, playing samples back at exactly the sampling rate reproduces the sound perfectly. But the data is encoded frame by frame and must be decoded frame by frame: one MP3 frame plays for 1152 / 44100 = 26.122449 ms and one AAC frame for 1024 / 44100 = 23.2199546 ms. No timing system is that precise, so some error is unavoidable; all we can do is hand data to the playback device as fast as it consumes it, because any longer delay is heard as a stutter. (These numbers are reproduced in the sketch after this list.)
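
The frame-timing figures above are simple arithmetic; the following minimal, self-contained sketch reproduces them (all constants are the ones quoted in the list above):

#include <stdio.h>

int main(void)
{
	const int sample_rate = 44100;	// CD sample rate
	const int mp3_samples = 1152;	// samples per MP3 frame
	const int aac_samples = 1024;	// samples per AAC frame

	// playback time of one encoded frame = samples per frame / sample rate
	printf("MP3 frame: %.6f ms\n", 1000.0 * mp3_samples / sample_rate);	// ~26.122449 ms
	printf("AAC frame: %.6f ms\n", 1000.0 * aac_samples / sample_rate);	// ~23.219955 ms

	// raw CD-quality PCM: 44100 samples/s x 2 bytes x 2 channels
	printf("PCM: %.2f MB per minute\n", 44100.0 * 2 * 2 * 60 / (1024 * 1024));	// ~10 MB
	return 0;
}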

Video sampling, encoding, and playback

  • Sampling: video is sampled by capturing complete images from an image sensor, in RGB or a YUV format. A single image is easily measured in megabytes, and to perceive smooth motion roughly 24 images must be captured and then displayed within each second, otherwise the eye sees stutter. Without compression the numbers explode: a 90-minute movie stored as raw RGB or YUV needs an enormous amount of space (the sketch after this list puts a figure on it).
  • Encoding: H264 is the most common format. H264 video involves I-frames, P-frames, B-frames, GOPs and so on; see my blog post on H264 frame format parsing for details.
  • Playback: the H264 stream is decoded and the frames are shown at the sampling rate fps, i.e. samples per second. At 24 fps one frame lasts 1/24 = 41.67 ms; other common rates are 25 fps (40 ms per frame) and 30 fps (33.3 ms per frame).
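
To make the storage claim concrete, here is a minimal sketch; the resolution is borrowed from the sample file analyzed below, while YUV420P (1.5 bytes per pixel) and 24 fps are assumptions for illustration:

#include <stdio.h>

int main(void)
{
	const double w = 1024, h = 768;	// resolution of the sample file used later
	const double bytes_per_pixel = 1.5;	// YUV420P stores 12 bits per pixel
	const double fps = 24, minutes = 90;

	double total = w * h * bytes_per_pixel * fps * 60 * minutes;
	printf("raw 90-minute movie: %.1f GB\n", total / (1024.0 * 1024 * 1024));	// ~142 GB
	return 0;
}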

Getting the media file's information

The basics above are the main factors in playback synchronization, so the first step is to pull the audio and video parameters out of the media file; synchronization is built on this information. How do we get it?

	AVFormatContext	*pFormatCtx;
	pFormatCtx = avformat_alloc_context();
	avformat_open_input(&pFormatCtx, filepath, NULL, NULL);
	avformat_find_stream_info(pFormatCtx, NULL);
	av_dump_format(pFormatCtx, 0, filepath, 0);
	//the following is what av_dump_format prints
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'bootloader.mp4':
Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41
    creation_time   : 2017-12-29T09:16:47.000000Z
Duration: 00:14:10.67, start: 0.000000, bitrate: 1128 kb/s
	Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1024x768, 808 kb/s, 8 fps, 8 tbr, 16 tbn, 16 tbc (default)
	Metadata:
      creation_time   : 2017-12-29T09:16:47.000000Z
      handler_name    : Alias Data Handler
      encoder         : AVC Coding
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default)
    Metadata:
      creation_time   : 2017-12-29T09:16:47.000000Z
      handler_name    : Alias Data Handler

We mainly care about the following:

  • File duration: Duration: 00:14:10.67. This lives in the duration member of AVFormatContext, which also exposes other fields such as bit_rate and packet_size.

  • Video stream: Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1024x768, 808 kb/s, 8 fps, 8 tbr, 16 tbn, 16 tbc (default)

  • Audio stream: Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default)

So where does all this data come from?
Once the AVFormatContext has been opened and populated from the media file, you can search its AVStream **streams member for the audio and video streams and read all of their parameters from there (a shortcut is sketched below).
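
The code later in this article finds the streams by looping over streams[] by hand; for reference, FFmpeg also provides av_find_best_stream(), which picks a stream directly. A minimal sketch, assuming pFormatCtx was opened as shown above:

// returns the stream index, or a negative error code if no such stream exists
int VideoIndex = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
int AudioIndex = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
if (VideoIndex < 0) printf("Didn't find a video stream.\n");
if (AudioIndex < 0) printf("Didn't find an audio stream.\n");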

Getting the audio information

Audio parameters come from struct AVCodecContext, which in the FFmpeg version used here is embedded in each AVStream (newer releases deprecate this codec field in favor of codecpar). After locating the audio stream in the AVStream **streams member of AVFormatContext, read the parameters as follows (collected in the sketch after this list):

  • Audio codec: pFormatCtx->streams[AudioIndex]->codec->codec_id, an enum value
  • Sample rate: pFormatCtx->streams[AudioIndex]->codec->sample_rate
  • Samples per encoded audio frame: pFormatCtx->streams[AudioIndex]->codec->frame_size
  • Channel count: pFormatCtx->streams[AudioIndex]->codec->channels
  • Sample format: pFormatCtx->streams[AudioIndex]->codec->sample_fmt, an enum value
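
Put together, a minimal sketch that dumps these fields (it uses the same deprecated codec member as the rest of this article):

AVCodecContext *ac = pFormatCtx->streams[AudioIndex]->codec;
printf("codec_id:%d sample_rate:%d frame_size:%d channels:%d sample_fmt:%d\n",
		ac->codec_id, ac->sample_rate, ac->frame_size, ac->channels, ac->sample_fmt);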

Getting the video information

Video works the same way: after locating the video stream in the AVStream **streams member of AVFormatContext, read:

  • Video codec: pFormatCtx->streams[VideoIndex]->codec->codec_id, an enum value
  • Resolution: pFormatCtx->streams[VideoIndex]->codec->width / height
  • Frame rate: pFormatCtx->streams[VideoIndex]->codec->framerate, a variable of type AVRational. This structure represents a fraction: the num member is the numerator and den the denominator. It comes up repeatedly below (see the conversion sketch after this list).
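
av_q2d() converts an AVRational to a double (num / den); a minimal sketch that derives the fps and the per-frame interval from it, guarding against streams that report no frame rate:

AVRational fr = pFormatCtx->streams[VideoIndex]->codec->framerate;
if (fr.num > 0 && fr.den > 0)
{
	double fps = av_q2d(fr);	// e.g. 25/1 -> 25.0
	printf("fps:%.2f, one frame every %.2f ms\n", fps, 1000.0 / fps);
}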

Analyzing synchronization

With the audio and video information extracted from the media file through the steps above, we can decode and play. In theory, if audio and video are each played strictly on their own schedule, they should be in sync by themselves. Suppose a file's audio stream is AAC, 2 channels, 16-bit, sampled at 44.1 kHz, and its video stream is H264 at 25 fps; the ideal playback timeline is (times in ms):

Timeline (ms):   0     23.2  40    46.4  69.6  80    92.8  116   120
Audio points:    0     23.2        46.4  69.6        92.8  116
Video points:    0           40                80                120

In theory, playing audio and video at exactly these points keeps them synchronized. In practice, each of them goes through three steps (decoding, resampling, playback), every step takes a different amount of time, and precise timing is impossible.

This gives rise to three standard synchronization strategies:

  • Use audio as the master clock and sync video to it
  • Use video as the master clock and sync audio to it
  • Use an external reference clock and sync both audio and video to it

I actually lean toward the theoretical approach: let audio and video each play on their own, without interfering with each other. Of the two, human hearing is far more sensitive; even a tiny pause in audio is audible. Vision is different, because of persistence of vision.
Following that approach, the audio path does no extra timing work: we feed the hardware data as fast as it asks for it. And because an audio frame's decoding timestamp (DTS) and presentation timestamp (PTS) are always identical, the packets are simply decoded and played in order.
Video is different. H264 video has I-, P-, and B-frames, and when B-frames are present the decode order can differ from the display order. So the video is first decoded in decode order, and each decoded frame is then displayed at the right moment relative to the audio, namely the time its PTS encodes. Since timing cannot be exact, a frame arriving a little early or late is imperceptible, as long as the error stays below the persistence-of-vision threshold and, crucially, does not accumulate. This is in fact exactly the "sync video to audio" strategy (a sketch follows below).
As this analysis shows, synchronization is not a one-shot operation; it happens continuously, all the way to the end of playback.
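
A minimal sketch of that per-frame decision, assuming the audio thread keeps a shared audio_clock (in seconds) up to date; the 10 ms and 40 ms thresholds are illustrative assumptions, not values taken from this article's code:

#include <stdint.h>
#include <unistd.h>
#include <libavutil/rational.h>

// decide when (or whether) to display one decoded video frame
static void sync_video_frame(int64_t pts, AVRational time_base, double audio_clock)
{
	double video_time = pts * av_q2d(time_base);	// PTS -> seconds
	double diff = video_time - audio_clock;	// >0: frame is early, <0: late
	if (diff < -0.040)
		return;	// far too late: drop the frame instead of showing it
	if (diff > 0.010)
		usleep((useconds_t)(diff * 1e6));	// early: wait for the audio to catch up
	// display the frame here; the error never accumulates because every
	// frame re-reads the live audio clock
}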

About DTS and PTS:

  • DTS (Decoding Time Stamp): the time at which the packet should be decoded.
  • PTS (Presentation Time Stamp): the time at which the packet's decoded data should be displayed.
  • The unit of DTS and PTS is given per stream by the time_base member, an AVRational; the actual time is the timestamp multiplied by the unit of time that time_base represents.

So how do we obtain the audio and video DTS and PTS?
av_read_frame(pFormatCtx, Packet) reads one AVPacket, and this structure carries the DTS and PTS of each frame (see the sketch below).
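
A minimal sketch that reads packets and converts their timestamps to seconds with av_q2d(), assuming pFormatCtx is already open (note that dts/pts may be AV_NOPTS_VALUE, which this sketch does not special-case):

AVPacket pkt;
while (av_read_frame(pFormatCtx, &pkt) >= 0)
{
	AVRational tb = pFormatCtx->streams[pkt.stream_index]->time_base;
	printf("stream:%d dts:%.3fs pts:%.3fs\n", pkt.stream_index,
			pkt.dts * av_q2d(tb), pkt.pts * av_q2d(tb));
	av_packet_unref(&pkt);	// the packet is not needed after printing
}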

Audio DTS and PTS

Audio plays strictly in order, so its DTS and PTS are identical.

printf("stream audio time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d, duration:%ld\n",
		pFormatCtx->streams[AudioIndex]->time_base.num,
		pFormatCtx->streams[AudioIndex]->time_base.den,
		pFormatCtx->streams[AudioIndex]->avg_frame_rate.num,
		pFormatCtx->streams[AudioIndex]->avg_frame_rate.den,
		pFormatCtx->streams[AudioIndex]->duration);
//output: stream audio time_base.num:1, time_base.den:48000, avg_frame_rate.num:0, avg_frame_rate.den:0, duration:40830000
av_read_frame(pFormatCtx, Packet);
avcodec_decode_audio4( pAudioCodecCtx, pAudioFrame,&GotAudioPicture, Packet);
printf("Audio index:%5d\t dts:%ld\t pts:%ld\t packet size:%d, pFrame->nb_samples:%d\n",
		audioCnt, Packet->dts, Packet->pts, Packet->size, pAudioFrame->nb_samples);
//Audio index:    0	 dts:0	 pts:0	 packet size:847, pFrame->nb_samples:1024
//Audio index:    1	 dts:1024	 pts:1024	 packet size:846, pFrame->nb_samples:1024
//Audio index:    2	 dts:2048	 pts:2048	 packet size:846, pFrame->nb_samples:1024
//Audio index:    3	 dts:3072	 pts:3072	 packet size:847, pFrame->nb_samples:1024
//Audio index:    4	 dts:4096	 pts:4096	 packet size:846, pFrame->nb_samples:1024
//Audio index:    5	 dts:5120	 pts:5120	 packet size:846, pFrame->nb_samples:1024
		
  • Time unit: time_base is an AVRational; the output shows the unit is 1 / 48000, so DTS x (1 / 48000) is the decoding time and PTS x (1 / 48000) the presentation time.
  • Audio has no concept of frame rate, so avg_frame_rate is 0.
  • duration is the length of the audio stream: duration x (1 / 48000) = 40830000 x (1 / 48000) = 850.625 s = 14 min 10.6 s, which matches the file duration Duration: 00:14:10.67 printed when we got the file information.
  • Reading one audio frame with av_read_frame yields the information shown above (the audio clock sketched below is built from it).
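
Because the time base is 1 / 48000 here, the PTS is counted directly in samples. A minimal sketch of an audio clock maintained from decoded frames (a fragment, assuming the player's usual FFmpeg headers; the fallback increment for frames without a PTS is an assumption, not code from this article):

static double audio_clock = 0.0;	// seconds of audio handed to the device so far

static void on_audio_frame(const AVFrame *frame, AVRational time_base)
{
	if (frame->pts != AV_NOPTS_VALUE)
		audio_clock = frame->pts * av_q2d(time_base);	// resync from the stream PTS
	else
		audio_clock += (double)frame->nb_samples / frame->sample_rate;	// advance one frame
}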

Video DTS and PTS

B-frames are bi-directionally predicted: they depend on frames both before and after them. A video containing B-frames therefore decodes in a different order than it displays, i.e. its DTS and PTS differ; without B-frames, DTS and PTS are identical.

printf("stream video time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d, duration:%ld\n",
		pFormatCtx->streams[VideoIndex]->time_base.num,
		pFormatCtx->streams[VideoIndex]->time_base.den,
		pFormatCtx->streams[VideoIndex]->avg_frame_rate.num,
		pFormatCtx->streams[VideoIndex]->avg_frame_rate.den,
		pFormatCtx->streams[VideoIndex]->duration);
//output: stream video time_base.num:1, time_base.den:16, avg_frame_rate.num:8, avg_frame_rate.den:1, duration:13610

av_read_frame(pFormatCtx, Packet);
printf("Video index:%5d\t dts:%ld\t, pts:%ld\t packet size:%d\n",
		videoCnt, Packet->dts, Packet->pts, Packet->size);
//Video index:    0	 dts:-2	, pts:0	 packet size:91041
//Video index:    1	 dts:0	, pts:8	 packet size:191
//Video index:    2	 dts:2	, pts:2	 packet size:103
//Video index:    3	 dts:4	, pts:4	 packet size:103
//Video index:    4	 dts:6	, pts:6	 packet size:103
  • Time unit: time_base is an AVRational; the output shows the unit is 1 / 16, so DTS x (1 / 16) is the decoding time and PTS x (1 / 16) the presentation time.
  • The average frame rate is avg_frame_rate.num / avg_frame_rate.den = 8 / 1 = 8 fps, so during playback one frame is shown every 125 ms.
  • duration is the length of the video stream: duration x (1 / 16) = 13610 x (1 / 16) = 850.625 s = 14 min 10.6 s, again consistent with the file duration Duration: 00:14:10.67 printed when we got the file information. Note also the first packets above (dts:-2 pts:0, then dts:0 pts:8): decode order and display order clearly differ, so a player needs a fallback for missing or odd timestamps (sketched below).
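
When a decoded frame's PTS comes back as AV_NOPTS_VALUE, a common fallback is FFmpeg's best_effort_timestamp. This is a minimal sketch of typical handling (an assumption about common practice, not code from this article), using the pVideoFrame and VideoIndex variables from the player below:

// pick a usable display timestamp for a decoded video frame
int64_t ts = pVideoFrame->pts;
if (ts == AV_NOPTS_VALUE)
	ts = pVideoFrame->best_effort_timestamp;	// FFmpeg's best guess from pts/dts
double display_time = ts * av_q2d(pFormatCtx->streams[VideoIndex]->time_base);	// seconds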

Implementing synchronization

Everything needed for synchronized playback has now been collected; how do we implement it? Naturally we need multiple threads, since all of this cannot happen in a single thread:

  • Main thread: reads the media file's information, prepares the codec contexts, and in its main loop reads the file's audio and video packets into their respective queues to await decoding
  • Video thread: takes a video frame from the queue in DTS order, decodes and converts it, and, on each video refresh signal, renders the decoded frame to the screen with SDL
  • Audio thread: takes an audio frame from the queue in DTS order, decodes and resamples it, and plays the resampled data via the audio callback as promptly and smoothly as possible
  • Video refresh thread: from the video stream's information, chiefly the frame rate, it computes how long one frame lasts, and on that interval periodically sends the video refresh signal to the video thread
  • SDL event thread: watches for pause, quit, and custom events, and implements simple pause, resume, and quit handling for the SDL GUI

As this overview shows, I did not explicitly force the video to sync to the audio; each side plays at its own pace, and it seems to hold up reasonably well. Here is the code.

/*
 * ffmpeg_sdl2_avpalyer.cpp
 *
 *  Created on: Apr 4, 2019
 *      Author: luke
 *      Audio/video playback synchronization
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define __STDC_CONSTANT_MACROS

#ifdef __cplusplus
extern "C"
{
#endif
#include <libavutil/time.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#include <SDL2/SDL.h>

#include <errno.h>

#include <unistd.h>
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

#ifdef __cplusplus
};
#endif


#define MAX_AUDIO_FRAME_SIZE 192000 // 1 second of 48khz 32bit audio

#define PACKET_ARRAY_SIZE			(60)
typedef struct __PacketStruct
{
	AVPacket Packet;
	int64_t dts;
	int64_t pts;
	int state;
}PacketStruct;

typedef struct
{
	unsigned int rIndex;
	unsigned int wIndex;
	PacketStruct PacketArray[PACKET_ARRAY_SIZE];
}PacketArrayStruct;

typedef struct __AudioCtrlStruct
{
	AVFormatContext	*pFormatCtx;
	AVStream 		*pStream;
	AVCodec			*pCodec;
	AVCodecContext	*pCodecCtx;
	SwrContext 		*pConvertCtx;

	Uint8  	*audio_chunk;
	Sint32  audio_len;
	Uint8  	*audio_pos;
	int 	AudioIndex;
	int 	AudioCnt;
	uint64_t AudioOutChannelLayout;
	int out_nb_samples;				//nb_samples: AAC-1024 MP3-1152
	AVSampleFormat out_sample_fmt;
	int out_sample_rate;
	int out_channels;
	int out_buffer_size;
	unsigned char* pAudioOutBuffer;

	sem_t frame_put;
	sem_t frame_get;

	PacketArrayStruct 	Audio;
}AudioCtrlStruct;


typedef struct __VideoCtrlStruct
{
	AVFormatContext	*pFormatCtx;
	AVStream 		*pStream;
	AVCodec			*pCodec;
	AVCodecContext	*pCodecCtx;
	SwsContext 		*pConvertCtx;
	AVFrame			*pVideoFrame, *pFrameYUV;
	unsigned char 	*pVideoOutBuffer;
	int 			VideoIndex;
	int 			VideoCnt;
	int 			RefreshTime;
	int screen_w,screen_h;
	SDL_Window *screen;
	SDL_Renderer* sdlRenderer;
	SDL_Texture* sdlTexture;
	SDL_Rect sdlRect;
	SDL_Thread *video_tid;

	sem_t frame_put;
	sem_t video_refresh;
	PacketArrayStruct Video;
}VideoCtrlStruct;


//Refresh Event
#define SFM_REFRESH_VIDEO_EVENT  	(SDL_USEREVENT + 1)
#define SFM_REFRESH_AUDIO_EVENT  	(SDL_USEREVENT + 2)
#define SFM_BREAK_EVENT  			(SDL_USEREVENT + 3)

int thread_exit = 0;
int thread_pause = 0;

VideoCtrlStruct VideoCtrl;
AudioCtrlStruct AudioCtrl;
//video time_base.num:1, time_base.den:16, avg_frame_rate.num:8, avg_frame_rate.den:1
//audio time_base.num:1, time_base.den:48000, avg_frame_rate.num:0, avg_frame_rate.den:0
int IsPacketArrayFull(PacketArrayStruct* p)
{
	int i = 0;
	i = p->wIndex % PACKET_ARRAY_SIZE;
	if(p->PacketArray[i].state != 0) return 1;

	return 0;
}

int IsPacketArrayEmpty(PacketArrayStruct* p)
{
	int i = 0;
	i = p->rIndex % PACKET_ARRAY_SIZE;
	if(p->PacketArray[i].state == 0) return 1;

	return 0;
}

int SDL_event_thread(void *opaque)
{
	SDL_Event event;

	while(1)
	{
		SDL_WaitEvent(&event);
		if(event.type == SDL_KEYDOWN)
		{
			//Pause
			if(event.key.keysym.sym == SDLK_SPACE)
			{
				thread_pause = !thread_pause;
				printf("video got pause event!\n");
			}
		}
		else if(event.type == SDL_QUIT)
		{
			thread_exit = 1;
			printf("------------------------------>video got SDL_QUIT event!\n");
			break;
		}
		else if(event.type == SFM_BREAK_EVENT)
		{
			break;
		}
	}

	printf("---------> SDL_event_thread end !!!! \n");
	return 0;
}

int video_refresh_thread(void *opaque)
{
	while (1)
	{
		if(thread_exit) break;
		if(thread_pause)
		{
			SDL_Delay(40);
			continue;
		}
		usleep(VideoCtrl.RefreshTime);
		sem_post(&VideoCtrl.video_refresh);
	}
	printf("---------> video_refresh_thread end !!!! \n");
	return 0;
}

static void *thread_audio(void *arg)
{
	AVCodecContext	*pAudioCodecCtx;
	AVFrame			*pAudioFrame;
	unsigned char 	*pAudioOutBuffer;
	AVPacket 		*Packet;
	int 			i, ret, GotAudioPicture;
	struct SwrContext *AudioConvertCtx;

	AudioCtrlStruct* AudioCtrl = (AudioCtrlStruct*)arg;
	pAudioCodecCtx = AudioCtrl->pCodecCtx;
	pAudioOutBuffer = AudioCtrl->pAudioOutBuffer;
	AudioConvertCtx = AudioCtrl->pConvertCtx;
	printf("---------> thread_audio start !!!! \n");
	pAudioFrame = av_frame_alloc();
	while(1)
	{
		if(thread_exit) break;
		if(thread_pause)
		{
			usleep(10000);
			continue;
		}
		//sem_wait(&AudioCtrl->frame_put);
		if(IsPacketArrayEmpty(&AudioCtrl->Audio))
		{
			SDL_Delay(1);
			printf("---------> thread_audio empty !!!! \n");
			continue;
		}
		i = AudioCtrl->Audio.rIndex;
		Packet = &AudioCtrl->Audio.PacketArray[i].Packet;

		if(Packet->stream_index == AudioCtrl->AudioIndex)
		{
			ret = avcodec_decode_audio4( pAudioCodecCtx, pAudioFrame, &GotAudioPicture, Packet);
			if ( ret < 0 )
			{
				printf("Error in decoding audio frame.\n");
				return 0;
			}
			if ( GotAudioPicture > 0 )
			{
				swr_convert(AudioConvertCtx,&pAudioOutBuffer, MAX_AUDIO_FRAME_SIZE,
						(const uint8_t **)pAudioFrame->data , pAudioFrame->nb_samples);
				//printf("Auduo index:%5d\t pts:%ld\t packet size:%d, pFrame->nb_samples:%d\n",
				//		AudioCtrl->AudioCnt, Packet->pts, Packet->size, pAudioFrame->nb_samples);

				AudioCtrl->AudioCnt++;
			}

			while(AudioCtrl->audio_len > 0)//Wait until finish
				SDL_Delay(1);

			//Set audio buffer (PCM data)
			AudioCtrl->audio_chunk = (Uint8 *) pAudioOutBuffer;
			AudioCtrl->audio_pos = AudioCtrl->audio_chunk;
			AudioCtrl->audio_len = AudioCtrl->out_buffer_size;

			//sem_post(&AudioCtrl->frame_get);
			av_packet_unref(Packet);

			AudioCtrl->Audio.PacketArray[i].state = 0;
			i++;
			if(i >= PACKET_ARRAY_SIZE) i = 0;
			AudioCtrl->Audio.rIndex = i;
		}
	}

	printf("---------> thread_audio end !!!! \n");
	return 0;
}

static void *thread_video(void *arg)
{
	AVCodecContext	*pVideoCodecCtx;
	AVFrame			*pVideoFrame,*pFrameYUV;
	AVPacket 		*Packet;
	int 			i, ret, GotPicture;
	struct SwsContext *VideoConvertCtx;

	VideoCtrlStruct* VideoCtrl = (VideoCtrlStruct*)arg;
	pVideoCodecCtx = VideoCtrl->pCodecCtx;
	VideoConvertCtx = VideoCtrl->pConvertCtx;
	pVideoFrame = VideoCtrl->pVideoFrame;
	pFrameYUV   = VideoCtrl->pFrameYUV;
	printf("---------> thread_video start !!!! \n");
	while(1)
	{
		if(thread_exit) break;
		//sem_wait(&VideoCtrl->frame_put);
		if(IsPacketArrayEmpty(&VideoCtrl->Video))
		{
			SDL_Delay(1);
			continue;
		}
		i = VideoCtrl->Video.rIndex;
		Packet = &VideoCtrl->Video.PacketArray[i].Packet;

		if(Packet->stream_index == VideoCtrl->VideoIndex)
		{
			ret = avcodec_decode_video2(pVideoCodecCtx, pVideoFrame, &GotPicture, Packet);
			if(ret < 0)
			{
				printf("Video Decode Error.\n");
				return 0;
			}
			//printf("Video index:%5d\t dts:%ld\t, pts:%ld\t packet size:%d, GotVideoPicture:%d\n",
			//		VideoCtrl->VideoCnt, Packet->dts, Packet->pts, Packet->size, GotPicture);
//			printf("Video index:%5d\t pFrame->pkt_dts:%ld, pFrame->pkt_pts:%ld, pFrame->pts:%ld, pFrame->pict_type:%d, "
//					"pFrame->best_effort_timestamp:%ld, pFrame->pkt_pos:%ld, pVideoFrame->pkt_duration:%ld\n",
//					VideoCtrl->VideoCnt, pVideoFrame->pkt_dts, pVideoFrame->pkt_pts, pVideoFrame->pts,
//					pVideoFrame->pict_type, pVideoFrame->best_effort_timestamp,
//					pVideoFrame->pkt_pos, pVideoFrame->pkt_duration);
			VideoCtrl->VideoCnt++;
			if(GotPicture)
			{
				sws_scale(VideoConvertCtx, (const unsigned char* const*)pVideoFrame->data,
						  pVideoFrame->linesize, 0, pVideoCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

				sem_wait(&VideoCtrl->video_refresh);
				//SDL---------------------------
				SDL_UpdateTexture( VideoCtrl->sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0] );
				SDL_RenderClear( VideoCtrl->sdlRenderer );
				//SDL_RenderCopy( sdlRenderer, sdlTexture, &sdlRect, &sdlRect );
				SDL_RenderCopy( VideoCtrl->sdlRenderer, VideoCtrl->sdlTexture, NULL, NULL);
				SDL_RenderPresent( VideoCtrl->sdlRenderer );
				//SDL End-----------------------
			}

			av_packet_unref(Packet);
			VideoCtrl->Video.PacketArray[i].state = 0;
			i++;
			if(i >= PACKET_ARRAY_SIZE) i = 0;
			VideoCtrl->Video.rIndex = i;
		}
	}
	printf("---------> thread_video end !!!! \n");
	return 0;
}

/* The audio function callback takes the following parameters:
 * stream: A pointer to the audio buffer to be filled
 * len: The length (in bytes) of the audio buffer
*/
void  fill_audio(void *udata,Uint8 *stream,int len)
{
	AudioCtrlStruct* AudioCtrl = (AudioCtrlStruct*)udata;
	//SDL 2.0
	SDL_memset(stream, 0, len);
	if(AudioCtrl->audio_len == 0) return;

	len=(len > AudioCtrl->audio_len ? AudioCtrl->audio_len : len);	/*  Mix  as  much  data  as  possible  */

	SDL_MixAudio(stream, AudioCtrl->audio_pos, len, SDL_MIX_MAXVOLUME);
	AudioCtrl->audio_pos += len;
	AudioCtrl->audio_len -= len;
}


int main(int argc, char* argv[])
{
	AVFormatContext	*pFormatCtx;
	AVCodecContext	*pVideoCodecCtx, *pAudioCodecCtx;
	AVCodec			*pVideoCodec, *pAudioCodec;
	AVPacket		*Packet;
	unsigned char 	*pVideoOutBuffer, *pAudioOutBuffer;

	int 			ret;
	unsigned int    i;
	pthread_t 		audio_tid, video_tid;

	uint64_t AudioOutChannelLayout;
	int out_nb_samples;				//nb_samples: AAC-1024 MP3-1152
	AVSampleFormat out_sample_fmt;
	int out_sample_rate;
	int out_channels;
	int out_buffer_size;

	struct SwsContext *VideoConvertCtx;
	struct SwrContext *AudioConvertCtx;
	int VideoIndex, VideoCnt;
	int AudioIndex, AudioCnt;

	memset(&AudioCtrl, 0, sizeof(AudioCtrlStruct));
	memset(&VideoCtrl, 0, sizeof(VideoCtrlStruct));
	char *filepath = argv[1];
	sem_init(&VideoCtrl.video_refresh, 0, 0);
	sem_init(&VideoCtrl.frame_put, 0, 0);
	sem_init(&AudioCtrl.frame_put, 0, 0);
	thread_exit = 0;
	thread_pause = 0;
	av_register_all();
	avformat_network_init();
	pFormatCtx = avformat_alloc_context();

	if(avformat_open_input(&pFormatCtx, filepath, NULL, NULL) !=0 )
	{
		printf("Couldn't open input stream.\n");
		return -1;
	}
	if(avformat_find_stream_info(pFormatCtx,NULL) < 0)
	{
		printf("Couldn't find stream information.\n");
		return -1;
	}

	VideoIndex = -1;
	AudioIndex = -1;
	for(i = 0; i < pFormatCtx->nb_streams; i++)
	{
		if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
		{
			VideoIndex = i;
			//print the video stream's information
			printf("video time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
					pFormatCtx->streams[VideoIndex]->time_base.num,
					pFormatCtx->streams[VideoIndex]->time_base.den,
					pFormatCtx->streams[VideoIndex]->avg_frame_rate.num,
					pFormatCtx->streams[VideoIndex]->avg_frame_rate.den);
		}

		if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO)
		{
			AudioIndex = i;
			//print the audio stream's information
			printf("audio time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
					pFormatCtx->streams[AudioIndex]->time_base.num,
					pFormatCtx->streams[AudioIndex]->time_base.den,
					pFormatCtx->streams[AudioIndex]->avg_frame_rate.num,
					pFormatCtx->streams[AudioIndex]->avg_frame_rate.den);
		}
	}

	if(VideoIndex != -1)
	{	//prepare the video decoder context
		pVideoCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
		pVideoCodec = avcodec_find_decoder(pVideoCodecCtx->codec_id);
		if(pVideoCodec == NULL)
		{
			printf("Video Codec not found.\n");
			return -1;
		}
		if(avcodec_open2(pVideoCodecCtx, pVideoCodec,NULL) < 0)
		{
			printf("Could not open video codec.\n");
			return -1;
		}

		// prepare video
		VideoCtrl.pVideoFrame = av_frame_alloc();
		VideoCtrl.pFrameYUV = av_frame_alloc();

		ret = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);
		pVideoOutBuffer = (unsigned char *)av_malloc(ret);
		av_image_fill_arrays(VideoCtrl.pFrameYUV->data, VideoCtrl.pFrameYUV->linesize, pVideoOutBuffer,
							AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);

		VideoConvertCtx = sws_getContext(pVideoCodecCtx->width, pVideoCodecCtx->height, pVideoCodecCtx->pix_fmt,
										 pVideoCodecCtx->width, pVideoCodecCtx->height,
										 AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);

		VideoCtrl.pFormatCtx = pFormatCtx;
		VideoCtrl.pStream = pFormatCtx->streams[VideoIndex];
		VideoCtrl.pCodec = pVideoCodec;
		VideoCtrl.pCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
		VideoCtrl.pConvertCtx = VideoConvertCtx;
		VideoCtrl.pVideoOutBuffer = pVideoOutBuffer;
		VideoCtrl.VideoIndex = VideoIndex;

		if(pFormatCtx->streams[VideoIndex]->avg_frame_rate.num == 0 ||
		   pFormatCtx->streams[VideoIndex]->avg_frame_rate.den == 0)
		{
			VideoCtrl.RefreshTime = 40000;
		}
		else
		{	//compute the duration of one video frame; this interval paces the video refresh signal
			VideoCtrl.RefreshTime = 1000000 * pFormatCtx->streams[VideoIndex]->avg_frame_rate.den;
			VideoCtrl.RefreshTime /= pFormatCtx->streams[VideoIndex]->avg_frame_rate.num;
		}
		printf("VideoCtrl.RefreshTime:%d\n", VideoCtrl.RefreshTime);
	}
	else
	{
		printf("Didn't find a video stream.\n");
	}

	if(AudioIndex != -1)
	{	//prepare the audio decoder context
		pAudioCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
		pAudioCodec = avcodec_find_decoder(pAudioCodecCtx->codec_id);
		if(pAudioCodec == NULL)
		{
			printf("Audio Codec not found.\n");
			return -1;
		}
		if(avcodec_open2(pAudioCodecCtx, pAudioCodec,NULL) < 0)
		{
			printf("Could not open audio codec.\n");
			return -1;
		}
		// prepare Out Audio Param
		AudioOutChannelLayout  	= AV_CH_LAYOUT_STEREO;
		out_nb_samples 			= pAudioCodecCtx->frame_size;	//nb_samples: AAC-1024 MP3-1152
		out_sample_fmt 			= AV_SAMPLE_FMT_S16;
		out_sample_rate			= pAudioCodecCtx->sample_rate;
		// be sure to assign pAudioCodecCtx->sample_rate here; a different output rate would under- or over-sample the audio and playback would be full of artifacts
		out_channels			= av_get_channel_layout_nb_channels(AudioOutChannelLayout);
		out_buffer_size			= av_samples_get_buffer_size(NULL,out_channels ,out_nb_samples,out_sample_fmt, 1);

		//mp3:out_nb_samples:1152, out_channels:2, out_buffer_size:4608, pCodecCtx->channels:2
		//aac:out_nb_samples:1024, out_channels:2, out_buffer_size:4096, pCodecCtx->channels:2
		printf("out_nb_samples:%d, out_channels:%d, out_buffer_size:%d, pCodecCtx->channels:%d\n",
				out_nb_samples, out_channels, out_buffer_size, pAudioCodecCtx->channels);
		pAudioOutBuffer 			= (uint8_t *)av_malloc(MAX_AUDIO_FRAME_SIZE*2);

		//FIX:Some Codec's Context Information is missing
		int64_t in_channel_layout	= av_get_default_channel_layout(pAudioCodecCtx->channels);
		//Swr
		AudioConvertCtx 			= swr_alloc();
		AudioConvertCtx				= swr_alloc_set_opts(AudioConvertCtx, AudioOutChannelLayout,
														out_sample_fmt, out_sample_rate,
														in_channel_layout, pAudioCodecCtx->sample_fmt ,
														pAudioCodecCtx->sample_rate, 0, NULL);
		swr_init(AudioConvertCtx);

		AudioCtrl.pFormatCtx = pFormatCtx;
		AudioCtrl.pStream = pFormatCtx->streams[AudioIndex];
		AudioCtrl.pCodec = pAudioCodec;
		AudioCtrl.pCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
		AudioCtrl.pConvertCtx = AudioConvertCtx;

		AudioCtrl.AudioOutChannelLayout = AudioOutChannelLayout;
		AudioCtrl.out_nb_samples = out_nb_samples;
		AudioCtrl.out_sample_fmt = out_sample_fmt;
		AudioCtrl.out_sample_rate = out_sample_rate;
		AudioCtrl.out_channels = out_channels;
		AudioCtrl.out_buffer_size = out_buffer_size;
		AudioCtrl.pAudioOutBuffer = pAudioOutBuffer;
		AudioCtrl.AudioIndex = AudioIndex;
	}
	else
	{
		printf("Didn't find a audio stream.\n");
	}

	//Output Info-----------------------------
	printf("---------------- File Information ---------------\n");
	av_dump_format(pFormatCtx, 0, filepath, 0);
	printf("-------------- File Information end -------------\n");

	if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
	{
		printf( "Could not initialize SDL - %s\n", SDL_GetError());
		return -1;
	}

	if(VideoIndex != -1)
	{
		//SDL 2.0 Support for multiple windows
		//SDL_VideoSpec
		VideoCtrl.screen_w = pVideoCodecCtx->width;
		VideoCtrl.screen_h = pVideoCodecCtx->height;
		VideoCtrl.screen = SDL_CreateWindow("Simplest ffmpeg player's Window", SDL_WINDOWPOS_UNDEFINED,
								  SDL_WINDOWPOS_UNDEFINED, VideoCtrl.screen_w, VideoCtrl.screen_h, SDL_WINDOW_OPENGL);

		if(!VideoCtrl.screen)
		{
			printf("SDL: could not create window - exiting:%s\n",SDL_GetError());
			return -1;
		}
		VideoCtrl.sdlRenderer = SDL_CreateRenderer(VideoCtrl.screen, -1, 0);
		//IYUV: Y + U + V  (3 planes)
		//YV12: Y + V + U  (3 planes)
		VideoCtrl.sdlTexture = SDL_CreateTexture(VideoCtrl.sdlRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING,
									   pVideoCodecCtx->width, pVideoCodecCtx->height);

		VideoCtrl.sdlRect.x = 0;
		VideoCtrl.sdlRect.y = 0;
		VideoCtrl.sdlRect.w = VideoCtrl.screen_w;
		VideoCtrl.sdlRect.h = VideoCtrl.screen_h;

		VideoCtrl.video_tid = SDL_CreateThread(video_refresh_thread, NULL, NULL);
		ret = pthread_create(&video_tid, NULL, thread_video, &VideoCtrl);
		if (ret)
		{
			printf("create video thread failed, error = %d \n", ret);
			return -1;
		}
	}

	if(AudioIndex != -1)
	{
		//SDL_AudioSpec
		SDL_AudioSpec AudioSpec;
		AudioSpec.freq 		= out_sample_rate;
		AudioSpec.format 	= AUDIO_S16SYS;
		AudioSpec.channels 	= out_channels;
		AudioSpec.silence 	= 0;
		AudioSpec.samples 	= out_nb_samples;
		AudioSpec.callback 	= fill_audio;
		AudioSpec.userdata 	= (void*)&AudioCtrl;

		if (SDL_OpenAudio(&AudioSpec, NULL) < 0)
		{
			printf("can't open audio.\n");
			return -1;
		}

		ret = pthread_create(&audio_tid, NULL, thread_audio, &AudioCtrl);
		if (ret)
		{
			printf("create audio thread failed, error = %d \n", ret);
			return -1;
		}
		SDL_PauseAudio(0);
	}

	SDL_Thread *event_tid;
	event_tid = SDL_CreateThread(SDL_event_thread, NULL, NULL);

	VideoCnt = 0;
	AudioCnt = 0;
	Packet = (AVPacket *)av_malloc(sizeof(AVPacket));
	av_init_packet(Packet);

	while(1)
	{
		if(thread_exit) break;
		if(av_read_frame(pFormatCtx, Packet) < 0)
		{	//reached end of file: exit automatically and send a quit event to the SDL event thread
			thread_exit = 1;
			SDL_Event event;
			event.type = SFM_BREAK_EVENT;
			SDL_PushEvent(&event);
			printf("---------> av_read_frame < 0, thread_exit = 1  !!!\n");
			break;
		}
		if(Packet->stream_index == VideoIndex)
		{
			if(VideoCtrl.Video.wIndex >= PACKET_ARRAY_SIZE)
			{
				VideoCtrl.Video.wIndex = 0;
			}
			while(IsPacketArrayFull(&VideoCtrl.Video))
			{
				usleep(5000);
				//printf("---------> VideoCtrl.Video.PacketArray FULL !!!\n");
			}
			i = VideoCtrl.Video.wIndex;
			VideoCtrl.Video.PacketArray[i].Packet = *Packet;
			VideoCtrl.Video.PacketArray[i].dts = Packet->dts;
			VideoCtrl.Video.PacketArray[i].pts = Packet->pts;
			VideoCtrl.Video.PacketArray[i].state = 1;
			VideoCtrl.Video.wIndex++;
			//printf("VideoCtrl.frame_put, VideoCnt:%d\n", VideoCnt++);
			//sem_post(&VideoCtrl.frame_put);
		}

		if(Packet->stream_index == AudioIndex)
		{
			if(AudioCtrl.Audio.wIndex >= PACKET_ARRAY_SIZE)
			{
				AudioCtrl.Audio.wIndex = 0;
			}
			while(IsPacketArrayFull(&AudioCtrl.Audio))
			{
				usleep(5000);
				//printf("---------> AudioCtrl.Audio.PacketArray FULL !!!\n");
			}
			i = AudioCtrl.Audio.wIndex;
			AudioCtrl.Audio.PacketArray[i].Packet = *Packet;
			AudioCtrl.Audio.PacketArray[i].dts = Packet->dts;
			AudioCtrl.Audio.PacketArray[i].pts = Packet->pts;
			AudioCtrl.Audio.PacketArray[i].state = 1;
			AudioCtrl.Audio.wIndex++;
			//printf("AudioCtrl.frame_put, AudioCnt:%d\n", AudioCnt++);
			//sem_post(&AudioCtrl.frame_put);
		}
	}

	SDL_WaitThread(event_tid, NULL);
	//printf("--------------------------->main exit 0 !!\n");
	SDL_WaitThread(VideoCtrl.video_tid, NULL);
	//printf("--------------------------->main exit 1 !!\n");
	pthread_join(audio_tid, NULL);
	//printf("--------------------------->main exit 2 !!\n");
	pthread_join(video_tid, NULL);
	//printf("--------------------------->main exit 3 !!\n");
	SDL_CloseAudio();//Close SDL
	//printf("--------------------------->main exit 4 !!\n");
	SDL_Quit();
	//printf("--------------------------->main exit 5 !!\n");
	swr_free(&AudioConvertCtx);
	sws_freeContext(VideoConvertCtx);
	//printf("--------------------------->main exit 6 !!\n");
	av_free(pVideoOutBuffer);
	avcodec_close(pVideoCodecCtx);
	//printf("--------------------------->main exit 7 !!\n");
	av_free(pAudioOutBuffer);
	avcodec_close(pAudioCodecCtx);
	avformat_close_input(&pFormatCtx);
	printf("--------------------------->main exit 8 !!\n");
}
