AVAssetReader+AVAssetReaderTrackOutput

Since AVPlayer has performance problems when playing video, we might as well build our own player. AVAssetReader can extract decoded audio and video data from the raw media. Combined with AVAssetReaderTrackOutput, it can read frames one by one as CMSampleBufferRefs, and a CMSampleBufferRef can be converted into a CGImageRef. With that, we can write an MMovieDecoder class responsible for video decoding, which calls back to the upper layer each time a sample buffer is read:

 

AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:m_asset error:&error];
NSArray* videoTracks = [m_asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack* videoTrack = [videoTracks objectAtIndex:0];
// For video playback, m_pixelFormatType = kCVPixelFormatType_32BGRA
// For other uses such as video compression,
// m_pixelFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
NSDictionary* options = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:
        (int)m_pixelFormatType] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput* videoReaderOutput = [[AVAssetReaderTrackOutput alloc]
        initWithTrack:videoTrack outputSettings:options];
[reader addOutput:videoReaderOutput];
[reader startReading];
// Make sure nominalFrameRate > 0; we once hit a 0-fps video recorded on Android
while ([reader status] == AVAssetReaderStatusReading && videoTrack.nominalFrameRate > 0) {
    // Read one video sample; copyNextSampleBuffer returns NULL when the track is exhausted
    CMSampleBufferRef videoBuffer = [videoReaderOutput copyNextSampleBuffer];
    if (videoBuffer == NULL) {
        break;
    }
    [m_delegate mMovieDecoder:self onNewVideoFrameReady:videoBuffer];
    CFRelease(videoBuffer);
    // Sleep as needed; e.g. the upper layer leaves an interval between frames
    // during playback (sampleInterval ≈ 1.0 / videoTrack.nominalFrameRate)
    [NSThread sleepForTimeInterval:sampleInterval];
}

// Tell the upper layer that decoding has finished
[m_delegate mMovieDecoderOnDecodeFinished:self];

 

The other class is MVideoPlayerView, responsible for displaying the video. It receives the CMSampleBufferRef from MMovieDecoder's callback, converts it into a CGImageRef, and sets layer.contents to that CGImageRef. Creating the CGImageRef does not copy the image data; the pixel data is only copied into the layer's backing buffer when Core Animation executes Transaction::commit() and triggers the layer's -display.
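The conversion step above can be sketched as follows. This is not the original post's implementation, just a minimal version assuming the decoder was configured with kCVPixelFormatType_32BGRA; the method name `newCGImageFromSampleBuffer:` is made up for illustration. Note that CGBitmapContextCreateImage defers the actual pixel copy (copy-on-write), which is consistent with the behavior described above:

```objc
// Hypothetical helper in MVideoPlayerView: convert a BGRA sample buffer
// into a CGImageRef suitable for layer.contents.
- (CGImageRef)newCGImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    void*  baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width       = CVPixelBufferGetWidth(pixelBuffer);
    size_t height      = CVPixelBufferGetHeight(pixelBuffer);

    // kCVPixelFormatType_32BGRA corresponds to this bitmap layout
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
            bytesPerRow, colorSpace,
            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef image = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    return image; // caller must CGImageRelease
}
```

The view then hands the image to its layer on the main thread, e.g. `self.layer.contents = (__bridge id)image;` followed by `CGImageRelease(image);`, since Core Animation retains the contents it is given.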

AVAssetReader can also decode the audio track into sample buffers, but I have not yet figured out how to play back an audio CMSampleBufferRef, so for now playback is silent.
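For reference, attaching an audio output to the same reader is straightforward; only playing the resulting buffers is the open problem mentioned above. A minimal sketch (not from the original post) that decodes the audio track to linear PCM:

```objc
// Hedged sketch: add an LPCM audio output alongside the video output,
// before calling [reader startReading].
NSArray* audioTracks = [m_asset tracksWithMediaType:AVMediaTypeAudio];
if (audioTracks.count > 0) {
    NSDictionary* audioSettings = @{
        AVFormatIDKey : @(kAudioFormatLinearPCM) // decode to uncompressed PCM
    };
    AVAssetReaderTrackOutput* audioReaderOutput = [[AVAssetReaderTrackOutput alloc]
            initWithTrack:[audioTracks objectAtIndex:0]
           outputSettings:audioSettings];
    if ([reader canAddOutput:audioReaderOutput]) {
        [reader addOutput:audioReaderOutput];
    }
    // copyNextSampleBuffer on audioReaderOutput then yields PCM sample buffers.
}
```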

Reposted from blog.csdn.net/yyjjyysleep/article/details/70210519