AVFoundation Series 5: Exporting Audio and Video

Copyright notice: please credit the original URL when reposting. Email: [email protected] https://blog.csdn.net/Xoxo_x/article/details/83474417

Reference: Apple's AVFoundation Programming Guide
This article also draws on: https://www.jianshu.com/p/a5d3ec793597

AVFoundation Series 4: Configuring a Proper Camera
AVFoundation Series 3: Audio and Video Editing
AVFoundation Series 2: Playing Video with AVPlayer
AVFoundation Series 1: How to Use AVAsset

In audio/video editing, the key is managing AVComposition. Quoting a passage from AVFoundation Series 3: Audio and Video Editing, its supporting classes break down as follows:

1. For audio, AVMutableAudioMix controls how the audio tracks are presented;
2. AVMutableAudioMixInputParameters carries the per-track mix parameters;
3. For video, AVMutableVideoComposition describes how the video tracks are composited;
4. AVMutableVideoCompositionInstruction specifies the composition instructions;
5. AVMutableVideoCompositionLayerInstruction specifies the per-layer transforms;
6. AVVideoCompositionCoreAnimationTool hooks Core Animation layers into the composition; if we want to add a watermark, this is the class to work with.
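To make those relationships concrete, here is a minimal sketch of wiring an instruction and a layer instruction into a video composition. The `videoTrack` variable, the 5-second time range, and the 0.5 opacity are illustrative assumptions, not values from this article:

```objectivec
// Sketch: one instruction spanning the first 5 seconds, with a single layer
// instruction that lowers the opacity of an assumed videoTrack.
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];

AVMutableVideoCompositionInstruction *instruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(5.0, 600));

AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[layerInstruction setOpacity:0.5 atTime:kCMTimeZero];

instruction.layerInstructions = @[ layerInstruction ];
videoComposition.instructions = @[ instruction ];
```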

The key class for export is AVAssetExportSession. As far back as the demo in AVFoundation Series 1: How to Use AVAsset, we already used AVAssetExportSession to separate and merge audio and video, so simple exports are covered. For more demanding export needs, use the AVAssetReader and AVAssetWriter classes.

Since AVComposition is a subclass of AVAsset, we can operate on it the same way.

AVAssetReader: used when you want to perform an operation on the contents of an asset, e.g. reading an asset's audio track to produce a visual representation of the waveform.
AVAssetWriter: produces an asset from media data such as sample buffers or still images.

Note: the asset reader and writer classes are not intended for real-time processing. In fact, an asset reader cannot even be used to read from a real-time source such as an HTTP live stream. However, if you feed an asset writer from a real-time data source (such as an AVCaptureOutput object), set the expectsMediaDataInRealTime property of its inputs to YES. Setting this property to YES for a non-real-time source will result in a file that is not interleaved properly.
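As a one-line sketch of the setting mentioned above (the `videoWriterInput` here is an assumed AVAssetWriterInput fed by an AVCaptureOutput):

```objectivec
// Only set this to YES when the input is fed by a live source such as
// an AVCaptureOutput; leave it NO for file-based sources.
videoWriterInput.expectsMediaDataInRealTime = YES;
```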

The components that pair with AVComposition:
AVMutableAudioMix
AVMutableVideoComposition

AVAssetReaderOutput has three concrete subclasses:
AVAssetReaderTrackOutput
AVAssetReaderAudioMixOutput
AVAssetReaderVideoCompositionOutput

As the documentation says: each AVAssetReader object can be associated with only a single asset at a time, but that asset may contain multiple tracks.
If you understand AVComposition, you will know that besides audio and video tracks there can be others, such as subtitle tracks.
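A quick way to see every track type an asset carries (assuming `asset` is any AVAsset, e.g. a composition):

```objectivec
// Log the media type of every track -- audio, video, subtitles, etc.
for (AVAssetTrack *track in asset.tracks) {
    NSLog(@"track %d: mediaType = %@", (int)track.trackID, track.mediaType);
}
```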

That is the bare-minimum introduction. If it makes sense to you, the implementation code below will be easier to follow.

Now let's get hands-on with AVAssetReader.

To be thorough, I will list the code.
Initializing an AVAssetReader:

NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);

Reading the track information:

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

Adding an AVAssetReaderOutput:

// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];

The kAudioFormatLinearPCM in decompressionAudioSettings asks for the audio track's data to be decompressed to Linear PCM as it is read.

Note: to read the media data from a track in the format in which it was stored, pass nil for the outputSettings parameter.
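For example, a pass-through track output that keeps the stored (compressed) format might look like this, reusing the `audioTrack` from above:

```objectivec
// nil outputSettings = read samples as stored, without decompression.
AVAssetReaderTrackOutput *passthroughOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];
```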

You can use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together, using an AVAudioMix object or an AVVideoComposition object respectively. Typically these outputs are used when your asset reader is reading from an AVComposition object.

Using AVAssetReaderAudioMixOutput looks like this:

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;

// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;

// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];

// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };

// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];

// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;

// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];

Using AVAssetReaderVideoCompositionOutput looks like this:

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;

// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;

// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];

// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };

// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];

// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;

// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];

With setup complete, we can start reading:

// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
  // Copy the next sample buffer from the reader output.
  CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];

  if (sampleBuffer)
  {
    // Do something with sampleBuffer here.
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
  }
  else
  {
    // Find out why the asset reader output couldn't copy another sample buffer.
    if (self.assetReader.status == AVAssetReaderStatusFailed)
    {
      NSError *failureError = self.assetReader.error;
      // Handle the error here.
    }
    else
    {
      // The asset reader output has read all of its samples.
      done = YES;
    }
  }
}

That wraps up AVAssetReader.

To go deeper, look into VideoToolbox.
If you are unsure how to use these APIs, see the demo: https://github.com/rFlex/SCRecorder

A closer look: AVAssetWriter

AVAssetWriter takes the output of an AVAssetReader (via AVAssetReaderTrackOutput) and, through its pull callbacks, writes it out as a file.

1. AVAssetWriter writes media data from one or more sources into a single file of a specified container format.
2. An asset writer is not tied to a particular asset, but you must use a separate asset writer for each output file you create.
3. Because an asset writer can take media data from multiple sources, you must create one AVAssetWriterInput for each track you want to write to the output file.
4. Each AVAssetWriterInput expects to receive its data as CMSampleBufferRef objects;
5. if you want to append CVPixelBufferRef objects instead, use the AVAssetWriterInputPixelBufferAdaptor class.
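A sketch of point 5 above; the `videoWriterInput`, `pixelBuffer`, and `presentationTime` variables are assumptions for illustration:

```objectivec
// Wrap a video writer input in a pixel buffer adaptor so CVPixelBufferRefs
// can be appended directly, without building CMSampleBufferRefs by hand.
NSDictionary *pixelBufferAttributes =
    @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB) };
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                             sourcePixelBufferAttributes:pixelBufferAttributes];
// Later, once videoWriterInput.isReadyForMoreMediaData is YES:
// [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
```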

Creating an asset writer:

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:&outError];
BOOL success = (assetWriter != nil);

Setting up asset writer inputs

For an asset writer to write media data, it needs at least one asset writer input. For example, if your media source already produces samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio to 128 kbps AAC and attach it to the asset writer, do the following:

// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};

// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
    AVSampleRateKey : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};

// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];

Adding the AVAssetWriterInput to the assetWriter:


// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];

Note: if you want to write the media data in its stored format, pass nil for the outputSettings parameter. Passing nil is only guaranteed to work when the asset writer was initialized with the AVFileTypeQuickTimeMovie file type.
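The pass-through case on the writer side, as a sketch:

```objectivec
// nil outputSettings = append samples in their stored format; this is only
// guaranteed to work with an AVFileTypeQuickTimeMovie writer.
AVAssetWriterInput *passthroughInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil];
```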

To be continued...
Reference demo: https://github.com/HelloWorldYyx/SLVideoTool
Reference article: https://www.jianshu.com/p/3f2966f162a5

Audio encoding settings

/**
 Audio encoding settings

 @return the output-settings dictionary for the audio input
 */
- (NSDictionary *)configAudioInput{
    AudioChannelLayout channelLayout = {
        .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
        .mChannelBitmap = 0, // ignored when mChannelLayoutTag is set
        .mNumberChannelDescriptions = 0
    };
    NSData *channelLayoutData = [NSData dataWithBytes:&channelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
    NSDictionary *audioInputSetting = @{
                                        AVFormatIDKey: @(kAudioFormatMPEG4AAC),
                                        AVSampleRateKey: @(44100),
                                        AVNumberOfChannelsKey: @(2),
                                        AVChannelLayoutKey:channelLayoutData
                                        };
    return audioInputSetting;
}

Video encoding settings

/**
 Video encoding settings

 @return the output-settings dictionary for the video input
 */
- (NSDictionary *)configVideoInput{
    NSDictionary *videoInputSetting = @{
                                        AVVideoCodecKey:AVVideoCodecH264,
                                        AVVideoWidthKey: @(374),
                                        AVVideoHeightKey: @(666)
                                        };
    return videoInputSetting;
}
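If you also want to control bitrate and profile, an AVVideoCompressionPropertiesKey sub-dictionary can be added. The bitrate and profile values below are illustrative, not recommendations:

```objectivec
// Extended video settings: same codec and dimensions, plus compression hints.
NSDictionary *videoInputSetting = @{
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : @(374),
    AVVideoHeightKey : @(666),
    AVVideoCompressionPropertiesKey : @{
        AVVideoAverageBitRateKey : @(2000000), // ~2 Mbps, illustrative
        AVVideoProfileLevelKey   : AVVideoProfileLevelH264HighAutoLevel
    }
};
```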

SampleBufferRef
At this point both the AVAssetReader and AVAssetWriter are fully configured, so we can start reading and writing.

    [assetReader startReading];
    [assetWriter startWriting];

Then we create two serial queues and drain the sample buffers produced by the audio and video outputs concurrently:

    dispatch_queue_t rwAudioSerializationQueue = dispatch_queue_create("Audio Queue", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t rwVideoSerializationQueue = dispatch_queue_create("Video Queue", DISPATCH_QUEUE_SERIAL);
    dispatch_group_t dispatchGroup = dispatch_group_create();
  
    // The session start time is configurable; here we start at zero.
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    
    
    dispatch_group_enter(dispatchGroup);
    __block BOOL isAudioFirst = YES;
    [assetWriterAudioInput requestMediaDataWhenReadyOnQueue:rwAudioSerializationQueue usingBlock:^{
        
        while ([assetWriterAudioInput isReadyForMoreMediaData]&&assetReader.status == AVAssetReaderStatusReading) {
            CMSampleBufferRef nextSampleBuffer = [assetReaderAudioOutput copyNextSampleBuffer];
            if (isAudioFirst && nextSampleBuffer) {
                // Drop the first audio sample buffer; release it so it doesn't leak.
                isAudioFirst = NO;
                CFRelease(nextSampleBuffer);
                continue;
            }
            if (nextSampleBuffer) {
                [assetWriterAudioInput appendSampleBuffer:nextSampleBuffer];
                CFRelease(nextSampleBuffer);
            } else {
                [assetWriterAudioInput markAsFinished];
                dispatch_group_leave(dispatchGroup);
                break;
            }
            
        }
        
    }];
    
    dispatch_group_enter(dispatchGroup);
    __block BOOL isVideoFirst = YES;
    [assetWriterVideoInput requestMediaDataWhenReadyOnQueue:rwVideoSerializationQueue usingBlock:^{
        
        while ([assetWriterVideoInput isReadyForMoreMediaData]&&assetReader.status == AVAssetReaderStatusReading) {
            
            CMSampleBufferRef nextSampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
            if (isVideoFirst && nextSampleBuffer) {
                // Drop the first video sample buffer; release it so it doesn't leak.
                isVideoFirst = NO;
                CFRelease(nextSampleBuffer);
                continue;
            }
            if (nextSampleBuffer) {
                [assetWriterVideoInput appendSampleBuffer:nextSampleBuffer];
                CFRelease(nextSampleBuffer);
                NSLog(@"appended a video sample");
            } else {
                [assetWriterVideoInput markAsFinished];
                dispatch_group_leave(dispatchGroup);
                break;
            }
        }
    }];
    
    dispatch_group_notify(dispatchGroup, dispatch_get_main_queue(), ^{
        [assetWriter finishWritingWithCompletionHandler:^{
            if (assetWriter.status == AVAssetWriterStatusCompleted) {
                NSLog(@"export finished");
                
            } else {
                NSLog(@"export failed");
            }
            if ([self.delegate respondsToSelector:@selector(synthesisResult)]) {
                [self.delegate synthesisResult];
            }

            
        }];
    });
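If you need to report progress while the loops above run, one sketch (using the `nextSampleBuffer` variable and an assumed `asset` reference from the surrounding code) is:

```objectivec
// Rough progress: presentation time of the latest sample over total duration.
CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(nextSampleBuffer);
float progress = (float)(CMTimeGetSeconds(presentationTime) / CMTimeGetSeconds(asset.duration));
NSLog(@"progress: %.0f%%", progress * 100.0f);
```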
