Ways to Create a CVPixelBufferRef

In iOS we run into the CVPixelBufferRef type all the time. Camera capture delivers its data as CMSampleBufferRef objects, each of which contains a CVPixelBufferRef, and hardware video decoding likewise returns its output as a CVPixelBufferRef. Its format is NV12, in one of two variants: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange or kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange. Because it is a C object, it is not managed by ARC: the developer has to manage the reference count and control the object's lifetime manually. CVPixelBufferRetain and CVPixelBufferRelease increment and decrement the reference count; they are equivalent to CFRetain and CFRelease, so you can use CFGetRetainCount to inspect the current count.
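
For example, here is a minimal sketch of manual lifetime management (the dimensions and format below are arbitrary):

CVPixelBufferRef pb = NULL;
// Create returns a buffer with a +1 retain count.
CVPixelBufferCreate(kCFAllocatorDefault, 1280, 720,
                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                    NULL, &pb);
NSLog(@"retain count: %ld", CFGetRetainCount(pb)); // typically 1 here
CVPixelBufferRetain(pb);   // +1 -> 2
CVPixelBufferRelease(pb);  // -1 -> 1
CVPixelBufferRelease(pb);  // -1 -> 0, the buffer is freed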

You can obtain the CVImageBufferRef from a sample buffer as follows:

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(videoSample);

To extract the YUV420 (NV12) data from the CVImageBufferRef:

// AWVideoEncoder.m
-(NSData *) convertVideoSmapleBufferToYuvData:(CMSampleBufferRef) videoSample{
    // Get the CVImageBufferRef via CMSampleBufferGetImageBuffer;
    // it contains the pointers to the YUV420 (NV12) data.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(videoSample);

    // Lock the base address before touching the pixel data.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Image width and height in pixels.
    size_t pixelWidth = CVPixelBufferGetWidth(pixelBuffer);
    size_t pixelHeight = CVPixelBufferGetHeight(pixelBuffer);
    // Bytes occupied by the Y plane.
    size_t y_size = pixelWidth * pixelHeight;
    // Bytes occupied by the interleaved UV plane (half of the Y plane).
    size_t uv_size = y_size / 2;

    uint8_t *yuv_frame = aw_alloc(uv_size + y_size);

    // Copy the Y plane row by row: bytesPerRow may exceed the pixel
    // width because of alignment padding inside the buffer.
    uint8_t *y_frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t y_stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    for (size_t row = 0; row < pixelHeight; row++) {
        memcpy(yuv_frame + row * pixelWidth, y_frame + row * y_stride, pixelWidth);
    }

    // Copy the UV plane the same way; it has half as many rows.
    uint8_t *uv_frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t uv_stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    for (size_t row = 0; row < pixelHeight / 2; row++) {
        memcpy(yuv_frame + y_size + row * pixelWidth, uv_frame + row * uv_stride, pixelWidth);
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // The NSData takes ownership of yuv_frame and frees it with free() on dealloc.
    return [NSData dataWithBytesNoCopy:yuv_frame length:y_size + uv_size];
}
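
For context, this method is typically driven from the video capture callback; a minimal sketch, assuming the class adopts AVCaptureVideoDataOutputSampleBufferDelegate (the encoder hand-off is left hypothetical):

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Convert the captured frame into a packed NV12 byte buffer.
    NSData *yuvData = [self convertVideoSmapleBufferToYuvData:sampleBuffer];
    // ... hand yuvData to the encoder ...
}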

As the name suggests, CVPixelBufferRef is a pixel (image) buffer type; the CV prefix tells you it belongs to the Core Video framework.
Conversely, NV12 data can be used to fill a CVPixelBufferRef, as in the following example:

-(CVPixelBufferRef)createCVPixelBufferRefFromNV12buffer:(unsigned char *)buffer width:(int)w height:(int)h {
    // An empty IOSurface properties dictionary makes the buffer IOSurface-backed.
    NSDictionary *pixelAttributes = @{(NSString*)kCVPixelBufferIOSurfacePropertiesKey:@{}};
    
    CVPixelBufferRef pixelBuffer = NULL;
    
    // Use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange instead for video-range data.
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          w,
                                          h,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        NSLog(@"Unable to create cvpixelbuffer %d", result);
        return NULL;
    }
    
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    
    // Copy the Y plane of the NV12 data row by row: the destination stride
    // (bytesPerRow) may include alignment padding beyond the pixel width.
    unsigned char *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    unsigned char *ySrc = buffer;
    for (int row = 0; row < h; row++) {
        memcpy(yDestPlane + row * yDestStride, ySrc + row * w, w);
    }
    
    // Copy the interleaved UV plane; it has h/2 rows of w bytes each.
    unsigned char *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    unsigned char *uvSrc = buffer + w * h;
    for (int row = 0; row < h / 2; row++) {
        memcpy(uvDestPlane + row * uvDestStride, uvSrc + row * w, w);
    }
    
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer;
}
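
The returned buffer follows the Core Foundation Create rule, so the caller owns it and must release it. A hypothetical call site (the source bytes and dimensions are made up):

// nv12 points at w * h * 3 / 2 packed NV12 bytes (hypothetical source).
unsigned char *nv12 = malloc(1280 * 720 * 3 / 2);
CVPixelBufferRef pb = [self createCVPixelBufferRefFromNV12buffer:nv12 width:1280 height:720];
// ... use pb, e.g. hand it to a VTCompressionSession ...
CVPixelBufferRelease(pb); // balance the Create
free(nv12);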

CVPixelBufferGetBaseAddressOfPlane returns the data pointer for each plane. You must call CVPixelBufferLockBaseAddress before reading any of these addresses, which hints that a CVPixelBufferRef's backing store is not necessarily main memory: it may live in other storage, such as GPU (video) memory. Locking maps it to an accessible address before you touch it, and also guarantees there are no read/write races.
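
If you only read the pixels, you can pass kCVPixelBufferLock_ReadOnly as the lock flag, which lets Core Video skip write-back work; the same flags must be used in the matching unlock call. A minimal sketch:

// Read-only access: lock and unlock with the same flags.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *yPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
// ... read from yPlane, stepping yStride bytes per row ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);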
When copying the data row by row, the pointer into the pixel buffer advances by current_row * bytesPerRowChrominance on each iteration, because that stride is the buffer's internal memory layout. My source data, by contrast, is tightly packed with no row alignment, so its step per row is current_row * _outVideoWidth, the true frame width, and the number of bytes to copy per row is also the true width. For the chroma plane, the width and height are each half those of the luma plane, but every element holds two values (U and V), so one chroma row occupies the same number of bytes as one luma row; the per-row copy size therefore works out to _outVideoWidth / 2 * 2.
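
A minimal sketch of that chroma copy, assuming bytesPerRowChrominance was read with CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) and that _outVideoHeight, uvDest, and uvSrc (the frame height and the destination/source plane pointers) are hypothetical names for illustration:

size_t bytesPerRowChrominance = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (size_t current_row = 0; current_row < _outVideoHeight / 2; current_row++) {
    memcpy(uvDest + current_row * bytesPerRowChrominance, // padded stride inside the buffer
           uvSrc + current_row * _outVideoWidth,          // tightly packed source
           _outVideoWidth / 2 * 2);                       // half the width x 2 bytes (U+V) per element
}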

Generating a CVPixelBufferRef from a UIImage

- (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img
{
    CGImageRef image = [img CGImage];
    // These attributes keep the buffer compatible with CGImage and
    // CGBitmapContext so we can draw into it below.
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    
    CVPixelBufferRef pxbuffer = NULL;
    
    CGFloat frameWidth = CGImageGetWidth(image);
    CGFloat frameHeight = CGImageGetHeight(image);
    
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          frameWidth,
                                          frameHeight,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    
    // Wrap the pixel buffer's memory in a bitmap context and draw the
    // image into it; kCGImageAlphaNoneSkipFirst matches the ARGB layout.
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 frameWidth,
                                                 frameHeight,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformIdentity);
    CGContextDrawImage(context, CGRectMake(0,
                                           0,
                                           frameWidth,
                                           frameHeight),
                       image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    
    // Returned with a +1 retain count; the caller must release it.
    return pxbuffer;
}
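
The pxbuffer returned here also carries a +1 retain count, so the caller must balance it; a hypothetical call site (the asset name is made up):

UIImage *img = [UIImage imageNamed:@"frame"]; // hypothetical image asset
CVPixelBufferRef pxbuffer = [self CVPixelBufferRefFromUiImage:img];
// ... feed pxbuffer to an encoder or AVAssetWriterInputPixelBufferAdaptor ...
CVPixelBufferRelease(pxbuffer);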

Generating a UIImage from a CVPixelBufferRef

-(UIImage *)createUIImageFromNV12buffer:(unsigned char *)buffer width:(int)w height:(int)h {
    // Build the pixel buffer with the NV12-filling method shown earlier.
    CVPixelBufferRef pixelBuffer = [self createCVPixelBufferRefFromNV12buffer:buffer width:w height:h];
    if (pixelBuffer == NULL) {
        return nil;
    }

    // CIImage conversion: wrap the pixel buffer without copying its pixels.
    CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Render the CIImage to a CGImage.
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:coreImage
                                                   fromRect:CGRectMake(0, 0, w, h)];

    // UIImage conversion; UIImageOrientationRight rotates the displayed image 90°.
    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage
                                                scale:1.0
                                          orientation:UIImageOrientationRight];

    CVPixelBufferRelease(pixelBuffer);
    CGImageRelease(videoImage);

    return image;
}

## Cropping a CVPixelBufferRef
![image.png](https://upload-images.jianshu.io/upload_images/1996279-6c1f4c38fb8bf40d.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

With vImage you can operate on the buffer data directly, without first converting it to any image format.
outImg receives the cropped and scaled image data. The ratio of outWidth to cropWidth (and outHeight to cropHeight) sets the scaling factor. Setting cropX0 = 0 and cropY0 = 0 with cropWidth and cropHeight equal to the source dimensions means no cropping (the whole source image is used); setting outWidth = cropWidth and outHeight = cropHeight means no scaling. Note that inBuff.rowBytes must always be the row length of the full source buffer, not the cropped width.

// Requires the Accelerate framework (#import <Accelerate/Accelerate.h>)
// and assumes a 4-bytes-per-pixel (ARGB/BGRA) source buffer.
// The crop rectangle and output size are set by the caller:
int cropX0, cropY0, cropHeight, cropWidth, outWidth, outHeight;

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

// Describe the crop rectangle inside the source buffer.
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow; // full source stride, not the cropped width

// Offset of the crop origin: cropY0 rows down, cropX0 pixels (4 bytes each) across.
size_t startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = (unsigned char *)baseAddress + startpos;

unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

// Scale the cropped region into the output buffer.
vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"vImage error %ld", err);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// Remember to free(outImg) once you are done with it.

Reprinted from blog.csdn.net/weixin_33743703/article/details/90916327