I had always used Objective-C before; recently I've been learning Swift, so I used Swift to build a photo-capture screen.
A custom camera uses the following classes (a rough skeleton of how they fit together follows this list):
AVCaptureSession: manages the input and output audio/video streams;
AVCaptureDevice: the interface to the camera hardware, used to control hardware features such as the front/back camera and the flash;
AVCaptureDeviceInput: the interface for capturing media from an AVCaptureDevice;
AVCapturePhotoOutput: the interface for capturing photos;
AVCaptureVideoPreviewLayer: the layer used for the live preview.
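Before going step by step, here is a rough sketch of how these pieces sit together in a view controller. This is only an illustrative skeleton; the class name CameraViewController is my own and not from the original code.
import UIKit
import AVFoundation

// Illustrative skeleton only: each property below is filled in step by step
// in the rest of this post.
class CameraViewController: UIViewController {
    fileprivate var session: AVCaptureSession!                 // manages the capture pipeline
    fileprivate var device: AVCaptureDevice!                   // the camera hardware
    fileprivate var input: AVCaptureDeviceInput!               // feeds camera data into the session
    fileprivate var imageOutput: AVCapturePhotoOutput!         // produces the captured photos
    fileprivate var previewLayer: AVCaptureVideoPreviewLayer!  // shows the live preview
}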
Requesting camera permission
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    print("Camera permission already granted")
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        print("Camera permission request \(granted ? "granted" : "denied")")
    }
case .denied:
    print("Camera permission denied")
default:
    break
}
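Note that the app's Info.plist must contain an NSCameraUsageDescription entry, otherwise the app crashes the first time it asks for camera access. In practice it's handy to wrap the check above in a small helper that reports the result through a completion handler before the session is set up. The sketch below is only an illustration I'm adding; the helper name checkCameraPermission is not from the original code.
// Illustrative helper (not from the original post): report camera authorization
// through a completion handler so the caller knows whether to set up the session.
func checkCameraPermission(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default:
        // .denied / .restricted: the camera is not available to the app
        completion(false)
    }
}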
Creating the capture session
Use AVCaptureSession to create the session:
fileprivate var session: AVCaptureSession!
session = AVCaptureSession()
session.sessionPreset = .photo
Here sessionPreset controls the quality/resolution of the captured output; see the documentation for the full list of presets.
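For instance, if a different resolution were wanted, the session can be asked whether it supports a given preset before switching. This is only an illustrative snippet; the .hd1920x1080 preset is just an example.
// Illustrative only: switch to 1080p output if the session supports that preset.
if session.canSetSessionPreset(.hd1920x1080) {
    session.sessionPreset = .hd1920x1080
}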
Getting the camera
Use the AVCaptureDevice class to get the back camera:
AVCaptureDevice represents a physical device that provides real-time input media data, such as video and audio.
fileprivate var device: AVCaptureDevice!
Get the camera:
func cameraWithPosition(position: AVCaptureDevice.Position) -> AVCaptureDevice {
    // Look for the ultra-wide camera first.
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInUltraWideCamera],
                                                     mediaType: .video,
                                                     position: position)
    if let ultraWide = discovery.devices.last {
        return ultraWide // ultra-wide camera
    }
    return AVCaptureDevice.default(for: .video)! // fall back to the standard camera
}
The project I was working on needed the ultra-wide camera, so it is fetched by default here, and the standard camera is used when no ultra-wide camera is available.
device = cameraWithPosition(position: AVCaptureDevice.Position.back)
Getting the input data
Use the AVCaptureDeviceInput class to get the data coming from the camera:
AVCaptureDeviceInput captures data from an AVCaptureDevice (here, the frames from the camera) and is then added to the AVCaptureSession, which manages it.
fileprivate var input: AVCaptureDeviceInput!
Create the input from device:
do {
    input = try AVCaptureDeviceInput(device: device)
} catch {
    print("Failed to create the device input: \(error)")
}
Add input to the session:
if session.canAddInput(input) {
session.addInput(input)
}
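With the input attached, switching between the front and back cameras later is just a matter of swapping inputs inside a beginConfiguration()/commitConfiguration() pair. The sketch below is only an illustration I'm adding; switchCamera is my own name, and it uses AVCaptureDevice.default(_:for:position:) to fetch the standard camera at the requested position.
// Illustrative sketch: swap the session's camera input for the camera at `position`.
func switchCamera(to position: AVCaptureDevice.Position) {
    guard let newDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: position),
          let newInput = try? AVCaptureDeviceInput(device: newDevice) else { return }
    session.beginConfiguration()
    session.removeInput(input)        // drop the old camera input
    if session.canAddInput(newInput) {
        session.addInput(newInput)    // attach the new camera input
        input = newInput
        device = newDevice
    } else {
        session.addInput(input)       // keep the old input if the new one is rejected
    }
    session.commitConfiguration()
}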
Getting the photo output
Use the AVCapturePhotoOutput class to capture photos:
fileprivate var imageOutput: AVCapturePhotoOutput!
imageOutput = AVCapturePhotoOutput()
Add imageOutput to the session:
if session.canAddOutput(imageOutput) {
session.addOutput(imageOutput)
}
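Putting the pieces so far together, the whole setup can also be done in one place, wrapped in beginConfiguration()/commitConfiguration() so the session applies the changes atomically. This is only a consolidated sketch of the steps above; the setupSession name is my own.
// Consolidated sketch of the session setup described above.
func setupSession() {
    session = AVCaptureSession()
    device = cameraWithPosition(position: .back)
    imageOutput = AVCapturePhotoOutput()

    session.beginConfiguration()
    session.sessionPreset = .photo
    if let cameraInput = try? AVCaptureDeviceInput(device: device),
       session.canAddInput(cameraInput) {
        input = cameraInput
        session.addInput(cameraInput)
    }
    if session.canAddOutput(imageOutput) {
        session.addOutput(imageOutput)
    }
    session.commitConfiguration()
}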
Live preview
Use AVCaptureVideoPreviewLayer for the preview:
fileprivate var previewLayer: AVCaptureVideoPreviewLayer!
Set up the preview:
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenWidth * 4 / 3)
previewLayer.videoGravity = .resizeAspectFill
preview.layer.addSublayer(previewLayer)
Because I chose the .photo preset, the captured photo's native aspect ratio is 4:3, so I set the preview to 4:3 as well. If the photo's aspect ratio and the preview's aspect ratio don't match, the preview will either show black bars or be cropped.
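If the two ratios cannot be matched, the layer's videoGravity decides how the mismatch is handled; both values below are standard AVLayerVideoGravity options.
// .resizeAspectFill (used above) fills the layer and crops the overflow;
// .resizeAspect shows the whole frame and letterboxes it with black bars instead.
previewLayer.videoGravity = .resizeAspect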
Running the capture session
DispatchQueue.global(qos: .background).async {
self.session.startRunning()
}
Note: this call must run off the main thread; startRunning() blocks until the session starts, so calling it on the main thread risks freezing the UI.
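Correspondingly, the session should be stopped when the page goes away, also off the main thread since stopRunning() blocks as well. A minimal sketch, assuming this all lives in a view controller:
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // stopRunning() also blocks, so keep it off the main thread.
    DispatchQueue.global(qos: .background).async {
        self.session.stopRunning()
    }
}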
Capturing a photo
do {
    try input.device.lockForConfiguration()
    // Match the photo orientation to the preview orientation.
    let videoPreviewLayerVideoOrientation = previewLayer?.connection?.videoOrientation
    let photoOutputConnection = imageOutput.connection(with: .video)
    photoOutputConnection?.videoOrientation = videoPreviewLayerVideoOrientation ?? .portrait
    let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    if input.device.isFlashAvailable {
        // Turn on the flash (.auto is also an option).
        photoSettings.flashMode = .on
    }
    imageOutput.capturePhoto(with: photoSettings, delegate: self)
    input.device.unlockForConfiguration()
} catch {
    print("Failed to lock the device for configuration: \(error)")
}
The captured photo is delivered through the AVCapturePhotoOutput delegate method:
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
    guard let data = photo.fileDataRepresentation(), let image = UIImage(data: data) else { return }
    // Save the photo to the system album.
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
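Saving to the album also requires an NSPhotoLibraryAddUsageDescription entry in Info.plist. If you want to know whether the save succeeded, UIImageWriteToSavedPhotosAlbum can take a target and a completion selector instead of nil; a small sketch, where the callback method must use exactly the selector signature UIKit documents:
// Inside photoOutput(_:didFinishProcessingPhoto:error:), pass a target and selector
// instead of nil to receive a completion callback.
UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)

// UIKit calls back into this exact selector signature.
@objc func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {
    print(error == nil ? "Photo saved to the album" : "Failed to save the photo: \(error!)")
}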
The final result looks like this:
QR code scanning, real-time edge detection, and other features will be added in follow-up posts. Stay tuned!