Use CameraX to take photos and record videos in Jetpack Compose

In the history of Android development, the Camera API has always drawn criticism. Anyone who has used it knows the feeling: the configuration is complex, bloated, hard to use, and hard to understand. Judging from the official API iteration path, Google has clearly been trying to improve the developer experience around the camera. The Camera API has gone through three generations so far: Camera (deprecated), Camera2, and CameraX.

[Image: the evolution of the Camera API: Camera → Camera2 → CameraX]

The first-generation Camera API has been deprecated since Android 5.0, and the Camera2 API is notoriously hard to use; many developers found it even less pleasant than the original Camera API. Hence CameraX, which is built on top of Camera2 but adds more developer-friendly abstractions. It is part of the Jetpack component libraries and is currently the officially recommended camera solution. So if you have a new project that involves the camera, or you plan to upgrade old camera code, it is recommended to use CameraX directly.

This article explores how to use CameraX in Jetpack Compose.

CameraX Preparations

First add dependencies:

dependencies {
  def camerax_version = "1.3.0-alpha04"
  // implementation "androidx.camera:camera-core:${camerax_version}" // optional, because camera-camera2 already includes camera-core
  implementation "androidx.camera:camera-camera2:${camerax_version}"
  implementation "androidx.camera:camera-lifecycle:${camerax_version}"
  implementation "androidx.camera:camera-video:${camerax_version}"
  implementation "androidx.camera:camera-view:${camerax_version}"
  implementation "androidx.camera:camera-extensions:${camerax_version}"
}

Note: the latest versions of the above libraries can be found here: https://developer.android.com/jetpack/androidx/releases/camera?hl=zh-cn

Since the camera permission has to be requested at runtime, add the accompanist-permissions dependency to handle permission requests in Compose:

val accompanist_version = "0.31.2-alpha"
implementation "com.google.accompanist:accompanist-permissions:$accompanist_version"

Note: the latest versions of the above library can be found here: https://github.com/google/accompanist/releases

Then remember to add the permission declaration in AndroidManifest.xml:

<manifest .. >
    <uses-permission android:name="android.permission.CAMERA" />
    ..
</manifest>

CameraX has the following minimum version requirements:

  • Android API level 21
  • Android Architecture Components 1.1.1

For a lifecycle-aware Activity, use FragmentActivity or AppCompatActivity.
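
For illustration, the usual Compose host, ComponentActivity, is itself a LifecycleOwner and also works as the lifecycle owner for CameraX. A minimal sketch, assuming the ExampleCameraScreen composable defined later in this article:

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // ComponentActivity is lifecycle-aware, so it can serve as the LifecycleOwner
        // that CameraX binds its use cases to.
        setContent {
            ExampleCameraScreen()
        }
    }
}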

CameraX camera preview

Let's look at how CameraX performs the camera preview.

Create the preview control PreviewView

Since Jetpack Compose does not provide a dedicated composable for camera preview, the solution is to use the AndroidView composable to embed the native preview control in Compose. The code is as follows:

@Composable
private fun CameraPreviewExample() {
    Scaffold(modifier = Modifier.fillMaxSize()) { innerPadding: PaddingValues ->
        AndroidView(
            modifier = Modifier
                .fillMaxSize()
                .padding(innerPadding),
            factory = { context ->
                PreviewView(context).apply {
                    setBackgroundColor(Color.White.toArgb())
                    layoutParams = LinearLayout.LayoutParams(MATCH_PARENT, MATCH_PARENT)
                    scaleType = PreviewView.ScaleType.FILL_START
                    implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                }
            }
        )
    }
}

Here PreviewView is a native View control from the camera-view library. Let's look at a couple of its configuration methods:

1. PreviewView.setImplementationMode(): sets the implementation mode best suited to the application.

Implementation mode

PreviewView can render the preview stream onto the target View using one of the following modes:

  • PERFORMANCE is the default mode. PreviewView displays the video stream using a SurfaceView, but falls back to a TextureView in some cases. SurfaceView has a dedicated drawing surface, which makes it more likely to be implemented as a hardware overlay by the internal hardware compositor, especially when there are no other UI elements (such as buttons) on top of the preview video. Rendering with a hardware overlay lets video frames bypass the GPU path, which lowers platform power consumption and latency.

  • COMPATIBLE mode. In this mode, PreviewView uses a TextureView which, unlike SurfaceView, has no dedicated drawing surface. As a result, the video has to be rendered with blending before it can be displayed. During this extra step, the application can perform additional processing, such as scaling and rotating the video without restriction.

Note: in PERFORMANCE mode, PreviewView falls back to a TextureView if the device does not support SurfaceView, or if the API level is 24 or lower and the camera hardware support level is CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY, or if Preview.getTargetRotation() differs from the display rotation. Do not use this mode if Preview.Builder.setTargetRotation(int) is set to a value other than the display's rotation, because SurfaceView does not support arbitrary rotations. Do not use this mode if the PreviewView needs to be animated, since SurfaceView animations are not supported on API level 24 or lower. Also, for the preview stream state provided by getPreviewStreamState, the PreviewView.StreamState.STREAMING state may occur earlier when this mode is used.

Obviously, if performance is the main concern you should use PERFORMANCE mode, while for compatibility it is best to use COMPATIBLE mode.

2. PreviewView.setScaleType(): sets the scale type best suited to the application.

Scale type

When the preview video resolution differs from the dimensions of the target PreviewView, the video content needs to be cropped or letterboxed to fit the view (maintaining its original aspect ratio). For this purpose, PreviewView provides the following ScaleTypes:

  • FIT_CENTER, FIT_START, and FIT_END add letterboxing. The entire video content is scaled (up or down) to the largest size that can be displayed in the target PreviewView. While the whole video frame is shown, parts of the screen may be left blank. The video frame is aligned to the center, start, or end of the target view, depending on which of the three scale types you choose.

  • FILL_CENTER, FILL_START, and FILL_END crop the content. If the aspect ratio of the video does not match the PreviewView, only part of the content is shown, but the video still fills the entire PreviewView.

The default scale type used by CameraX is FILL_CENTER.

Note: the main purpose of the scale type is to keep the preview from being stretched and distorted. With the older Camera or Camera2 APIs, my usual approach was to query the list of preview resolutions supported by the camera, pick one, and then match the aspect ratio of the SurfaceView or TextureView control to the aspect ratio of the selected resolution, so the preview would not stretch. The end result is exactly what the scale types above provide. Fortunately, with official API support, developers no longer need to do this tedious work by hand.

For example, the left picture below shows a normal preview, while the right picture shows a stretched, distorted preview:

[Image: normal preview (left) vs. stretched, distorted preview (right)]

This kind of experience is very poor. The biggest problem is that the preview is no longer what-you-see-is-what-you-get: the saved picture or video file does not match what was shown in the preview.

Take a 4:3 frame displayed on a 16:9 preview surface as an example: if nothing is done about it, stretching is guaranteed to occur:

[Image: a 4:3 frame stretched to fill a 16:9 preview]

The following image shows the effect of the different scale types:

[Image: comparison of the different scale types]

There are some restrictions when using PreviewView; you cannot do any of the following:

  • Create a SurfaceTexture to set on TextureView and Preview.SurfaceProvider.
  • Retrieve the SurfaceTexture from TextureView and set it on Preview.SurfaceProvider.
  • Get the Surface from SurfaceView and set it on Preview.SurfaceProvider.

If any of the above occurs, Preview will stop streaming frames to the PreviewView.

Bind the lifecycle with CameraController

After creating the PreviewView, the next step is to set a CameraController on the instance we created. CameraController is an abstract class whose implementation is LifecycleCameraController; once created, we bind it to the current lifecycle owner, lifecycleOwner. The code is as follows:

@OptIn(ExperimentalComposeUiApi::class)
@Composable
private fun CameraPreviewExample() {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    val cameraController = remember { LifecycleCameraController(context) }

    Scaffold(modifier = Modifier.fillMaxSize()) { innerPadding: PaddingValues ->
        AndroidView(
            modifier = Modifier
                .fillMaxSize()
                .padding(innerPadding),
            factory = { context ->
                PreviewView(context).apply {
                    setBackgroundColor(Color.White.toArgb())
                    layoutParams = LinearLayout.LayoutParams(MATCH_PARENT, MATCH_PARENT)
                    scaleType = PreviewView.ScaleType.FILL_START
                    implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                }.also { previewView ->
                    previewView.controller = cameraController
                    cameraController.bindToLifecycle(lifecycleOwner)
                }
            },
            onReset = {},
            onRelease = {
                cameraController.unbind()
            }
        )
    }
}

Note that in the code above we unbind the controller from the PreviewView in the onRelease callback of AndroidView. This ensures the camera resources are released when they are no longer needed.

Request camera permission

The preview composable should only be displayed after the application has been granted the camera permission; otherwise, a placeholder composable is shown. Reference code for requesting the permission:

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun ExampleCameraScreen() {
    val cameraPermissionState = rememberPermissionState(android.Manifest.permission.CAMERA)
    LaunchedEffect(key1 = Unit) {
        if (!cameraPermissionState.status.isGranted && !cameraPermissionState.status.shouldShowRationale) {
            cameraPermissionState.launchPermissionRequest()
        }
    }
    if (cameraPermissionState.status.isGranted) {
        // Camera permission granted, show the preview screen
        CameraPreviewExample()
    } else {
        // Not granted, show the "no permission" screen
        NoCameraPermissionScreen(cameraPermissionState = cameraPermissionState)
    }
}

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun NoCameraPermissionScreen(cameraPermissionState: PermissionState) {
    // In this screen you should notify the user that the permission
    // is required and maybe offer a button to start another camera permission request
    Column(horizontalAlignment = Alignment.CenterHorizontally) {
        val textToShow = if (cameraPermissionState.status.shouldShowRationale) {
            // If the user previously denied the permission, explain why the app needs it
            "未获取相机授权将导致该功能无法正常使用。"
        } else {
            // First-time permission request
            "该功能需要使用相机权限,请点击授权。"
        }
        Text(textToShow)
        Spacer(Modifier.height(8.dp))
        Button(onClick = { cameraPermissionState.launchPermissionRequest() }) { Text("请求权限") }
    }
}

For more information about requesting runtime permissions in Compose, refer to Accompanist in Jetpack Compose; I won't repeat the details here.

Full-screen settings

To display the camera preview full screen without the top status bar, you can add the following code in the Activity's onCreate() method before setContent:

if (isFullScreen) {
    requestWindowFeature(Window.FEATURE_NO_TITLE)
    // This must be set, otherwise it does not take effect.
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        window.attributes.layoutInDisplayCutoutMode =
            WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
    }
    WindowCompat.setDecorFitsSystemWindows(window, false)
    val windowInsetsController = WindowCompat.getInsetsController(window, window.decorView)
    windowInsetsController.hide(WindowInsetsCompat.Type.statusBars()) // hide the status bar
    windowInsetsController.hide(WindowInsetsCompat.Type.navigationBars()) // hide the navigation bar
    // Make the bottom navigation bar transparent, shown transiently by swiping, floating above the content
    windowInsetsController.systemBarsBehavior = WindowInsetsController.BEHAVIOR_SHOW_TRANSIENT_BARS_BY_SWIPE
}

Usually this code should work; if it doesn't, try modifying the theme:

// themes.xml
<?xml version="1.0" encoding="utf-8"?>
<resources> 
    <style name="Theme.MyComposeApplication" parent="android:Theme.Material.Light.NoActionBar.Fullscreen" >
        <item name="android:statusBarColor">@android:color/transparent</item>
        <item name="android:navigationBarColor">@android:color/transparent</item>
        <item name="android:windowTranslucentStatus">true</item>
    </style> 
</resources>

Taking pictures with CameraX

For taking pictures, CameraX mainly provides two takePicture() overloads:

  • takePicture(Executor, OnImageCapturedCallback): This method provides a memory buffer for the captured picture.
  • takePicture(OutputFileOptions, Executor, OnImageSavedCallback): This method saves the captured image to the provided file location.

Let's add a FloatingActionButton to CameraPreviewExample that triggers taking a photo when clicked. The code is as follows:

@OptIn(ExperimentalComposeUiApi::class)
@Composable
private fun CameraPreviewExample() {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    val cameraController = remember { LifecycleCameraController(context) }

    Scaffold(
        modifier = Modifier.fillMaxSize(),
        floatingActionButton = {
            FloatingActionButton(onClick = { takePhoto(context, cameraController) }) {
                Icon(
                    imageVector = ImageVector.vectorResource(id = R.drawable.ic_camera_24),
                    contentDescription = "Take picture"
                )
            }
        },
        floatingActionButtonPosition = FabPosition.Center,
    ) { innerPadding: PaddingValues ->
        AndroidView(
            modifier = Modifier
                .fillMaxSize()
                .padding(innerPadding),
            factory = { context ->
                PreviewView(context).apply {
                    setBackgroundColor(Color.White.toArgb())
                    layoutParams = LinearLayout.LayoutParams(MATCH_PARENT, MATCH_PARENT)
                    scaleType = PreviewView.ScaleType.FILL_START
                    implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                }.also { previewView ->
                    previewView.controller = cameraController
                    cameraController.bindToLifecycle(lifecycleOwner)
                }
            },
            onReset = {},
            onRelease = {
                cameraController.unbind()
            }
        )
    }
}
fun takePhoto(context: Context, cameraController: LifecycleCameraController) {
    val mainExecutor = ContextCompat.getMainExecutor(context)
    // Create time stamped name and MediaStore entry.
    val name = SimpleDateFormat(FILENAME, Locale.CHINA)
        .format(System.currentTimeMillis())
    val contentValues = ContentValues().apply {
        put(MediaStore.MediaColumns.DISPLAY_NAME, name)
        put(MediaStore.MediaColumns.MIME_TYPE, PHOTO_TYPE)
        if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
            val appName = context.resources.getString(R.string.app_name)
            put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/${appName}")
        }
    }
    // Create output options object which contains file + metadata
    val outputOptions = ImageCapture.OutputFileOptions
        .Builder(context.contentResolver, MediaStore.Images.Media.EXTERNAL_CONTENT_URI, contentValues)
        .build()
    cameraController.takePicture(outputOptions, mainExecutor, object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
            val savedUri = outputFileResults.savedUri
            Log.d(TAG, "Photo capture succeeded: $savedUri")
            context.notifySystem(savedUri)
        }

        override fun onError(exception: ImageCaptureException) {
            Log.e(TAG, "Photo capture failed: ${exception.message}", exception)
        }
    })
    context.showFlushAnimation()
}

In the onImageSaved callback of OnImageSavedCallback, the Uri of the saved image file can be obtained from outputFileResults, and further business processing can be done from there.

If you want to handle the saving logic yourself after taking a photo, or only display the result without saving it, you can use the other callback, OnImageCapturedCallback:

fun takePhoto2(context: Context, cameraController: LifecycleCameraController) {
    val mainExecutor = ContextCompat.getMainExecutor(context)
    cameraController.takePicture(mainExecutor, object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(image: ImageProxy) {
            Log.e(TAG, "onCaptureSuccess: ${image.imageInfo}")
            // Process the captured image here
            try {
                // The supported format is ImageFormat.YUV_420_888 or PixelFormat.RGBA_8888.
                val bitmap = image.toBitmap()
                Log.e(TAG, "onCaptureSuccess bitmap: ${bitmap.width} x ${bitmap.height}")
            } catch (e: Exception) {
                Log.e(TAG, "onCaptureSuccess Exception: ${e.message}")
            }
        }
    })
    context.showFlushAnimation()
}

In this callback, the ImageProxy#toBitmap method can conveniently convert the raw capture data into a Bitmap for display. However, the format obtained here is ImageFormat.JPEG by default, for which toBitmap fails (it only supports YUV_420_888 and RGBA_8888). You can work around this with the following code:

fun takePhoto2(context: Context, cameraController: LifecycleCameraController) {
    val mainExecutor = ContextCompat.getMainExecutor(context)
    cameraController.takePicture(mainExecutor, object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(image: ImageProxy) {
            Log.e(TAG, "onCaptureSuccess: ${image.format}")
            // Process the captured image here
            try {
                var bitmap: Bitmap? = null
                // The supported format is ImageFormat.YUV_420_888 or PixelFormat.RGBA_8888.
                if (image.format == ImageFormat.YUV_420_888 || image.format == PixelFormat.RGBA_8888) {
                    bitmap = image.toBitmap()
                } else if (image.format == ImageFormat.JPEG) {
                    val planes = image.planes
                    val buffer = planes[0].buffer // For ImageFormat.JPEG, image.getPlanes() returns a single plane, index 0.
                    val size = buffer.remaining()
                    val bytes = ByteArray(size)
                    buffer.get(bytes, 0, size)
                    // ImageFormat.JPEG data can be decoded into a Bitmap directly.
                    bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
                }
                if (bitmap != null) {
                    Log.e(TAG, "onCaptureSuccess bitmap: ${bitmap.width} x ${bitmap.height}")
                }
            } catch (e: Exception) {
                Log.e(TAG, "onCaptureSuccess Exception: ${e.message}")
            }
        }
    })
    context.showFlushAnimation()
}

If YUV data is obtained here, besides calling image.toBitmap() directly, the official samples also provide a utility class that converts YUV_420_888 data into an RGB Bitmap object; see YuvToRgbConverter.kt.
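
A minimal usage sketch, assuming the YuvToRgbConverter class from the official sample has been copied into the project (the constructor and yuvToRgb signature are an assumption based on that sample):

// Assumption: YuvToRgbConverter.kt from the official camera samples is present in the project.
// ImageProxy.image is an experimental accessor (@ExperimentalGetImage).
val converter = YuvToRgbConverter(context)
val outputBitmap = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
image.image?.let { mediaImage ->
    // Convert the YUV_420_888 android.media.Image into an RGB Bitmap.
    converter.yuvToRgb(mediaImage, outputBitmap)
}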

The full code for the above example:

@Composable
fun ExampleCameraNavHost() {
    val navController = rememberNavController()
    NavHost(navController, startDestination = "CameraScreen") {
        composable("CameraScreen") {
            ExampleCameraScreen(navController = navController)
        }
        composable("ImageScreen") {
            ImageScreen(navController = navController)
        }
    }
}

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun ExampleCameraScreen(navController: NavHostController) {
    val cameraPermissionState = rememberPermissionState(Manifest.permission.CAMERA)
    LaunchedEffect(key1 = Unit) {
        if (!cameraPermissionState.status.isGranted && !cameraPermissionState.status.shouldShowRationale) {
            cameraPermissionState.launchPermissionRequest()
        }
    }
    if (cameraPermissionState.status.isGranted) {
        // Camera permission granted, show the preview screen
        CameraPreviewExample(navController)
    } else {
        // Not granted, show the "no permission" screen
        NoCameraPermissionScreen(cameraPermissionState = cameraPermissionState)
    }
}

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun NoCameraPermissionScreen(cameraPermissionState: PermissionState) {
    // In this screen you should notify the user that the permission
    // is required and maybe offer a button to start another camera permission request
    Column(horizontalAlignment = Alignment.CenterHorizontally) {
        val textToShow = if (cameraPermissionState.status.shouldShowRationale) {
            // If the user previously denied the permission, explain why the app needs it
            "未获取相机授权将导致该功能无法正常使用。"
        } else {
            // First-time permission request
            "该功能需要使用相机权限,请点击授权。"
        }
        Text(textToShow)
        Spacer(Modifier.height(8.dp))
        Button(onClick = { cameraPermissionState.launchPermissionRequest() }) { Text("请求权限") }
    }
}

private const val TAG = "CameraXBasic"
private const val FILENAME = "yyyy-MM-dd-HH-mm-ss-SSS"
private const val PHOTO_TYPE = "image/jpeg"

@OptIn(ExperimentalComposeUiApi::class)
@Composable
private fun CameraPreviewExample(navController: NavHostController) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    val cameraController = remember { LifecycleCameraController(context) }

    Scaffold(
        modifier = Modifier.fillMaxSize(),
        floatingActionButton = {
            FloatingActionButton(onClick = {
                takePhoto(context, cameraController, navController)
                // takePhoto2(context, cameraController, navController)
                // takePhoto3(context, cameraController, navController)
            }) {
                Icon(
                    imageVector = ImageVector.vectorResource(id = R.drawable.ic_camera_24),
                    contentDescription = "Take picture"
                )
            }
        },
        floatingActionButtonPosition = FabPosition.Center,
    ) { innerPadding: PaddingValues ->
        AndroidView(
            modifier = Modifier
                .fillMaxSize()
                .padding(innerPadding),
            factory = { context ->
                cameraController.imageCaptureMode = CAPTURE_MODE_MINIMIZE_LATENCY
                PreviewView(context).apply {
                    setBackgroundColor(Color.White.toArgb())
                    layoutParams = LinearLayout.LayoutParams(MATCH_PARENT, MATCH_PARENT)
                    scaleType = PreviewView.ScaleType.FILL_CENTER
                    implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                }.also { previewView ->
                    previewView.controller = cameraController
                    cameraController.bindToLifecycle(lifecycleOwner)
                }
            },
            onReset = {},
            onRelease = {
                cameraController.unbind()
            }
        )
    }
}

fun takePhoto(context: Context, cameraController: LifecycleCameraController, navController: NavHostController) {
    val mainExecutor = ContextCompat.getMainExecutor(context)
    // Create time stamped name and MediaStore entry.
    val name = SimpleDateFormat(FILENAME, Locale.CHINA)
        .format(System.currentTimeMillis())
    val contentValues = ContentValues().apply {
        put(MediaStore.MediaColumns.DISPLAY_NAME, name)
        put(MediaStore.MediaColumns.MIME_TYPE, PHOTO_TYPE)
        if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
            val appName = context.resources.getString(R.string.app_name)
            put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/${appName}")
        }
    }
    // Create output options object which contains file + metadata
    val outputOptions = ImageCapture.OutputFileOptions
        .Builder(context.contentResolver, MediaStore.Images.Media.EXTERNAL_CONTENT_URI, contentValues)
        .build()
    cameraController.takePicture(outputOptions, mainExecutor, object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
            val savedUri = outputFileResults.savedUri
            Log.d(TAG, "Photo capture succeeded: $savedUri")
            context.notifySystem(savedUri)

            navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
            navController.navigate("ImageScreen")
        }

        override fun onError(exception: ImageCaptureException) {
            Log.e(TAG, "Photo capture failed: ${exception.message}", exception)
        }
    })
    context.showFlushAnimation()
}

fun takePhoto2(context: Context, cameraController: LifecycleCameraController, navController: NavHostController) {
    val mainExecutor = ContextCompat.getMainExecutor(context)
    cameraController.takePicture(mainExecutor, object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(image: ImageProxy) {
            Log.e(TAG, "onCaptureSuccess: ${image.format}")
            // Process the captured image here
            val scopeWithNoEffect = CoroutineScope(SupervisorJob())
            scopeWithNoEffect.launch {
                val savedUri = withContext(Dispatchers.IO) {
                    try {
                        var bitmap: Bitmap? = null
                        // The supported format is ImageFormat.YUV_420_888 or PixelFormat.RGBA_8888.
                        if (image.format == ImageFormat.YUV_420_888 || image.format == PixelFormat.RGBA_8888) {
                            bitmap = image.toBitmap()
                        } else if (image.format == ImageFormat.JPEG) {
                            val planes = image.planes
                            val buffer = planes[0].buffer // For ImageFormat.JPEG, image.getPlanes() returns a single plane, index 0.
                            val size = buffer.remaining()
                            val bytes = ByteArray(size)
                            buffer.get(bytes, 0, size)
                            // ImageFormat.JPEG data can be decoded into a Bitmap directly.
                            bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
                        }
                        bitmap?.let {
                            // Save the bitmap to a file
                            val photoFile = File(
                                context.getOutputDirectory(),
                                SimpleDateFormat(FILENAME, Locale.CHINA).format(System.currentTimeMillis()) + ".jpg"
                            )
                            BitmapUtilJava.saveBitmap(bitmap, photoFile.absolutePath, 100)
                            val savedUri = Uri.fromFile(photoFile)
                            savedUri
                        }
                    } catch (e: Exception) {
                        if (e is CancellationException) throw e
                        Log.e(TAG, "onCaptureSuccess Exception: ${e.message}")
                        null
                    }
                }
                mainExecutor.execute {
                    context.notifySystem(savedUri)
                    navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
                    navController.navigate("ImageScreen")
                }
            }
        }
    })
    context.showFlushAnimation()
}
fun takePhoto3(context: Context, cameraController: LifecycleCameraController, navController: NavHostController) {
    val photoFile = File(
        context.getOutputDirectory(),
        SimpleDateFormat(FILENAME, Locale.CHINA).format(System.currentTimeMillis()) + ".jpg"
    )
    val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()
    val mainExecutor = ContextCompat.getMainExecutor(context)
    cameraController.takePicture(outputOptions, mainExecutor, object : ImageCapture.OnImageSavedCallback {
        override fun onError(exception: ImageCaptureException) {
            Log.e(TAG, "Take photo error:", exception)
        }

        override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
            val savedUri = Uri.fromFile(photoFile)
            Log.d(TAG, "Photo capture succeeded: $savedUri")
            context.notifySystem(savedUri)

            navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
            navController.navigate("ImageScreen")
        }
    })
    context.showFlushAnimation()
}

// Flash animation
private fun Context.showFlushAnimation() {
    // We can only change the foreground Drawable using API level 23+ API
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
        // Display flash animation to indicate that photo was captured
        if (this is Activity) {
            val decorView = window.decorView
            decorView.postDelayed({
                decorView.foreground = ColorDrawable(android.graphics.Color.WHITE)
                decorView.postDelayed({ decorView.foreground = null }, ANIMATION_FAST_MILLIS)
            }, ANIMATION_SLOW_MILLIS)
        }
    }
}
// Send a system broadcast
private fun Context.notifySystem(savedUri: Uri?) {
    // Implicit broadcasts are ignored on devices running API level >= 24,
    // so this statement can be removed if you only target API 24+.
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
        sendBroadcast(Intent(Camera.ACTION_NEW_PICTURE, savedUri))
    }
}
private fun Context.getOutputDirectory(): File {
    val mediaDir = externalMediaDirs.firstOrNull()?.let {
        File(it, resources.getString(R.string.app_name)).apply { mkdirs() }
    }
    return if (mediaDir != null && mediaDir.exists()) mediaDir else filesDir
}
// ImageScreen.kt: screen that displays the captured photo
@Composable
fun ImageScreen(navController: NavHostController) {
    val context = LocalContext.current
    var imageBitmap by remember { mutableStateOf<ImageBitmap?>(null) }
    val scope = rememberCoroutineScope()
    val savedUri = navController.previousBackStackEntry?.savedStateHandle?.get<Uri>("savedUri")
    savedUri?.run {
        scope.launch {
            withContext(Dispatchers.IO) {
                val bitmap = BitmapUtilJava.getBitmapFromUri(context, savedUri)
                imageBitmap = BitmapUtilJava.scaleBitmap(bitmap, 1920, 1080).asImageBitmap()
            }
        }
        imageBitmap?.let {
            Image(
                it,
                contentDescription = null,
                modifier = Modifier.fillMaxWidth(),
                contentScale = ContentScale.Crop
            )
        }
    }
}
// Utility methods used above (members of BitmapUtilJava, written in Java)
public static void saveBitmap(Bitmap mBitmap, String filePath, int quality) {
	File f = new File(filePath);
	FileOutputStream fOut = null;
	try {
		fOut = new FileOutputStream(f);
	} catch (FileNotFoundException e) {
		e.printStackTrace();
	}
	mBitmap.compress(Bitmap.CompressFormat.JPEG, quality, fOut);
	try {
		if (fOut != null) {
			fOut.flush();
		}
	} catch (IOException e) {
		e.printStackTrace();
	}
}
/**
 * Scale the image using the larger of the width/height ratios.
 *
 * @param bitmap     the loaded image
 * @param widthSize  target width after scaling, usually the screen width.
 * @param heightSize target height after scaling, usually the screen height.
 */
public static Bitmap scaleBitmap(Bitmap bitmap, int widthSize, int heightSize) {
	int bmpW = bitmap.getWidth();
	int bmpH = bitmap.getHeight();
	float scaleW = ((float) widthSize) / bmpW;
	float scaleH = ((float) heightSize) / bmpH;
	// Scale by the larger of the width/height ratios
	float max = Math.max(scaleW, scaleH);
	Matrix matrix = new Matrix();
	matrix.postScale(max, max);
	return Bitmap.createBitmap(bitmap, 0, 0, bmpW, bmpH, matrix, true);
}
/**
 * Return a Bitmap for the given Uri.
 * @param context
 * @param uri
 * @return
 */
public static Bitmap getBitmapFromUri(Context context, Uri uri){
	try {
		// This also works:
		// BitmapFactory.decodeStream(context.getContentResolver().openInputStream(uri));
		return MediaStore.Images.Media.getBitmap(context.getContentResolver(), uri);
	} catch (Exception e){
		e.printStackTrace();
		return null;
	}
}

Note: Bitmap operations are time-consuming and should be executed off the main thread, for example on a coroutine dispatcher such as Dispatchers.IO. The example code above is only a demonstration and needs further hardening before being used in a production project.

CameraProvider vs. CameraController

The official CameraX samples actually provide two implementations: CameraController and CameraProvider. Choose CameraController if you want the easiest way to use CameraX; choose CameraProvider if you need more flexibility.

To determine which implementation is right for you, the advantages of each are listed below:

CameraController:
  • Requires little setup code.
  • Lets CameraX handle more of the setup process, which means features like tap-to-focus and pinch-to-zoom work automatically.
  • Requires a PreviewView for the camera preview, which lets CameraX provide a seamless end-to-end integration, as in its ML Kit integration, which maps machine learning model result coordinates (such as face bounding boxes) directly to preview coordinates.

CameraProvider:
  • Allows for greater control.
  • Since the app developer handles the setup, there are more opportunities to customize the configuration, such as enabling output image rotation or setting the output image format in ImageAnalysis.
  • Allows a custom Surface to be used for the camera preview, which offers more flexibility, such as reusing existing Surface code as input to other parts of the app.

Use CameraProvider to implement the camera

To make it easy to obtain a CameraProvider, first define an extension function (the await() extension on ListenableFuture comes from an artifact such as androidx.concurrent:concurrent-futures-ktx or kotlinx-coroutines-guava):

private suspend fun Context.getCameraProvider(): ProcessCameraProvider {
    return ProcessCameraProvider.getInstance(this).await()
}

Taking a photo with CameraProvider is very similar to using CameraController; the only difference is that you need an ImageCapture object on which to call the takePicture methods:

private fun takePhoto(
    context: Context,
    imageCapture: ImageCapture,
    onImageCaptured: (Uri) -> Unit,
    onError: (ImageCaptureException) -> Unit
) {
    val photoFile = File(
        context.getOutputDirectory(),
        SimpleDateFormat(FILENAME, Locale.CHINA).format(System.currentTimeMillis()) + ".jpg"
    )
    val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()
    val mainExecutor = ContextCompat.getMainExecutor(context)
    imageCapture.takePicture(outputOptions, mainExecutor, object : ImageCapture.OnImageSavedCallback {
        override fun onError(exception: ImageCaptureException) {
            Log.e(TAG, "Take photo error:", exception)
            onError(exception)
        }

        override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
            val savedUri = Uri.fromFile(photoFile)
            onImageCaptured(savedUri)
            context.notifySystem(savedUri)
        }
    })
    context.showFlushAnimation()
}

Calling code:

@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun CameraPreviewExample2(navController: NavHostController) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current

    val previewView = remember { PreviewView(context) }
    // Create Preview UseCase.
    val preview = remember {
        Preview.Builder().build().apply {
            setSurfaceProvider(previewView.surfaceProvider)
        }
    }
    val imageCapture: ImageCapture = remember { ImageCapture.Builder().build() }
    val cameraSelector = remember { CameraSelector.DEFAULT_BACK_CAMERA } // Select default back camera.
    var pCameraProvider: ProcessCameraProvider? = null

    LaunchedEffect(cameraSelector) {
        val cameraProvider = context.getCameraProvider()
        cameraProvider.unbindAll() // Unbind UseCases before rebinding.
        // Bind UseCases to camera. This function returns a camera
        // object which can be used to perform operations like zoom, flash, and focus.
        cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, imageCapture)
        pCameraProvider = cameraProvider
    }

    Scaffold(
        modifier = Modifier.fillMaxSize(),
        floatingActionButton = {
            FloatingActionButton(onClick = {
                takePhoto(
                    context,
                    imageCapture = imageCapture,
                    onImageCaptured = { savedUri ->
                        Log.d(TAG, "Photo capture succeeded: $savedUri")
                        context.notifySystem(savedUri)
                        navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
                        navController.navigate("ImageScreen")
                    },
                    onError = {
                        Log.e(TAG, "Photo capture failed: ${it.message}", it)
                    }
                )
            }) {
                Icon(
                    imageVector = ImageVector.vectorResource(id = R.drawable.ic_camera_24),
                    contentDescription = "Take picture"
                )
            }
        },
        floatingActionButtonPosition = FabPosition.Center,
    ) { innerPadding: PaddingValues ->
        AndroidView(
            modifier = Modifier
                .fillMaxSize()
                .padding(innerPadding),
            factory = { previewView },
            onReset = {},
            onRelease = {
                pCameraProvider?.unbindAll()
            }
        )
    }
}

The biggest difference here is that the LaunchedEffect side-effect API is used: a coroutine is started to perform the cameraProvider binding, because obtaining the cameraProvider involves a suspend function. Also note that both the PreviewView and the ImageCapture are remembered with remember, so they survive recomposition rather than being recreated every time the composable recomposes.

CameraX common settings

Set the shooting mode

Whether you use CameraController or CameraProvider, you can set the shooting mode with the setCaptureMode() method.

The shooting modes supported by CameraX are:

  • CAPTURE_MODE_MINIMIZE_LATENCY: Shorten the delay time for taking pictures.
  • CAPTURE_MODE_MAXIMIZE_QUALITY: Improve the picture quality of picture capture.
  • CAPTURE_MODE_ZERO_SHUTTER_LAG: zero shutter lag mode, available since 1.2. With zero shutter lag enabled, latency is significantly reduced compared with the default CAPTURE_MODE_MINIMIZE_LATENCY mode, so you never miss a shot.

The shooting mode defaults to CAPTURE_MODE_MINIMIZE_LATENCY. See the setCaptureMode() reference documentation for details.

Zero shutter lag uses a ring buffer that stores the three most recently captured frames. When the user presses the capture button, CameraX calls takePicture() and the ring buffer retrieves the captured frame whose timestamp is closest to the moment the button was pressed. CameraX then reprocesses the capture session to generate an image from that frame, which is saved to disk in JPEG format.

Before enabling zero shutter lag, use isZslSupported() to determine whether the device in question meets the requirements.

If the device does not meet the minimum requirements, CameraX will fall back to CAPTURE_MODE_MINIMIZE_LATENCY.

Zero shutter lag is only available for the image capture use case; you can't enable it for the video capture use case or for camera extensions. Finally, zero shutter lag does not work when the flash is on or in auto mode, since using the flash adds latency.
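
As a minimal sketch of the CameraProvider path (assuming `camera` is the Camera returned by bindToLifecycle()), you could check support before choosing the mode and then rebind with the resulting ImageCapture:

// isZslSupported() is an experimental CameraInfo API, so the call may require an @OptIn annotation.
val captureMode = if (camera.cameraInfo.isZslSupported) {
    ImageCapture.CAPTURE_MODE_ZERO_SHUTTER_LAG
} else {
    ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY
}
val imageCapture = ImageCapture.Builder().setCaptureMode(captureMode).build()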

Example of setting the shooting mode with CameraController:

cameraController.imageCaptureMode = CAPTURE_MODE_MINIMIZE_LATENCY

Example of setting the shooting mode with CameraProvider:

val imageCapture: ImageCapture = remember {
    ImageCapture.Builder().setCaptureMode(ImageCapture.CAPTURE_MODE_ZERO_SHUTTER_LAG).build()
}

Set the flash mode

The default flash mode is FLASH_MODE_OFF. To set the flash mode, use setFlashMode():

  • FLASH_MODE_ON: The flash is always on.
  • FLASH_MODE_AUTO: When shooting in low-light environment, the flash will be turned on automatically.

Example of setting the flash mode with CameraController:

cameraController.imageCaptureFlashMode = ImageCapture.FLASH_MODE_AUTO

Example of setting the flash mode with CameraProvider:

ImageCapture.Builder() 
   .setFlashMode(FLASH_MODE_AUTO)
   .build()

Select a camera

In CameraX, camera selection is handled through the CameraSelector class. CameraX makes the common case of using the default camera easy: you can specify whether you want the default front camera or the default rear camera.

Here is the CameraX code for selecting the default rear camera with CameraController:

var cameraController = LifecycleCameraController(baseContext)
// val selector = CameraSelector.Builder()
//    .requireLensFacing(CameraSelector.LENS_FACING_BACK).build()
val selector = CameraSelector.DEFAULT_BACK_CAMERA // equivalent to the commented-out code above
cameraController.cameraSelector = selector

Here is the CameraX code for selecting the default front camera with CameraProvider:

val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA
cameraProvider.unbindAll() 
var camera = cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCases)

Tap to focus

When a camera preview is displayed on screen, a common control is to set focus when the user taps the preview.

CameraController listens to the PreviewView's touch events and handles tap-to-focus automatically. You can enable and disable tap-to-focus with setTapToFocusEnabled() and check its value with the corresponding getter, isTapToFocusEnabled().
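
A minimal sketch of those setters/getters (tap-to-focus is enabled by default, so this is only needed when toggling it explicitly):

// Toggle tap-to-focus on the controller and read the current value back.
cameraController.isTapToFocusEnabled = true
Log.d(TAG, "tap-to-focus enabled: ${cameraController.isTapToFocusEnabled}")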

The getTapToFocusState() method returns a LiveData object that tracks focus-state changes on the CameraController.

// CameraX: track the state of tap-to-focus over the Lifecycle of a PreviewView,
// with handlers you can define for focused, not focused, and failed states.

val tapToFocusStateObserver = Observer { state ->
    when (state) {
        CameraController.TAP_TO_FOCUS_NOT_STARTED ->
            Log.d(TAG, "tap-to-focus init")
        CameraController.TAP_TO_FOCUS_STARTED ->
            Log.d(TAG, "tap-to-focus started")
        CameraController.TAP_TO_FOCUS_FOCUSED ->
            Log.d(TAG, "tap-to-focus finished (focus successful)")
        CameraController.TAP_TO_FOCUS_NOT_FOCUSED ->
            Log.d(TAG, "tap-to-focus finished (focus unsuccessful)")
        CameraController.TAP_TO_FOCUS_FAILED ->
            Log.d(TAG, "tap-to-focus failed")
    }
}

cameraController.getTapToFocusState().observe(this, tapToFocusStateObserver)

When using CameraProvider, some setup is required for tap to focus to work properly. This example assumes you are using PreviewView. If not used, you'll need to adjust the logic to apply to a custom Surface.

When using PreviewView, follow these steps:

  1. Set up the gesture detector used to handle tap events.
  2. For a tap event, create a MeteringPoint with MeteringPointFactory.createPoint().
  3. From the MeteringPoint, create a FocusMeteringAction.
  4. On the CameraControl object of your Camera (returned from bindToLifecycle()), call startFocusAndMetering(), passing in the FocusMeteringAction.
  5. (Optional) Respond to the FocusMeteringResult.
  6. Set the gesture detector to respond to touch events in PreviewView.setOnTouchListener().

// CameraX: implement tap-to-focus with CameraProvider.

// Define a gesture detector to respond to tap events and call
// startFocusAndMetering on CameraControl. If you want to use a
// coroutine with await() to check the result of focusing, see the
// "Android development concepts" section above.
val gestureDetector = GestureDetectorCompat(context,
    object : SimpleOnGestureListener() {
        override fun onSingleTapUp(e: MotionEvent): Boolean {
            val previewView = previewView ?: return false
            val camera = camera ?: return false
            val meteringPointFactory = previewView.meteringPointFactory
            val focusPoint = meteringPointFactory.createPoint(e.x, e.y)
            val meteringAction = FocusMeteringAction
                .Builder(focusPoint).build()
            lifecycleScope.launch {
                val focusResult = camera.cameraControl
                    .startFocusAndMetering(meteringAction).await()
                if (!focusResult.isFocusSuccessful) {
                    Log.d(TAG, "tap-to-focus failed")
                }
            }
            return true
        }
    }
)

...

// Set the gestureDetector in a touch listener on the PreviewView.
previewView.setOnTouchListener { _, event ->
    // See the pinch-to-zoom scenario for the scaleGestureDetector definition.
    var didConsume = scaleGestureDetector.onTouchEvent(event)
    if (!scaleGestureDetector.isInProgress) {
        didConsume = gestureDetector.onTouchEvent(event)
    }
    didConsume
}

Pinch to zoom

Zooming the preview is another common direct manipulation of the camera preview. With more and more cameras on a device, users also expect the lens with the most suitable focal length to be selected automatically as a result of zooming.

Similar to tap-to-focus, CameraController listens to the PreviewView's touch events and handles pinch-to-zoom automatically. You can enable and disable pinch-to-zoom with setPinchToZoomEnabled() and check its value with the corresponding getter, isPinchToZoomEnabled().
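
A minimal sketch (pinch-to-zoom is enabled by default; CameraController also exposes programmatic zoom setters):

// Toggle pinch-to-zoom and set the zoom programmatically on the controller.
cameraController.isPinchToZoomEnabled = true
// Linear zoom takes a value in 0f..1f that maps onto the camera's min..max zoom ratio.
cameraController.setLinearZoom(0.5f)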

The getZoomState() method returns a LiveData object that tracks ZoomState changes on the CameraController.

// CameraX: track the state of pinch-to-zoom over the Lifecycle of
// a PreviewView, logging the zoom ratio.

val pinchToZoomStateObserver = Observer { state ->
    val zoomRatio = state.getZoomRatio()
    Log.d(TAG, "ptz-zoom-ratio $zoomRatio")
}

cameraController.getZoomState().observe(this, pinchToZoomStateObserver)

When using CameraProvider, some setup is required for pinch-to-zoom to work properly. If you are not using PreviewView, you will need to adapt the logic to your custom Surface.

When using PreviewView, follow these steps:

  1. Set up the scale gesture detector used to handle pinch events.
  2. Obtain the ZoomState from the Camera.CameraInfo object; the Camera instance is returned when you call bindToLifecycle().
  3. If the ZoomState has a zoomRatio value, save it as the current zoom ratio; otherwise, use the camera's default zoom ratio (1.0).
  4. Multiply the current zoom ratio by the scaleFactor to determine the new zoom ratio, and pass it to CameraControl.setZoomRatio().
  5. Set the gesture detector to respond to touch events in PreviewView.setOnTouchListener().

// CameraX: implement pinch-to-zoom with CameraProvider.

// Define a scale gesture detector to respond to pinch events and call
// setZoomRatio on CameraControl.
val scaleGestureDetector = ScaleGestureDetector(context,
    object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
        override fun onScale(detector: ScaleGestureDetector): Boolean {
            val camera = camera ?: return false
            val zoomState = camera.cameraInfo.zoomState
            val currentZoomRatio: Float = zoomState.value?.zoomRatio ?: 1f
            camera.cameraControl.setZoomRatio(
                detector.scaleFactor * currentZoomRatio
            )
            return true
        }
    }
)

...

// Set the scaleGestureDetector in a touch listener on the PreviewView.
previewView.setOnTouchListener { _, event ->
    var didConsume = scaleGestureDetector.onTouchEvent(event)
    if (!scaleGestureDetector.isInProgress) {
        // See the tap-to-focus scenario for the gestureDetector definition.
        didConsume = gestureDetector.onTouchEvent(event)
    }
    didConsume
}

Capturing video with CameraX

A capture system typically records a video stream and an audio stream, compresses them, multiplexes the two streams, and writes the resulting streams to disk.

[Image: a typical capture system: video/audio sources, encoders, muxer, and file output]

Overview of the VideoCapture API

In CameraX, the solution for video capture is the VideoCapture use case:

[Image: the CameraX VideoCapture use case]

CameraX video capture consists of several high-level architectural components:

  • SurfaceProvider, representing the video source.
  • AudioSource, representing the audio source.
  • Two encoders, to encode and compress the video and audio.
  • A media muxer, to multiplex the two streams.
  • A file saver, to write out the result.

The VideoCapture API abstracts away this complex capture engine and provides applications with a simpler, more intuitive API.

VideoCapture is a CameraX use case that can be used alone or combined with other use cases. The exact combinations supported depend on the camera hardware capabilities, but the combination of Preview and VideoCapture works on all devices.

Note: VideoCapture is implemented in CameraX's camera-video library, available in 1.1.0-alpha10 and later. The CameraX VideoCapture API is not final and may change over time.

The VideoCapture API consists of the following objects that communicate with the application:

  • VideoCapture is the top-level use case class. It is bound to a LifecycleOwner together with a CameraSelector and other CameraX use cases.
  • Recorder is an implementation of VideoOutput that is tightly coupled with VideoCapture. Recorder is used to perform the video and audio capture operations. An application creates recordings from a Recorder.
  • PendingRecording configures a recording, providing options such as enabling audio and setting an event listener. You must use a Recorder to create a PendingRecording. A PendingRecording does not record anything by itself.
  • Recording performs the actual recording. You must use a PendingRecording to create a Recording.

The following figure shows the relationship between these objects:

[Image: relationships between the VideoCapture API objects]
Legend:

  1. Create a Recorder with a QualitySelector.
  2. Configure the Recorder with one of the OutputOptions.
  3. Enable audio with withAudioEnabled() if needed.
  4. Call start() with a VideoRecordEvent listener to begin recording.
  5. Use pause()/resume()/stop() on the Recording to control the recording operation.
  6. Respond to VideoRecordEvents inside your event listener.

A detailed API list is located in current.txt within the source code.

Shoot video with CameraProvider

If you bind the VideoCapture use case with CameraProvider, you need to create the VideoCapture use case and pass in a Recorder object. The video quality can be set via Recorder.Builder, optionally with a FallbackStrategy in case the device does not meet the desired quality specification. Finally, the VideoCapture instance is bound to the CameraProvider along with the other use cases.

Create a QualitySelector object

Apps configure the video resolution of a Recorder through a QualitySelector object.

CameraX Recorder supports the following predefined video resolution Qualities:

  • Quality.UHD for 4K Ultra HD video size ( 2160p )
  • Quality.FHD for Full HD video size ( 1080p )
  • Quality.HD for HD video size ( 720p )
  • Quality.SD for standard definition video sizes ( 480p )

Note that CameraX can also choose other resolutions when the app authorizes it to. The exact video size of each option depends on the capabilities of the camera and encoder. For details, see the documentation for CamcorderProfile.

You can create a QualitySelector in one of the following ways:

  1. Use fromOrderedList() to provide several preferred resolutions, and include a fallback strategy in case none of the preferred resolutions is supported.

    CameraX can determine the best fallback match based on the capabilities of the selected camera; see QualitySelector's FallbackStrategy specification for details. For example, the following code requests the highest supported recording resolution; if none of the requested resolutions can be supported, CameraX is authorized to choose the one closest to Quality.SD:

val qualitySelector = QualitySelector.fromOrderedList(
         listOf(Quality.UHD, Quality.FHD, Quality.HD, Quality.SD),
         FallbackStrategy.lowerQualityOrHigherThan(Quality.SD))
  2. First query the camera's supported resolutions, then choose from them with QualitySelector::from():
val cameraInfo = cameraProvider.availableCameraInfos.filter {
    Camera2CameraInfo
        .from(it)
        .getCameraCharacteristic(CameraCharacteristics.LENS_FACING) == CameraMetadata.LENS_FACING_BACK
}

val supportedQualities = QualitySelector.getSupportedQualities(cameraInfo[0])
val filteredQualities = arrayListOf(Quality.UHD, Quality.FHD, Quality.HD, Quality.SD)
    .filter { supportedQualities.contains(it) }

// Use a simple ListView with the id of simple_quality_list_view
viewBinding.simpleQualityListView.apply {
    adapter = ArrayAdapter(context,
                           android.R.layout.simple_list_item_1,
                           filteredQualities.map { it.qualityToString() })

    // Set up the user interaction to manually show or hide the system UI.
    setOnItemClickListener { _, _, position, _ ->
        // Inside View.OnClickListener,
        // convert Quality.* constant to QualitySelector
        val qualitySelector = QualitySelector.from(filteredQualities[position])

        // Create a new Recorder/VideoCapture for the new quality
        // and bind to lifecycle
        val recorder = Recorder.Builder()
            .setQualitySelector(qualitySelector).build()

        // ...
    }
}

// A helper function to translate Quality to a string
fun Quality.qualityToString() : String {
    return when (this) {
        Quality.UHD -> "UHD"
        Quality.FHD -> "FHD"
        Quality.HD -> "HD"
        Quality.SD -> "SD"
        else -> throw IllegalArgumentException()
    }
}

Note that the result of QualitySelector.getSupportedQualities() is guaranteed to work for either the VideoCapture use case alone or the combination of VideoCapture and Preview. When also binding the ImageCapture or ImageAnalysis use cases, CameraX may still fail to bind if the required combination is not supported on the requested camera.

Create and bind a VideoCapture object

Once you have a QualitySelector, you can create the VideoCapture object and perform the binding. Note that this binding is the same as for other use cases:

val recorder = Recorder.Builder()
    .setExecutor(cameraExecutor)
    .setQualitySelector(QualitySelector.from(Quality.FHD))
    .build()
val videoCapture = VideoCapture.withOutput(recorder)

try {
    // Bind use cases to camera
    cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, videoCapture)
} catch(exc: Exception) {
    Log.e(TAG, "Use case binding failed", exc)
}

Recorder chooses the format most appropriate for the system. The most common video codec is H.264 AVC, and its container format is MPEG-4.

Note: It is currently not possible to configure the final video codec and container format.

Configure and generate Recording object

Next, various configurations can be applied to the videoCapture.output property to generate a Recording object, which can be used to pause, resume, or stop recording. Finally, call start() to begin recording, passing in a context and a Consumer<VideoRecordEvent> event listener to handle video recording events.

val name = SimpleDateFormat(FILENAME_FORMAT, Locale.US)
    .format(System.currentTimeMillis())
val contentValues = ContentValues().apply {
    put(MediaStore.MediaColumns.DISPLAY_NAME, name)
    put(MediaStore.MediaColumns.MIME_TYPE, "video/mp4")
    if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
        put(MediaStore.Video.Media.RELATIVE_PATH, "Movies/CameraX-Video")
    }
}
// Create MediaStoreOutputOptions for our recorder
val mediaStoreOutputOptions = MediaStoreOutputOptions
    .Builder(contentResolver, MediaStore.Video.Media.EXTERNAL_CONTENT_URI)
    .setContentValues(contentValues)
    .build()

// 2. Configure Recorder and Start recording to the mediaStoreOutput.
val recording = videoCapture.output
    .prepareRecording(context, mediaStoreOutputOptions)
    .withAudioEnabled() // enable audio
    .start(ContextCompat.getMainExecutor(this), captureListener) // start and register the recording event listener

OutputOptions

Recorder supports the following types of OutputOptions:

  • FileDescriptorOutputOptions, for capturing into a FileDescriptor.
  • FileOutputOptions, for capturing into a File.
  • MediaStoreOutputOptions, for capturing into the MediaStore.

Regardless of the OutputOptions type, you can set a maximum file size through setFileSizeLimit(). Other options are specific to individual output types, such as ParcelFileDescriptor, which is specific to FileDescriptorOutputOptions.
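
For illustration, a file-based configuration with a size cap might look like the following sketch. It assumes an outputFile created elsewhere, and the 100 MB limit is only an example value:

// Hypothetical sketch: FileOutputOptions with a file size limit (in bytes).
val fileOutputOptions = FileOutputOptions
    .Builder(outputFile) // outputFile is assumed to be a writable File
    .setFileSizeLimit(100L * 1024 * 1024)
    .build()

// The options are then passed to prepareRecording() just like MediaStoreOutputOptions.
val recording = videoCapture.output
    .prepareRecording(context, fileOutputOptions)
    .start(ContextCompat.getMainExecutor(context)) { /* handle VideoRecordEvents */ }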

Pause, resume and stop

When you call the start() function, the Recorder returns a Recording object. The app can use this Recording object to complete the capture or to perform other actions such as pausing or resuming. You can pause, resume, and stop an ongoing session through Recording:

  • pause(), to pause the currently active recording.
  • resume(), to resume a paused active recording.
  • stop(), which completes the recording and clears all associated recording objects.

Note that you can call stop() to terminate the Recording regardless of whether it is currently paused or active.

Recorder supports one Recording object at a time. After calling Recording.stop() or Recording.close() on the previous Recording object, you can start a new recording.

if (recording != null) {
    // Stop the current recording session.
    recording.stop()
    recording = null
    return
}
..
recording = ..

Event listener

If you registered an event listener with PendingRecording.start(), the Recording communicates with it using VideoRecordEvent.

CameraX sends a VideoRecordEvent.Start event whenever recording starts on the corresponding camera device.

  • VideoRecordEvent.Status carries recording statistics, such as the size of the current file and the recorded time span.
  • VideoRecordEvent.Finalize carries the recording result, including the final file's URI and any associated errors.

After your app receives a Finalize event indicating a successful recording session, you can access the captured video from the location specified in the OutputOptions.

recording = videoCapture.output
    .prepareRecording(context, mediaStoreOutputOptions)
    .withAudioEnabled()
    .start(ContextCompat.getMainExecutor(context)) { recordEvent ->
        when(recordEvent) {
            is VideoRecordEvent.Start -> {
            }
            is VideoRecordEvent.Status -> {
            }
            is VideoRecordEvent.Pause -> {
            }
            is VideoRecordEvent.Resume -> {
            }
            is VideoRecordEvent.Finalize -> {
                if (!recordEvent.hasError()) {
                    val msg = "Video capture succeeded: ${recordEvent.outputResults.outputUri}"
                    context.showToast(msg)
                    Log.d(TAG, msg)
                } else {
                    recording?.close()
                    recording = null
                    Log.e(TAG, "video capture ends with error", recordEvent.cause)
                }
            }
        }
    }

Complete sample code

Here is a complete sample for capturing video with CameraProvider in Compose:

// CameraProvider video capture example
private const val TAG = "CameraXVideo"
private const val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"

@Composable
fun CameraVideoExample(navController: NavHostController) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current

    val previewView = remember { PreviewView(context) }
    // Create Preview UseCase.
    val preview = remember {
        Preview.Builder().build().apply { setSurfaceProvider(previewView.surfaceProvider) }
    }
    var cameraSelector by remember { mutableStateOf(CameraSelector.DEFAULT_BACK_CAMERA) }
    // Create VideoCapture UseCase.
    val videoCapture = remember(cameraSelector) {
        val qualitySelector = QualitySelector.from(Quality.FHD)
        val recorder = Recorder.Builder()
            .setExecutor(ContextCompat.getMainExecutor(context))
            .setQualitySelector(qualitySelector)
            .build()
        VideoCapture.withOutput(recorder)
    }
    // Bind UseCases
    LaunchedEffect(cameraSelector) {
        try {
            val cameraProvider = context.getCameraProvider()
            cameraProvider.unbindAll()
            cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, videoCapture)
        } catch(exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }

    var recording: Recording? = null
    var isRecording by remember { mutableStateOf(false) }
    var time by remember { mutableStateOf(0L) }

    Scaffold(
        modifier = Modifier.fillMaxSize(),
        floatingActionButton = {
            FloatingActionButton(onClick = {
                if (!isRecording) {
                    isRecording = true
                    recording?.stop()
                    time = 0L
                    recording = startRecording(context, videoCapture,
                        onFinished = { savedUri ->
                            if (savedUri != Uri.EMPTY) {
                                val msg = "Video capture succeeded: $savedUri"
                                context.showToast(msg)
                                Log.d(TAG, msg)
                                navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
                                navController.navigate("VideoPlayerScreen")
                            }
                        },
                        onProgress = { time = it },
                        onError = {
                            isRecording = false
                            recording?.close()
                            recording = null
                            time = 0L
                            Log.e(TAG, "video capture ends with error", it)
                        }
                    )
                } else {
                    isRecording = false
                    recording?.stop()
                    recording = null
                    time = 0L
                }
            }) {
                val iconId = if (!isRecording) R.drawable.ic_start_record_36
                    else R.drawable.ic_stop_record_36
                Icon(
                    imageVector = ImageVector.vectorResource(id = iconId),
                    tint = Color.Red,
                    contentDescription = "Capture Video"
                )
            }
        },
        floatingActionButtonPosition = FabPosition.Center,
    ) { innerPadding: PaddingValues ->
        Box(modifier = Modifier
            .padding(innerPadding)
            .fillMaxSize()) {
            AndroidView(
                modifier = Modifier.fillMaxSize(),
                factory = { previewView },
            )
            if (time > 0 && isRecording) {
                Text(text = "${SimpleDateFormat("mm:ss", Locale.CHINA).format(time)} s",
                    modifier = Modifier.align(Alignment.TopCenter),
                    color = Color.Red,
                    fontSize = 16.sp
                )
            }
            if (!isRecording) {
                IconButton(
                    onClick = {
                        cameraSelector = when(cameraSelector) {
                            CameraSelector.DEFAULT_BACK_CAMERA -> CameraSelector.DEFAULT_FRONT_CAMERA
                            else -> CameraSelector.DEFAULT_BACK_CAMERA
                        }
                    },
                    modifier = Modifier
                        .align(Alignment.TopEnd)
                        .padding(bottom = 32.dp)
                ) {
                    Icon(
                        painter = painterResource(R.drawable.ic_switch_camera),
                        contentDescription = "",
                        tint = Color.Green,
                        modifier = Modifier.size(36.dp)
                    )
                }
            }
        }
    }
}

@SuppressLint("MissingPermission")
private fun startRecording(
    context: Context,
    videoCapture: VideoCapture<Recorder>,
    onFinished: (Uri) -> Unit,
    onProgress: (Long) -> Unit,
    onError: (Throwable?) -> Unit
): Recording {
    // Create and start a new recording session.
    val name = SimpleDateFormat(FILENAME_FORMAT, Locale.CHINA)
        .format(System.currentTimeMillis())
    val contentValues = ContentValues().apply {
        put(MediaStore.MediaColumns.DISPLAY_NAME, name)
        put(MediaStore.MediaColumns.MIME_TYPE, "video/mp4")
        if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
            put(MediaStore.Video.Media.RELATIVE_PATH, "Movies/CameraX-Video")
        }
    }

    val mediaStoreOutputOptions = MediaStoreOutputOptions
        .Builder(context.contentResolver, MediaStore.Video.Media.EXTERNAL_CONTENT_URI)
        .setContentValues(contentValues)
        .build()

    return videoCapture.output
        .prepareRecording(context, mediaStoreOutputOptions)
        .withAudioEnabled() // enable audio
        .start(ContextCompat.getMainExecutor(context)) { recordEvent ->
            when(recordEvent) {
                is VideoRecordEvent.Start -> {}
                is VideoRecordEvent.Status -> {
                    val duration = recordEvent.recordingStats.recordedDurationNanos / 1000 / 1000
                    onProgress(duration)
                }
                is VideoRecordEvent.Pause -> {}
                is VideoRecordEvent.Resume -> {}
                is VideoRecordEvent.Finalize -> {
                    if (!recordEvent.hasError()) {
                        val savedUri = recordEvent.outputResults.outputUri
                        onFinished(savedUri)
                    } else {
                        onError(recordEvent.cause)
                    }
                }
            }
        }
}

private suspend fun Context.getCameraProvider(): ProcessCameraProvider {
    return ProcessCameraProvider.getInstance(this).await()
}

Routing and permission configuration:

@Composable
fun CameraVideoCaptureNavHost() {
    val navController = rememberNavController()
    NavHost(navController, startDestination = "CameraVideoScreen") {
        composable("CameraVideoScreen") {
            CameraVideoScreen(navController = navController)
        }
        composable("VideoPlayerScreen") {
            VideoPlayerScreen(navController = navController)
        }
    }
}

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun CameraVideoScreen(navController: NavHostController) {
    val multiplePermissionsState = rememberMultiplePermissionsState(
        listOf(
            Manifest.permission.CAMERA,
            Manifest.permission.RECORD_AUDIO,
        )
    )
    LaunchedEffect(Unit) {
        if (!multiplePermissionsState.allPermissionsGranted) {
            multiplePermissionsState.launchMultiplePermissionRequest()
        }
    }
    if (multiplePermissionsState.allPermissionsGranted) {
        CameraVideoExample(navController)
    } else {
        NoCameraPermissionScreen(multiplePermissionsState)
    }
}

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun NoCameraPermissionScreen(permissionState: MultiplePermissionsState) {
    Column(modifier = Modifier.padding(10.dp)) {
        Text(
            getTextToShowGivenPermissions(
                permissionState.revokedPermissions, // list of denied/revoked permissions
                permissionState.shouldShowRationale
            ),
            fontSize = 16.sp
        )
        Spacer(Modifier.height(8.dp))
        Button(onClick = { permissionState.launchMultiplePermissionRequest() }) {
            Text("Request permissions")
        }
    }
}

@OptIn(ExperimentalPermissionsApi::class)
private fun getTextToShowGivenPermissions(
    permissions: List<PermissionState>,
    shouldShowRationale: Boolean
): String {
    val size = permissions.size
    if (size == 0) return ""
    val textToShow = StringBuilder().apply { append("The following permissions:\n") }
    for (i in permissions.indices) {
        textToShow.append(permissions[i].permission).apply {
            if (i == size - 1) append(" \n") else append(", ")
        }
    }
    textToShow.append(
        if (shouldShowRationale) {
            " need to be granted for the app features to work properly."
        } else {
            " were not granted. The app features will not work properly."
        }
    )
    return textToShow.toString()
}

To view the recorded video, we use Google's ExoPlayer library to play it on another route screen. Add the dependency:

implementation "com.google.android.exoplayer:exoplayer:2.18.7"
// 展示拍摄视频
@Composable
fun VideoPlayerScreen(navController: NavHostController) {
    
    
    val savedUri = navController.previousBackStackEntry?.savedStateHandle?.get<Uri>("savedUri")
    val context = LocalContext.current

    val exoPlayer = savedUri?.let {
    
    
        remember(context) {
    
    
            ExoPlayer.Builder(context).build().apply {
    
    
                setMediaItem(MediaItem.fromUri(savedUri))
                prepare()
            }
        }
    }

    DisposableEffect(
        Box(
            modifier = Modifier.fillMaxSize()
        ) {
    
    
            AndroidView(
                factory = {
    
     context ->
                    StyledPlayerView(context).apply {
    
    
                        player = exoPlayer
                        setShowFastForwardButton(false)
                        setShowNextButton(false)
                        setShowPreviousButton(false)
                        setShowRewindButton(false)
                        controllerHideOnTouch = true
                        controllerShowTimeoutMs = 200
                    }
                },
                modifier = Modifier.fillMaxSize()
            )
        }
    ) {
    
    
        onDispose {
    
    
            exoPlayer?.release()
        }
    }
}

Record video with CameraController

With CameraX's CameraController, you can toggle the ImageCapture, VideoCapture, and ImageAnalysis UseCases independently, provided they can be used at the same time. The ImageCapture and ImageAnalysis UseCases are enabled by default, so you don't need to call setEnabledUseCases() to take a picture.

If you record video with CameraController, you first need to call setEnabledUseCases() to allow the VideoCapture UseCase.

// CameraX: Enable VideoCapture UseCase on CameraController.
cameraController.setEnabledUseCases(VIDEO_CAPTURE)

To start recording video, call the CameraController.startRecording() function. This function can save the recorded video to a File, as shown in the example below. Additionally, you need to pass an Executor and an OnVideoSavedCallback implementation to handle the success and error callbacks.

Starting with CameraX 1.3.0-alpha02, startRecording() returns a Recording object that can be used to pause, resume, and stop video recording. You can also enable or disable audio recording through the AudioConfig parameter; to enable audio, make sure you have the microphone permission. As with CameraProvider, a Consumer<VideoRecordEvent> callback is passed in to monitor video recording events.

@SuppressLint("MissingPermission")
@androidx.annotation.OptIn(ExperimentalVideo::class)
private fun startStopVideo(context: Context, cameraController: LifecycleCameraController): Recording {
    
    
    // Define the File options for saving the video.
    val name = SimpleDateFormat(FILENAME_FORMAT, Locale.CHINA)
        .format(System.currentTimeMillis())+".mp4"

    val outputFileOptions = FileOutputOptions
        .Builder(File(context.filesDir, name))
        .build()

    // Call startRecording on the CameraController.
    return cameraController.startRecording(
        outputFileOptions,
        AudioConfig.create(true), // 开启音频
        ContextCompat.getMainExecutor(context),
    ) {
    
     videoRecordEvent ->
        when(videoRecordEvent) {
    
    
            is VideoRecordEvent.Start -> {
    
    }
            is VideoRecordEvent.Status -> {
    
    }
            is VideoRecordEvent.Pause -> {
    
    }
            is VideoRecordEvent.Resume -> {
    
    }
            is VideoRecordEvent.Finalize -> {
    
    
                if (!videoRecordEvent.hasError()) {
    
    
                    val savedUri = videoRecordEvent.outputResults.outputUri
                    val msg = "Video capture succeeded: $savedUri"
                    context.showToast(msg) 
                    Log.d(TAG, msg) 
                } else {
    
     
                    Log.d(TAG, "video capture ends with error", videoRecordEvent.cause)
                }
            }
        }
    }
}

As you can see, recording video with CameraController is much simpler than with CameraProvider.

Complete sample code

Here is a complete sample for capturing video with CameraController in Compose:

// CameraController video capture example
private const val TAG = "CameraXVideo"
private const val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"

@androidx.annotation.OptIn(ExperimentalVideo::class)
@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun CameraVideoExample2(navController: NavHostController) {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    val cameraController = remember { LifecycleCameraController(context) }

    var recording: Recording? = null
    var time by remember { mutableStateOf(0L) }

    Scaffold(
        modifier = Modifier.fillMaxSize(),
        floatingActionButton = {
            FloatingActionButton(onClick = {
                if (!cameraController.isRecording) {
                    recording?.stop()
                    time = 0L
                    recording = startRecording(context, cameraController,
                        onFinished = { savedUri ->
                            if (savedUri != Uri.EMPTY) {
                                val msg = "Video capture succeeded: $savedUri"
                                context.showToast(msg)
                                Log.d(TAG, msg)
                                navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
                                navController.navigate("VideoPlayerScreen")
                            }
                        },
                        onProgress = { time = it },
                        onError = {
                            recording?.close()
                            recording = null
                            time = 0L
                            Log.e(TAG, "video capture ends with error", it)
                        }
                    )
                } else {
                    recording?.stop()
                    recording = null
                    time = 0L
                }
            }) {
                val iconId = if (!cameraController.isRecording) R.drawable.ic_start_record_36
                    else R.drawable.ic_stop_record_36
                Icon(
                    imageVector = ImageVector.vectorResource(id = iconId),
                    tint = Color.Red,
                    contentDescription = "Capture Video"
                )
            }
        },
        floatingActionButtonPosition = FabPosition.Center,
    ) { innerPadding: PaddingValues ->
        Box(modifier = Modifier
            .padding(innerPadding)
            .fillMaxSize()) {
            AndroidView(
                modifier = Modifier
                    .fillMaxSize()
                    .padding(innerPadding),
                factory = { context ->
                    PreviewView(context).apply {
                        setBackgroundColor(Color.White.toArgb())
                        layoutParams = LinearLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.MATCH_PARENT)
                        scaleType = PreviewView.ScaleType.FILL_CENTER
                        implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                    }.also { previewView ->
                        cameraController.cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
                        previewView.controller = cameraController
                        cameraController.bindToLifecycle(lifecycleOwner)
                        // cameraController.cameraInfo?.let {
                        //    val supportedQualities = QualitySelector.getSupportedQualities(it)
                        // }
                        cameraController.setEnabledUseCases(VIDEO_CAPTURE) // enable the VIDEO_CAPTURE UseCase
                        cameraController.videoCaptureTargetQuality = Quality.FHD
                    }
                },
                onReset = {},
                onRelease = {
                    cameraController.unbind()
                }
            )
            if (time > 0 && cameraController.isRecording) {
                Text(text = "${SimpleDateFormat("mm:ss", Locale.CHINA).format(time)} s",
                    modifier = Modifier.align(Alignment.TopCenter),
                    color = Color.Red,
                    fontSize = 16.sp
                )
            }
            if (!cameraController.isRecording) {
                IconButton(
                    onClick = {
                        cameraController.cameraSelector = when(cameraController.cameraSelector) {
                            CameraSelector.DEFAULT_BACK_CAMERA -> CameraSelector.DEFAULT_FRONT_CAMERA
                            else -> CameraSelector.DEFAULT_BACK_CAMERA
                        }
                    },
                    modifier = Modifier
                        .align(Alignment.TopEnd)
                        .padding(bottom = 32.dp)
                ) {
                    Icon(
                        painter = painterResource(R.drawable.ic_switch_camera),
                        contentDescription = "",
                        tint = Color.Green,
                        modifier = Modifier.size(36.dp)
                    )
                }
            }
        }
    }
}

@SuppressLint("MissingPermission")
@androidx.annotation.OptIn(ExperimentalVideo::class)
private fun startRecording(
    context: Context,
    cameraController: LifecycleCameraController,
    onFinished: (Uri) -> Unit,
    onProgress: (Long) -> Unit,
    onError: (Throwable?) -> Unit,
): Recording {
    // Define the File options for saving the video.
    val name = SimpleDateFormat(FILENAME_FORMAT, Locale.CHINA)
        .format(System.currentTimeMillis()) + ".mp4"

    val outputFileOptions = FileOutputOptions
        .Builder(File(context.getOutputDirectory(), name))
        .build()

    // Call startRecording on the CameraController.
    return cameraController.startRecording(
        outputFileOptions,
        AudioConfig.create(true), // enable audio
        ContextCompat.getMainExecutor(context),
    ) { videoRecordEvent ->
        when(videoRecordEvent) {
            is VideoRecordEvent.Start -> {}
            is VideoRecordEvent.Status -> {
                val duration = videoRecordEvent.recordingStats.recordedDurationNanos / 1000 / 1000
                onProgress(duration)
            }
            is VideoRecordEvent.Pause -> {}
            is VideoRecordEvent.Resume -> {}
            is VideoRecordEvent.Finalize -> {
                if (!videoRecordEvent.hasError()) {
                    val savedUri = videoRecordEvent.outputResults.outputUri
                    onFinished(savedUri)
                    context.notifySystem(savedUri, outputFileOptions.file)
                } else {
                    onError(videoRecordEvent.cause)
                }
            }
        }
    }
}

private fun Context.getOutputDirectory(): File {
    val mediaDir = externalMediaDirs.firstOrNull()?.let {
        File(it, resources.getString(R.string.app_name)).apply { mkdirs() }
    }
    return if (mediaDir != null && mediaDir.exists()) mediaDir else filesDir
}

// Send a system broadcast so the media scanner picks up the new file
private fun Context.notifySystem(savedUri: Uri?, file: File) {
    // Implicit broadcasts are ignored on devices running API level >= 24,
    // so this branch can be removed if you only target API 24+
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
        sendBroadcast(Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, savedUri)) // rescan a single file
    } else {
        MediaScannerConnection.scanFile(this, arrayOf(file.absolutePath), null, null)
    }
}

ImageAnalysis

The image analysis use case provides your application with CPU-accessible images on which you can perform image processing, computer vision, or machine learning inference. The application implements an analyze() method, which is called for every frame.

To use image analysis in your application, follow these steps:

  • Build the ImageAnalysis use case.
  • Create an ImageAnalysis.Analyzer.
  • Set the analyzer on the ImageAnalysis use case.
  • Bind the lifecycleOwner, cameraSelector, and ImageAnalysis use case to the lifecycle (ProcessCameraProvider.bindToLifecycle()).

Immediately after binding, CameraX sends images to the registered analyzer. When the analysis is complete, call ImageAnalysis.clearAnalyzer() or unbind the ImageAnalysis use case to stop the analysis, as in the snippet below.
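
As a minimal sketch, assuming imageAnalysis and cameraProvider are the objects created and bound as described in this section, stopping the analysis can look like this:

// Stop delivering frames to the analyzer but keep the use case bound
imageAnalysis.clearAnalyzer()

// Or unbind the ImageAnalysis use case entirely; other bound use cases keep running
cameraProvider.unbind(imageAnalysis)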

Build an ImageAnalysis use case

ImageAnalysis connects analyzers (image consumers) to CameraX (the image producer). Applications can use ImageAnalysis.Builder to build an ImageAnalysis object. With ImageAnalysis.Builder, the application can configure the following:

Image output parameters:

  • Format: via setOutputImageFormat(int), CameraX supports YUV_420_888 and RGBA_8888. The default format is YUV_420_888.
  • Resolution and AspectRatio: you can set one of these parameters, but note that you cannot set both at the same time.
  • Rotation angle.
  • Target name: use this parameter for debugging.

Image flow control: the backpressure strategy and the image queue depth (see the Operating mode section below).

Here is sample code that builds an ImageAnalysis:

private fun getImageAnalysis(): ImageAnalysis {
    val imageAnalysis = ImageAnalysis.Builder()
        .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
        .setTargetResolution(Size(1280, 720))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
    val executor = Executors.newSingleThreadExecutor()
    imageAnalysis.setAnalyzer(executor, ImageAnalysis.Analyzer { imageProxy ->
        val rotationDegrees = imageProxy.imageInfo.rotationDegrees
        Log.e(TAG, "ImageAnalysis.Analyzer: imageProxy.format = ${imageProxy.format}")
        // insert your code here.
        if (imageProxy.format == ImageFormat.YUV_420_888 || imageProxy.format == PixelFormat.RGBA_8888) {
            val bitmap = imageProxy.toBitmap()
        }
        // ...
        // after done, release the ImageProxy object
        imageProxy.close()
    })
    return imageAnalysis
}

val imageAnalysis = getImageAnalysis()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, imageCapture, imageAnalysis)

Note: the Analyzer callback is invoked for every frame while previewing.

Apps can set either the resolution or the aspect ratio, but not both. The exact output resolution depends on the app's requested size (or aspect ratio) and the hardware capabilities, and may differ from what was requested. See the documentation on setTargetResolution() for the resolution matching algorithm.

Applications can configure the output image pixels to be in the YUV (default) or RGBA color space. When the RGBA output format is set, CameraX internally converts the image from the YUV color space to the RGBA color space and packs the image bits into the ByteBuffer of the ImageProxy's first plane (the other two planes are not used), in the following sequence:

ImageProxy.getPlanes()[0].buffer[0]: alpha
ImageProxy.getPlanes()[0].buffer[1]: red
ImageProxy.getPlanes()[0].buffer[2]: green
ImageProxy.getPlanes()[0].buffer[3]: blue
...
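
Recent CameraX versions expose imageProxy.toBitmap(), as used in the sample above; the following sketch only illustrates what this packing means by copying plane 0 into a Bitmap manually. It assumes the row stride equals width * 4; production code should handle row padding:

// Rough sketch: convert an RGBA_8888 ImageProxy to a Bitmap by hand.
fun ImageProxy.rgbaToBitmap(): Bitmap {
    val buffer = planes[0].buffer.apply { rewind() }
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    // ARGB_8888 bitmaps store their pixels as RGBA in native memory,
    // so the packed plane can be copied over directly.
    bitmap.copyPixelsFromBuffer(buffer)
    return bitmap
}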

Operating mode

When your application's analysis pipeline cannot meet CameraX's frame rate requirements, you can configure CameraX to drop frames in one of the following ways:

  • Non-blocking (default): in this mode, the executor always caches the latest image into an image buffer (similar to a queue with depth 1) while the application analyzes the previous image. If CameraX receives a new image before the app has finished processing, the new image is saved into the same buffer, overwriting the previous one. Note that in this case ImageAnalysis.Builder.setImageQueueDepth() has no effect; the buffer contents are always overwritten. You enable this non-blocking mode by calling setBackpressureStrategy() with STRATEGY_KEEP_ONLY_LATEST. See the STRATEGY_KEEP_ONLY_LATEST reference documentation for more information on executor-related effects.

  • Blocking: in this mode, the internal executor can add multiple images to the internal image queue and only starts dropping frames when the queue is full. Blocking happens across the entire camera device: if a camera device has more than one bound use case, all of them are blocked while CameraX processes these images. For example, if both preview and image analysis are bound to a camera device, the preview is also blocked while CameraX processes the image. You enable blocking mode by passing STRATEGY_BLOCK_PRODUCER to setBackpressureStrategy(). You can also configure the image queue depth with ImageAnalysis.Builder.setImageQueueDepth(), as in the sketch after this list.
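
A minimal sketch of the blocking configuration follows; the queue depth of 5 is only an illustrative value:

// Blocking backpressure: queue up to 5 frames before dropping,
// at the cost of potentially stalling other bound use cases.
val blockingAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_BLOCK_PRODUCER)
    .setImageQueueDepth(5)
    .build()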

If the analyzer has low latency and high performance, so that the total time spent analyzing an image is less than the duration of one CameraX frame (for example, 16 ms at 60 fps), either mode of operation provides a smooth overall experience. Blocking mode can still be useful in some cases, such as when dealing with very brief system jitter.

If the analyzer has high latency but high throughput, blocking mode combined with a longer queue is needed to compensate for the latency. Note that the application can still process all frames in this case.

Non-blocking mode may be more appropriate if the analyzer's latency is so high that it cannot process every frame; in that case the system drops frames on the analysis path, but other concurrently bound use cases can still see all frames.

ML Kit Analyzer (Machine Learning Kit Analyzer)

Google's ML Kit provides on-device machine learning Vision APIs for detecting faces, scanning barcodes, labeling images, and more. With ML Kit Analyzer, you can integrate ML Kit into your CameraX application more easily.

The ML Kit analyzer is an implementation of the ImageAnalysis.Analyzer interface. It overrides the default target resolution (if needed) to optimize for ML Kit usage, handles coordinate transformations, and passes frames to ML Kit, which returns the aggregated analysis results.

Implementing the ML Kit Analyzer

When implementing an ML Kit analyzer, it is recommended to use the CameraController class, which can be used together with PreviewView to display UI elements. When you implement it with CameraController, the ML Kit analyzer handles the coordinate transformation between the raw ImageAnalysis stream and the PreviewView for you. The analyzer receives the target coordinate system from CameraX, computes the coordinate transformation, and forwards it to the ML Kit Detector classes for analysis.

To use an ML Kit analyzer with CameraController, call setImageAnalysisAnalyzer() and pass it a new MlKitAnalyzer object, with the following in its constructor:

  • A list of ML Kit Detectors, which CameraX calls in sequence.

  • The target coordinate system, which determines the coordinates of ML Kit's output:

    COORDINATE_SYSTEM_VIEW_REFERENCED: the transformed PreviewView coordinates.
    COORDINATE_SYSTEM_ORIGINAL: the original ImageAnalysis stream coordinates.

  • An Executor used to invoke the Consumer callback and deliver the MlKitAnalyzer.Result to the application.

  • A Consumer that CameraX invokes whenever there is new ML Kit output.

Using the ML Kit analyzer requires adding a dependency:

def camerax_version = "1.3.0-alpha04"
implementation "androidx.camera:camera-mlkit-vision:${camerax_version}"

QR code/barcode recognition

Add ML Kit barcode dependency library:

implementation 'com.google.mlkit:barcode-scanning:17.1.0'

Here is an example of usage:

private const val TAG = "MLKitAnalyzer"

@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun MLKitAnalyzerCameraExample(navController: NavHostController) {
    
    
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    val cameraController = remember {
    
     LifecycleCameraController(context) }

    AndroidView(
        modifier = Modifier.fillMaxSize() ,
        factory = {
    
     context ->
            PreviewView(context).apply {
    
    
                setBackgroundColor(Color.White.toArgb())
                layoutParams = LinearLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.MATCH_PARENT)
                scaleType = PreviewView.ScaleType.FILL_CENTER
                implementationMode = PreviewView.ImplementationMode.COMPATIBLE
            }.also {
    
     previewView ->
                previewView.controller = cameraController
                cameraController.bindToLifecycle(lifecycleOwner)
                cameraController.imageAnalysisBackpressureStrategy = ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST
                cameraController.setBarcodeAnalyzer(context) {
    
     result ->
                    navController.currentBackStackEntry?.savedStateHandle?.set("result", result)
                    navController.navigate("ResultScreen")
                }
            }
        },
        onReset = {
    
    },
        onRelease = {
    
    
            cameraController.unbind()
        }
    )
}

private fun LifecycleCameraController.setBarcodeAnalyzer(
    context: Context,
    onFound: (String?) -> Unit
) {
    
    
    // create BarcodeScanner object
    val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_QR_CODE,
            Barcode.FORMAT_AZTEC, Barcode.FORMAT_DATA_MATRIX, Barcode.FORMAT_PDF417,
            Barcode.FORMAT_CODABAR, Barcode.FORMAT_CODE_39, Barcode.FORMAT_CODE_93,
            Barcode.FORMAT_EAN_8, Barcode.FORMAT_EAN_13, Barcode.FORMAT_ITF,
            Barcode.FORMAT_UPC_A, Barcode.FORMAT_UPC_E
        )
        .build()
    val barcodeScanner = BarcodeScanning.getClient(options)

    setImageAnalysisAnalyzer(
        ContextCompat.getMainExecutor(context),
        MlKitAnalyzer(
            listOf(barcodeScanner),
            COORDINATE_SYSTEM_VIEW_REFERENCED,
            ContextCompat.getMainExecutor(context)
        ) {
    
     result: MlKitAnalyzer.Result? ->
            val value = result?.getValue(barcodeScanner)
            value?.let {
    
     list ->
                if (list.size > 0) {
    
    
                    list.forEach {
    
     barCode ->
                        Log.e(TAG, "format:${
      
      barCode.format}, displayValue:${
      
      barCode.displayValue}")
                        context.showToast("识别到:${
      
      barCode.displayValue}")
                    }
                    val res = list[0].displayValue
                    if (!res.isNullOrEmpty()) onFound(res) 
                }
            }
        }
    )
}

In the code example above, the ML Kit analyzer passes the following to the BarcodeScanner Detector:

  • A transformation Matrix representing the target coordinate system, based on COORDINATE_SYSTEM_VIEW_REFERENCED.
  • The camera frame.

If the BarcodeScanner encounters any problem, its Detector throws an error, and the ML Kit analyzer propagates that error to your app. If it succeeds, the ML Kit analyzer returns the result via MlKitAnalyzer.Result#getValue(), in this case a Barcode object.

You can also get the type of the value via barcode.valueType:

for (barcode in barcodes) {
    val bounds = barcode.boundingBox
    val corners = barcode.cornerPoints

    val rawValue = barcode.rawValue

    val valueType = barcode.valueType
    // See API reference for complete list of supported types
    when (valueType) {
        Barcode.TYPE_WIFI -> {
            val ssid = barcode.wifi!!.ssid
            val password = barcode.wifi!!.password
            val type = barcode.wifi!!.encryptionType
        }
        Barcode.TYPE_URL -> {
            val title = barcode.url!!.title
            val url = barcode.url!!.url
        }
    }
}

You can also implement ML Kit analyzers with the ImageAnalysis class from camera-core. However, since ImageAnalysis is not integrated with PreviewView, you must handle the coordinate transformation manually. For more information, see the MlKitAnalyzer reference documentation. A minimal sketch of this approach follows.
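
The sketch below reuses the barcodeScanner, preview, and cameraProvider objects from the earlier examples; with COORDINATE_SYSTEM_ORIGINAL the results are expressed in the coordinates of the analysis frame, so mapping them onto a view is up to the app:

// MlKitAnalyzer attached directly to an ImageAnalysis use case (no CameraController).
val executor = ContextCompat.getMainExecutor(context)
val imageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

imageAnalysis.setAnalyzer(
    executor,
    MlKitAnalyzer(
        listOf(barcodeScanner),
        COORDINATE_SYSTEM_ORIGINAL, // raw ImageAnalysis stream coordinates
        executor
    ) { result ->
        val barcodes = result?.getValue(barcodeScanner)
        // Coordinates here are relative to the analysis frame, not the PreviewView.
    }
)

cameraProvider.bindToLifecycle(lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageAnalysis)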

Routing and permission configuration:

@Composable
fun ExampleMLKitAnalyzerNavHost() {
    val navController = rememberNavController()
    NavHost(navController, startDestination = "MLKitAnalyzerCameraScreen") {
        composable("MLKitAnalyzerCameraScreen") {
            MLKitAnalyzerCameraScreen(navController = navController)
        }
        composable("ResultScreen") {
            ResultScreen(navController = navController)
        }
    }
}

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun MLKitAnalyzerCameraScreen(navController: NavHostController) {
    val cameraPermissionState = rememberPermissionState(Manifest.permission.CAMERA)
    LaunchedEffect(Unit) {
        if (!cameraPermissionState.status.isGranted && !cameraPermissionState.status.shouldShowRationale) {
            cameraPermissionState.launchPermissionRequest()
        }
    }
    if (cameraPermissionState.status.isGranted) {
        // Camera permission granted, show the preview screen
        MLKitAnalyzerCameraExample(navController)
    } else {
        // Not granted, show the no-permission screen
        NoCameraPermissionScreen(cameraPermissionState = cameraPermissionState)
    }
}

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun NoCameraPermissionScreen(cameraPermissionState: PermissionState) {
    // In this screen you should notify the user that the permission
    // is required and maybe offer a button to start another camera permission request
    Column(horizontalAlignment = Alignment.CenterHorizontally) {
        val textToShow = if (cameraPermissionState.status.shouldShowRationale) {
            // If the user denied the permission before, explain why the app needs it
            "Camera access was not granted, so this feature cannot work properly."
        } else {
            // First-time request
            "This feature requires the camera permission. Please tap to grant it."
        }
        Text(textToShow)
        Spacer(Modifier.height(8.dp))
        Button(onClick = { cameraPermissionState.launchPermissionRequest() }) { Text("Request permission") }
    }
}

// Display the recognition result
@Composable
fun ResultScreen(navController: NavHostController) {
    val result = navController.previousBackStackEntry?.savedStateHandle?.get<String>("result")
    result?.let {
        Box(modifier = Modifier.fillMaxSize()) {
            Text("$it", fontSize = 18.sp, modifier = Modifier.align(Alignment.Center))
        }
    }
}

For more related content, please refer to: https://developers.google.cn/ml-kit/vision/barcode-scanning/android?hl=zh-cn

Text recognition

Add dependencies:

dependencies {
    
    
  // To recognize Latin script
  implementation 'com.google.mlkit:text-recognition:16.0.0'
  // To recognize Chinese script
  implementation 'com.google.mlkit:text-recognition-chinese:16.0.0'
  // To recognize Devanagari script
  implementation 'com.google.mlkit:text-recognition-devanagari:16.0.0'
  // To recognize Japanese script
  implementation 'com.google.mlkit:text-recognition-japanese:16.0.0'
  // To recognize Korean script
  implementation 'com.google.mlkit:text-recognition-korean:16.0.0'
}

Sample code:

private fun LifecycleCameraController.setTextAnalyzer(
    context: Context,
    onFound: (String) -> Unit
) {
    var called = false
    // val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS) // Latin script
    val recognizer = TextRecognition.getClient(ChineseTextRecognizerOptions.Builder().build()) // Chinese script
    setImageAnalysisAnalyzer(
        ContextCompat.getMainExecutor(context),
        MlKitAnalyzer(
            listOf(recognizer),
            COORDINATE_SYSTEM_VIEW_REFERENCED,
            ContextCompat.getMainExecutor(context)
        ) { result: MlKitAnalyzer.Result? ->
            val value = result?.getValue(recognizer)
            value?.let { resultText ->
                val sb = StringBuilder()
                for (block in resultText.textBlocks) {
                    val blockText = block.text
                    sb.append(blockText).append("\n")
                    val blockCornerPoints = block.cornerPoints
                    val blockFrame = block.boundingBox
                    for (line in block.lines) {
                        val lineText = line.text
                        val lineCornerPoints = line.cornerPoints
                        val lineFrame = line.boundingBox
                        for (element in line.elements) {
                            val elementText = element.text
                            val elementCornerPoints = element.cornerPoints
                            val elementFrame = element.boundingBox
                        }
                    }
                }
                val res = sb.toString()
                if (res.isNotEmpty() && !called) {
                    Log.e(TAG, "$res")
                    context.showToast("Recognized: $res")
                    onFound(res)
                    called = true
                }
            }
        }
    )
}

A text recognizer breaks down text into blocks, lines, elements, and symbols. Roughly speaking:

  • A block is a contiguous set of text lines, such as a paragraph or a column.
  • A line is a contiguous set of words on the same axis.
  • An element is a contiguous set of alphanumeric characters (a "word") in Latin-based languages, or a character in other languages.
  • A symbol is a single alphanumeric character in Latin-based languages, or a character in other languages.

The following figure highlights these concepts in descending order. The first highlighted region (in cyan) is a text block. The second set of highlighted regions (in blue) are lines of text. Finally, the third set, highlighted in dark blue, are the individual words.

insert image description here

For all detected blocks, lines, elements, and symbols, the API returns bounding boxes, corners, rotation information, confidence scores, recognized languages, and recognized text.

For more related content, please refer to: https://developers.google.cn/ml-kit/vision/text-recognition/v2/android?hl=zh-cn

Face detection

Add dependencies:

dependencies {
  // Use this dependency to bundle the model with your app
  implementation 'com.google.mlkit:face-detection:16.1.5'
}

Before applying face detection to a picture, if you want to change the default settings of the face detector, use the FaceDetectorOptions object to specify those settings. You can change the following settings:

  • setPerformanceMode: PERFORMANCE_MODE_FAST (default) / PERFORMANCE_MODE_ACCURATE. Prioritize speed or accuracy when detecting faces.
  • setLandmarkMode: LANDMARK_MODE_NONE (default) / LANDMARK_MODE_ALL. Whether to try to identify facial "landmarks": eyes, ears, nose, cheeks, mouth, and so on.
  • setContourMode: CONTOUR_MODE_NONE (default) / CONTOUR_MODE_ALL. Whether to detect the contours of facial features. Contours are detected only for the most prominent face in the picture.
  • setClassificationMode: CLASSIFICATION_MODE_NONE (default) / CLASSIFICATION_MODE_ALL. Whether to classify faces into categories such as "smiling" and "eyes open".
  • setMinFaceSize: float (default: 0.1f). Sets the smallest desired face size, expressed as the ratio of the head width to the image width.
  • enableTracking: false (default) / true. Whether to assign faces an ID, which can be used to track faces across images.
Note that when contour detection is enabled, only one face is detected, so face tracking will not produce useful results. For this reason, to speed up detection, do not enable contour detection and face tracking at the same time.

For example:

// High-accuracy landmark detection and face classification
val highAccuracyOpts = FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
        .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
        .build()

// Real-time contour detection
val realTimeOpts = FaceDetectorOptions.Builder()
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
        .build()

// Get a FaceDetector instance
val detector = FaceDetection.getClient(highAccuracyOpts)
// Or, to use the default option:
// val detector = FaceDetection.getClient();

If the face detection operation succeeds, a list of Face objects is passed to the success listener. Each Face object represents a face detected in the image. For each face, you get its bounding coordinates in the input image, along with any other information you configured the face detector to find. For example:

for (face in faces) {
    val bounds = face.boundingBox
    val rotY = face.headEulerAngleY // Head is rotated to the right rotY degrees
    val rotZ = face.headEulerAngleZ // Head is tilted sideways rotZ degrees

    // If landmark detection was enabled (mouth, ears, eyes, cheeks, and nose available):
    val leftEar = face.getLandmark(FaceLandmark.LEFT_EAR)
    leftEar?.let {
        val leftEarPos = leftEar.position
    }

    // If contour detection was enabled:
    val leftEyeContour = face.getContour(FaceContour.LEFT_EYE)?.points
    val upperLipBottomContour = face.getContour(FaceContour.UPPER_LIP_BOTTOM)?.points

    // If classification was enabled:
    if (face.smilingProbability != null) {
        val smileProb = face.smilingProbability
    }
    if (face.rightEyeOpenProbability != null) {
        val rightEyeOpenProb = face.rightEyeOpenProbability
    }

    // If face tracking was enabled:
    if (face.trackingId != null) {
        val id = face.trackingId
    }
}

Sample code:

@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun MLKitFaceDetectorExample() {
    val context = LocalContext.current
    val lifecycleOwner = LocalLifecycleOwner.current
    val cameraController = remember { LifecycleCameraController(context) }
    var faces by remember { mutableStateOf(listOf<Face>()) }

    val bounds = remember(faces) {
        faces.map { face -> face.boundingBox }
    }

    val points = remember(faces) { getPoints(faces) }

    Box(modifier = Modifier.fillMaxSize()) {
        AndroidView(
            modifier = Modifier.fillMaxSize(),
            factory = { context ->
                PreviewView(context).apply {
                    setBackgroundColor(Color.White.toArgb())
                    layoutParams = LinearLayout.LayoutParams(
                        ViewGroup.LayoutParams.MATCH_PARENT,
                        ViewGroup.LayoutParams.MATCH_PARENT
                    )
                    scaleType = PreviewView.ScaleType.FILL_CENTER
                    implementationMode = PreviewView.ImplementationMode.COMPATIBLE
                }.also { previewView ->
                    previewView.controller = cameraController
                    cameraController.bindToLifecycle(lifecycleOwner)
                    cameraController.imageAnalysisBackpressureStrategy =
                        ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST
                    cameraController.setFaceDetectorAnalyzer(context) { faces = it }
                }
            },
            onReset = {},
            onRelease = {
                cameraController.unbind()
            }
        )
        Canvas(modifier = Modifier.fillMaxSize()) {
            bounds.forEach { rect ->
                drawRect(
                    Color.Red,
                    size = Size(rect.width().toFloat(), rect.height().toFloat()),
                    topLeft = Offset(x = rect.left.toFloat(), y = rect.top.toFloat()),
                    style = Stroke(width = 5f)
                )
            }
            points.forEach { point ->
                drawCircle(
                    Color.Green,
                    radius = 2.dp.toPx(),
                    center = Offset(x = point.x, y = point.y),
                )
            }
        }
    }
}

private fun LifecycleCameraController.setFaceDetectorAnalyzer(
    context: Context,
    onFound: (List<Face>) -> Unit
) {
    // High-accuracy landmark detection and face classification
    val highAccuracyOpts = FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
        .enableTracking()
        .build()
    // Real-time contour detection
    val realTimeOpts = FaceDetectorOptions.Builder()
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
        .build()
    val detector = FaceDetection.getClient(highAccuracyOpts)
    setImageAnalysisAnalyzer(
        ContextCompat.getMainExecutor(context),
        MlKitAnalyzer(
            listOf(detector),
            COORDINATE_SYSTEM_VIEW_REFERENCED,
            ContextCompat.getMainExecutor(context)
        ) { result: MlKitAnalyzer.Result? ->
            val value = result?.getValue(detector)
            value?.let { onFound(it) }
        }
    )
}

// All landmarks
private val landMarkTypes = intArrayOf(
    FaceLandmark.MOUTH_BOTTOM,
    FaceLandmark.MOUTH_RIGHT,
    FaceLandmark.MOUTH_LEFT,
    FaceLandmark.RIGHT_EYE,
    FaceLandmark.LEFT_EYE,
    FaceLandmark.RIGHT_EAR,
    FaceLandmark.LEFT_EAR,
    FaceLandmark.RIGHT_CHEEK,
    FaceLandmark.LEFT_CHEEK,
    FaceLandmark.NOSE_BASE
)

private fun getPoints(faces: List<Face>) : List<PointF> {
    val points = mutableListOf<PointF>()
    for (face in faces) {
        landMarkTypes.forEach { landMarkType ->
            face.getLandmark(landMarkType)?.let {
                points.add(it.position)
            }
        }
    }
    return points
}

Effect:

insert image description here

For more related content, please refer to: https://developers.google.cn/ml-kit/vision/face-detection/android?hl=zh-cn

Other advanced CameraX configuration options

CameraXConfig

For simplicity, CameraX has default configurations (such as internal executors and handlers) suitable for most usage scenarios. However, if your application has special requirements or wants to customize these configurations, the CameraXConfig interface can be used for this purpose.

With CameraXConfig, apps can do the following:

  • Optimize startup latency with setAvailableCamerasLimiter().
  • Provide an application executor to CameraX with setCameraExecutor().
  • Replace the default scheduler handler with setSchedulerHandler().
  • Change the logging level with setMinimumLoggingLevel().

The following procedure shows how to use CameraXConfig:

  1. Create a CameraXConfig object with your custom configuration.
  2. Implement the CameraXConfig.Provider interface in your Application class and return the CameraXConfig object from getCameraXConfig().
  3. Register the Application class in the AndroidManifest.xml file.

For example, the following code sample limits CameraX logging to error messages only:

class CameraApplication : Application(), CameraXConfig.Provider {
   override fun getCameraXConfig(): CameraXConfig {
       return CameraXConfig.Builder.fromConfig(Camera2Config.defaultConfig())
           .setMinimumLoggingLevel(Log.ERROR).build()
   }
}

If your app needs to know the CameraX configuration after it has been set, keep a local copy of the CameraXConfig object.

Camera limiter

During the first call to ProcessCameraProvider.getInstance(), CameraX enumerates and queries the characteristics of the cameras available on the device. Because CameraX needs to communicate with hardware components, this process can take a long time for each camera, especially on low-end devices. If your application only uses a specific camera on the device (such as the default front-facing camera), you can set CameraX to ignore the other cameras, thereby reducing the startup latency for the cameras your application actually uses.

If the CameraSelector passed to CameraXConfig.Builder.setAvailableCamerasLimiter() filters out a camera, CameraX behaves as if that camera does not exist. For example, the following code restricts the app to use only the device's default rear camera:

class MainApplication : Application(), CameraXConfig.Provider {
   override fun getCameraXConfig(): CameraXConfig {
       return CameraXConfig.Builder.fromConfig(Camera2Config.defaultConfig())
              .setAvailableCamerasLimiter(CameraSelector.DEFAULT_BACK_CAMERA)
              .build()
   }
}

Threads

Many of the platform APIs on which CameraX is built require blocking inter-process communication (IPC) with the hardware, which can sometimes take hundreds of milliseconds to respond. Therefore, CameraX only calls these APIs from background threads, which keeps the main thread from blocking and keeps the UI smooth. CameraX manages these background threads internally, so this behavior is transparent. However, some applications require strict control over threads. CameraXConfig allows an application to set the background threads used, via CameraXConfig.Builder.setCameraExecutor() and CameraXConfig.Builder.setSchedulerHandler(), as sketched below.
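
A minimal sketch of plugging in your own threads; the single-thread executor and the HandlerThread here are illustrative choices, not requirements:

class MainApplication : Application(), CameraXConfig.Provider {
    // Illustrative: a dedicated executor and a handler on its own thread.
    private val cameraExecutor = Executors.newSingleThreadExecutor()
    private val schedulerThread = HandlerThread("CameraXScheduler").apply { start() }

    override fun getCameraXConfig(): CameraXConfig {
        return CameraXConfig.Builder.fromConfig(Camera2Config.defaultConfig())
            .setCameraExecutor(cameraExecutor)
            .setSchedulerHandler(Handler(schedulerThread.looper))
            .build()
    }
}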

Automatic selection

CameraX automatically provides functionality that is specific to the device your app is running on. For example, if you do not specify a resolution, or the resolution you specify is unsupported, CameraX automatically determines the best resolution to use. All of this is handled by the library, without requiring you to write device-specific code.

The goal of CameraX is to initialize a camera session successfully. This means CameraX may compromise on resolution and aspect ratio depending on device capability. The compromise can happen for the following reasons:

  • The device does not support the requested resolution.
  • There are compatibility issues with the device, such as an older device that requires a certain resolution to function properly.
  • On some devices, some formats are only available in certain aspect ratios.
  • For JPEG or video encoding, the device prefers "closest mod16". See SCALER_STREAM_CONFIGURATION_MAP for details .

Although CameraX creates and manages the session, you should always check the size of the image returned by the use case output in your code and adjust accordingly.
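
For example, a minimal sketch (assuming a bound ImageCapture use case named imageCapture and a Context named context) that reads the size actually delivered by the use case instead of assuming the requested resolution:

imageCapture.takePicture(ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(imageProxy: ImageProxy) {
            // The delivered buffer size may differ from the requested resolution on some devices.
            val actualWidth = imageProxy.width
            val actualHeight = imageProxy.height
            // ... adapt any size-dependent processing to (actualWidth, actualHeight) ...
            imageProxy.close()
        }
    })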

Rotation

By default, the camera rotation is set to match the default display's rotation when the use case is created, so CameraX produces output that matches what you expect to see in the preview. To support multi-display devices, you can change the rotation to a custom value, either by passing the current display orientation when configuring the use case object or by setting it dynamically after the use case has been created.

Your app can set the target rotation through a configuration setting and can then update it, even while the lifecycle is running, by calling use case API methods such as ImageAnalysis.setTargetRotation(). This is useful when the app is locked in portrait mode, so no reconfiguration happens on rotation, but the photo or analysis use case still needs to know the device's current rotation, for example to detect faces in the correct orientation or to save photos in landscape or portrait.

The image data of a captured picture may be stored without being rotated; instead, the Exif data carries the rotation information so that gallery apps can display the image in the correct orientation after it is saved.

To display the preview data in the correct orientation, you can use the metadata output from Preview.PreviewOutput() to create the transforms.

The following code sample shows how to set the rotation angle for an orientation event:

override fun onCreate() {
    val imageCapture = ImageCapture.Builder().build()

    val orientationEventListener = object : OrientationEventListener(this as Context) {
        override fun onOrientationChanged(orientation: Int) {
            // Monitors orientation values to determine the target rotation value
            val rotation: Int = when (orientation) {
                in 45..134 -> Surface.ROTATION_270
                in 135..224 -> Surface.ROTATION_180
                in 225..314 -> Surface.ROTATION_90
                else -> Surface.ROTATION_0
            }

            imageCapture.targetRotation = rotation
        }
    }
    orientationEventListener.enable()
}

Each use case will either directly rotate the image data according to the set rotation angle, or provide rotation metadata of the unrotated image data to the user.

  • Preview: provides metadata output so that the rotation setting of the target resolution can be known via Preview.getTargetRotation().
  • ImageAnalysis: provides metadata output so you know where the image buffer coordinates are relative to the display coordinates (see the sketch after this list).
  • ImageCapture: changes the image Exif metadata, the buffer, or both to reflect the rotation setting. Which one is changed depends on the HAL implementation.
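
As a minimal sketch of the ImageAnalysis case (assuming a bound ImageAnalysis use case named imageAnalysis and a Context named context), the rotation metadata can be read from each frame and applied to the analysis results before mapping them onto the display:

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(context)) { imageProxy ->
    // Degrees the buffer must be rotated to match the target rotation.
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    // ... rotate detection results by rotationDegrees before drawing them over the preview ...
    imageProxy.close()
}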

For more information about rotation, please refer to: https://developer.android.google.cn/training/camerax/orientation-rotation?hl=zh-cn

Clipping rectangle

By default, the clipping rectangle is the full buffer rectangle. You can customize it with ViewPort and UseCaseGroup. By grouping use cases and setting a viewport, CameraX guarantees that the clipping rectangles of all use cases in the group point to the same area of the camera sensor.

The following code snippet shows how to use these two classes:

val viewPort =  ViewPort.Builder(Rational(width, height), display.rotation).build()
val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(imageAnalysis)
    .addUseCase(imageCapture)
    .setViewPort(viewPort)
    .build()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)

ViewPort is used to specify the buffer rectangle visible to the end user. CameraX calculates the largest possible clipping rectangle based on the viewport's properties and the attached use cases. Usually, to achieve a WYSIWYG effect, you should configure the viewport based on the preview use case. An easy way to get the viewport is to use PreviewView.

The following code snippet shows how to get a ViewPort object:

val viewport = findViewById<PreviewView>(R.id.preview_view).viewPort

In the preceding example, what the app gets from ImageAnalysis and ImageCapture matches what the end user sees in the PreviewView (assuming the PreviewView scale type is set to the default, FILL_CENTER). Once the clipping rectangle and rotation are applied to the output buffers, the image is consistent across all use cases, although the resolutions may differ. For more information on how to apply transformation information, see Transform output.

Select an available camera

CameraX automatically selects the best camera device based on the application's requirements and use case. If you wish to use a device other than the one automatically selected, you have several options:

  • Use CameraSelector.DEFAULT_FRONT_CAMERA to request the default front camera.
  • Use CameraSelector.DEFAULT_BACK_CAMERA to request the default rear camera.
  • Use CameraSelector.Builder.addCameraFilter() to filter the list of available devices by their CameraCharacteristics.

Note: Camera devices must be recognized by the system and appear in CameraManager.getCameraIdList() before they can be used.

In addition, each original equipment manufacturer (OEM) decides whether to support external camera devices. Therefore, be sure to check that PackageManager.FEATURE_CAMERA_EXTERNAL is enabled before attempting to use any external camera.
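
A minimal check might look like this (assuming a Context named context is available):

val supportsExternalCameras =
    context.packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA_EXTERNAL)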

The following code sample shows how to create a CameraSelector to influence device selection:

fun selectExternalOrBestCamera(provider: ProcessCameraProvider): CameraSelector? {
   val cam2Infos = provider.availableCameraInfos.map {
       Camera2CameraInfo.from(it)
   }.sortedByDescending {
       // HARDWARE_LEVEL is Int type, with the order of:
       // LEGACY < LIMITED < FULL < LEVEL_3 < EXTERNAL
       it.getCameraCharacteristic(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
   }

   return when {
       cam2Infos.isNotEmpty() -> {
           CameraSelector.Builder()
               .addCameraFilter {
                   it.filter { camInfo ->
                       // cam2Infos[0] is either EXTERNAL or best built-in camera
                       val thisCamId = Camera2CameraInfo.from(camInfo).cameraId
                       thisCamId == cam2Infos[0].cameraId
                   }
               }.build()
       }
       else -> null
   }
}

// create a CameraSelector for the USB camera (or highest level internal camera)
val selector = selectExternalOrBestCamera(processCameraProvider)
processCameraProvider.bindToLifecycle(this, selector, preview, analysis)

Select multiple cameras at the same time

Starting with CameraX 1.3, you can also select multiple cameras at the same time. For example, you can use the front and rear cameras together to take photos or record videos from both perspectives simultaneously.

When using the concurrent camera feature, the device can run two cameras with different lens orientations at the same time, or run two rear cameras at the same time. The following code block shows how to set up two cameras when calling bindToLifecycle(), and how to get the two Camera objects from the returned ConcurrentCamera object.

// Build ConcurrentCameraConfig
val primary = ConcurrentCamera.SingleCameraConfig(
    primaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)

val secondary = ConcurrentCamera.SingleCameraConfig(
    secondaryCameraSelector,
    useCaseGroup,
    lifecycleOwner
)

val concurrentCamera = cameraProvider.bindToLifecycle(
    listOf(primary, secondary)
)

val primaryCamera = concurrentCamera.cameras[0]
val secondaryCamera = concurrentCamera.cameras[1]

Camera resolution

You can let CameraX choose the image resolution based on a combination of the device's capabilities, the hardware level it supports, the use case, and the provided aspect ratio. Alternatively, you can set a specific target resolution or a specific aspect ratio on use cases that support such configuration.

Automatic resolution

CameraX can automatically determine the best resolution settings based on the use cases specified in cameraProcessProvider.bindToLifecycle(). Whenever possible, specify all use cases that need to run concurrently in a single session within a single bindToLifecycle() call. CameraX determines resolutions based on that bound set of use cases, taking into account the hardware level supported by the device and accommodating device-specific variance (where a device exceeds or does not meet the available stream configurations). The intent is to let the app run on a wide variety of devices while minimizing device-specific code paths.

The default aspect ratio for image capture and image analysis use cases is 4:3.

For use cases with configurable aspect ratios, let the app specify the desired aspect ratio based on the UI design. CameraX generates output in the requested aspect ratio, matching as closely as possible the aspect ratio supported by the device. If there are no supported exact match resolutions, the resolution that satisfies the most criteria is chosen. That is, the application will determine how the camera is displayed in the application, and CameraX will determine the optimal camera resolution setting to meet the specific requirements of different devices.

For example, an app can do any of the following:

  • Specify a target aspect ratio of 4:3 or 16:9 for a use case
  • Specify a custom resolution, and CameraX will try to find the closest supported match
  • Specify a cropping aspect ratio for ImageCapture

CameraX automatically selects the resolutions of the internal Camera2 surfaces. The table below shows these resolutions:

insert image description here

Specify a resolution

You can set a specific resolution when building a use case by using the setTargetResolution(Size resolution) method, as shown in the following code example:

val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(1280, 720))
    .build()

You cannot set both the target aspect ratio and the target resolution on the same use case. Doing so throws an IllegalArgumentException when the configuration object is built.

Express the resolution as a Size in the coordinate frame obtained after rotating the supported sizes by the target rotation. For example, a device with a portrait natural orientation and a natural target rotation can specify 480x640 to request a portrait image, while the same device, rotated 90 degrees and targeting a landscape orientation, can specify 640x480.

The target resolution attempts to establish a lower bound for the image resolution. The actual image resolution is the closest available resolution that is not smaller than the target resolution, as determined by the camera implementation.

However, if no resolution exists that is equal to or larger than the target resolution, the closest available resolution smaller than the target resolution is chosen. Resolutions with the same aspect ratio as the provided Size are given higher priority than resolutions with different aspect ratios.

CameraX applies the most suitable resolution based on these requests. If the primary need is to satisfy an aspect ratio, specify only setTargetAspectRatio, and CameraX will determine a suitable specific resolution for the device. If the main requirement is to specify a resolution so that image processing is more efficient (for example, a small or medium-sized image chosen according to the device's processing capability), use setTargetResolution(Size resolution).
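
A minimal sketch of the aspect-ratio approach, using the ImageCapture use case as an example:

val imageCapture = ImageCapture.Builder()
    // Let CameraX pick the closest supported 16:9 output size for this device.
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .build()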

Note: If you use setTargetResolution(), you may get buffers whose aspect ratios do not match those of other use cases. If the aspect ratios must match, check the buffer dimensions returned by both use cases and crop or scale one of them to match the other.
If your application requires an exact resolution, see the table in createCaptureSession() to determine the maximum supported resolution for each hardware level. To check the specific resolutions supported by the current device, see StreamConfigurationMap.getOutputSizes(int).

If your app runs on Android 10 or higher, you can use isSessionConfigurationSupported() to verify a specific SessionConfiguration.

Control camera output

Not only does CameraX allow you to optionally configure the camera output for each individual use case, but it also implements the following interfaces to support common camera operations across all bundled use cases:

  • With CameraControl, you can configure common camera features.
  • With CameraInfo, you can query the status of these generic camera features.

CameraControl supports the following camera features:

  • Zoom
  • Torch
  • Focus and metering (tap to focus)
  • Exposure compensation

Get instances of CameraControl and CameraInfo

You can retrieve instances of CameraControl and CameraInfo from the Camera object returned by ProcessCameraProvider.bindToLifecycle(). The following code shows an example:

val camera = processCameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview)

// For performing operations that affect all outputs.
val cameraControl = camera.cameraControl
// For querying information and states.
val cameraInfo = camera.cameraInfo

For example, you can submit zoom and other CameraControl operations after calling bindToLifecycle(). After you stop or destroy the activity used to bind the camera instance, CameraControl can no longer execute operations and returns a failed ListenableFuture.

NOTE: If the LifecycleOwner is stopped or destroyed, the Camera is closed, after which all state changes for the zoom, torch, focus and metering, and exposure compensation controls revert to their default values.

Zoom

CameraControl provides two methods for changing the zoom level:

  • setZoomRatio() sets the zoom by zoom ratio.

    The ratio must be within the range CameraInfo.getZoomState().getValue().getMinZoomRatio() to CameraInfo.getZoomState().getValue().getMaxZoomRatio(). Otherwise, the function returns a failed ListenableFuture.

  • setLinearZoom() sets the current zoom with a linear zoom value between 0 and 1.0.

The advantage of linear zoom is that the field of view (FOV) scales linearly with changes in the zoom value, which makes it work well with a Slider view.

CameraInfo.getZoomState() returns a LiveData of the current zoom state. The value changes when the camera is initialized or when the zoom level is set with setZoomRatio() or setLinearZoom(). Calling either method sets the values backing ZoomState.getZoomRatio() and ZoomState.getLinearZoom(). This is helpful if you want to display zoom ratio text next to a zoom slider: both can be updated by simply observing the ZoomState LiveData, without any conversion.

The ListenableFuture returned by these two APIs offers the option of notifying the app when a repeating request with the specified zoom value is completed. Also, if you set a new zoom value while a previous zoom operation is still in progress, the previous operation's ListenableFuture fails immediately.
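
A minimal sketch, assuming camera was returned by bindToLifecycle() and lifecycleOwner is the LifecycleOwner it was bound to:

// Linear zoom, e.g. driven by a slider position in the 0..1 range.
camera.cameraControl.setLinearZoom(0.5f)

// Zoom ratio, clamped to the range reported by ZoomState.
camera.cameraInfo.zoomState.value?.let { zoomState ->
    val ratio = 2f.coerceIn(zoomState.minZoomRatio, zoomState.maxZoomRatio)
    camera.cameraControl.setZoomRatio(ratio)
}

// Observe ZoomState to keep the zoom UI (ratio text, slider position) in sync.
camera.cameraInfo.zoomState.observe(lifecycleOwner) { state ->
    Log.d(TAG, "zoomRatio=${state.zoomRatio}, linearZoom=${state.linearZoom}")
}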

Torch

CameraControl.enableTorch(boolean) enables or disables the torch (flashlight).

CameraInfo.getTorchState() can be used to query the current torch state. You can check the value returned by CameraInfo.hasFlashUnit() to determine whether a torch is available. If it is not, calling CameraControl.enableTorch(boolean) causes the returned ListenableFuture to complete immediately with a failed result and leaves the torch state as TorchState.OFF.

When the torch is enabled, it remains on while taking photos and recording videos, regardless of the flash mode setting. The flashMode of ImageCapture only takes effect when the torch is disabled.
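
A minimal sketch, assuming camera was returned by bindToLifecycle() and lifecycleOwner is the bound LifecycleOwner:

if (camera.cameraInfo.hasFlashUnit()) {
    // Turn the torch on; the returned ListenableFuture completes once the request is applied.
    camera.cameraControl.enableTorch(true)
}

// Observe the torch state (TorchState.ON / TorchState.OFF).
camera.cameraInfo.torchState.observe(lifecycleOwner) { state ->
    Log.d(TAG, "Torch on: ${state == TorchState.ON}")
}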

Focus and Metering

CameraControl.startFocusAndMetering() triggers autofocus and exposure metering by setting AF/AE/AWB metering regions based on the given FocusMeteringAction. Many camera apps implement "tap to focus" this way.

MeteringPoint

First, create a MeteringPoint using MeteringPointFactory.createPoint(float x, float y, float size). A MeteringPoint represents a single point on the camera Surface. It is stored in normalized form so it can easily be converted to sensor coordinates when specifying AF/AE/AWB regions.

A MeteringPoint has a size between 0 and 1, with a default of 0.15f. When calling MeteringPointFactory.createPoint(float x, float y, float size), CameraX creates a rectangular region of the given size centered on the provided (x, y).

The following code demonstrates how to create a MeteringPoint:

// Use PreviewView.getMeteringPointFactory if PreviewView is used for preview.
previewView.setOnTouchListener { view, motionEvent ->
    val meteringPoint = previewView.meteringPointFactory
        .createPoint(motionEvent.x, motionEvent.y)
    // ...
    true
}

// Use DisplayOrientedMeteringPointFactory if SurfaceView / TextureView is used for
// preview. Please note that if the preview is scaled or cropped in the View,
// it’s the application's responsibility to transform the coordinates properly
// so that the width and height of this factory represents the full Preview FOV.
// And the (x,y) passed to create MeteringPoint might need to be adjusted with
// the offsets.
val meteringPointFactory = DisplayOrientedMeteringPointFactory(
     surfaceView.display,
     camera.cameraInfo,
     surfaceView.width,
     surfaceView.height
)

// Use SurfaceOrientedMeteringPointFactory if the point is specified in
// ImageAnalysis ImageProxy.
val meteringPointFactory = SurfaceOrientedMeteringPointFactory(
     imageWidth,
     imageHeight,
     imageAnalysis)

startFocusAndMetering and FocusMeteringAction

To call startFocusAndMetering(), the application must build a FocusMeteringAction, which contains one or more MeteringPoints with optional metering mode combinations such as FLAG_AF, FLAG_AE, and FLAG_AWB. The following code demonstrates this usage:

val meteringPoint1 = meteringPointFactory.createPoint(x1, y1)
val meteringPoint2 = meteringPointFactory.createPoint(x2, y2)
val action = FocusMeteringAction.Builder(meteringPoint1) // default AF|AE|AWB
      // Optionally add meteringPoint2 for AF/AE.
      .addPoint(meteringPoint2, FLAG_AF | FLAG_AE)
      // The action is canceled in 3 seconds (if not set, default is 5s).
      .setAutoCancelDuration(3, TimeUnit.SECONDS)
      .build()

val result = cameraControl.startFocusAndMetering(action)
// Adds listener to the ListenableFuture if you need to know the focusMetering result.
result.addListener({
   // result.get().isFocusSuccessful returns if the auto focus is successful or not.
}, ContextCompat.getMainExecutor(this))

As shown in the code above, startFocusAndMetering() accepts a FocusMeteringAction that contains one MeteringPoint for the AF/AE/AWB metering regions and another MeteringPoint for AF and AE only.

Internally, CameraX converts it to Camera2 MeteringRectangles and sets the corresponding CONTROL_AF_REGIONS / CONTROL_AE_REGIONS / CONTROL_AWB_REGIONS parameters on the capture request.

Since not every device supports AF/AE/AWB and multiple regions, CameraX executes the FocusMeteringAction on a best-effort basis. CameraX uses the maximum supported number of MeteringPoints, in the order in which they were added, and ignores any MeteringPoints added beyond that number. For example, if a FocusMeteringAction provides 3 MeteringPoints on a platform that supports only 2, only the first 2 are used and the last MeteringPoint is ignored.

Exposure compensation

Exposure compensation is useful when an application needs to fine-tune the exposure value (EV) beyond the result of the auto-exposure (AE) output. CameraX combines exposure compensation values as follows to determine the required exposure for the current image conditions:

Exposure = ExposureCompensationIndex * ExposureCompensationStep

CameraX provides the Camera.CameraControl.setExposureCompensationIndex() function for setting the exposure compensation to an index value.

Positive index values brighten the image and negative values darken it. Apps can query the supported range with CameraInfo.ExposureState.exposureCompensationRange(), as described in the next section. If the value is supported, the ListenableFuture returned by setExposureCompensationIndex() completes when the value is successfully enabled in a capture request; if the specified index is out of the supported range, the returned ListenableFuture completes immediately with a failed result.

CameraX keeps only the latest outstanding setExposureCompensationIndex() request; calling the function again before the previous request has been executed cancels it.

The following snippet sets the exposure compensation index and registers a callback to know when an exposure change request is executed:

camera.cameraControl.setExposureCompensationIndex(exposureCompensationIndex)
   .addListener({
      // Get the current exposure compensation index, it might be
      // different from the asked value in case this request was
      // canceled by a newer setting request.
      val currentExposureIndex = camera.cameraInfo.exposureState.exposureCompensationIndex
      …
   }, mainExecutor)

Camera.CameraInfo.getExposureState() retrieves the current ExposureState, which includes:

  • Support for exposure compensation controls.
  • The current exposure compensation index.
  • Exposure compensation index range.
  • The exposure compensation step used to calculate the exposure compensation value.

For example, the following code initializes the settings of an exposure SeekBar with the current ExposureState values:

val exposureState = camera.cameraInfo.exposureState
binding.seekBar.apply {
   isEnabled = exposureState.isExposureCompensationSupported
   max = exposureState.exposureCompensationRange.upper
   min = exposureState.exposureCompensationRange.lower
   progress = exposureState.exposureCompensationIndex
}

CameraX Extensions API

CameraX provides an Extensions API for accessing extensions implemented by device manufacturers on various Android devices. See Camera Extensions for a list of supported extension modes .

For a list of devices that support the extension, see Supported devices .

Extensions architecture

The following diagram shows the camera extension architecture.

insert image description here

CameraX applications can use extensions through the CameraX Extensions API. The CameraX Extensions API manages queries for available extensions, configures extension camera sessions, and communicates with the OEM's camera extensions library. This allows your app to use features such as Night, HDR, Auto, Bokeh, or Face Retouch.

Dependencies

The CameraX Extensions API is implemented in the camera-extensions library. The extensions depend on the CameraX core modules (core, camera2, lifecycle).

dependencies {
  def camerax_version = "1.3.0-alpha04"
  implementation "androidx.camera:camera-core:${camerax_version}"
  implementation "androidx.camera:camera-camera2:${camerax_version}"
  implementation "androidx.camera:camera-lifecycle:${camerax_version}"
  // the CameraX Extensions library
  implementation "androidx.camera:camera-extensions:${camerax_version}"
  ...
}

Enable an extension for image capture and preview

Before using the Extensions API, retrieve an ExtensionsManager instance with the ExtensionsManager#getInstanceAsync(Context, CameraProvider) method so that you can query extension availability. Then retrieve an extension-enabled CameraSelector. When bindToLifecycle() is called with the extension-enabled CameraSelector, the extension mode is applied to the image capture and preview use cases.

Note: When using the CameraSelector returned by ExtensionsManager#getExtensionEnabledCameraSelector() as an argument to bindToLifecycle() with ImageCapture and Preview, the set of cameras you can choose from may be limited. An exception is thrown if no camera supporting the extension can be found.

The following code sample shows how to enable an extension for the image capture and preview use cases:

import androidx.camera.extensions.ExtensionMode
import androidx.camera.extensions.ExtensionsManager

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)

    val lifecycleOwner = this

    val cameraProviderFuture = ProcessCameraProvider.getInstance(applicationContext)
    cameraProviderFuture.addListener({
        // Obtain an instance of a process camera provider
        // The camera provider provides access to the set of cameras associated with the device.
        // The camera obtained from the provider will be bound to the activity lifecycle.
        val cameraProvider = cameraProviderFuture.get()

        val extensionsManagerFuture =
            ExtensionsManager.getInstanceAsync(applicationContext, cameraProvider)
        extensionsManagerFuture.addListener({
            // Obtain an instance of the extensions manager
            // The extensions manager enables a camera to use extension capabilities available on
            // the device.
            val extensionsManager = extensionsManagerFuture.get()

            // Select the camera
            val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

            // Query if extension is available.
            // Not all devices will support extensions or might only support a subset of
            // extensions.
            if (extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.NIGHT)) {
                // Unbind all use cases before enabling different extension modes.
                try {
                    cameraProvider.unbindAll()

                    // Retrieve a night extension enabled camera selector
                    val nightCameraSelector =
                        extensionsManager.getExtensionEnabledCameraSelector(
                            cameraSelector,
                            ExtensionMode.NIGHT
                        )

                    // Bind image capture and preview use cases with the extension enabled camera
                    // selector.
                    val imageCapture = ImageCapture.Builder().build()
                    val preview = Preview.Builder().build()
                    // Connect the preview to receive the surface the camera outputs the frames
                    // to. This will allow displaying the camera frames in either a TextureView
                    // or SurfaceView. The SurfaceProvider can be obtained from the PreviewView.
                    preview.setSurfaceProvider(surfaceProvider)

                    // Returns an instance of the camera bound to the lifecycle
                    // Use this camera object to control various operations with the camera
                    // Example: flash, zoom, focus metering etc.
                    val camera = cameraProvider.bindToLifecycle(
                        lifecycleOwner,
                        nightCameraSelector,
                        imageCapture,
                        preview
                    )
                } catch (e: Exception) {
                    Log.e(TAG, "Use case binding failed", e)
                }
            }
        }, ContextCompat.getMainExecutor(this))
    }, ContextCompat.getMainExecutor(this))
}

Disable extension

To disable vendor extensions, unbind all use cases, then rebind the image capture and preview use cases using a regular camera selector. For example, use CameraSelector.DEFAULT_BACK_CAMERA to rebind to the rear camera.
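
A minimal sketch, reusing the cameraProvider, lifecycleOwner, imageCapture and preview objects from the previous example:

// Unbind the extension-enabled use cases first.
cameraProvider.unbindAll()

// Rebind with a regular camera selector, which disables the vendor extension.
cameraProvider.bindToLifecycle(
    lifecycleOwner,
    CameraSelector.DEFAULT_BACK_CAMERA,
    imageCapture,
    preview
)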

Summary of Comparison between CameraX and Camera1

Although the code for CameraX and Camera1 may look different, the underlying concepts are very similar. In CameraX, the common camera tasks (Preview, ImageCapture, VideoCapture, and ImageAnalysis) are all abstracted as UseCases, so many tasks that were left to the developer in Camera1 are handled automatically by CameraX.

insert image description here

One example of how CameraX handles low-level details for you is the ViewPort shared between active UseCases, which ensures that all UseCases see exactly the same pixels. In Camera1, you would have to manage these details yourself, and given that devices vary in the aspect ratios of their camera sensors and screens, it can be difficult to ensure that the preview matches the captured photos and videos.

As another example, CameraX automatically handles the lifecycle callbacks of the Lifecycle instance you pass to it. This means CameraX manages your app's connection to the camera throughout the Android activity lifecycle, including closing the camera when the app goes to the background, removing the camera preview when the screen no longer needs to display it, and pausing the preview when another activity (such as an incoming video call) takes priority in the foreground.

Also, CameraX handles rotation and scaling without any extra code on your part. If the activity's orientation is not locked, every UseCase is set up again each time the device is rotated, because the system destroys and recreates the activity when the screen orientation changes. As a result, the UseCases default to a target rotation that matches the screen orientation each time.

insert image description here


Reference: https://developer.android.google.cn/training/camerax?hl=zh-cn
