EVS - Android 12

The previous article introduced the basic concepts of EVS. As Android has iterated through major versions, EVS has kept evolving.

This article walks through the EVS flow on Android 12.

EVS involves three services:

evs_app, evs_manager and evs_sample_driver

evs_app

Each OEM can replace evs_app with its own application. Its main job is to coordinate the camera and the display: it fetches frames through the Google-defined standard interfaces, renders them with EGL, and gets them on screen quickly. It is started as follows:

service evs_app /system/bin/evs_app
    class hal
    priority -20
    user automotive_evs
    group automotive_evs
    disabled # will not automatically start with its class; must be explicitly started.

It is normally started during init, roughly 2 s after boot. As shown above, though, it is disabled by default and must be started explicitly; you can change this and make it auto-start at boot if needed. If the vehicle state does not change, the view stays up for 20 s (also configurable). The vehicle state is obtained through the Vehicle HAL and can be one of:

VehicleGear::GEAR_REVERSE

VehicleTurnSignal::RIGHT

VehicleTurnSignal::LEFT

VehicleGear::GEAR_PARK

Each state maps to its own camera(s), configured in config.json as follows:

  "cameras" : [
    {
      "cameraId" : "/dev/video13",
      "function" : "reverse,park",
      "x" : 0.0,
      "y" : 20.0,
      "z" : 48,
      "yaw" : 180,
      "pitch" : -10,
      "roll" : 0,
      "hfov" : 115,
      "vfov" : 80,
      "hflip" : true,
      "vflip" : false
    },
    {
      "cameraId" : "/dev/video15",
      "function" : "front,park",
      "x" : 0.0,
      "y" : 100.0,
      "z" : 48,
      "yaw" : 0,
      "pitch" : -10,
      "roll" : 0,
      "hfov" : 115,
      "vfov" : 80,
      "hflip" : false,
      "vflip" : false
    },
    {
      "cameraId" : "/dev/video17",
      "function" : "right,park",
      "x" : -25.0,
      "y" : 60.0,
      "z" : 88,
      "yaw" : -90,
      "pitch" : -10,
      "roll" : 0,
      "hfov" : 60,
      "vfov" : 62,
      "hflip" : false,
      "vflip" : false
    },
    {
      "cameraId" : "/dev/video19",
      "function" : "left,park",
      "x" : 20.0,
      "y" : 60.0,
      "z" : 88,
      "yaw" : 90,
      "pitch" : -10,
      "roll" : 0,
      "hfov" : 60,
      "vfov" : 62,
      "hflip" : false,
      "vflip" : false
    }
  ]

As you can see, the park state needs all four cameras to build a 360° surround view, while each of the other three states maps to a single camera.

evs_manager

In Android 12, evs_manager has moved to version 1.1. Its role is unchanged: it provides the building blocks an EVS application needs. Its interface is exposed over HIDL and accepts multiple concurrent clients. Other applications and services (in particular, Car Service) can query the evs_manager state to learn when the EVS system is active.

It is started as follows:

service evs_manager /system/bin/[email protected]
    class hal
    priority -20
    user automotive_evs
    group automotive_evs system
    disabled # will not automatically start with its class; must be explicitly started.

It too is started at boot time and, by default, must be started manually.

evs_sample_driver

This is the driver HAL, started the same way:

service evs_sample_driver /vendor/bin/[email protected]
    class hal
    priority -20
    user graphics
    group automotive_evs camera
    onrestart restart automotive_display
    onrestart restart evs_manager
    disabled # will not automatically start with its class; must be explicitly started.

Note: none of the services above auto-start at boot by default. Without modifying the rc files, you can still control them at boot through a property. In platform/packages/services/Car/car_product/build/car_base.mk the default is

LOCAL_EVS_PROPERTIES ?= persist.automotive.evs.mode=0

and init.car.rc, which runs when car services start, contains:

on property:persist.automotive.evs.mode=0
    # stop EVS and automotive display services
    stop automotive_display
    stop evs_sample_driver
    stop evs_manager

on property:persist.automotive.evs.mode=1
    # start EVS and automotive display services
    start automotive_display
    start evs_sample_driver
    start evs_manager

As you can see, when persist.automotive.evs.mode is set to 1, evs_sample_driver and evs_manager are started along with it. The automotive_display service above is new in EVS 1.1 and appears to be related to multi-display support.
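Given the rc rules above, the services can be toggled at runtime from a host shell. A sketch, assuming a userdebug build where adb root is available:

```shell
adb root
adb shell setprop persist.automotive.evs.mode 1   # fires the "=1" rule: starts automotive_display, evs_sample_driver, evs_manager
adb shell setprop persist.automotive.evs.mode 0   # fires the "=0" rule: stops them again
```

Because the property is in the persist.* namespace, its value survives reboots, so setting it to 1 once effectively makes EVS auto-start from then on.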

Code flow

Now for the code flow, which is the key part; some unimportant steps are omitted.

As mentioned above, evs_sample_driver and evs_manager must be running before the app starts, and evs_sample_driver must come first. (Note: automotive_display must be started before evs_sample_driver, otherwise the IAutomotiveDisplayProxyService service cannot be obtained.) The main flow:

int main() {

    android::sp<IAutomotiveDisplayProxyService> carWindowService =
        IAutomotiveDisplayProxyService::getService("default"); // get AutomotiveDisplayProxyService first


    android::sp<IEvsEnumerator> service = new EvsEnumerator(carWindowService); // construct the EvsEnumerator

Here is the EvsEnumerator constructor:

EvsEnumerator::EvsEnumerator(sp<IAutomotiveDisplayProxyService> proxyService) {

    enumerateCameras();
    enumerateDisplays();

It enumerates the cameras and the displays, i.e. it probes the hardware resources configured on this head unit:

void EvsEnumerator::enumerateCameras() {

    DIR* dir = opendir("/dev");

        while ((entry = readdir(dir)) != nullptr) {
            // We're only looking for entries starting with 'video'
            if (strncmp(entry->d_name, "video", 5) == 0) {
                std::string deviceName("/dev/");
                deviceName += entry->d_name;
                videoCount++;
                if (sCameraList.find(deviceName) != sCameraList.end()) {
                    LOG(INFO) << deviceName << " has been added already.";
                    captureCount++;
                } else if(qualifyCaptureDevice(deviceName.c_str())) {
                    sCameraList.emplace(deviceName, deviceName.c_str());
                    captureCount++;
                }
            }
        }

enumerateCameras() is a typical enumeration of the video device nodes configured below. qualifyCaptureDevice() checks the formats each video device supports; devices that qualify are put into sCameraList, and applications can obtain this camera information through the getCameraList() interface.
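The qualification step can be pictured with a small standalone sketch. This is illustrative only: the real qualifyCaptureDevice() issues V4L2 ioctls (format enumeration) against the opened device, whereas here the format list is passed in directly.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build a V4L2-style fourcc code, e.g. fourcc('Y','U','Y','V') == V4L2_PIX_FMT_YUYV.
constexpr uint32_t fourcc(char a, char b, char c, char d) {
    return uint32_t(uint8_t(a)) | uint32_t(uint8_t(b)) << 8 |
           uint32_t(uint8_t(c)) << 16 | uint32_t(uint8_t(d)) << 24;
}

// A device qualifies if it offers at least one pixel format we know how to convert.
bool qualifies(const std::vector<uint32_t>& deviceFormats) {
    const uint32_t convertible[] = { fourcc('Y','U','Y','V'), fourcc('N','V','2','1') };
    for (uint32_t f : deviceFormats) {
        for (uint32_t c : convertible) {
            if (f == c) return true;
        }
    }
    return false;
}
```

Nodes that fail this check (e.g. a metadata node or an unsupported-format device) simply never appear in sCameraList.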

enumerateDisplays() is similar, except that it reads the port, i.e. the display ID, from config.json.

Once this is done, the hardware resources are known and the app can query them. At this point evs_manager is started, and finally evs_app.

evs_app starts in evs_app.cpp:

int main(int argc, char** argv)

    const char* evsServiceName = "default"; // service name defaults to "default"; override via argv if needed

    android_pixel_format_t extMemoryFormat = HAL_PIXEL_FORMAT_RGBA_8888; // display in RGBA format

    int32_t mockGearSignal = static_cast<int32_t>(VehicleGear::GEAR_PARK); // initial mock state; I changed this to PARK

    if (!config.initialize(CONFIG_DEFAULT_PATH)) {
        LOG(ERROR) << "Missing or improper configuration for the EVS application.  Exiting.";
        return EXIT_FAILURE;
    } // read the camera and display settings from config.json


    pEvs = IEvsEnumerator::getService(evsServiceName); // get the EVS enumerator service (through evs_manager)

    displayId = config.setActiveDisplayId(displayId); // the display port is set in config.json

    pDisplay = pEvs->openDisplay_1_1(displayId); // open the display first


    config.setExternalMemoryFormat(extMemoryFormat); // push the format and gear signal down; they are read back later

    // Set a mock gear signal for the test mode
    config.setMockGearSignal(mockGearSignal);

    if (useVehicleHal) { // if the Vehicle HAL is used (true by default)

        pVnet = IVehicle::getService();

    }

    pStateController = new EvsStateControl(pVnet, pEvs, pDisplay, config); // enter the state-control logic; the main logic lives in EvsStateControl

    pStateController->startUpdateLoop(); // once everything is configured, this loops forever

Here is the EvsStateControl constructor; it calls the standard interfaces to fetch the camera information, which was fully populated when evs_sample_driver started.

EvsStateControl::EvsStateControl(android::sp<IVehicle> pVnet, android::sp<IEvsEnumerator> pEvs,
                                 android::sp<IEvsDisplay> pDisplay, const ConfigManager& config) :


   LOG(INFO) << "Requesting camera list";
    mEvs->getCameraList_1_1(
        [this, &config](hidl_vec<CameraDesc> cameraList) {
            LOG(INFO) << "Camera list callback received " << cameraList.size() << " cameras.";
            for (auto&& cam: cameraList) {
                LOG(INFO) << "Found camera " << cam.v1.cameraId;
                bool cameraConfigFound = false;

               for (auto&& info: config.getCameras()) {
                    if (cam.v1.cameraId == info.cameraId) {
                        // We found a match!
                        if (info.function.find("reverse") != std::string::npos) {
                            mCameraList[State::REVERSE].emplace_back(info);
                            mCameraDescList[State::REVERSE].emplace_back(cam);
                        }
                        if (info.function.find("right") != std::string::npos) {
                            mCameraList[State::RIGHT].emplace_back(info);
                            mCameraDescList[State::RIGHT].emplace_back(cam);
                        }
                        if (info.function.find("left") != std::string::npos) {
                            mCameraList[State::LEFT].emplace_back(info);
                            mCameraDescList[State::LEFT].emplace_back(cam);
                        }
                        if (info.function.find("park") != std::string::npos) {
                            mCameraList[State::PARKING].emplace_back(info);
                            mCameraDescList[State::PARKING].emplace_back(cam);
                        }
                        cameraConfigFound = true;
                        break;
                    }
                }

getCameraList_1_1() retrieves the camera information from the camera list and matches it against the entries set in config.json, so that each vehicle state ends up associated with its cameras.
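The matching logic above boils down to substring search on the comma-separated function field. A minimal standalone sketch (the types and names here are illustrative stand-ins, not the real CameraDesc/ConfigManager):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

enum State { OFF, REVERSE, LEFT, RIGHT, PARKING };

// Assign each configured camera (id, function string) to every state whose
// keyword appears in its function list, mirroring the find() calls above.
std::map<State, std::vector<std::string>> assignCameras(
        const std::vector<std::pair<std::string, std::string>>& configured) {
    std::map<State, std::vector<std::string>> byState;
    for (const auto& entry : configured) {
        const std::string& id = entry.first;
        const std::string& function = entry.second;
        if (function.find("reverse") != std::string::npos) byState[REVERSE].push_back(id);
        if (function.find("right")   != std::string::npos) byState[RIGHT].push_back(id);
        if (function.find("left")    != std::string::npos) byState[LEFT].push_back(id);
        if (function.find("park")    != std::string::npos) byState[PARKING].push_back(id);
    }
    return byState;
}
```

With the four-entry config.json shown earlier, PARKING gets all four cameras while REVERSE, LEFT and RIGHT each get one, which is exactly the 360° surround-view grouping described above.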

Once everything is matched up, startUpdateLoop() runs; it spawns another thread that performs the control logic. The flow:

void EvsStateControl::updateLoop() {
    while (run) { // loop forever

        if (!selectStateForCurrentConditions()) {
            LOG(ERROR) << "selectStateForCurrentConditions failed so we're going to die";
            break;
        } // configure the EVS pipeline for the current state; detailed below

        if (mCurrentRenderer) {
            // Get the output buffer we'll use to display the imagery
            BufferDesc_1_0 tgtBuffer = {};
            displayHandle->getTargetBuffer([&tgtBuffer](const BufferDesc_1_0& buff) {
                                          tgtBuffer = buff;
                                      }
            ); // get the target buffer from the display service to render into

        if (!mCurrentRenderer->drawFrame(convertBufferDesc(tgtBuffer))) {
                    // If drawing failed, we want to exit quickly so an app restart can happen
            run = false;
        } // render the camera frames into the target buffer

        // after rendering, return the buffer to the display for presentation
        displayHandle->returnTargetBufferForDisplay(tgtBuffer);

The flow is clear: configure the EVS pipeline, obtain a display target buffer, render the camera frames into it, and return it for presentation. Let's go through each step:

1. selectStateForCurrentConditions()

bool EvsStateControl::selectStateForCurrentConditions() {

    static int32_t sMockGear   = mConfig.getMockGearSignal(); // fetch the mock gear signal set at startup (PARK in our case)

    if (mVehicle != nullptr) { // with the Vehicle HAL, the real vehicle state can be queried live

        // Query the car state
        if (invokeGet(&mGearValue) != StatusCode::OK) {
            LOG(ERROR) << "GEAR_SELECTION not available from vehicle.  Exiting.";
            return false;
        }
        if ((mTurnSignalValue.prop == 0) || (invokeGet(&mTurnSignalValue) != StatusCode::OK)) {
            // Silently treat missing turn signal state as no turn signal active
            mTurnSignalValue.value.int32Values.setToExternal(&sMockSignal, 1);
            mTurnSignalValue.prop = 0;
        }
    } else { // without the Vehicle HAL, use the car state set at evs_app startup, i.e. PARK

            static const int kShowTime = 20; // show for 20 s by default

        if (std::chrono::duration_cast<std::chrono::seconds>(now - start).count() > kShowTime) {
            // Switch to drive (which should turn off the reverse camera)
            sMockGear = int32_t(VehicleGear::GEAR_DRIVE);
        }
        mGearValue.value.int32Values.setToExternal(&sMockGear, 1);
        mTurnSignalValue.value.int32Values.setToExternal(&sMockSignal, 1);
    }

    State desiredState = OFF;
    if (mGearValue.value.int32Values[0] == int32_t(VehicleGear::GEAR_REVERSE)) {
        desiredState = REVERSE;
    } else if (mTurnSignalValue.value.int32Values[0] == int32_t(VehicleTurnSignal::RIGHT)) {
        desiredState = RIGHT;
    } else if (mTurnSignalValue.value.int32Values[0] == int32_t(VehicleTurnSignal::LEFT)) {
        desiredState = LEFT;
    } else if (mGearValue.value.int32Values[0] == int32_t(VehicleGear::GEAR_PARK)) {
        desiredState = PARKING;
    }

    return configureEvsPipeline(desiredState); // configure the EVS pipeline for the current car state
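The if/else chain gives gear REVERSE priority over the turn signals, and the turn signals priority over gear PARK. A standalone sketch of just that priority (the enum values are illustrative stand-ins for the real VHAL constants):

```cpp
#include <cassert>

enum class Gear { PARK, REVERSE, DRIVE };
enum class Turn { NONE, LEFT, RIGHT };
enum State { OFF, REVERSE, LEFT, RIGHT, PARKING };

// Mirror of the selection chain: gear REVERSE wins over everything,
// then an active turn signal, then gear PARK; anything else means OFF.
State stateFor(Gear gear, Turn signal) {
    if (gear == Gear::REVERSE)  return REVERSE;
    if (signal == Turn::RIGHT)  return RIGHT;
    if (signal == Turn::LEFT)   return LEFT;
    if (gear == Gear::PARK)     return PARKING;
    return OFF;
}
```

So even with a turn signal active, reverse gear always brings up the rear camera, and driving forward with no signal turns the display off.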

Suppose our state is PARK, so four cameras are involved. Here is configureEvsPipeline():

bool EvsStateControl::configureEvsPipeline(State desiredState) {

    if (!isGlReady && !isSfReady()) { // check whether the GPU side has finished initializing; if not, fall back to CPU rendering

        if (mCameraList[desiredState].size() >= 1) {
            mDesiredRenderer = std::make_unique<RenderPixelCopy>(mEvs,
                                                                 mCameraList[desiredState][0]);

    } else { // once ready, there are two cases:

        if (mCameraList[desiredState].size() == 1) {
            mDesiredRenderer = std::make_unique<RenderDirectView>(mEvs,
                                                                  mCameraDescList[desiredState][0],
                                                                  mConfig);

        } else if (mCameraList[desiredState].size() > 1 || desiredState == PARKING) {
            mDesiredRenderer = std::make_unique<RenderTopView>(mEvs,
                                                               mCameraList[desiredState],
                                                               mConfig);

        // once the renderer is constructed, activate it
        if (!mCurrentRenderer->activate()) {
            LOG(ERROR) << "New renderer failed to activate";
            return false;
        }

In the PARK state we take RenderTopView, so RenderTopView::activate() runs:

bool RenderTopView::activate() {

    prepareGL(); // initialize EGL; the steps are fixed, and the logs are detailed if any step fails

    // Load our shader programs; rendering is done with shaders
    mPgmAssets.simpleTexture = buildShaderProgram(vtxShader_simpleTexture,
                                                 pixShader_simpleTexture,
                                                 "simpleTexture");

    mPgmAssets.projectedTexture = buildShaderProgram(vtxShader_projectedTexture,
                                                    pixShader_projectedTexture,
                                                    "projectedTexture");

    // the key part: wire each camera up to the display
    for (auto&& cam: mActiveCameras) {
        cam.tex.reset(createVideoTexture(mEnumerator,
                                         cam.info.cameraId.c_str(),
                                         nullptr,
                                         sDisplay));
    }

Note: the EGL setup here assumes version 3.0 by default; if the hardware platform only supports 2.0, you will need to adapt the code accordingly.

Here is createVideoTexture():

VideoTex* createVideoTexture(sp<IEvsEnumerator> pEnum,
                             const char* evsCameraId,
                             std::unique_ptr<Stream> streamCfg,
                             EGLDisplay glDisplay,
                             bool useExternalMemory,
                             android_pixel_format_t format) {


    if (streamCfg != nullptr) {
        // open the camera first; the call goes through evs_manager and then into evs_sample_driver
        pCamera = pEnum->openCamera_1_1(evsCameraId, *streamCfg);

            // it ends up in EvsEnumerator::openCamera_1_1, which returns pActiveCamera
            pActiveCamera = EvsV4lCamera::Create(cameraId.c_str(),
                                             sConfigManager->getCameraInfo(cameraId),
                                             &streamCfg);

        // the key part: constructing the StreamHandler
        pStreamHandler = new StreamHandler(pCamera,
                                           2,     // number of buffers
                                           useExternalMemory,
                                           format);

        pStreamHandler->startStream(); // start the stream and begin receiving frames

Note: the streamCfg variable above is a feature added in version 1.1; it carries quite a few stream details, such as the resolution.

Now StreamHandler:

StreamHandler::StreamHandler(android::sp <IEvsCamera> pCamera,
                             uint32_t numBuffers,
                             bool useOwnBuffers,
                             android_pixel_format_t format,
                             int32_t width,
                             int32_t height)

        android::GraphicBufferAllocator &alloc(android::GraphicBufferAllocator::get());
        const auto usage = GRALLOC_USAGE_HW_TEXTURE |
                           GRALLOC_USAGE_SW_READ_RARELY |
                           GRALLOC_USAGE_SW_WRITE_OFTEN;

        // allocate the buffers
        for (auto i = 0; i < numBuffers; ++i) {
            unsigned pixelsPerLine;
            android::status_t result = alloc.allocate(width,
                                                      height,
                                                      format,
                                                      1,
                                                      usage,
                                                      &memHandle,
                                                      &pixelsPerLine,
                                                      0,
                                                      "EvsApp");   

                // fill in a BufferDesc_1_1
                BufferDesc_1_1 buf;
                AHardwareBuffer_Desc* pDesc =
                    reinterpret_cast<AHardwareBuffer_Desc *>(&buf.buffer.description);
                pDesc->width = width;
                pDesc->height = height;
                pDesc->layers = 1;
                pDesc->format = format; 
                pDesc->usage = GRALLOC_USAGE_HW_TEXTURE |
                               GRALLOC_USAGE_SW_READ_RARELY |
                               GRALLOC_USAGE_SW_WRITE_OFTEN;
                pDesc->stride = pixelsPerLine;
                buf.buffer.nativeHandle = memHandle;
                buf.bufferId = i;   // Unique number to identify this buffer
                mOwnBuffers[i] = buf;

The other key piece is starting the stream, startStream():

bool StreamHandler::startStream() {

    Return <EvsResult> result = mCamera->startVideoStream(this); // mCamera was obtained earlier when opening the camera; this call ends up in evs_sample_driver

Into EvsV4lCamera.cpp:

Return<EvsResult> EvsV4lCamera::startVideoStream(const sp<IEvsCameraStream_1_0>& stream)  {

    const uint32_t videoSrcFormat = mVideo.getV4LFormat();
    LOG(INFO) << "Configuring to accept " << (char*)&videoSrcFormat
              << " camera data and convert to " << std::hex << mFormat;

    switch (mFormat) { // pick a conversion routine based on the target format

    case HAL_PIXEL_FORMAT_RGBA_8888: // we chose RGBA at startup
        switch (videoSrcFormat) { // the YUV format the camera module produces

        // so YUV has to be converted to RGB
        case V4L2_PIX_FMT_YUYV:     mFillBufferFromVideo = fillRGBAFromYUYV;    break;

    // start the stream; note the parameter: frames come back through a callback, and forwardFrame(tgt, data) is the callback passed in here
    if (!mVideo.startStream([this](VideoCapture*, imageBuffer* tgt, void* data) {
                                this->forwardFrame(tgt, data);
                            })
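For reference, the gist of such a YUYV to RGBA conversion can be sketched as below. This is a hedged, standalone version using integer BT.601 math; the actual fillRGBAFromYUYV in the sample driver differs in details such as stride handling.

```cpp
#include <algorithm>
#include <cstdint>

// YUYV packs two pixels into four bytes as Y0 U Y1 V; the pair shares U/V.
static uint8_t clamp8(int v) { return static_cast<uint8_t>(std::min(255, std::max(0, v))); }

void yuyvToRgba(const uint8_t* src, uint8_t* dst, int pixelPairs) {
    for (int i = 0; i < pixelPairs; ++i) {
        int y0 = src[0], u = src[1], y1 = src[2], v = src[3];
        int d = u - 128, e = v - 128;
        for (int y : {y0, y1}) {
            int c = y - 16;  // BT.601 limited-range luma offset
            *dst++ = clamp8((298 * c + 409 * e + 128) >> 8);            // R
            *dst++ = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);  // G
            *dst++ = clamp8((298 * c + 516 * d + 128) >> 8);            // B
            *dst++ = 255;                                               // A
        }
        src += 4;
    }
}
```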

Into VideoCapture.cpp:

bool VideoCapture::startStream(std::function<void(VideoCapture*, imageBuffer*, void*)> callback) {

    v4l2_requestbuffers bufrequest;
    bufrequest.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    bufrequest.memory = V4L2_MEMORY_MMAP;
    bufrequest.count = 1;
    // request the capture buffers via ioctl
    if (ioctl(mDeviceFd, VIDIOC_REQBUFS, &bufrequest) < 0) {
        PLOG(ERROR) << "VIDIOC_REQBUFS failed";
        return false;
    }

    for (int i = 0; i < mNumBuffers; ++i) {
        mBufferInfos[i].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        mBufferInfos[i].memory = V4L2_MEMORY_MMAP; // mmap'ed buffers
        mBufferInfos[i].index = i;

        
        if (ioctl(mDeviceFd, VIDIOC_QUERYBUF, &mBufferInfos[i]) < 0) {
            PLOG(ERROR) << "VIDIOC_QUERYBUF failed";
            return false;
        }

        mPixelBuffers[i] = mmap(
                NULL,
                mBufferInfos[i].length,
                PROT_READ | PROT_WRITE,
                MAP_SHARED,
                mDeviceFd,
                mBufferInfos[i].m.offset
        );

        // queue the buffer
        if (ioctl(mDeviceFd, VIDIOC_QBUF, &mBufferInfos[i]) < 0) {
            PLOG(ERROR) << "VIDIOC_QBUF failed";
            return false;
        }

    // start streaming
    const int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(mDeviceFd, VIDIOC_STREAMON, &type) < 0) {
        PLOG(ERROR) << "VIDIOC_STREAMON failed";
        return false;
    }

    mCallback = callback; // save the callback into the member mCallback

    // spawn another thread to collect and return the frames
    mCaptureThread = std::thread([this](){ collectFrames(); });

Here is how the callback happens:

void VideoCapture::collectFrames() {

    // dequeue a filled buffer
        if (ioctl(mDeviceFd, VIDIOC_DQBUF, &buf) < 0) {
            PLOG(ERROR) << "VIDIOC_DQBUF failed";
            break;
        }

        mFrames.insert(buf.index);
        mBufferInfos[buf.index] = buf;
        
        // invoke the callback, which runs EvsV4lCamera::forwardFrame()
        mCallback(this, &mBufferInfos[buf.index], mPixelBuffers[buf.index]);

EvsV4lCamera::forwardFrame() then runs:

void EvsV4lCamera::forwardFrame(imageBuffer* pV4lBuff, void* pData) {

        void *targetPixels = nullptr;
        GraphicBufferMapper &mapper = GraphicBufferMapper::get();
        status_t result =
            mapper.lock(bufDesc_1_1.buffer.nativeHandle,
                        GRALLOC_USAGE_SW_WRITE_OFTEN | GRALLOC_USAGE_SW_READ_NEVER,
                        android::Rect(pDesc->width, pDesc->height),
                        (void **)&targetPixels); 

        // convert YUV to RGB; mFillBufferFromVideo was assigned earlier, so fillRGBAFromYUYV() runs
        mFillBufferFromVideo(bufDesc_1_1, (uint8_t *)targetPixels, pData,
                             mColorSpaceConversionBuffer.data(), mStride);

    // return the frame to the application (the StreamHandler) through the standard deliverFrame_1_1 interface
    auto result = mStream_1_1->deliverFrame_1_1(frames);

Note: mStream_1_1 is also checked for null here; it is part of the new 1.1 interface.

Return<void> StreamHandler::deliverFrame_1_1(const hidl_vec<BufferDesc_1_1>& buffers) {

        if (mReadyBuffer >= 0) {
            // Send the previously saved buffer back to the camera unused
            hidl_vec<BufferDesc_1_1> frames;
            frames.resize(1);
            frames[0] = mBuffers[mReadyBuffer];
            // tell the lower layer we are done with this frame
            auto ret = mCamera->doneWithFrame_1_1(frames);

        // store the returned bufDesc into mBuffers, where it waits to be picked up
        mBuffers[mReadyBuffer] = bufDesc;

So when is the frame picked up, and by whom? Back in EvsStateControl::updateLoop(): drawFrame() pulls the latest frame from the StreamHandler and renders it into the target buffer obtained via displayHandle->getTargetBuffer(); once rendered, it is displayed.
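The producer/consumer handoff implemented by deliverFrame_1_1() and the renderer can be pictured as a single-slot "latest frame wins" mailbox. This is a simplified sketch: integers stand in for BufferDesc, and the real StreamHandler also holds a frame while it is being rendered.

```cpp
#include <cassert>
#include <vector>

struct FrameMailbox {
    int ready = -1;                 // index of the newest undelivered frame; -1 = empty
    std::vector<int> returned;      // frames handed back to the camera unused

    // Producer side (deliverFrame_1_1): a newer frame replaces the ready one,
    // and the stale frame goes back to the camera (doneWithFrame_1_1).
    void deliver(int frame) {
        if (ready >= 0) returned.push_back(ready);
        ready = frame;
    }

    // Consumer side (the renderer): take whatever is newest.
    int take() {
        int f = ready;
        ready = -1;
        return f;
    }
};
```

This is why a slow renderer never sees a backlog: stale frames are returned unused, and the display always shows the most recent image the camera produced.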


Reposted from blog.csdn.net/qq_37057338/article/details/127808195