Qualcomm Camera Framework -- A Brief Look at the Data Flow, Part 01

Copyright notice: this is an original article by the author and may not be reproduced without the author's permission. https://blog.csdn.net/Mr_ZJC/article/details/49849271

    Focus of this article: the call relationships between StagefrightRecorder.cpp, OMXCodec.cpp, MPEG4Writer.cpp and CameraSource.cpp

===============================================================================

     When I first looked at this code, some things were still unclear to me; in particular, I did not really understand the relationship between encoding and the file writing. I only knew that the data coming up from the lower layers passes through the callbacks in CameraSource.cpp, that the data gets encoded in OMXCodec.cpp, that MPEG4Writer.cpp has a writer thread and track threads, and that StagefrightRecorder.cpp wires OMXCodec.cpp, MPEG4Writer.cpp and CameraSource.cpp together so they cooperate. What kept bothering me was this: if MPEG4Writer.cpp reads its data directly from CameraSource.cpp, then how does encoding fit into the picture?

     That was down to my own limited knowledge and experience; my code-reading skills also still need work.

     Going through the source this time, I finally cleared up the doubts above.

    The read() function in OMXCodec.cpp reads its data directly from CameraSource.cpp, while the mSource->read() call made by the track thread in MPEG4Writer.cpp reads the data produced by OMXCodec.cpp. In other words, after the lower-layer data comes up through the CameraSource.cpp callbacks, it is first encoded and only then written to the file.
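
    To make that call chain concrete, here is a minimal, self-contained sketch in plain C++. The class names and interfaces are toy stand-ins of my own, not the real framework types; it only illustrates how the three objects end up chained together:

#include <cstdio>

// Toy stand-ins for the framework classes; names and interfaces are
// illustrative only, NOT the real stagefright types.
struct MediaSourceLike {
    virtual ~MediaSourceLike() {}
    virtual const char *read() = 0;            // simplified: one call == one buffer
};

// Stands in for CameraSource: produces raw frames from the camera callbacks.
struct CameraSourceSketch : MediaSourceLike {
    const char *read() override { return "raw camera frame"; }
};

// Stands in for OMXCodec created with createEncoder == true: it wraps the
// upstream source, and its own read() first pulls a raw frame from that
// source, then hands back encoded data.
struct EncoderSketch : MediaSourceLike {
    explicit EncoderSketch(MediaSourceLike *source) : mSource(source) {}
    const char *read() override {
        mSource->read();                       // == CameraSource::read()
        return "encoded frame";                // what the writer actually receives
    }
    MediaSourceLike *mSource;
};

// Stands in for MPEG4Writer: it only knows the source given to addSource(),
// which in the recording path is the encoder, never the camera itself.
struct WriterSketch {
    void addSource(MediaSourceLike *source) { mSource = source; }
    void writeOne() { std::printf("writing: %s\n", mSource->read()); }
    MediaSourceLike *mSource = nullptr;
};

int main() {
    CameraSourceSketch camera;                 // CameraSource
    EncoderSketch encoder(&camera);            // OMXCodec::Create(..., cameraSource, ...)
    WriterSketch writer;                       // new MPEG4Writer(outputFd)
    writer.addSource(&encoder);                // writer->addSource(encoder)
    writer.writeOne();                         // track thread: mSource->read() pulls through the chain
    return 0;
}

    Each stage only holds a reference to the stage directly upstream of it; the writer never talks to the camera, it simply pulls from whatever addSource() handed it.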

   >>>>>>>> Let's start directly from the start() function of StagefrightRecorder.cpp; start() calls startMPEG4Recording().

StagefrightRecorder.cpp

status_t StagefrightRecorder::start() {
      ......
    switch (mOutputFormat) {
        case OUTPUT_FORMAT_DEFAULT:
        case OUTPUT_FORMAT_THREE_GPP:
        case OUTPUT_FORMAT_MPEG_4:
            status = startMPEG4Recording();
           
     ......
}

      >>>>>>>> In startMPEG4Recording(), the important calls are the ones kept in the snippet below.

status_t StagefrightRecorder::startMPEG4Recording() {
   ......

      status_t err = setupMPEG4Recording(
            mOutputFd, mVideoWidth, mVideoHeight,
            mVideoBitRate, &totalBitRate, &mWriter);

    sp<MetaData> meta = new MetaData;


    setupMPEG4MetaData(startTimeUs, totalBitRate, &meta);


    err = mWriter->start(meta.get());
  ......
}

     >>>>>>>> In setupMPEG4Recording() we see sp<MediaWriter> writer = new MPEG4Writer(outputFd); — this creates the writer, so we now know the writer is an MPEG4Writer, which is quite important. This method calls setupMediaSource() to set up the source, and that source is a CameraSource; it then calls setupVideoEncoder() to set up the encoder, and that encoder is an OMXCodec. Also note writer->addSource(encoder); — this hands the encoder, and therefore the encoded data, to the writer, which is how MPEG4Writer.cpp and OMXCodec.cpp get connected.

status_t StagefrightRecorder::setupMPEG4Recording(
      ......
        sp<MediaWriter> *mediaWriter) {
    mediaWriter->clear();

  
    sp<MediaWriter> writer = new MPEG4Writer(outputFd);


    if (mVideoSource < VIDEO_SOURCE_LIST_END) {


        sp<MediaSource> mediaSource;       
        err = setupMediaSource(&mediaSource);
        if (err != OK) {
            return err;
        }


        sp<MediaSource> encoder;
        err = setupVideoEncoder(mediaSource, videoBitRate, &encoder);
        if (err != OK) {
            return err;
        }


        writer->addSource(encoder);
        *totalBitRate += videoBitRate;
    }


    // Audio source is added at the end if it exists.
    // This help make sure that the "recoding" sound is suppressed for
    // camcorder applications in the recorded files.
    if (!mCaptureTimeLapse && (mAudioSource != AUDIO_SOURCE_CNT)) {
        err = setupAudioEncoder(writer);
        if (err != OK) return err;
        *totalBitRate += mAudioBitRate;
    }


    if (mInterleaveDurationUs > 0) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setInterleaveDuration(mInterleaveDurationUs);
    }
    if (mLongitudex10000 > -3600000 && mLatitudex10000 > -3600000) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setGeoData(mLatitudex10000, mLongitudex10000);
    }
    if (mMaxFileDurationUs != 0) {
        writer->setMaxFileDuration(mMaxFileDurationUs);
    }
    if (mMaxFileSizeBytes != 0) {
        writer->setMaxFileSize(mMaxFileSizeBytes);
    }


    mStartTimeOffsetMs = mEncoderProfiles->getStartTimeOffsetMs(mCameraId);
    if (mStartTimeOffsetMs > 0) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setStartTimeOffsetMs(mStartTimeOffsetMs);
    }


    writer->setListener(mListener);
    *mediaWriter = writer;
    return OK;
}

  >>>>>>>> setupMediaSource() is where the CameraSource is created.

status_t StagefrightRecorder::setupMediaSource(
                      sp<MediaSource> *mediaSource) {
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        sp<CameraSource> cameraSource;
        status_t err = setupCameraSource(&cameraSource);

        if (err != OK) {
            return err;
        }
        *mediaSource = cameraSource;
    } else if (mVideoSource == VIDEO_SOURCE_GRALLOC_BUFFER) {
        // If using GRAlloc buffers, setup surfacemediasource.
        // Later a handle to that will be passed
        // to the client side when queried
        status_t err = setupSurfaceMediaSource();
        if (err != OK) {
            return err;
        }
        *mediaSource = mSurfaceMediaSource;
    } else {
        return INVALID_OPERATION;
    }
    return OK;
}

      >>>>>>> setupVideoEncoder() is where the OMXCodec is created. Note the call OMXCodec::Create(..., cameraSource, ...): the source passed in here is the CameraSource, so later on, when OMXCodec.cpp calls mSource->read(), it is calling the read() method of CameraSource.cpp directly.

status_t StagefrightRecorder::setupVideoEncoder(
        ......
    sp<MediaSource> encoder = OMXCodec::Create(
            client.interface(), enc_meta,
            true /* createEncoder */, cameraSource,
            NULL, encoder_flags);


    if (encoder == NULL) {
        ALOGW("Failed to create the encoder");
        // When the encoder fails to be created, we need
        // release the camera source due to the camera's lock
        // and unlock mechanism.
        cameraSource->stop();
        return UNKNOWN_ERROR;
    }


    mVideoSourceNode = cameraSource;
    mVideoEncoderOMX = encoder;


    *source = encoder;


    return OK;
}

-----------------------------

    >>>>> As mentioned above, StagefrightRecorder.cpp calls the addSource() method of MPEG4Writer.cpp [writer->addSource(encoder);], and the argument passed into addSource() is the encoder that produces the encoded data. This is how MPEG4Writer.cpp and OMXCodec.cpp become connected: what MPEG4Writer.cpp reads and writes is the data encoded by OMXCodec.cpp.

MPEG4Writer.cpp 

    >>>>>> In MPEG4Writer.cpp's addSource(), look at Track *track = new Track(this, source, 1 + mTracks.size()); — when new Track(..., source, ...) is called, the source is passed in, and from the analysis above we already know that this source delivers the encoded data.

status_t MPEG4Writer::addSource(const sp<MediaSource> &source) {
    Mutex::Autolock l(mLock);
    if (mStarted) {
        ALOGE("Attempt to add source AFTER recording is started");
        return UNKNOWN_ERROR;
    }


    // At most 2 tracks can be supported.
    if (mTracks.size() >= 2) {
        ALOGE("Too many tracks (%d) to add", mTracks.size());
        return ERROR_UNSUPPORTED;
    }


    CHECK(source.get() != NULL);


    // A track of type other than video or audio is not supported.
    const char *mime;
    sp<MetaData> meta = source->getFormat();
    CHECK(meta->findCString(kKeyMIMEType, &mime));
    bool isAudio = !strncasecmp(mime, "audio/", 6);
    bool isVideo = !strncasecmp(mime, "video/", 6);
    if (!isAudio && !isVideo) {
        ALOGE("Track (%s) other than video or audio is not supported",
            mime);
        return ERROR_UNSUPPORTED;
    }


    // At this point, we know the track to be added is either
    // video or audio. Thus, we only need to check whether it
    // is an audio track or not (if it is not, then it must be
    // a video track).


    // No more than one video or one audio track is supported.
    for (List<Track*>::iterator it = mTracks.begin();
         it != mTracks.end(); ++it) {
        if ((*it)->isAudio() == isAudio) {
            ALOGE("%s track already exists", isAudio? "Audio": "Video");
            return ERROR_UNSUPPORTED;
        }
    }


    // This is the first track of either audio or video.
    // Go ahead to add the track.
    Track *track = new Track(this, source, 1 + mTracks.size());
    mTracks.push_back(track);


    mHFRRatio = ExtendedUtils::HFR::getHFRRatio(meta);


    return OK;
}


MPEG4Writer::Track::Track(
        MPEG4Writer *owner, const sp<MediaSource> &source, size_t trackId)
    : mOwner(owner),
      mMeta(source->getFormat()),
      mSource(source),
      mDone(false),
      mPaused(false),
      mResumed(false),
      mStarted(false),
      mTrackId(trackId),
      mTrackDurationUs(0),
      mEstimatedTrackSizeBytes(0),
      mSamplesHaveSameSize(true),
      mStszTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mStcoTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mCo64TableEntries(new ListTableEntries<off64_t>(1000, 1)),
      mStscTableEntries(new ListTableEntries<uint32_t>(1000, 3)),
      mStssTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mSttsTableEntries(new ListTableEntries<uint32_t>(1000, 2)),
      mCttsTableEntries(new ListTableEntries<uint32_t>(1000, 2)),
      mCodecSpecificData(NULL),
      mCodecSpecificDataSize(0),
      mGotAllCodecSpecificData(false),
      mReachedEOS(false),
      mRotation(0),
      mHFRRatio(1) {
    getCodecSpecificDataFromInputFormatIfPossible();


    const char *mime;
    mMeta->findCString(kKeyMIMEType, &mime);
    mIsAvc = !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_AVC);
    mIsAudio = !strncasecmp(mime, "audio/", 6);
    mIsMPEG4 = !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_MPEG4) ||
               !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC);


    setTimeScale();
}

  >>>>>> threadEntry() is the function the track thread actually runs. It keeps reading data through mSource->read(&buffer), so to know where that data comes from we have to find where mSource is initialized. A quick search shows that it is initialized in

    MPEG4Writer::Track::Track(
            MPEG4Writer *owner, const sp<MediaSource> &source, size_t trackId)
        : mOwner(owner),
          ......
          mSource(source),
          ......

  Looking back at the full constructor shown above, we already know that this source is the encoder, i.e. the encoded data.

status_t MPEG4Writer::Track::threadEntry() {

    while (!mDone && (err = mSource->read(&buffer)) == OK) {

       ......

    }
}
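
    For intuition only, the track thread boils down to a pull loop of the following shape. These are toy types again, not the framework classes; the real threadEntry() additionally handles timestamps, chunk buffering, the sample tables and end-of-stream, none of which is shown here:

#include <cstdint>
#include <vector>

// Toy stand-ins for MediaBuffer and MediaSource, for illustration only.
struct BufferSketch { std::vector<uint8_t> data; };

struct SourceSketch {
    virtual ~SourceSketch() {}
    // Fills *out with one encoded buffer; returns false when no more data.
    virtual bool read(BufferSketch *out) = 0;
};

// The basic shape of the loop in MPEG4Writer::Track::threadEntry(): keep
// pulling from mSource (which is the encoder) and append each buffer to the
// output file until the source runs dry or the track is told to stop.
void trackLoopSketch(SourceSketch *source, const bool &done,
                     void (*writeSampleToFile)(const BufferSketch &)) {
    BufferSketch buffer;
    while (!done && source->read(&buffer)) {
        writeSampleToFile(buffer);  // the real code also updates the sample tables here
    }
}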

--------------------

     >>>>>> From the analysis above, we know that StagefrightRecorder.cpp creates the OMXCodec and that the CameraSource is passed in at creation time. The point here is simply that the source below is the CameraSource, which is how CameraSource.cpp and OMXCodec.cpp become connected.

OMXCodec.cpp

OMXCodec::OMXCodec(
        const sp<IOMX> &omx, IOMX::node_id node,
        uint32_t quirks, uint32_t flags,
        bool isEncoder,
        const char *mime,
        const char *componentName,
        const sp<MediaSource> &source,
        const sp<ANativeWindow> &nativeWindow)
    : mOMX(omx),
      mOMXLivesLocally(omx->livesLocally(node, getpid())),
      mNode(node),
      mQuirks(quirks),
      mFlags(flags),
      mIsEncoder(isEncoder),
      mIsVideo(!strncasecmp("video/", mime, 6)),
      mMIME(strdup(mime)),
      mComponentName(strdup(componentName)),
      mSource(source),
      mCodecSpecificDataIndex(0),
      mState(LOADED),
      mInitialBufferSubmit(true),
      mSignalledEOS(false),
      mNoMoreOutputData(false),
      mOutputPortSettingsHaveChanged(false),
      mSeekTimeUs(-1),
      mSeekMode(ReadOptions::SEEK_CLOSEST_SYNC),
      mTargetTimeUs(-1),
      mOutputPortSettingsChangedPending(false),
      mSkipCutBuffer(NULL),
      mLeftOverBuffer(NULL),
      mPaused(false),
      mNativeWindow(
              (!strncmp(componentName, "OMX.google.", 11))
                        ? NULL : nativeWindow),
      mNumBFrames(0),
      mInSmoothStreamingMode(false),
      mOutputCropChanged(false),
      mSignalledReadTryAgain(false),
      mReturnedRetry(false),
      mLastSeekTimeUs(-1),
      mLastSeekMode(ReadOptions::SEEK_CLOSEST) {
    mPortStatus[kPortIndexInput] = ENABLING;
    mPortStatus[kPortIndexOutput] = ENABLING;


    setComponentRole();
}

    >>>>>> This read() method is the one invoked by mSource->read() in MPEG4Writer.cpp. As for the encoding process itself, I have not gone through it in detail yet.

status_t OMXCodec::read(
        MediaBuffer **buffer, const ReadOptions *options) { 

      .......

}
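
    Although the encoding path is not traced here, conceptually the encoder's read() is another pull through mSource, which in this recording path is the CameraSource handed to OMXCodec::Create(). The sketch below is a hypothetical, heavily simplified illustration of that idea only; the real OMXCodec::read() works asynchronously through the OMX component's input/output buffer queues and is far more involved:

#include <cstdint>
#include <vector>

// Toy types for illustration only; not the real MediaBuffer/MediaSource.
struct FrameSketch { std::vector<uint8_t> data; };

struct UpstreamSketch {                        // stands in for CameraSource
    virtual ~UpstreamSketch() {}
    virtual bool read(FrameSketch *out) = 0;
};

// Conceptual shape of an encoder's read(): pull one raw frame from upstream,
// run it through the codec, and return the encoded result to the caller
// (which, in this article, is MPEG4Writer's track thread).
struct EncoderReadSketch {
    explicit EncoderReadSketch(UpstreamSketch *source) : mSource(source) {}

    bool read(FrameSketch *out) {
        FrameSketch raw;
        if (!mSource->read(&raw)) {            // == CameraSource::read()
            return false;
        }
        *out = encodeFrame(raw);               // placeholder for the OMX encode round trip
        return true;
    }

    // Placeholder only: no real encoding happens in this sketch.
    FrameSketch encodeFrame(const FrameSketch &raw) { return raw; }

    UpstreamSketch *mSource;
};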
