Android NuPlayer Study Notes

As an audio engineer, I need to understand how the NuPlayer framework ties into the audio framework.

Let's start from familiar ground and analyze how NuPlayer uses AudioTrack.

How exactly does NuPlayer use AudioTrack?

First, look at the class relationships:

//MediaPlayerService.h
class MediaPlayerService : public BnMediaPlayerService
{
    ...
    class AudioOutput : public MediaPlayerBase::AudioSink
    {
        ...
        sp<AudioTrack>          mTrack;
        ...
    };
    ...
};

As you can see, AudioTrack is wrapped inside AudioOutput, an inner class of MediaPlayerService.

AudioOutput, in turn, is used like this:

//sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre(
//        player_type playerType)
mAudioOutput = new AudioOutput(mAudioSessionId, IPCThreadState::self()->getCallingUid(),
        mPid, mAudioAttributes);
static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
return p;

//The mAudioOutput created here is handed to NuPlayer's mAudioSink
//through setAudioSink.

This Client corresponds to a process that talks to MediaPlayerService through the MediaPlayer interface. Each client has its own independent mAudioOutput. In other words, MediaPlayerService adds a wrapping layer around AudioTrack, but ultimately it is still the client app process that drives it.
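
For reference, the setAudioSink call lands on NuPlayerDriver, which forwards the sink to NuPlayer. A hedged sketch of that hop from memory (details vary across Android versions):

//NuPlayerDriver.cpp -- sketch from memory
void NuPlayerDriver::setAudioSink(const sp<AudioSink> &audioSink) {
    mPlayer->setAudioSink(audioSink);   //hand the sink to NuPlayer
    mAudioSink = audioSink;
}

//NuPlayer.cpp -- sketch from memory
void NuPlayer::setAudioSink(const sp<MediaPlayerBase::AudioSink> &sink) {
    sp<AMessage> msg = new AMessage(kWhatSetAudioSink, this);
    msg->setObject("sink", sink);
    msg->post();                        //handled later by storing the sink into NuPlayer's mAudioSink
}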

The flow that creates the AudioTrack object looks like this:

//MediaCodec sends a message like the following:
//NuPlayerDecoder: [audio] kWhatCodecNotify: cbID = 4, paused = 0
//4 corresponds to MediaCodec::CB_OUTPUT_FORMAT_CHANGED
NuPlayer::Decoder::onMessageReceived()
    NuPlayer::Decoder::handleOutputFormatChange()
        NuPlayer::Renderer::changeAudioFormat()
            //posts kWhatChangeAudioFormat (to itself)

//on receiving that message:
NuPlayer::Renderer::onMessageReceived(kWhatChangeAudioFormat)
    NuPlayer::Renderer::onChangeAudioFormat()
        ExtendedNuPlayerRenderer::onOpenAudioSink()
            NuPlayer::Renderer::onOpenAudioSink()
                MediaPlayerService::AudioOutput::open()

This is how the AudioTrack gets created.
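
A heavily simplified, hedged sketch of what happens at the end of that chain, in MediaPlayerService::AudioOutput::open() (the real signature takes the sample rate, channel mask, format, callback, flags, offload info and more; the AudioTrack constructor arguments shown here are only indicative):

//MediaPlayerService.cpp -- simplified sketch, not the real signature
status_t MediaPlayerService::AudioOutput::open(
        uint32_t sampleRate, int channelCount, audio_channel_mask_t channelMask,
        audio_format_t format, ... /* callback, flags, offload info, ... */) {
    ...
    //create the real AudioTrack with the parameters negotiated by the renderer
    sp<AudioTrack> t = new AudioTrack(mStreamType, sampleRate, format, channelMask,
                                      frameCount, flags /* , ... */);
    ...
    mTrack = t;   //the member we saw in the class definition above
    return NO_ERROR;
}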

Next, let's see how the AudioTrack object (mAudioSink) is actually used.

Code flow of the audio data path

NuPlayer::Decoder receives messages like these:

NuPlayerDecoder: [audio] kWhatCodecNotify: cbID = 1, paused = 0
NuPlayerDecoder: [OMX.google.aac.decoder] onMessage: AMessage(what = 'cdcN', target = 8) = {
NuPlayerDecoder:   int32_t callbackID = 2
NuPlayerDecoder:   int32_t index = 0
NuPlayerDecoder:   size_t offset = 0
NuPlayerDecoder:   size_t size = 4096
NuPlayerDecoder:   int64_t timeUs = 820464400
NuPlayerDecoder:   int32_t flags = 0
NuPlayerDecoder: }

The callbackID here is 2, which corresponds to MediaCodec::CB_OUTPUT_AVAILABLE.
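
For reference, these callback IDs come from the MediaCodec callback enum in frameworks/av; the values below are quoted from memory, so verify them against your own source tree:

//MediaCodec.h -- callback IDs (from memory)
enum {
    CB_INPUT_AVAILABLE          = 1,
    CB_OUTPUT_AVAILABLE         = 2,
    CB_ERROR                    = 3,
    CB_OUTPUT_FORMAT_CHANGED    = 4,
    CB_RESOURCE_RECLAIMED       = 5,
};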

//which triggers a call to this method:
NuPlayer::Decoder::handleAnOutputBuffer(index, offset, size, timeUs, flags);

/*** rough flow of the method ***/
sp<MediaCodecBuffer> buffer;
//fetch the buffer corresponding to the index.
//mCodec here is a MediaCodec instance, which shows that
//NuPlayer::Decoder and MediaCodec come in pairs
mCodec->getOutputBuffer(index, &buffer);

if (index >= mOutputBuffers.size()) {
    for (size_t i = mOutputBuffers.size(); i <= index; ++i) {
        //grow the vector
        mOutputBuffers.add();
    }
}
//store the buffer
mOutputBuffers.editItemAt(index) = buffer;

//set the data range: offset and size
buffer->setRange(offset, size);
//set the metadata
buffer->meta()->clear();
buffer->meta()->setInt64("timeUs", timeUs);
...
//create the reply message; it is handled by NuPlayer::Decoder::onMessageReceived
//and eventually routed to onRenderBuffer.
//(It is not posted right away here: it is handed to the renderer below as
//notifyConsumed and posted once the buffer has been consumed.)
sp<AMessage> reply = new AMessage(kWhatRenderBuffer, this);
reply->setSize("buffer-ix", index);

//queueBuffer or queueEOS
mRenderer->queueBuffer(mIsAudio, buffer, reply);
if (eos && !isDiscontinuityPending()) {
    mRenderer->queueEOS(mIsAudio, ERROR_END_OF_STREAM);
}

Let's first look at how the kWhatRenderBuffer message is handled:

void NuPlayer::Decoder::onRenderBuffer(const sp<AMessage> &msg) {
    status_t err;
    int32_t render;
    size_t bufferIx;
    int32_t eos;
    //buffer index
    CHECK(msg->findSize("buffer-ix", &bufferIx));
    //video path
    if (!mIsAudio) {
        int64_t timeUs;
        sp<MediaCodecBuffer> buffer = mOutputBuffers[bufferIx];
        buffer->meta()->findInt64("timeUs", &timeUs);
        //if a closed-caption track is selected, display the captions
        if (mCCDecoder != NULL && mCCDecoder->isSelected()) {
            mCCDecoder->display(timeUs);
        }
    }
    ...
    if (mCodec == NULL) {
        err = NO_INIT;
    } else if (msg->findInt32("render", &render) && render) {
        /* example message:
        [OMX.qcom.video.decoder.avc] onMessage: AMessage(what = 'rndr', target = 31) = {
            size_t buffer-ix = 9
            int32_t generation = 1
            int64_t timestampNs = 85634970014000
            int32_t render = 1
        }
        */
        //nanosecond precision;
        //this ends up calling BufferChannel's renderOutputBuffer method
        int64_t timestampNs;
        CHECK(msg->findInt64("timestampNs", &timestampNs));
        err = mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs);
    } else {
        //directly calls BufferChannel's discardBuffer method
        mNumOutputFramesDropped += !mIsAudio;
        err = mCodec->releaseOutputBuffer(bufferIx);
    }
    ...
    //on EOS, notify NuPlayer to shut down the decoder or rescan the sources
    if (msg->findInt32("eos", &eos) && eos
            && isDiscontinuityPending()) {
        finishHandleDiscontinuity(true /* flushOnTimeChange */);
    }
}

Next, let's continue with

queueBuffer

void NuPlayer::Renderer::queueBuffer(
        bool audio,
        const sp<MediaCodecBuffer> &buffer,
        const sp<AMessage> &notifyConsumed) {
    int64_t mediaTimeUs = -1;
    buffer->meta()->findInt64("timeUs", &mediaTimeUs);

    //Note that the second argument is the this pointer, meaning the message
    //is handled by this very NuPlayer::Renderer object.
    //NuPlayerRenderer.h shows
    //struct NuPlayer::Renderer : public AHandler
    //and the class implements the handler method
    //void NuPlayer::Renderer::onMessageReceived(const sp<AMessage> &msg)
    sp<AMessage> msg = new AMessage(kWhatQueueBuffer, this);
    msg->setInt32("queueGeneration", getQueueGeneration(audio));
    msg->setInt32("audio", static_cast<int32_t>(audio));
    msg->setObject("buffer", buffer);
    msg->setMessage("notifyConsumed", notifyConsumed);
    msg->post();
}
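
The "queueGeneration" stamped into the message comes from a small getter. A hedged sketch from memory of the generation getters (the counters are bumped on flush so that stale queue/drain messages can be recognized and dropped):

//NuPlayerRenderer.cpp -- sketch from memory
int32_t NuPlayer::Renderer::getQueueGeneration(bool audio) {
    Mutex::Autolock autoLock(mLock);
    return (audio ? mAudioQueueGeneration : mVideoQueueGeneration);
}

int32_t NuPlayer::Renderer::getDrainGeneration(bool audio) {
    Mutex::Autolock autoLock(mLock);
    return (audio ? mAudioDrainGeneration : mVideoDrainGeneration);
}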

After the kWhatQueueBuffer message is posted, control transfers to this method:

void NuPlayer::Renderer::onQueueBuffer(const sp<AMessage> &msg) {
    int32_t audio;
    CHECK(msg->findInt32("audio", &audio));
    ...
    if (audio) {
        mHasAudio = true;
    } else {
        mHasVideo = true;
    }
    //if the stream contains video, make sure we have a frame scheduler
    if (mHasVideo) {
        if (mVideoScheduler == NULL) {
            mVideoScheduler = new VideoFrameScheduler();
            mVideoScheduler->init();
        }
    }
    ...
    QueueEntry entry;
    entry.mBuffer = buffer;            //the actual audio/video data
    //corresponds to the reply passed into queueBuffer, used for notification
    entry.mNotifyConsumed = notifyConsumed;
    entry.mOffset = 0;
    entry.mFinalResult = OK;           //OK in the normal case
    entry.mBufferOrdinal = ++mTotalBuffersQueued;
    //append the entry to the mAudioQueue/mVideoQueue mentioned earlier,
    //then call postDrainAudioQueue_l / postDrainVideoQueue
    if (audio) {
        Mutex::Autolock autoLock(mLock);
        mAudioQueue.push_back(entry);
        postDrainAudioQueue_l();
    } else {
        mVideoQueue.push_back(entry);
        postDrainVideoQueue();
    }
}

postDrainAudioQueue_l posts the kWhatDrainAudioQueue message to the renderer itself; a rough sketch of the posting side is shown below, followed by the handler for that message.
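
As a reminder of what the posting side looks like, here is a simplified sketch of postDrainAudioQueue_l from memory (offload-specific branches omitted); it refuses to schedule while a drain is already pending, while the queues are still being synced, or when the audio-callback mode is used:

//NuPlayerRenderer.cpp -- simplified sketch
void NuPlayer::Renderer::postDrainAudioQueue_l(int64_t delayUs /* = 0 */) {
    if (mDrainAudioQueuePending || mSyncQueues || mUseAudioCallback) {
        return;                 //a drain is already scheduled or not wanted
    }
    if (mAudioQueue.empty()) {
        return;                 //nothing to drain
    }
    mDrainAudioQueuePending = true;
    sp<AMessage> msg = new AMessage(kWhatDrainAudioQueue, this);
    msg->setInt32("drainGeneration", mAudioDrainGeneration);
    msg->post(delayUs);
}

When the (possibly delayed) message arrives, the kWhatDrainAudioQueue case below runs: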

        case kWhatDrainAudioQueue:
        {
            mDrainAudioQueuePending = false;

            //if the drainGeneration carried by this message is not the
            //current audio generation, bail out right away
            int32_t generation;
            CHECK(msg->findInt32("drainGeneration", &generation));
            if (generation != getDrainGeneration(true /* audio */)) {
                break;
            }
            //now we enter a fairly involved flow;
            //its main job is to drain the audio queue.
            //***worth emphasizing***
            //the return value of this function means
            //"do we need to reschedule?"
            if (onDrainAudioQueue()) {
                //a reschedule is needed, so run the logic below
                uint32_t numFramesPlayed;
                //call AudioSink's getPosition method.
                //AudioSink is an abstraction of an audio output;
                //here the audio output is the AudioOutput class defined in
                //libmediaplayerservice/MediaPlayerService.h,
                //so this actually calls
                //MediaPlayerService::AudioOutput::getPosition,
                //which in the end calls AudioTrack::getPosition
                if (mAudioSink->getPosition(&numFramesPlayed) != OK) {
                    ...
                }
                //frames written minus frames already played
                //equals the frames still pending in the sink
                uint32_t numFramesPendingPlayout =
                    mNumFramesWritten - numFramesPlayed;

                // This is how long the audio sink will have data to
                // play back.
                int64_t delayUs =
                    mAudioSink->msecsPerFrame()
                        * numFramesPendingPlayout * 1000ll;
                if (mPlaybackRate > 1.0f) {
                    delayUs /= mPlaybackRate;
                }

                // Let's give it more data after about half that time
                // has elapsed.
                delayUs /= 2;
                //the largest drain delay we would expect: the sink buffer
                //duration, but at least half a second (std::max)
                const int64_t maxDrainDelayUs = std::max(
                        mAudioSink->getBufferDurationInUs(), (int64_t)500000 /* half second */);
                Mutex::Autolock autoLock(mLock);
                //effectively schedule another drain of the audio queue:
                //from the difference between what has been written to the
                //AudioSink and what it has already played out, compute how
                //long the pending data can sustain playback, and delay the
                //next drain message by roughly half of that time to avoid
                //an underrun
                postDrainAudioQueue_l(delayUs);
            }
            break;
        }
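
A rough worked example with assumed numbers: at 44.1 kHz, msecsPerFrame is about 0.0227 ms. If mNumFramesWritten - numFramesPlayed is 8192 frames, the pending playout time is roughly 8192 * 0.0227 ms ≈ 186 ms, so the next kWhatDrainAudioQueue message is posted about 93 ms later (half of that), well before the sink can run dry.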

Now let's take a good look at the onDrainAudioQueue method.

bool NuPlayer::Renderer::onDrainAudioQueue() {
    // do not drain audio during teardown as queued buffers may be invalid.
    if (mAudioTornDown) {
        return false;
    }
    uint32_t numFramesPlayed;
    if (mAudioSink->getPosition(&numFramesPlayed) != OK) {
        //getPosition failed:
        //the renderer will not reschedule (returns false)
        //unless new samples get queued.
        //If we have pending EOS (or an "eos" marker for discontinuities),
        //we need to post them now, because NuPlayerDecoder might be
        //waiting for them.
        drainAudioQueueUntilLastEOS();
        return false;
    }

    //remember how many frames have been written so far
    uint32_t prevFramesWritten = mNumFramesWritten;
    while (!mAudioQueue.empty()) {
        //take the entry at the head of the queue
        QueueEntry *entry = &*mAudioQueue.begin();
        if (entry->mBuffer == NULL) {
            if (entry->mNotifyConsumed != nullptr) {
                // TAG for re-opening the audio sink.
                //an entry with a NULL buffer but a non-NULL mNotifyConsumed is
                //the marker queued for an audio format change, so the sink
                //has to be reopened here
                onChangeAudioFormat(entry->mMeta, entry->mNotifyConsumed);
                mAudioQueue.erase(mAudioQueue.begin());
                continue;
            }
            // EOS
            if (mPaused) {
                return false;
            }
            int64_t postEOSDelayUs = 0;
            //trailing padding is needed when AudioOutput's mNextOutput is NULL.
            //The case where it is not NULL is gapless playback: two MediaPlayer
            //instances for two audio files, with the next player configured so
            //playback can switch over seamlessly.
            if (mAudioSink->needsTrailingPadding()) {
                //how long to delay posting EOS;
                //the core of the computation is
                //writtenAudioDurationUs - audioSinkPlayedUs:
                //the former is how long the mNumFramesWritten frames take to play,
                //the latter is how much has already been played out,
                //and the difference is how long we still need to wait
                postEOSDelayUs = getPendingAudioPlayoutDurationUs(ALooper::GetNowUs());
            }
            //after the delay computed above, notifyEOS;
            //even in error cases this lets EOS complete normally
            notifyEOS(true /* audio */, entry->mFinalResult, postEOSDelayUs);
            //record the media time reached so far
            mLastAudioMediaTimeUs = getDurationUsIfPlayedAtSampleRate(mNumFramesWritten);
            //this entry is done, remove it
            mAudioQueue.erase(mAudioQueue.begin());
            entry = NULL;
            //check whether trailing padding is needed
            if (mAudioSink->needsTrailingPadding()) {
                mAudioSink->stop();
                mNumFramesWritten = 0;
            }
            //no reschedule needed for this entry
            return false;
        }
        //the rest of the analysis continues after the notifyEOS section below
    }
}
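
Since getPendingAudioPlayoutDurationUs carries most of the timing logic above, here is a hedged sketch of it from memory (helper names as I recall them; details vary by Android version):

//NuPlayerRenderer.cpp -- sketch from memory
int64_t NuPlayer::Renderer::getPendingAudioPlayoutDurationUs(int64_t nowUs) {
    //how long all frames written so far would take to play at the sink's sample rate
    int64_t writtenAudioDurationUs =
            getDurationUsIfPlayedAtSampleRate(mNumFramesWritten);
    //subtract what the sink reports as already played out
    return writtenAudioDurationUs - getPlayedOutAudioDurationUs(nowUs);
}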

notifyEOS analysis

The notifyEOS method marks an important point in time, so let's look at it carefully.

EOS stands for "end of stream".

void NuPlayer::Renderer::notifyEOS(bool audio, status_t finalResult, int64_t delayUs) {
    Mutex::Autolock autoLock(mLock);
    notifyEOS_l(audio, finalResult, delayUs);
}

void NuPlayer::Renderer::notifyEOS_l(bool audio, status_t finalResult, int64_t delayUs) {
    if (audio && delayUs > 0) {
        //post to ourselves; once the delayed message is handled it calls back
        //into notifyEOS, and the second time around this branch is skipped
        sp<AMessage> msg = new AMessage(kWhatEOS, this);
        msg->setInt32("audioEOSGeneration", mAudioEOSGeneration);
        msg->setInt32("finalResult", finalResult);
        msg->post(delayUs);
        return;
    }
    sp<AMessage> notify = mNotify->dup();
    notify->setInt32("what", kWhatEOS);
    notify->setInt32("audio", static_cast<int32_t>(audio));
    notify->setInt32("finalResult", finalResult);
    //this one is posted to mNotify.
    //mNotify is passed into the Renderer constructor in NuPlayer::onStart(),
    //and the notify target turns out to be the NuPlayer object itself
    //(which also inherits from AHandler),
    //so after this post the message is handled by NuPlayer
    notify->post(delayUs);

    if (audio) {
        mAnchorTimeMediaUs = -1;
    }
}

As the comments above explain, this kWhatEOS message is eventually dispatched to NuPlayer.

//NuPlayer.cpp
//the notify passed in when the Renderer was constructed uses kWhatRendererNotify:
sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
//so here:
void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    ...
        //all messages coming from the Renderer are received here
        case kWhatRendererNotify:
    {
        ...
        if (what == Renderer::kWhatEOS) {
            int32_t audio;
            CHECK(msg->findInt32("audio", &audio));

            int32_t finalResult;
            CHECK(msg->findInt32("finalResult", &finalResult));

            if (audio) {
                mAudioEOS = true;
            } else {
                mVideoEOS = true;
            }
            //finalResult is normally OK; when the callback path is used,
            //notifyEOSCallback always passes ERROR_END_OF_STREAM.
            //Despite the "error" in the name, it does not indicate a problem.
            if (finalResult == ERROR_END_OF_STREAM) {
                ALOGV("reached %s EOS", audio ? "audio" : "video");
            } else {
                ALOGE("%s track encountered an error (%d)",
                         audio ? "audio" : "video", finalResult);
                notifyListener(
                        MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, finalResult);
            }
            //in short: when both audio and video have reached EOS
            //(or the corresponding decoder does not exist)
            if ((mAudioEOS || mAudioDecoder == NULL)
                        && (mVideoEOS || mVideoDecoder == NULL)) {
                    notifyListener(MEDIA_PLAYBACK_COMPLETE, 0, 0);
            }
        }
        ...
    }
    ...
}

Next, let's analyze notifyListener.

It is called like this:

notifyListener(MEDIA_PLAYBACK_COMPLETE, 0, 0);

Implementation:

void NuPlayer::notifyListener(int msg, int ext1, int ext2, const Parcel *in) {
    ...
    sp<NuPlayerDriver> driver = mDriver.promote();
    ...
    driver->notifyListener(msg, ext1, ext2, in);
}

//the real implementation
//(the last parameter has a default value and can be omitted at the call site)
void NuPlayerDriver::notifyListener_l(
        int msg, int ext1, int ext2, const Parcel *in) {
    ...
        case MEDIA_PLAYBACK_COMPLETE:
        {
            if (mState != STATE_RESET_IN_PROGRESS) {
                if (mAutoLoop) {
                    //defaults to STREAM_MUSIC
                    audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
                    if (mAudioSink != NULL) {
                        streamType = mAudioSink->getAudioStreamType();
                    }
                    if (streamType == AUDIO_STREAM_NOTIFICATION) {
                        //AUDIO_STREAM_NOTIFICATION must not loop
                        mAutoLoop = false;
                    }
                }
                //if we are looping
                if (mLooping || mAutoLoop) {
                    //seek back to the beginning
                    mPlayer->seekToAsync(0);
                    if (mAudioSink != NULL) {
                        //The renderer has stopped the sink at the end in order
                        //to play out the last little bit of audio; if we are
                        //looping, we need to restart it.
                        mAudioSink->start();
                    }
                    //do not send the completion event while looping
                    return;
                }
                //stop playback
                mPlayer->pause();
                mState = STATE_PAUSED;
            }
            // fall through
        }
    ...
    mLock.unlock();
    //send the event out
    sendEvent(msg, ext1, ext2, in);
    mLock.lock();
}

If we are not looping, it calls

mPlayer->pause();

mPlayer was created as follows and is a NuPlayer object:

//NuPlayerDriver constructor
mPlayer(AVNuFactory::get()->createNuPlayer(pid)),

So the pause method here corresponds to:

void NuPlayer::pause() {
    (new AMessage(kWhatPause, this))->post();
}

//NuPlayer::onMessageReceived receives the message and calls onPause();
void NuPlayer::onPause() {
    //mSource comes from setDataSource
    mSource->pause();
    //mRenderer is of course a NuPlayer::Renderer object
    mRenderer->pause();
    sp<NuPlayerDriver> driver = mDriver.promote();
    if (driver != NULL) {
        int64_t now = systemTime();
        int64_t played = now - mLastStartedPlayingTimeNs;

        driver->notifyMorePlayingTimeUs((played+500)/1000);
    }
}

After pause returns, we come back and call

//NuPlayerDriver::notifyListener_l
sendEvent(msg, ext1, ext2, in);

The implementation of this method:

//class hierarchy:
//class MediaPlayerInterface : public MediaPlayerBase
//struct NuPlayerDriver : public MediaPlayerInterface
//so NuPlayerDriver's grandparent class is MediaPlayerBase,
//and the sendEvent called from NuPlayerDriver is implemented as follows
//(the code lives in MediaPlayerInterface.h)
class MediaPlayerBase {
    ...
    void sendEvent(int msg, int ext1=0, int ext2=0, const Parcel *obj=NULL) {
        sp<Listener> listener;
        {
            Mutex::Autolock autoLock(mNotifyLock);
            listener = mListener;
        }
        if (listener != NULL) {
            listener->notify(msg, ext1, ext2, obj);
        }
    }
    ...
    //where mListener gets assigned
    void setNotifyCallback(
            const sp<Listener> &listener) {
        Mutex::Autolock autoLock(mNotifyLock);
        mListener = listener;
    }
};

Where setNotifyCallback is called:

//MediaPlayerFactory.cpp
sp<MediaPlayerBase> MediaPlayerFactory::createPlayer(
        player_type playerType,
        const sp<MediaPlayerBase::Listener> &listener,
        pid_t pid) {
    sp<MediaPlayerBase> p;
    ...
    p->setNotifyCallback(listener);
    ...
}

After some digging, mListener turns out to come from here:

//MediaPlayerService::Client::Client
mListener = new Listener(this);

//sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre
sp<MediaPlayerBase> p = createPlayer(playerType);

//sp<MediaPlayerBase> MediaPlayerService::Client::createPlayer
p = MediaPlayerFactory::createPlayer(playerType, mListener, mPid);

So the listener here ultimately leads back to the client using MediaPlayer:

class Client : public BnMediaPlayer
//declaration of mListener
sp<MediaPlayerBase::Listener> mListener;
mListener = new Listener(this)

The Listener class on the right-hand side (MediaPlayerService::Client::Listener)
inherits from MediaPlayerBase::Listener:

//MediaPlayerService.h
class Listener : public MediaPlayerBase::Listener {
    ...
    virtual void notify(int msg, int ext1, int ext2, const Parcel *obj) {
        sp<Client> client = mClient.promote();
        if (client != NULL) {
            client->notify(msg, ext1, ext2, obj);
        }
    }
    ...
};

//MediaPlayerService.cpp
void MediaPlayerService::Client::notify(
        int msg, int ext1, int ext2, const Parcel *obj)
{
    sp<IMediaPlayerClient> c;
    ...
    c = mClient;
    ...
    c->notify(msg, ext1, ext2, obj);
}

Here notify is implemented in BpMediaPlayerClient (which inherits from IMediaPlayerClient), i.e. it goes across Binder.
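
For completeness, a hedged sketch of the proxy side from memory (the usual Binder proxy pattern; the exact transact flags and Parcel handling may differ):

//IMediaPlayerClient.cpp -- sketch from memory
void BpMediaPlayerClient::notify(int msg, int ext1, int ext2, const Parcel *obj) {
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayerClient::getInterfaceDescriptor());
    data.writeInt32(msg);
    data.writeInt32(ext1);
    data.writeInt32(ext2);
    //obj (if any) is appended to data here
    remote()->transact(NOTIFY, data, &reply, IBinder::FLAG_ONEWAY);
}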

//on the remote side, in status_t BnMediaPlayerClient::onTransact:
case NOTIFY:
    notify(msg, ext1, ext2, &obj);
//and since
//class MediaPlayer : public BnMediaPlayerClient,
//the final implementation of this notify is
void MediaPlayer::notify(int msg, int ext1, int ext2, const Parcel *obj)
{
    ...
        case MEDIA_PLAYBACK_COMPLETE:
            mCurrentState = MEDIA_PLAYER_PLAYBACK_COMPLETE;
    ...
        //and here is yet another listener...
        listener->notify(msg, ext1, ext2, obj);
}

Tracing back to where MediaPlayer::setListener is called:

//frameworks/base/media/jni/android_media_MediaPlayer.cpp
static void
android_media_MediaPlayer_native_setup(...)
{
    ...
    // create new listener and give it to MediaPlayer
    sp<JNIMediaPlayerListener> listener = new JNIMediaPlayerListener(env, thiz, weak_this);
    mp->setListener(listener);
    ...
}

Back up in the Java layer:

//MediaPlayer.java
public MediaPlayer() {
    ...
    native_setup(new WeakReference<MediaPlayer>(this));
    ...
}

Now let's look at JNIMediaPlayerListener.

//android_media_MediaPlayer.cpp
class JNIMediaPlayerListener: public MediaPlayerListener
{
    ...
    JNIMediaPlayerListener::JNIMediaPlayerListener(JNIEnv* env, jobject thiz, jobject weak_thiz)
    {
        //the class here is android/media/MediaPlayer
        jclass clazz = env->GetObjectClass(thiz);
        //now we are linked to the Java MediaPlayer
        mClass = (jclass)env->NewGlobalRef(clazz);
        //note that what is passed down is a weak reference object
        mObject  = env->NewGlobalRef(weak_thiz);
    }
    ...
    void JNIMediaPlayerListener::notify(int msg, int ext1, int ext2, const Parcel *obj)
    {
        //the name says it all: fields.post_event
        env->CallStaticVoidMethod(mClass, fields.post_event, mObject, msg, ext1, ext2, NULL);
    }
    ...
};

//android_media_MediaPlayer_native_init
fields.post_event = env->GetStaticMethodID(clazz, "postEventFromNative",
        "(Ljava/lang/Object;IIILjava/lang/Object;)V");

After all this winding around, we finally reach the Java MediaPlayer method postEventFromNative. For space reasons I will not trace any further here: the callback has now travelled from NuPlayer::Renderer into the app process, and from there other mechanisms eventually deliver it to the listeners the application registered.

At this point, the analysis of notifyEOS is complete.

Let's jump back and continue with onDrainAudioQueue.

bool NuPlayer::Renderer::onDrainAudioQueue() {
    //(continued from above)
    ...
    //entry->mBufferOrdinal is this buffer's ordinal among all buffers queued
    //so far (audio and video); remember it as the last drained audio buffer
    mLastAudioBufferDrained = entry->mBufferOrdinal;
    //the first time an entry is processed, update mediaTimeUs.
    //This mediaTimeUs comes from MediaCodec and is the presentation time of
    //this entry relative to the start of the media; for example, if the video
    //has played to 52:08, mediaTimeUs here is roughly 3128 seconds
    if (entry->mOffset == 0 && entry->mBuffer->size() > 0) {
        int64_t mediaTimeUs;
        CHECK(entry->mBuffer->meta()->findInt64("timeUs", &mediaTimeUs));
        //this function is analyzed further below
        onNewAudioMediaTime(mediaTimeUs);
    }
    //compute the amount of audio data left in this entry
    size_t copy = entry->mBuffer->size() - entry->mOffset;
    //write the data out through the AudioTrack interface
    ssize_t written = mAudioSink->write(entry->mBuffer->data() + entry->mOffset,
            copy, false /* blocking */);
    //the write failed
    if (written < 0) {
        ...
        break;
    }
    //advance the entry offset (one entry may not be written out in one go)
    entry->mOffset += written;
    //how much is left unwritten
    size_t remainder = entry->mBuffer->size() - entry->mOffset;
    if ((ssize_t)remainder < mAudioSink->frameSize()) {
        if (remainder > 0) {
            //a corrupted audio buffer with a partial frame: discard it
            entry->mOffset += remainder;
            copy -= remainder;
        }
        //as mentioned earlier, mNotifyConsumed is the notify passed as the
        //third argument of queueBuffer; that reply was created in
        //handleAnOutputBuffer with what = kWhatRenderBuffer and handler = this,
        //so posting it eventually ends up in onRenderBuffer
        //(we will not dig into onRenderBuffer again here)
        entry->mNotifyConsumed->post();
        //remove the finished entry
        mAudioQueue.erase(mAudioQueue.begin());
        entry = NULL;
    }
    //number of frames written in this pass
    size_t copiedFrames = written / mAudioSink->frameSize();
    //accumulate the total number of frames written
    mNumFramesWritten += copiedFrames;
    {
        Mutex::Autolock autoLock(mLock);
        int64_t maxTimeMedia;
        //anchor media time plus the media duration corresponding to the frames
        //written since the anchor (mNumFramesWritten - mAnchorNumFramesWritten)
        maxTimeMedia =
            mAnchorTimeMediaUs +
                    (int64_t)(max((long long)mNumFramesWritten - mAnchorNumFramesWritten, 0LL)
                            * 1000LL * mAudioSink->msecsPerFrame());
        //update the maximum media time
        mMediaClock->updateMaxTimeMedia(maxTimeMedia);
        //notify the app through the notify chain analyzed above
        //(MEDIA_STARTED)
        notifyIfMediaRenderingStarted_l();
    }
    ...
        if (written != (ssize_t)copy) {
            //if this happens, copy must still be a multiple of frameSize
            CHECK_EQ(copy % mAudioSink->frameSize(), 0u);
            //otherwise fewer bytes than copy were written, which will trigger
            //the reschedule below; presumably the AudioSink buffer is full
            ALOGV("AudioSink write short frame count %zd < %zu", written, copy);
            break;
        }
    //decide whether a reschedule is needed:
    //prevFramesWritten != mNumFramesWritten is almost always true here,
    //so basically a reschedule happens whenever mAudioQueue still has data
    bool reschedule = !mAudioQueue.empty()
            && (!mPaused
                || prevFramesWritten != mNumFramesWritten);
    return reschedule;
}

Analysis of onNewAudioMediaTime

void NuPlayer::Renderer::onNewAudioMediaTime(int64_t mediaTimeUs) {
    ...
    //if mAudioFirstAnchorTimeMediaUs still has its initial value of -1,
    //set it to mediaTimeUs and also call
    //mMediaClock->setStartingTimeMedia(mediaUs);
    //in short, this sets mAudioFirstAnchorTimeMediaUs
    //and the MediaClock's starting time
    setAudioFirstAnchorTimeIfNeeded_l(mediaTimeUs);
    //keep calling getTimestamp until a valid timestamp can be obtained
    if (mNextAudioClockUpdateTimeUs == -1) {
        AudioTimestamp ts;
        if (mAudioSink->getTimestamp(ts) == OK && ts.mPosition > 0) {
            mNextAudioClockUpdateTimeUs = 0; //start updating the clock
        }
    }
    int64_t nowUs = ALooper::GetNowUs();
    //a timestamp could be obtained from AudioTrack above
    if (mNextAudioClockUpdateTimeUs >= 0) {
        //make sure two updateAnchor calls are at least 20 ms apart
        if (nowUs >= mNextAudioClockUpdateTimeUs) {
            //the current media time equals mediaTimeUs minus the duration of
            //the data still waiting to be played out
            //******** the point actually being played right now ********
            //the gap between the data just delivered by the codec and what is
            //actually being played can reach several hundred ms
            //(it does on my current project)
            int64_t nowMediaUs = mediaTimeUs - getPendingAudioPlayoutDurationUs(nowUs);
            //update the anchor, analyzed in detail below
            mMediaClock->updateAnchor(nowMediaUs, nowUs, mediaTimeUs);
            mUseVirtualAudioSink = false;
            //current time plus 20 ms
            mNextAudioClockUpdateTimeUs = nowUs + kMinimumAudioClockUpdatePeriodUs;
        }
    } else {
        //no AudioTrack timestamp available yet
        int64_t unused;
        if ((mMediaClock->getMediaTime(nowUs, &unused) != OK)
                && (getDurationUsIfPlayedAtSampleRate(mNumFramesWritten)
                    > kMaxAllowedAudioSinkDelayUs)) {
            //mAnchorTimeRealUs == -1
            //and getDurationUsIfPlayedAtSampleRate exceeds 1.5 s:
            //enough data has been sent to the AudioSink, but the AudioSink has
            //not rendered anything yet, so something is wrong with it
            //(for example, the device and the audio output are not connected).
            //In that case:
            //switch to the system clock
            //and use a virtual audio sink
            mMediaClock->updateAnchor(mAudioFirstAnchorTimeMediaUs, nowUs, mediaTimeUs);
            mUseVirtualAudioSink = true;
        }
    }
    //frames written so far (testing shows the count starts at the last
    //playback start; pause does not reset it, seek does)
    mAnchorNumFramesWritten = mNumFramesWritten;
    //the media timestamp (relative to the start of the file) last reached
    mAnchorTimeMediaUs = mediaTimeUs;
}

updateAnchor

As mentioned above, during onNewAudioMediaTime, updateAnchor is called at most once every 20 ms.

In the normal case it is called like this (the if branch of onNewAudioMediaTime above):

mMediaClock->updateAnchor(nowMediaUs, nowUs, mediaTimeUs);

The arguments are:

1. the media time (relative to the start of the media) that is actually being played right now,

2. the system time at the moment of the call,

3. the media time (relative to the start of the media) of the entry currently being drained, as delivered by the codec.

void MediaClock::updateAnchor(
        int64_t anchorTimeMediaUs,
        int64_t anchorTimeRealUs,
        int64_t maxTimeMediaUs) {
    ...
    int64_t nowUs = ALooper::GetNowUs();
    //computation:
    //the media time that was playing when the caller sampled it, plus the
    //amount played between that sample and now (scaled by the playback rate);
    //in other words, still the media time playing right now
    int64_t nowMediaUs =
        anchorTimeMediaUs + (nowUs - anchorTimeRealUs) * (double)mPlaybackRate;
    ...
    //the maximum media time is simply the timestamp of the buffer the codec
    //just delivered; presumably this guards against timestamps running past
    //the real total duration, and for streaming it is just the timestamp of
    //the buffer currently being rendered
    mMaxTimeMediaUs = maxTimeMediaUs;
    ...
    if (mAnchorTimeRealUs != -1) {
        //the (old) media time playing right now, computed the same way
        int64_t oldNowMediaUs =
                mAnchorTimeMediaUs + (nowUs - mAnchorTimeRealUs) * (double)mPlaybackRate;
        if (nowMediaUs < oldNowMediaUs
                && nowMediaUs > oldNowMediaUs - kAnchorFluctuationAllowedUs) {
            //if the new value satisfies
            //oldNowMediaUs - 10 ms < nowMediaUs < oldNowMediaUs,
            //i.e. it is smaller than before but by less than 10 ms,
            //skip the update
            return;
        }
    }
    //this records the system time of the most recent anchor update
    mAnchorTimeRealUs = nowUs;
    //this records the media time that was playing at that moment
    mAnchorTimeMediaUs = nowMediaUs;
}

Note that a variable named mAnchorTimeMediaUs is defined in both NuPlayer::Renderer and MediaClock; be careful not to confuse the two.

Video rendering

After putting it off for a long time, it is time to face the unfamiliar part: let's push through and analyze the video side.

Earlier we saw that NuPlayer::Decoder::handleAnOutputBuffer handles the messages delivered by MediaCodec.

Then this method takes over:

void NuPlayer::Renderer::onQueueBuffer(const sp<AMessage> &msg) {
    ...
        mHasVideo = true;
    ...
    if (mHasVideo) {
        if (mVideoScheduler == NULL) {
            mVideoScheduler = new VideoFrameScheduler();
            mVideoScheduler->init();
        }
    }
    ...
    //create and initialize the entry
    QueueEntry entry;
    ...
    mVideoQueue.push_back(entry);
    postDrainVideoQueue();
    ...
    //take the first audio and video timestamps
    int64_t firstAudioTimeUs;
    int64_t firstVideoTimeUs;
    CHECK(firstAudioBuffer->meta()
            ->findInt64("timeUs", &firstAudioTimeUs));
    CHECK(firstVideoBuffer->meta()
            ->findInt64("timeUs", &firstVideoTimeUs));
    int64_t diff = firstVideoTimeUs - firstAudioTimeUs;
    if (diff > 100000ll) {
        //the audio data starts more than 0.1 s earlier than the video data
        (*mAudioQueue.begin()).mNotifyConsumed->post();
        //drop the first buffer in the audio queue
        mAudioQueue.erase(mAudioQueue.begin());
        return;
    }
    syncQueuesDone_l();
}

Let's first get the simple syncQueuesDone_l function out of the way.

Once postDrainAudioQueue_l or postDrainVideoQueue has been done and nothing abnormal happened, this function is called:

void NuPlayer::Renderer::syncQueuesDone_l() {
    if (!mSyncQueues) {
        return;
    }
    mSyncQueues = false;

    //if the queues still contain data, post the drain calls again to empty them
    if (!mAudioQueue.empty()) {
        postDrainAudioQueue_l();
    }
    if (!mVideoQueue.empty()) {
        mLock.unlock();
        postDrainVideoQueue();
        mLock.lock();
    }
}

OK, back to the video rendering flow we skipped earlier.

From a log of an actual run:

07-14 10:47:23.059 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.060 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.061 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.063 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.094 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.095 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.096 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.104 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.110 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.140 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.140 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.140 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.142 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.146 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.150 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.159 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.165 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.165 11632 11767 V NuPlayerRenderer: queueBuffer audio = 0
07-14 10:47:23.168 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.168 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1
07-14 10:47:23.169 11632 11769 V NuPlayerRenderer: queueBuffer audio = 1

we can see that onQueueBuffer alternates irregularly between postDrainAudioQueue_l and postDrainVideoQueue.

We have already analyzed postDrainAudioQueue_l, so now let's study postDrainVideoQueue.

void NuPlayer::Renderer::postDrainVideoQueue() {
    if (mDrainVideoQueuePending
            || getSyncQueues()
            || (mPaused && mVideoSampleReceived)) {
        return;
    }
    if (mVideoQueue.empty()) {
        return;
    }

    QueueEntry &entry = *mVideoQueue.begin();
    sp<AMessage> msg = new AMessage(kWhatDrainVideoQueue, this);
    //here is the drainGeneration we have not discussed yet;
    //it is incremented on pause and on flush
    msg->setInt32("drainGeneration", getDrainGeneration(false /* audio */));
    ...
    bool needRepostDrainVideoQueue = false;
    int64_t delayUs;
    int64_t nowUs = ALooper::GetNowUs();
    int64_t realTimeUs;
    if (mFlags & FLAG_REAL_TIME) {
        //not sure yet what this branch is for; using real time directly?
        int64_t mediaTimeUs;
        CHECK(entry.mBuffer->meta()->findInt64("timeUs", &mediaTimeUs));
        realTimeUs = mediaTimeUs;
    } else {
        int64_t mediaTimeUs;
        CHECK(entry.mBuffer->meta()->findInt64("timeUs", &mediaTimeUs));
        {
            //this block mainly computes realTimeUs, the display time:
            //the point at which the video frame should be shown
            Mutex::Autolock autoLock(mLock);
            if (mAnchorTimeMediaUs < 0) {
                mMediaClock->updateAnchor(mediaTimeUs, nowUs, mediaTimeUs);
                mAnchorTimeMediaUs = mediaTimeUs;
                realTimeUs = nowUs;
            } else if (!mVideoSampleReceived) {
                // Always render the first video frame.
                //this flag is reset to false only in onFlush and set to true
                //every time in onDrainVideoQueue, so after a flush the first
                //frame is rendered immediately
                realTimeUs = nowUs;
            } else if (mAudioFirstAnchorTimeMediaUs < 0
                    || mMediaClock->getRealTimeFor(mediaTimeUs, &realTimeUs) == OK) {
                //(getRealTimeFor already filled in realTimeUs in the condition
                //above, and getRealTimeUs is called again here)
                //the value obtained is the real wall-clock time at which the
                //buffer of this entry should be played
                realTimeUs = getRealTimeUs(mediaTimeUs, nowUs);
            } else if (mediaTimeUs - mAudioFirstAnchorTimeMediaUs >= 0) {
                needRepostDrainVideoQueue = true;
                realTimeUs = nowUs;
            } else {
                realTimeUs = nowUs;
            }
        }
        ...
    }
}

Let's expand a little on this getRealTimeFor call.

The branch is taken if mAudioFirstAnchorTimeMediaUs < 0 or mMediaClock->getRealTimeFor(mediaTimeUs, &realTimeUs) == OK.

The variable mAudioFirstAnchorTimeMediaUs is set in onNewAudioMediaTime; once it has been set, getRealTimeFor is evaluated every time.

status_t MediaClock::getRealTimeFor(
        int64_t targetMediaUs, int64_t *outRealUs) const {
    ...
    int64_t nowUs = ALooper::GetNowUs();
    int64_t nowMediaUs;
    //get the media time currently being played
    status_t status =
            getMediaTime_l(nowUs, &nowMediaUs, true /* allowPastMaxTime */);
    ...
    //the result equals the difference between the target media time (here the
    //timestamp of the buffer produced by the codec) and the media time
    //currently playing, scaled by the playback rate, plus the current time:
    //in other words, the system time at which this buffer should be played;
    //for video that is the point at which this frame should be displayed
    *outRealUs = (targetMediaUs - nowMediaUs) / (double)mPlaybackRate + nowUs;
    ...
}

Continuing with postDrainVideoQueue:

above, realTimeUs was derived from the current mediaTimeUs.

void NuPlayer::Renderer::postDrainVideoQueue() {
    //continued from above:
    //various computations produce realTimeUs and delayUs;
    //realTimeUs is fed to mVideoScheduler for scheduling,
    //delayUs is used to post the message
    ...
        if (!mHasAudio) {
            // smooth out videos >= 10fps
            mMediaClock->updateMaxTimeMedia(mediaTimeUs + 100000);
        }

    //difference between the time of the newest data and the current time
    delayUs = realTimeUs - nowUs;
    int64_t postDelayUs = -1;
    if (delayUs > 500000) {
        //delay by 500 ms: the data that just arrived will not be rendered
        //until more than 500 ms from now, so it arrived too early
        postDelayUs = 500000;
        //unless the last drained audio buffer was queued no later than this entry
        if (mHasAudio && (mLastAudioBufferDrained - entry.mBufferOrdinal) <= 0) {
            //in that case only delay by 10 ms
            postDelayUs = 10000;
        }
    } else if (needRepostDrainVideoQueue) {
        postDelayUs = mediaTimeUs - mAudioFirstAnchorTimeMediaUs;
        postDelayUs /= mPlaybackRate;
    }

    if (postDelayUs >= 0) {
        //the issue I have run into: once we get here, audio and video go out of sync
        msg->setWhat(kWhatPostDrainVideoQueue);
        msg->post(postDelayUs);
        mVideoScheduler->restart();
        mDrainVideoQueuePending = true;
        return;
    }
    //use the vsync signal to adjust realTimeUs
    realTimeUs = mVideoScheduler->schedule(realTimeUs * 1000) / 1000;
    //twoVsyncsUs is 33344 on my platform, i.e. a vsync period of 16672 us
    int64_t twoVsyncsUs = 2 * (mVideoScheduler->getVsyncPeriod() / 1000);
    delayUs = realTimeUs - nowUs;
    msg->post(delayUs > twoVsyncsUs ? delayUs - twoVsyncsUs : 0);
    mDrainVideoQueuePending = true;
}
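
A quick worked example with assumed numbers: if the scheduler decides a frame should be displayed 50 ms from now and the vsync period is 16.672 ms (so twoVsyncsUs ≈ 33.3 ms), the message is posted after 50 - 33.3 ≈ 16.7 ms, meaning onDrainVideoQueue runs roughly two vsyncs before the intended display time, leaving the frame enough lead time to be queued to the display.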

A new player appears here: mVideoScheduler, of type VideoFrameScheduler.

I looked at its code and could barely follow it.

Still, the timestamp passed into schedule is essentially realTimeUs, which is largely derived from audio.

VideoFrameScheduler is a bit too hard to chew through, so I will skip it for now.

The msg above is kWhatDrainVideoQueue by default, or kWhatPostDrainVideoQueue when postDelayUs >= 0.

kWhatDrainVideoQueue:
onDrainVideoQueue();
postDrainVideoQueue();

//the message below is posted with a delay of postDelayUs
kWhatPostDrainVideoQueue:
postDrainVideoQueue();

Both handlers call postDrainVideoQueue again, but kWhatDrainVideoQueue additionally runs

onDrainVideoQueue

void NuPlayer::Renderer::onDrainVideoQueue() {
    ...
    QueueEntry *entry = &*mVideoQueue.begin();
    if (entry->mBuffer == NULL) {
        notifyEOS(false /* audio */, entry->mFinalResult);
        mVideoQueue.erase(mVideoQueue.begin());
        entry = NULL;
        setVideoLateByUs(0);
        return;
    }

    int64_t nowUs = ALooper::GetNowUs();
    int64_t realTimeUs;
    int64_t mediaTimeUs = -1;
    if (mFlags & FLAG_REAL_TIME) {
        CHECK(entry->mBuffer->meta()->findInt64("timeUs", &realTimeUs));
        mediaTimeUs = realTimeUs; // for trace purpose only
    } else {
        CHECK(entry->mBuffer->meta()->findInt64("timeUs", &mediaTimeUs));
        //this method was analyzed above; it calls MediaClock::getRealTimeFor
        realTimeUs = getRealTimeUs(mediaTimeUs, nowUs);
    }

    bool tooLate = false;
    if (!mPaused) {
        //record how late the video is (the difference between the current time
        //and the time the frame should have been displayed)
        setVideoLateByUs(nowUs - realTimeUs);
        //more than 40 ms late counts as too late,
        //i.e. a dropped frame
        tooLate = (mVideoLateByUs > 40000);
        if (tooLate) {
            //too late: just log it
        } else {
            int64_t mediaUs = 0;
            //this method is analyzed below.
            //realTimeUs was derived from mediaTimeUs above, and here mediaUs
            //is derived back from realTimeUs; when the flag is not
            //FLAG_REAL_TIME the two are essentially identical, so I am not
            //sure what this is for
            mMediaClock->getMediaTime(realTimeUs, &mediaUs);
        }
    } else {
        //paused
        setVideoLateByUs(0);
        if (!mVideoSampleReceived && !mHasAudio) {
            clearAnchorTime();
        }
    }
    //always render the first video frame while keeping the a/v sync statistics,
    //e.g. right after onFlush
    if (!mVideoSampleReceived) {
        realTimeUs = nowUs;
        tooLate = false;
    }
    //trigger renderBuffer
    entry->mNotifyConsumed->setInt64("timestampNs", realTimeUs * 1000ll);
    entry->mNotifyConsumed->setInt32("render", !tooLate);
    entry->mNotifyConsumed->post();
    //and remove the entry
    mVideoQueue.erase(mVideoQueue.begin());
    entry = NULL;
    //a video sample has now been received;
    //only onFlush sets this back to false
    mVideoSampleReceived = true;

    if (!mPaused) {
        if (!mVideoRenderingStarted) {
            mVideoRenderingStarted = true;
            //notifyListener sends MEDIA_INFO_RENDERING_START
            notifyVideoRenderingStart();
        }
        Mutex::Autolock autoLock(mLock);
        //notifyListener sends MEDIA_STARTED
        notifyIfMediaRenderingStarted_l();
    }
}

MediaClock::getMediaTime_l

status_t MediaClock::getMediaTime_l(
        int64_t realUs, int64_t *outMediaUs, bool allowPastMaxTime) const {
    ...
    int64_t mediaUs = mAnchorTimeMediaUs
            + (realUs - mAnchorTimeRealUs) * (double)mPlaybackRate;
    ...
    *outMediaUs = mediaUs;
    return OK;
}

The out-parameter equals mAnchorTimeMediaUs (the last recorded media time relative to the start of the media, i.e. the playback position recorded when updateAnchor was last called),

plus realUs - mAnchorTimeRealUs scaled by the playback rate. mAnchorTimeRealUs belongs to MediaClock and is the nowUs captured at the previous updateAnchor or setPlaybackRate call, so the difference is how far the frame's real time is past the anchor's real time; adding that to the anchor media time gives the media time at which the current frame is rendered. Admittedly, this is quite convoluted.

In short, the method returns a relative time: the position relative to the start of the media, i.e. the progress-bar time.
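
A quick worked example with made-up numbers: suppose the last updateAnchor recorded mAnchorTimeMediaUs = 10,000,000 us (10 s into the media) at mAnchorTimeRealUs = T0, with a playback rate of 1.0. If getMediaTime_l is later called with realUs = T0 + 200,000 us, it returns 10,000,000 + 200,000 * 1.0 = 10,200,000 us, i.e. the progress bar should read 10.2 s.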

That is roughly as far as my investigation has gotten. This may read like a running log of code analysis; if you spot any mistakes, please point them out.

Once I have a deeper understanding, I will come back and share more.

Reposted from blog.csdn.net/bberdong/article/details/107481690