Understanding the Audio System (3), Part [2]: AudioFlinger --- AudioThread


This article continues from the previous one, "Understanding the Audio System (3), Part [1]: AudioFlinger Startup Flow and Audio PatchPanel Initialization".

In the previous article we followed the AudioFlinger startup flow and focused on the initialization of the Audio PatchPanel. Next we look at the other major objects inside AudioFlinger, such as AudioTrack and the audio threads.

III. AudioFlinger

6. The AudioFlinger directory

Contents of the AudioFlinger directory:

@ \frameworks\av\services\audioflinger
    Android.mk
    AudioFlinger.cpp / AudioFlinger.h		// Main AudioFlinger code: implements class AudioFlinger and class SyncEvent
    AudioHwDevice.cpp / AudioHwDevice.h		// Implements class AudioHwDevice
    AudioStreamOut.cpp / AudioStreamOut.h	// Implements class AudioStreamOut
    AudioWatchdog.cpp / AudioWatchdog.h		// Monitors the CPU and records related statistics
    AutoPark.h
    BufLog.cpp / BufLog.h					// Audio buffer logging; logs live under /data/misc/audioserver
    Configuration.h							// Standalone configuration of the feature macros for all of AudioFlinger
    Effects.cpp / Effects.h					// Implements classes EffectModule, EffectHandle and EffectChain
    FastCapture.cpp / FastCapture.h			// Implements class FastCapture
    FastCaptureDumpState.cpp / FastCaptureDumpState.h	// Dumps FastCapture state
    FastCaptureState.cpp / FastCaptureState.h	
    FastMixer.cpp / FastMixer.h				// Implements class FastMixer
    FastMixerDumpState.cpp / FastMixerDumpState.h
    FastMixerState.cpp / FastMixerState.h
    FastThread.cpp / FastThread.h			// class FastThread is the base class of FastMixer and FastCapture
    FastThreadDumpState.cpp / FastThreadDumpState.h	
    FastThreadState.cpp / FastThreadState.h // FastThread state management
    MmapTracks.h
    PatchPanel.cpp / PatchPanel.h			// Implements class PatchPanel
    PlaybackTracks.h						// Defines class Track and its subclasses OutputTrack and PatchTrack
    RecordTracks.h							// Defines class RecordTrack and its subclass PatchRecord
    ServiceUtilities.cpp / ServiceUtilities.h	// Permission checks for the services
    SpdifStreamOut.cpp / SpdifStreamOut.h	// Implements class SpdifStreamOut: wraps PCM data going to the HAL
    StateQueue.cpp / StateQueue.h			
    StateQueueInstantiations.cpp
    Threads.cpp / Threads.h					
    	// class ThreadBase
    	// and its subclasses PlaybackThread, MixerThread, DirectOutputThread, OffloadThread,
    	// RecordThread, MmapThread, MmapPlaybackThread, MmapCaptureThread, plus AsyncCallbackThread
    TrackBase.h								// Defines class TrackBase and class PatchProxyBufferProvider
    Tracks.cpp								// Implements class TrackBase's methods and the other Track-related classes in AudioFlinger
    TypedLogger.cpp
    TypedLogger.h

The listing above shows that, besides the PatchPanel class covered earlier,

AudioFlinger contains the following major modules:

  1. Audio Device: audio device management
  2. Audio EffectModule: audio effects
  3. Audio Thread: audio thread management
  4. Audio Track: audio stream management
  5. Audio PatchPanel: audio routing management

In the sections that follow, we will study these modules one by one in detail.


7. Audio Thread management

The audio threads form one of the most important modules in AudioFlinger.
Let's start with the header file.

7.1 Threads.h code analysis

The include guard below shows that Threads.h may only be included from AudioFlinger.h, i.e. it exists purely to serve AudioFlinger.

Its main jobs are:

  1. Define class ThreadBase, which inherits from Thread and therefore has all of Thread's capabilities
    (Thread is declared in #include <utils/threads.h>)

  2. Define the six audio thread types, each used for a different audio scenario:
    MIXER: thread that mixes and outputs audio
    DIRECT: thread that outputs audio directly (no mixing)
    DUPLICATING: thread that duplicates output to several other outputs
    RECORD: thread used for recording
    OFFLOAD: thread for offloaded (hardware-decoded) playback; requires hardware decoding support
    MMAP: control thread for MMAP streams

  3. Define the configuration event types used by both record and playback threads:
    CFG_EVENT_IO, // I/O configuration change
    CFG_EVENT_PRIO, // priority request
    CFG_EVENT_SET_PARAMETER, // set parameters
    CFG_EVENT_CREATE_AUDIO_PATCH, // create an audio patch
    CFG_EVENT_RELEASE_AUDIO_PATCH, // release an audio patch

The corresponding code:

@ \src\frameworks\av\services\audioflinger\Threads.h

#ifndef INCLUDING_FROM_AUDIOFLINGER_H
    #error This header file should only be included from AudioFlinger.h
#endif

// ThreadBase inherits from Thread
class ThreadBase : public Thread {
public:

#include "TrackBase.h"
	// The six audio thread types
    enum type_t {
        MIXER,              // Thread class is MixerThread  		mixes and outputs audio
        DIRECT,             // Thread class is DirectOutputThread 	outputs audio directly (no mixing)
        DUPLICATING,        // Thread class is DuplicatingThread 	duplicates output to other outputs
        RECORD,             // Thread class is RecordThread	 		used for recording
        OFFLOAD,            // Thread class is OffloadThread 		offloaded (hardware-decoded) playback
        MMAP                // control thread for MMAP stream		MMAP streams
        // If you add any values here, also update ThreadBase::threadTypeToString()
    };
	// Returns a string describing the thread type
    static const char *threadTypeToString(type_t type);
	// Constructor
    ThreadBase(const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
                audio_devices_t outDevice, audio_devices_t inDevice, type_t type, bool systemReady);
    virtual             ~ThreadBase();  // Destructor
	// Checks whether the thread is ready to run
    virtual status_t    readyToRun();
	// Dumps the thread's state and its audio configuration
    void dumpBase(int fd, const Vector<String16>& args);
    // Dumps the thread's effect chains
    void dumpEffectChains(int fd, const Vector<String16>& args);

    void clearPowerManager();

	// Config event types, shared by record and playback
    // base for record and playback
    enum {
        CFG_EVENT_IO,					// I/O configuration change
        CFG_EVENT_PRIO,					// priority request
        CFG_EVENT_SET_PARAMETER,		// set parameters
        CFG_EVENT_CREATE_AUDIO_PATCH,	// create an audio patch
        CFG_EVENT_RELEASE_AUDIO_PATCH,	// release an audio patch
    };
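
Among the members declared above, threadTypeToString() is only a logging helper that maps a type_t value to a readable name. Below is a minimal standalone sketch of what it boils down to (the real function lives in Threads.cpp; the enum is repeated here only so the sketch is self-contained):

// Standalone sketch, not the AOSP implementation: map a thread type to a name.
enum type_t { MIXER, DIRECT, DUPLICATING, RECORD, OFFLOAD, MMAP };

static const char *threadTypeToString(type_t type) {
    switch (type) {
    case MIXER:       return "MIXER";
    case DIRECT:      return "DIRECT";
    case DUPLICATING: return "DUPLICATING";
    case RECORD:      return "RECORD";
    case OFFLOAD:     return "OFFLOAD";
    case MMAP:        return "MMAP";
    default:          return "unknown";
    }
}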

7.1.1 Inner classes of class ThreadBase
  • ConfigEventData : holds the data carried by a config event and knows how to dump it
  • ConfigEvent : base class used to manage config events

For each of the five event types above, a matching ConfigEventData / ConfigEvent pair is defined:

  • IoConfigEventData : inherits ConfigEventData, data for I/O events

  • IoConfigEvent : inherits ConfigEvent, manages I/O events

  • PrioConfigEventData : inherits ConfigEventData, priority request

  • PrioConfigEvent : inherits ConfigEvent

  • SetParameterConfigEventData : inherits ConfigEventData, parameter setting

  • SetParameterConfigEvent : inherits ConfigEvent

  • CreateAudioPatchConfigEventData : inherits ConfigEventData, creating an Audio Patch

  • CreateAudioPatchConfigEvent : inherits ConfigEvent

  • ReleaseAudioPatchConfigEventData : inherits ConfigEventData, releasing an Audio Patch

  • ReleaseAudioPatchConfigEvent : inherits ConfigEvent


@ \src\frameworks\av\services\audioflinger\Threads.h

class ThreadBase : public Thread {
public:
	// Holds and dumps the data carried by a config event
	class ConfigEventData: public RefBase {
    public:
        virtual ~ConfigEventData() {}
        virtual  void dump(char *buffer, size_t size) = 0;
    protected:
        ConfigEventData() {}
    };
    
	// Base class for managing config events
	class ConfigEvent: public RefBase {
    public:
        virtual ~ConfigEvent() {}

        void dump(char *buffer, size_t size) { mData->dump(buffer, size); }

        const int mType; // event type e.g. CFG_EVENT_IO
        Mutex mLock;     // mutex associated with mCond
        Condition mCond; // condition for status return
        status_t mStatus; // status communicated to sender
        bool mWaitStatus; // true if sender is waiting for status
        bool mRequiresSystemReady; // true if must wait for system ready to enter event queue
        sp<ConfigEventData> mData;     // event specific parameter data

    protected:
        explicit ConfigEvent(int type, bool requiresSystemReady = false) :
            mType(type), mStatus(NO_ERROR), mWaitStatus(false),
            mRequiresSystemReady(requiresSystemReady), mData(NULL) {}
    };
    
	// Data for I/O config events
	class IoConfigEventData : public ConfigEventData {
    public:
        IoConfigEventData(audio_io_config_event event, pid_t pid) :
            mEvent(event), mPid(pid) {}

        virtual  void dump(char *buffer, size_t size) {
            snprintf(buffer, size, "IO event: event %d\n", mEvent);
        }
        const audio_io_config_event mEvent;
        const pid_t                 mPid;
    };
    
	class IoConfigEvent : public ConfigEvent {
    public:
        IoConfigEvent(audio_io_config_event event, pid_t pid) :
            ConfigEvent(CFG_EVENT_IO) {
            mData = new IoConfigEventData(event, pid);
        }
        virtual ~IoConfigEvent() {}
    };

class PrioConfigEventData : public ConfigEventData {
    public:
        PrioConfigEventData(pid_t pid, pid_t tid, int32_t prio, bool forApp) :
            mPid(pid), mTid(tid), mPrio(prio), mForApp(forApp) {}

        virtual  void dump(char *buffer, size_t size) {
            snprintf(buffer, size, "Prio event: pid %d, tid %d, prio %d, for app? %d\n",
                    mPid, mTid, mPrio, mForApp);
        }

        const pid_t mPid;
        const pid_t mTid;
        const int32_t mPrio;
        const bool mForApp;
    };

    class PrioConfigEvent : public ConfigEvent {
    public:
        PrioConfigEvent(pid_t pid, pid_t tid, int32_t prio, bool forApp) :
            ConfigEvent(CFG_EVENT_PRIO, true) {
            mData = new PrioConfigEventData(pid, tid, prio, forApp);
        }
        virtual ~PrioConfigEvent() {}
    };

    class SetParameterConfigEventData : public ConfigEventData {
    public:
        explicit SetParameterConfigEventData(String8 keyValuePairs) :
            mKeyValuePairs(keyValuePairs) {}

        virtual  void dump(char *buffer, size_t size) {
            snprintf(buffer, size, "KeyValue: %s\n", mKeyValuePairs.string());
        }

        const String8 mKeyValuePairs;
    };

    class SetParameterConfigEvent : public ConfigEvent {
    public:
        explicit SetParameterConfigEvent(String8 keyValuePairs) :
            ConfigEvent(CFG_EVENT_SET_PARAMETER) {
            mData = new SetParameterConfigEventData(keyValuePairs);
            mWaitStatus = true;
        }
        virtual ~SetParameterConfigEvent() {}
    };

    class CreateAudioPatchConfigEventData : public ConfigEventData {
    public:
        CreateAudioPatchConfigEventData(const struct audio_patch patch,
                                        audio_patch_handle_t handle) :
            mPatch(patch), mHandle(handle) {}

        virtual  void dump(char *buffer, size_t size) {
            snprintf(buffer, size, "Patch handle: %u\n", mHandle);
        }

        const struct audio_patch mPatch;
        audio_patch_handle_t mHandle;
    };

    class CreateAudioPatchConfigEvent : public ConfigEvent {
    public:
        CreateAudioPatchConfigEvent(const struct audio_patch patch,
                                    audio_patch_handle_t handle) :
            ConfigEvent(CFG_EVENT_CREATE_AUDIO_PATCH) {
            mData = new CreateAudioPatchConfigEventData(patch, handle);
            mWaitStatus = true;
        }
        virtual ~CreateAudioPatchConfigEvent() {}
    };

    class ReleaseAudioPatchConfigEventData : public ConfigEventData {
    public:
        explicit ReleaseAudioPatchConfigEventData(const audio_patch_handle_t handle) :
            mHandle(handle) {}

        virtual  void dump(char *buffer, size_t size) {
            snprintf(buffer, size, "Patch handle: %u\n", mHandle);
        }

        audio_patch_handle_t mHandle;
    };

    class ReleaseAudioPatchConfigEvent : public ConfigEvent {
    public:
        explicit ReleaseAudioPatchConfigEvent(const audio_patch_handle_t handle) :
            ConfigEvent(CFG_EVENT_RELEASE_AUDIO_PATCH) {
            mData = new ReleaseAudioPatchConfigEventData(handle);
            mWaitStatus = true;
        }
        virtual ~ReleaseAudioPatchConfigEvent() {}
    };
};
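
To make the roles of mLock, mCond, mStatus and mWaitStatus concrete, here is a minimal standalone model of the handshake between sendConfigEvent_l() and processConfigEvents_l(). It uses std::mutex / std::condition_variable instead of Android's Mutex / Condition and is only a sketch, not the real AudioFlinger code: the sender queues an event and, when waitStatus is set, blocks until the thread loop has processed the event and signalled the status back.

#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>

// Standalone model of the ThreadBase config-event handshake (illustrative only).
struct Event {
    int type = 0;
    bool waitStatus = false;          // sender blocks until the event is processed
    int status = 0;                   // result communicated back to the sender
    bool done = false;
    std::mutex lock;                  // plays the role of ConfigEvent::mLock
    std::condition_variable cond;     // plays the role of ConfigEvent::mCond
};

struct EventQueue {
    std::mutex threadLock;                        // plays the role of ThreadBase::mLock
    std::deque<std::shared_ptr<Event>> events;    // plays the role of mConfigEvents

    // Spirit of sendConfigEvent_l(): queue the event, optionally wait for the result.
    int send(const std::shared_ptr<Event>& ev) {
        {
            std::lock_guard<std::mutex> l(threadLock);
            events.push_back(ev);
            // (the real code also signals the thread loop to wake up here)
        }
        if (ev->waitStatus) {
            std::unique_lock<std::mutex> l(ev->lock);
            ev->cond.wait(l, [&] { return ev->done; });
        }
        return ev->status;
    }

    // Spirit of processConfigEvents_l(): run from the thread loop.
    void process() {
        std::lock_guard<std::mutex> l(threadLock);
        while (!events.empty()) {
            std::shared_ptr<Event> ev = events.front();
            events.pop_front();
            // ... handle the event according to ev->type ...
            std::lock_guard<std::mutex> el(ev->lock);
            ev->status = 0;           // NO_ERROR
            ev->done = true;
            ev->cond.notify_one();    // wake up the waiting sender
        }
    }
};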

7.1.2 Public members of class ThreadBase

The public members include:

  1. Parameter getters
  2. Config event handling functions
  3. Audio patch management methods
  4. Effect-related interfaces
@ \src\frameworks\av\services\audioflinger\Threads.h

class ThreadBase : public Thread {
public:
 	virtual     status_t    initCheck() const = 0;  // Checks whether the thread is initialized (e.g. whether mOutput/mInput is NULL)
	// static externally-visible
	type_t      type() const { return mType; }
	 bool isDuplicating() const { return (mType == DUPLICATING); }

	audio_io_handle_t id() const { return mId;}
	
	// Parameter getters
	// dynamic externally-visible
	uint32_t    sampleRate() const { return mSampleRate; }	// returns the sample rate
	audio_channel_mask_t channelMask() const { return mChannelMask; }	// returns the channel mask
	audio_format_t format() const { return mHALFormat; }	// returns the format
	uint32_t channelCount() const { return mChannelCount; }
	// Called by AudioFlinger::frameCount(audio_io_handle_t output) and effects,
	// and returns the [normal mix] buffer's frame count.
	virtual     size_t      frameCount() const = 0;

	// Return's the HAL's frame count i.e. fast mixer buffer size.
	size_t      frameCountHAL() const { return mFrameCount; }

	size_t      frameSize() const { return mFrameSize; }

    // Should be "virtual status_t requestExitAndWait()" and override same
    // method in Thread, but Thread::requestExitAndWait() is not yet virtual.
                void        exit();
    virtual     bool        checkForNewParameter_l(const String8& keyValuePair,
                                                    status_t& status) = 0;
    virtual     status_t    setParameters(const String8& keyValuePairs);
    virtual     String8     getParameters(const String8& keys) = 0;
    virtual     void        ioConfigChanged(audio_io_config_event event, pid_t pid = 0) = 0;
	
	// Config event handling
	// sendConfigEvent_l() must be called with ThreadBase::mLock held
	// Can temporarily release the lock if waiting for a reply from
	// processConfigEvents_l().
	status_t    sendConfigEvent_l(sp<ConfigEvent>& event);
	void        sendIoConfigEvent(audio_io_config_event event, pid_t pid = 0);
	void        sendIoConfigEvent_l(audio_io_config_event event, pid_t pid = 0);
	void        sendPrioConfigEvent(pid_t pid, pid_t tid, int32_t prio, bool forApp);
	void        sendPrioConfigEvent_l(pid_t pid, pid_t tid, int32_t prio, bool forApp);
	status_t    sendSetParameterConfigEvent_l(const String8& keyValuePair);
	status_t    sendCreateAudioPatchConfigEvent(const struct audio_patch *patch, audio_patch_handle_t *handle);
	status_t    sendReleaseAudioPatchConfigEvent(audio_patch_handle_t handle);
	void        processConfigEvents_l();
	
	// Audio patch management
    virtual     void        cacheParameters_l() = 0;
    virtual     status_t    createAudioPatch_l(const struct audio_patch *patch,audio_patch_handle_t *handle) = 0;
    virtual     status_t    releaseAudioPatch_l(const audio_patch_handle_t handle) = 0;
    virtual     void        getAudioPortConfig(struct audio_port_config *config) = 0;

    // see note at declaration of mStandby, mOutDevice and mInDevice
    bool        standby() const { return mStandby; }
    audio_devices_t outDevice() const { return mOutDevice; }
    audio_devices_t inDevice() const { return mInDevice; }
    audio_devices_t getDevice() const { return isOutput() ? mOutDevice : mInDevice; }
    virtual     bool        isOutput() const = 0;
    virtual     sp<StreamHalInterface> stream() const = 0;


	// Effect-related interfaces
	sp<EffectHandle> createEffect_l(	const sp<AudioFlinger::Client>& client,
						const sp<IEffectClient>& effectClient, int32_t priority,
						audio_session_t sessionId, effect_descriptor_t *desc, int *enabled,
                        status_t *status /*non-NULL*/, bool pinned);

	// return values for hasAudioSession (bit field)
	enum effect_state {
		EFFECT_SESSION = 0x1,   // the audio session corresponds to at least one
                                // effect
		TRACK_SESSION = 0x2,    // the audio session corresponds to at least one
                                // track
		FAST_SESSION = 0x4      // the audio session corresponds to at least one
                                // fast track
	};
	
	// get effect chain corresponding to session Id.
	sp<EffectChain> getEffectChain(audio_session_t sessionId);
	// same as getEffectChain() but must be called with ThreadBase mutex locked
	sp<EffectChain> getEffectChain_l(audio_session_t sessionId) const;
	// add an effect chain to the chain list (mEffectChains)
    virtual     status_t addEffectChain_l(const sp<EffectChain>& chain) = 0;
	// remove an effect chain from the chain list (mEffectChains)
    virtual     size_t removeEffectChain_l(const sp<EffectChain>& chain) = 0;
	// lock all effect chains Mutexes. Must be called before releasing the
	// ThreadBase mutex before processing the mixer and effects. This guarantees the
	// integrity of the chains during the process.
	// Also sets the parameter 'effectChains' to current value of mEffectChains.
	void lockEffectChains_l(Vector< sp<EffectChain> >& effectChains);
	// unlock effect chains after process
	void unlockEffectChains(const Vector< sp<EffectChain> >& effectChains);
	// get a copy of mEffectChains vector
	Vector< sp<EffectChain> > getEffectChains_l() const { return mEffectChains; };
	// set audio mode to all effect chains
	void setMode(audio_mode_t mode);
	// get effect module with corresponding ID on specified audio session
	sp<AudioFlinger::EffectModule> getEffect(audio_session_t sessionId, int effectId);
	sp<AudioFlinger::EffectModule> getEffect_l(audio_session_t sessionId, int effectId);
	// add and effect module. Also creates the effect chain is none exists for
	// the effects audio session
	status_t addEffect_l(const sp< EffectModule>& effect);
	// remove and effect module. Also removes the effect chain is this was the last
	// effect
	 void removeEffect_l(const sp< EffectModule>& effect, bool release = false);
	// disconnect an effect handle from module and destroy module if last handle
	 void disconnectEffectHandle(EffectHandle *handle, bool unpinIfLast);
	// detach all tracks connected to an auxiliary effect
    virtual     void detachAuxEffect_l(int effectId __unused) {}
    
	// returns a combination of:
	// - EFFECT_SESSION if effects on this audio session exist in one chain
	// - TRACK_SESSION if tracks on this audio session exist
	// - FAST_SESSION if fast tracks on this audio session exist
    virtual     uint32_t hasAudioSession_l(audio_session_t sessionId) const = 0;
	uint32_t hasAudioSession(audio_session_t sessionId) const {
		Mutex::Autolock _l(mLock);
		return hasAudioSession_l(sessionId);
	}

	// the value returned by default implementation is not important as the
	// strategy is only meaningful for PlaybackThread which implements this method
	virtual uint32_t getStrategyForSession_l(audio_session_t sessionId __unused)
		{ return 0; }

	// check if some effects must be suspended/restored when an effect is enabled or disabled
	void checkSuspendOnEffectEnabled(const sp<EffectModule>& effect, bool enabled,
										audio_session_t sessionId = AUDIO_SESSION_OUTPUT_MIX);
	void checkSuspendOnEffectEnabled_l(const sp<EffectModule>& effect, bool enabled,
										audio_session_t sessionId = AUDIO_SESSION_OUTPUT_MIX);

	virtual status_t    setSyncEvent(const sp<SyncEvent>& event) = 0;
	virtual bool        isValidSyncEvent(const sp<SyncEvent>& event) const = 0;

	// Return a reference to a per-thread heap which can be used to allocate IMemory
	// objects that will be read-only to client processes, read/write to mediaserver,
	// and shared by all client processes of the thread.
	// The heap is per-thread rather than common across all threads, because
	// clients can't be trusted not to modify the offset of the IMemory they receive.
	// If a thread does not have such a heap, this method returns 0.
	virtual sp<MemoryDealer>    readOnlyHeap() const { return 0; }

	virtual sp<IMemory> pipeMemory() const { return 0; }

	void systemReady();

	// checkEffectCompatibility_l() must be called with ThreadBase::mLock held
	virtual status_t    checkEffectCompatibility_l(const effect_descriptor_t *desc, audio_session_t sessionId) = 0;
	void        broadcast_l();

    mutable     Mutex                   mLock;
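
setParameters() / getParameters() above exchange configuration as key/value pairs packed into one string, e.g. "routing=2;sampling_rate=48000" (the same convention used toward the audio HAL; the framework itself parses it with its AudioParameter / String8 utilities). A small standalone sketch of that format, for illustration only:

#include <map>
#include <sstream>
#include <string>

// Illustrative only: parse "key1=value1;key2=value2" the way audio parameters
// are conventionally packed for setParameters() / getParameters().
static std::map<std::string, std::string> parseKeyValuePairs(const std::string& kvPairs) {
    std::map<std::string, std::string> result;
    std::stringstream ss(kvPairs);
    std::string pair;
    while (std::getline(ss, pair, ';')) {
        const size_t eq = pair.find('=');
        if (eq != std::string::npos) {
            result[pair.substr(0, eq)] = pair.substr(eq + 1);
        }
    }
    return result;
}

// A thread receiving "routing=2;sampling_rate=48000" would then act on
// {"routing": "2", "sampling_rate": "48000"} in checkForNewParameter_l().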

7.1.3 Protected members of class ThreadBase

These are mostly the thread's audio parameters:

protected:
	const sp<AudioFlinger>  mAudioFlinger;
	
	// updated by PlaybackThread::readOutputParameters_l() or RecordThread::readInputParameters_l()
	uint32_t                mSampleRate;
	size_t                  mFrameCount;       // output HAL, direct output, record
	audio_channel_mask_t    mChannelMask;
	uint32_t                mChannelCount;
	size_t                  mFrameSize;
	// not HAL frame size, this is for output sink (to pipe to fast mixer)
	audio_format_t          mFormat;           	// Source format for Recording and
	                                            // Sink format for Playback.
                         						// Sink format may be different than
                            					// HAL format if Fastmixer is used.
	audio_format_t          mHALFormat;
	size_t                  mBufferSize;       // HAL buffer size for read() or write()

	Vector< sp<ConfigEvent> >     mConfigEvents;
	Vector< sp<ConfigEvent> >     mPendingConfigEvents; // events awaiting system ready	
	
	// These fields are written and read by thread itself without lock or barrier,
	// and read by other threads without lock or barrier via standby(), outDevice()
	// and inDevice().
	// Because of the absence of a lock or barrier, any other thread that reads
	// these fields must use the information in isolation, or be prepared to deal
	// with possibility that it might be inconsistent with other information.
	bool                    mStandby;     // Whether thread is currently in standby.
	audio_devices_t         mOutDevice;   // output device
	audio_devices_t         mInDevice;    // input device
	audio_devices_t         mPrevOutDevice;   // previous output device
	audio_devices_t         mPrevInDevice;    // previous input device
	struct audio_patch      mPatch;
	audio_source_t          mAudioSource;
	
	const audio_io_handle_t mId;
	Vector< sp<EffectChain> > mEffectChains;
	
	static const int        kThreadNameLength = 16; // prctl(PR_SET_NAME) limit
	char                    mThreadName[kThreadNameLength]; // guaranteed NUL-terminated
	sp<IPowerManager>       mPowerManager;
	sp<IBinder>             mWakeLockToken;
	const sp<PMDeathRecipient> mDeathRecipient;

7.1.4 Inner class ActiveTracks

ThreadBase also defines an inner template class used to manage the set of currently active tracks:

@ \src\frameworks\av\services\audioflinger\Threads.h

class ThreadBase : public Thread {
	template <typename T>
	class ActiveTracks {
	public:
		explicit ActiveTracks(SimpleLog *localLog = nullptr)
			: mActiveTracksGeneration(0)
			, mLastActiveTracksGeneration(0)
			, mLocalLog(localLog)
		{ }
	
		~ActiveTracks() {
			ALOGW_IF(!mActiveTracks.isEmpty(),
					"ActiveTracks should be empty in destructor");
		}
		// returns the last track added (even though it may have been
		// subsequently removed from ActiveTracks).
		//
		// Used for DirectOutputThread to ensure a flush is called when transitioning
		// to a new track (even though it may be on the same session).
		// Used for OffloadThread to ensure that volume and mixer state is
		// taken from the latest track added.
		//
		// The latest track is saved with a weak pointer to prevent keeping an
		// otherwise useless track alive. Thus the function will return nullptr
		// if the latest track has subsequently been removed and destroyed.
		sp<T> getLatest() {
			return mLatestActiveTrack.promote();
		}
	
		// SortedVector methods
		ssize_t         add(const sp<T> &track);
		ssize_t         remove(const sp<T> &track);
		size_t          size() const {
			return mActiveTracks.size();
		}
		ssize_t         indexOf(const sp<T>& item) {
			return mActiveTracks.indexOf(item);
		}
		sp<T>           operator[](size_t index) const {
			return mActiveTracks[index];
		}
		typename SortedVector<sp<T>>::iterator begin() {
			return mActiveTracks.begin();
		}
		typename SortedVector<sp<T>>::iterator end() {
			return mActiveTracks.end();
		}
	
		// Due to Binder recursion optimization, clear() and updatePowerState()
		// cannot be called from a Binder thread because they may call back into
		// the original calling process (system server) for BatteryNotifier
		// (which requires a Java environment that may not be present).
		// Hence, call clear() and updatePowerState() only from the
		// ThreadBase thread.
		void            clear();
		// periodically called in the threadLoop() to update power state uids.
		void            updatePowerState(sp<ThreadBase> thread, bool force = false);
	
	private:
		void            logTrack(const char *funcName, const sp<T> &track) const;
	
		SortedVector<uid_t> getWakeLockUids() {
			SortedVector<uid_t> wakeLockUids;
			for (const sp<T> &track : mActiveTracks) {
				wakeLockUids.add(track->uid());
			}
			return wakeLockUids; // moved by underlying SharedBuffer
		}
	
		std::map<uid_t, std::pair<ssize_t /* previous */, ssize_t /* current */>>
							mBatteryCounter;
		SortedVector<sp<T>> mActiveTracks;
		int                 mActiveTracksGeneration;
		int                 mLastActiveTracksGeneration;
		wp<T>               mLatestActiveTrack; // latest track added to ActiveTracks
		SimpleLog * const   mLocalLog;
	};
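
A quick standalone analogue (with simplified stand-in types, not the real Track / sp<> classes) of how a thread loop drives this container, and of why mLatestActiveTrack is kept as a weak pointer: once the track is removed and destroyed, getLatest() promotes to null instead of keeping the track alive.

#include <algorithm>
#include <cstdio>
#include <memory>
#include <vector>

struct FakeTrack { int uid; };    // stand-in for a Track

int main() {
    std::vector<std::shared_ptr<FakeTrack>> activeTracks;    // ~ mActiveTracks
    std::weak_ptr<FakeTrack> latestActiveTrack;               // ~ mLatestActiveTrack

    auto track = std::make_shared<FakeTrack>(FakeTrack{10042});
    activeTracks.push_back(track);            // ActiveTracks::add()
    latestActiveTrack = track;                // remembered as the latest addition

    // One "threadLoop" iteration: walk the active set (begin()/end()); the real
    // code also calls updatePowerState() to refresh wake-lock / battery UIDs.
    for (const auto& t : activeTracks) {
        std::printf("mixing track of uid %d\n", t->uid);
    }

    // ActiveTracks::remove(): because the "latest" pointer is weak, once the
    // track is destroyed getLatest() (promote()) returns null rather than
    // keeping an otherwise useless track alive.
    activeTracks.erase(std::remove(activeTracks.begin(), activeTracks.end(), track),
                       activeTracks.end());
    track.reset();
    if (latestActiveTrack.expired()) {
        std::printf("latest track already destroyed; getLatest() would return nullptr\n");
    }
    return 0;
}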

7.1.5 Class VolumeInterface: volume-related operations

Threads.h also defines an abstract class, VolumeInterface, used for volume control:

@ \src\frameworks\av\services\audioflinger\Threads.h

class VolumeInterface {
 public:
    virtual ~VolumeInterface() {}
    virtual void        setMasterVolume(float value) = 0;
    virtual void        setMasterMute(bool muted) = 0;
    virtual void        setStreamVolume(audio_stream_type_t stream, float value) = 0;
    virtual void        setStreamMute(audio_stream_type_t stream, bool muted) = 0;
    virtual float       streamVolume(audio_stream_type_t stream) const = 0;
};
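
Only the threads that actually render audio implement this interface (PlaybackThread and MmapPlaybackThread), which lets AudioFlinger apply master volume and mute without knowing the concrete thread type. A trivial standalone sketch of that polymorphic use (the interface is re-declared here in reduced form only so the sketch is self-contained):

#include <vector>

class Volume {                        // reduced stand-in for VolumeInterface
public:
    virtual ~Volume() {}
    virtual void setMasterVolume(float value) = 0;
    virtual void setMasterMute(bool muted) = 0;
};

class FakePlaybackThread : public Volume {   // stand-in, not the real class
public:
    void setMasterVolume(float value) override { mMasterVolume = value; }
    void setMasterMute(bool muted) override { mMasterMute = muted; }
private:
    float mMasterVolume = 1.0f;
    bool  mMasterMute = false;
};

// AudioFlinger::setMasterVolume() conceptually does something like this over
// all of its playback-capable threads:
static void applyMasterVolume(const std::vector<Volume*>& threads, float value) {
    for (Volume* t : threads) {
        t->setMasterVolume(value);
    }
}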

7.1.6 Class PlaybackThread (inherits ThreadBase, StreamOutHalInterfaceCallback and VolumeInterface): base class for playback threads
@ \src\frameworks\av\services\audioflinger\Threads.h

// --- PlaybackThread ---
class PlaybackThread : public ThreadBase, public StreamOutHalInterfaceCallback, public VolumeInterface 
{
public:
#include "PlaybackTracks.h"
	// Mixer states
    enum mixer_state {
        MIXER_IDLE,             // no active tracks
        MIXER_TRACKS_ENABLED,   // at least one active track, but no track has any data ready
        MIXER_TRACKS_READY,      // at least one active track, and at least one track has data
        MIXER_DRAIN_TRACK,      // drain currently playing track
        MIXER_DRAIN_ALL,        // fully drain the hardware
        // standby mode does not have an enum value
        // suspend by audio policy manager is orthogonal to mixer state
    };

    // retry count before removing active track in case of underrun on offloaded thread:
    // we need to make sure that AudioTrack client has enough time to send large buffers
    static const int8_t kMaxTrackRetriesOffload = 20;
    static const int8_t kMaxTrackStartupRetriesOffload = 100;
    static const int8_t kMaxTrackStopRetriesOffload = 2;
    
	// Each client UID may own at most 14 tracks
    // 14 tracks max per client allows for 2 misbehaving application leaving 4 available tracks.
    static const uint32_t kMaxTracksPerUid = 14;

    // Maximum delay (in nanoseconds) for upcoming buffers in suspend mode, otherwise
    // if delay is greater, the estimated time for timeLoopNextNs is reset.
    // This allows for catch-up to be done for small delays, while resetting the estimate
    // for initial conditions or large delays.
    static const nsecs_t kMaxNextBufferDelayNs = 100000000;

    PlaybackThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output, audio_io_handle_t id, audio_devices_t device, type_t type, bool systemReady);
    virtual	~PlaybackThread();
	void	dump(int fd, const Vector<String16>& args);
    // Thread virtuals
    virtual     bool        threadLoop();
    // RefBase
    virtual     void        onFirstRef();
    virtual     status_t    checkEffectCompatibility_l(const effect_descriptor_t *desc, audio_session_t sessionId);

protected:
    // Code snippets that were lifted up out of threadLoop()
    virtual     void        threadLoop_mix() = 0;
    virtual     void        threadLoop_sleepTime() = 0;
    virtual     ssize_t     threadLoop_write();
    virtual     void        threadLoop_drain();
    virtual     void        threadLoop_standby();
    virtual     void        threadLoop_exit();
    virtual     void        threadLoop_removeTracks(const Vector< sp<Track> >& tracksToRemove);

                // prepareTracks_l reads and writes mActiveTracks, and returns
                // the pending set of tracks to remove via Vector 'tracksToRemove'.  The caller
                // is responsible for clearing or destroying this Vector later on, when it
                // is safe to do so. That will drop the final ref count and destroy the tracks.
    virtual     mixer_state prepareTracks_l(Vector< sp<Track> > *tracksToRemove) = 0;
                void        removeTracks_l(const Vector< sp<Track> >& tracksToRemove);
                
    // StreamOutHalInterfaceCallback implementation
    virtual     void        onWriteReady();
    virtual     void        onDrainReady();
    virtual     void        onError();
                void        resetWriteBlocked(uint32_t sequence);
                void        resetDraining(uint32_t sequence);
    virtual     bool        waitingAsyncCallback();
    virtual     bool        waitingAsyncCallback_l();
    virtual     bool        shouldStandby_l();
    virtual     void        onAddNewTrack_l();
                void        onAsyncError(); // error reported by AsyncCallbackThread

    // ThreadBase virtuals
    virtual     void        preExit();
    virtual     void        onIdleMixer();

    virtual     bool        keepWakeLock() const { return true; }
    virtual     void        acquireWakeLock_l() {
                                ThreadBase::acquireWakeLock_l();
                                mActiveTracks.updatePowerState(this, true /* force */); }
public:
    virtual     status_t    initCheck() const { return (mOutput == NULL) ? NO_INIT : NO_ERROR; }
                // return estimated latency in milliseconds, as reported by HAL
                uint32_t    latency() const;
                // same, but lock must already be held
                uint32_t    latency_l() const;
                // VolumeInterface
    virtual     void        setMasterVolume(float value);
    virtual     void        setMasterMute(bool muted);
    virtual     void        setStreamVolume(audio_stream_type_t stream, float value);
    virtual     void        setStreamMute(audio_stream_type_t stream, bool muted);
    virtual     float       streamVolume(audio_stream_type_t stream) const;
                sp<Track>   createTrack_l(
                                const sp<AudioFlinger::Client>& client,
                                audio_stream_type_t streamType,
                                uint32_t sampleRate,
                                audio_format_t format,
                                audio_channel_mask_t channelMask,
                                size_t *pFrameCount,
                                const sp<IMemory>& sharedBuffer,
                                audio_session_t sessionId,
                                audio_output_flags_t *flags,
                                pid_t tid,
                                uid_t uid,
                                status_t *status /*non-NULL*/,
                                audio_port_handle_t portId);

                AudioStreamOut* getOutput() const;
                AudioStreamOut* clearOutput();
                virtual sp<StreamHalInterface> stream() const;

                // a very large number of suspend() will eventually wraparound, but unlikely
                void        suspend() { (void) android_atomic_inc(&mSuspended); }
                void        restore() {
                                    // if restore() is done without suspend(), get back into
                                    // range so that the next suspend() will operate correctly
                                    if (android_atomic_dec(&mSuspended) <= 0) {
                                        android_atomic_release_store(0, &mSuspended);
                                    }
                                }
                bool        isSuspended() const { return android_atomic_acquire_load(&mSuspended) > 0; }
    virtual     String8     getParameters(const String8& keys);
    virtual     void        ioConfigChanged(audio_io_config_event event, pid_t pid = 0);
                status_t    getRenderPosition(uint32_t *halFrames, uint32_t *dspFrames);
                // FIXME rename mixBuffer() to sinkBuffer() and remove int16_t* dependency.
                // Consider also removing and passing an explicit mMainBuffer initialization
                // parameter to AF::PlaybackThread::Track::Track().
                int16_t     *mixBuffer() const { return reinterpret_cast<int16_t *>(mSinkBuffer); };
    virtual     void detachAuxEffect_l(int effectId);
                status_t attachAuxEffect(const sp<AudioFlinger::PlaybackThread::Track>& track,int EffectId);
                status_t attachAuxEffect_l(const sp<AudioFlinger::PlaybackThread::Track>& track, int EffectId);
                virtual status_t addEffectChain_l(const sp<EffectChain>& chain);
                virtual size_t removeEffectChain_l(const sp<EffectChain>& chain);
                virtual uint32_t hasAudioSession_l(audio_session_t sessionId) const;
                virtual uint32_t getStrategyForSession_l(audio_session_t sessionId);
                virtual status_t setSyncEvent(const sp<SyncEvent>& event);
                virtual bool     isValidSyncEvent(const sp<SyncEvent>& event) const;
                // called with AudioFlinger lock held
                        bool     invalidateTracks_l(audio_stream_type_t streamType);
                virtual void     invalidateTracks(audio_stream_type_t streamType);
    virtual     size_t      frameCount() const { return mNormalFrameCount; }
    virtual     status_t    getTimestamp_l(AudioTimestamp& timestamp);
                void        addPatchTrack(const sp<PatchTrack>& track);
                void        deletePatchTrack(const sp<PatchTrack>& track);
    virtual     void        getAudioPortConfig(struct audio_port_config *config);
                // Return the asynchronous signal wait time.
    virtual     int64_t     computeWaitTimeNs_l() const { return INT64_MAX; }
    virtual     bool        isOutput() const override { return true; }
};
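
The protected threadLoop_*() hooks above are exactly the pieces that PlaybackThread::threadLoop() stitches together and that the subclasses override. A rough standalone skeleton of that structure (heavily simplified: standby handling, effect processing, timestamps and suspend logic are omitted; see Threads.cpp for the real loop):

// Simplified skeleton of one playback thread cycle, modeled on the hooks above.
// This is an illustration, not the real threadLoop().
enum sketch_mixer_state { SKETCH_MIXER_IDLE, SKETCH_MIXER_TRACKS_READY };

struct PlaybackLoopSketch {
    // Stand-ins for the virtuals overridden by MixerThread, DirectOutputThread,
    // OffloadThread and DuplicatingThread.
    void processConfigEvents() {}                 // handle queued ConfigEvents
    sketch_mixer_state prepareTracks() { return SKETCH_MIXER_TRACKS_READY; }
    void mix() {}                                 // threadLoop_mix(): fill the sink buffer
    void sleepIfIdle() {}                         // threadLoop_sleepTime()
    void write() {}                               // threadLoop_write(): push to the HAL
    void removeStoppedTracks() {}                 // threadLoop_removeTracks()

    // One pass of the loop; the real threadLoop() repeats this until exit().
    void loopOnce() {
        processConfigEvents();                      // config changes are handled first
        sketch_mixer_state state = prepareTracks(); // decide what this cycle does
        if (state == SKETCH_MIXER_TRACKS_READY) {
            mix();                                  // produce one buffer of audio
            write();                                // hand it to the output stream
        } else {
            sleepIfIdle();                          // nothing ready: idle or go to standby
        }
        removeStoppedTracks();                      // release tracks that have finished
    }
};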

7.1.7 Class MixerThread (inherits PlaybackThread): mixing thread

MixerThread derives from PlaybackThread.

@ /frameworks/av/services/audioflinger/Threads.h

class MixerThread : public PlaybackThread {
public:
    MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output, audio_io_handle_t id,
                audio_devices_t device, bool systemReady, type_t type = MIXER);
    virtual             ~MixerThread();
    
    // Thread virtuals
    virtual     bool        checkForNewParameter_l(const String8& keyValuePair, status_t& status);
    virtual     void        dumpInternals(int fd, const Vector<String16>& args);

protected:
    virtual     mixer_state prepareTracks_l(Vector< sp<Track> > *tracksToRemove);
    virtual     int         getTrackName_l(audio_channel_mask_t channelMask, audio_format_t format,
                                           audio_session_t sessionId, uid_t uid);
    virtual     void        deleteTrackName_l(int name);
    virtual     uint32_t    idleSleepTimeUs() const;
    virtual     uint32_t    suspendSleepTimeUs() const;
    virtual     void        cacheParameters_l();

    // threadLoop snippets
    virtual     ssize_t     threadLoop_write();
    virtual     void        threadLoop_standby();
    virtual     void        threadLoop_mix();
    virtual     void        threadLoop_sleepTime();
    virtual     void        threadLoop_removeTracks(const Vector< sp<Track> >& tracksToRemove);
    virtual     void        onIdleMixer();
    virtual     uint32_t    correctLatency_l(uint32_t latency) const;

    virtual     status_t    createAudioPatch_l(const struct audio_patch *patch, audio_patch_handle_t *handle);
    virtual     status_t    releaseAudioPatch_l(const audio_patch_handle_t handle);

                AudioMixer* mAudioMixer;    // normal mixer
private:
                // one-time initialization, no locks required
                sp<FastMixer>     mFastMixer;     // non-0 if there is also a fast mixer
                sp<AudioWatchdog> mAudioWatchdog; // non-0 if there is an audio watchdog thread

                // contents are not guaranteed to be consistent, no locks required
                FastMixerDumpState mFastMixerDumpState;
#ifdef STATE_QUEUE_DUMP
                StateQueueObserverDump mStateQueueObserverDump;
                StateQueueMutatorDump  mStateQueueMutatorDump;
#endif
                AudioWatchdogDump mAudioWatchdogDump;
};
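
MixerThread delegates the actual sample mixing to mAudioMixer (and, when configured, to a FastMixer running with a smaller period). Conceptually, mixing N tracks into the sink buffer is a saturating sum per sample; the standalone sketch below shows only that idea, whereas the real AudioMixer additionally resamples, applies per-track volume ramps and converts formats.

#include <algorithm>
#include <cstdint>
#include <vector>

// Conceptual sketch only: accumulate several 16-bit PCM tracks into one sink
// buffer with saturation.
static void mixTracks(const std::vector<std::vector<int16_t>>& tracks,
                      std::vector<int16_t>& sink) {
    std::vector<int32_t> accum(sink.size(), 0);
    for (const auto& track : tracks) {
        for (size_t i = 0; i < sink.size() && i < track.size(); ++i) {
            accum[i] += track[i];                              // widen to avoid overflow
        }
    }
    for (size_t i = 0; i < sink.size(); ++i) {
        sink[i] = static_cast<int16_t>(std::clamp<int32_t>(accum[i], -32768, 32767));
    }
}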

7.1.8 Class DirectOutputThread (inherits PlaybackThread): plays audio that needs no mixing
class DirectOutputThread : public PlaybackThread {
public:
    DirectOutputThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
                       audio_io_handle_t id, audio_devices_t device, bool systemReady);
    virtual                 ~DirectOutputThread();
    // Thread virtuals
    virtual     bool        checkForNewParameter_l(const String8& keyValuePair,
                                                   status_t& status);
    virtual     void        flushHw_l();

protected:
    virtual     int         getTrackName_l(audio_channel_mask_t channelMask, audio_format_t format,
                                           audio_session_t sessionId, uid_t uid);
    virtual     void        deleteTrackName_l(int name);
    virtual     uint32_t    activeSleepTimeUs() const;
    virtual     uint32_t    idleSleepTimeUs() const;
    virtual     uint32_t    suspendSleepTimeUs() const;
    virtual     void        cacheParameters_l();

    // threadLoop snippets
    virtual     mixer_state prepareTracks_l(Vector< sp<Track> > *tracksToRemove);
    virtual     void        threadLoop_mix();
    virtual     void        threadLoop_sleepTime();
    virtual     void        threadLoop_exit();
    virtual     bool        shouldStandby_l();
    virtual     void        onAddNewTrack_l();
    bool mVolumeShaperActive;

    DirectOutputThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
                        audio_io_handle_t id, uint32_t device, ThreadBase::type_t type,
                        bool systemReady);
    void processVolume_l(Track *track, bool lastTrack);
    // prepareTracks_l() tells threadLoop_mix() the name of the single active track
    sp<Track>               mActiveTrack;
    wp<Track>               mPreviousTrack;         // used to detect track switch
    uint64_t                mFramesWrittenAtStandby;// used to reset frames on track reset
    uint64_t                mFramesWrittenForSleep; // used to reset frames on track removal
                                                    // or underrun before entering standby
public:
    virtual     bool        hasFastMixer() const { return false; }
    virtual     int64_t     computeWaitTimeNs_l() const override;
    virtual     status_t    getTimestamp_l(AudioTimestamp& timestamp) override;
};

7.1.9 Class OffloadThread (inherits DirectOutputThread): offloaded (hardware-decoded) playback
class OffloadThread : public DirectOutputThread {
public:

    OffloadThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
                        audio_io_handle_t id, uint32_t device, bool systemReady);
    virtual                 ~OffloadThread() {};
    virtual     void        flushHw_l();

protected:
    // threadLoop snippets
    virtual     mixer_state prepareTracks_l(Vector< sp<Track> > *tracksToRemove);
    virtual     void        threadLoop_exit();
    virtual     bool        waitingAsyncCallback();
    virtual     bool        waitingAsyncCallback_l();
    virtual     void        invalidateTracks(audio_stream_type_t streamType);
    virtual     bool        keepWakeLock() const { return (mKeepWakeLock || (mDrainSequence & 1)); }

private:
    size_t      mPausedWriteLength;     // length in bytes of write interrupted by pause
    size_t      mPausedBytesRemaining;  // bytes still waiting in mixbuffer after resume
    bool        mKeepWakeLock;          // keep wake lock while waiting for write callback
    uint64_t    mOffloadUnderrunPosition; // Current frame position for offloaded playback
                                          // used and valid only during underrun.  ~0 if
                                          // no underrun has occurred during playback and
                                          // is not reset on standby.
};


7.1.10 Class AsyncCallbackThread (inherits Thread): handles asynchronous HAL write/drain callbacks
class AsyncCallbackThread : public Thread {
public:
    explicit AsyncCallbackThread(const wp<PlaybackThread>& playbackThread);
    virtual             ~AsyncCallbackThread();
    // Thread virtuals
    virtual bool        threadLoop();
    // RefBase
    virtual void        onFirstRef();

            void        exit();
            void        setWriteBlocked(uint32_t sequence);
            void        resetWriteBlocked();
            void        setDraining(uint32_t sequence);
            void        resetDraining();
            void        setAsyncError();

private:
    const wp<PlaybackThread>   mPlaybackThread;
    // mWriteAckSequence corresponds to the last write sequence passed by the offload thread via
    // setWriteBlocked(). The sequence is shifted one bit to the left and the lsb is used
    // to indicate that the callback has been received via resetWriteBlocked()
    uint32_t                   mWriteAckSequence;
    // mDrainSequence corresponds to the last drain sequence passed by the offload thread via
    // setDraining(). The sequence is shifted one bit to the left and the lsb is used
    // to indicate that the callback has been received via resetDraining()
    uint32_t                   mDrainSequence;
    Condition                  mWaitWorkCV;
    Mutex                      mLock;
    bool                       mAsyncError;
};
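
The comments on mWriteAckSequence / mDrainSequence describe a compact encoding: the sequence number passed by the offload thread is shifted left by one bit, and the least-significant bit is set once the matching HAL callback has been received. A standalone sketch of that encoding (illustrative only; the real code also checks that the callback matches the awaited sequence):

#include <cstdint>
#include <cstdio>

int main() {
    uint32_t writeAckSequence = 0;

    // setWriteBlocked(sequence): store the sequence, lsb cleared = not yet acked.
    uint32_t sequence = 5;
    writeAckSequence = sequence << 1;

    // resetWriteBlocked(): the write callback arrived, set the lsb so the
    // offload thread knows the write it was waiting on has completed.
    writeAckSequence |= 1;

    bool callbackReceived = (writeAckSequence & 1) != 0;
    std::printf("sequence=%u, callback received=%d\n",
                writeAckSequence >> 1, callbackReceived ? 1 : 0);
    return 0;
}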


7.1.11 Class DuplicatingThread (inherits MixerThread): duplicating thread
class DuplicatingThread : public MixerThread {
public:
    DuplicatingThread(const sp<AudioFlinger>& audioFlinger, MixerThread* mainThread,
                      audio_io_handle_t id, bool systemReady);
    virtual                 ~DuplicatingThread();

    // Thread virtuals
                void        addOutputTrack(MixerThread* thread);
                void        removeOutputTrack(MixerThread* thread);
                uint32_t    waitTimeMs() const { return mWaitTimeMs; }
protected:
    virtual     uint32_t    activeSleepTimeUs() const;

private:
                bool        outputsReady(const SortedVector< sp<OutputTrack> > &outputTracks);
protected:
    // threadLoop snippets
    virtual     void        threadLoop_mix();
    virtual     void        threadLoop_sleepTime();
    virtual     ssize_t     threadLoop_write();
    virtual     void        threadLoop_standby();
    virtual     void        cacheParameters_l();

private:
    // called from threadLoop, addOutputTrack, removeOutputTrack
    virtual     void        updateWaitTime_l();
protected:
    virtual     void        saveOutputTracks();
    virtual     void        clearOutputTracks();
private:

                uint32_t    mWaitTimeMs;
    SortedVector < sp<OutputTrack> >  outputTracks;
    SortedVector < sp<OutputTrack> >  mOutputTracks;
public:
    virtual     bool        hasFastMixer() const { return false; }
};
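
DuplicatingThread mixes exactly like a MixerThread, but its threadLoop_write() pushes the same mixed buffer into every attached OutputTrack, each of which feeds another MixerThread. A tiny standalone sketch of that fan-out idea (simplified stand-in types):

#include <cstdint>
#include <vector>

struct FakeOutputTrack {                       // stand-in for OutputTrack
    std::vector<int16_t> received;
    void write(const int16_t* data, size_t frames) {
        received.insert(received.end(), data, data + frames);
    }
};

// Conceptually what DuplicatingThread::threadLoop_write() does with mOutputTracks:
// the same mixed data is written to every output.
static void duplicateWrite(std::vector<FakeOutputTrack>& outputs,
                           const std::vector<int16_t>& mixedBuffer) {
    for (auto& out : outputs) {
        out.write(mixedBuffer.data(), mixedBuffer.size());
    }
}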


7.1.12 Class RecordThread (inherits ThreadBase): recording thread
// record thread
class RecordThread : public ThreadBase
{
public:
    class RecordTrack;
    /* The ResamplerBufferProvider is used to retrieve recorded input data from the
     * RecordThread.  It maintains local state on the relative position of the read
     * position of the RecordTrack compared with the RecordThread. */
    class ResamplerBufferProvider : public AudioBufferProvider
    {
    public:
        explicit ResamplerBufferProvider(RecordTrack* recordTrack) :
            mRecordTrack(recordTrack), mRsmpInUnrel(0), mRsmpInFront(0) { }
        virtual ~ResamplerBufferProvider() { }

        // called to set the ResamplerBufferProvider to head of the RecordThread data buffer,
        // skipping any previous data read from the hal.
        virtual void reset();

        /* Synchronizes RecordTrack position with the RecordThread.
         * Calculates available frames and handle overruns if the RecordThread
         * has advanced faster than the ResamplerBufferProvider has retrieved data.
         * TODO: why not do this for every getNextBuffer?
         *
         * Parameters
         * framesAvailable:  pointer to optional output size_t to store record track
         *                   frames available.
         *      hasOverrun:  pointer to optional boolean, returns true if track has overrun.
         */
        virtual void sync(size_t *framesAvailable = NULL, bool *hasOverrun = NULL);
        
        // AudioBufferProvider interface
        virtual status_t    getNextBuffer(AudioBufferProvider::Buffer* buffer);
        virtual void        releaseBuffer(AudioBufferProvider::Buffer* buffer);
    private:
        RecordTrack * const mRecordTrack;
        size_t              mRsmpInUnrel;   // unreleased frames remaining from
                                            // most recent getNextBuffer
                                            // for debug only
        int32_t             mRsmpInFront;   // next available frame
                                            // rolling counter that is never cleared
    };

#include "RecordTracks.h"

            RecordThread(const sp<AudioFlinger>& audioFlinger,
                    AudioStreamIn *input,
                    audio_io_handle_t id,
                    audio_devices_t outDevice,
                    audio_devices_t inDevice,
                    bool systemReady
#ifdef TEE_SINK
                    , const sp<NBAIO_Sink>& teeSink
#endif
                    );
            virtual     ~RecordThread();

    // no addTrack_l ?
    void        destroyTrack_l(const sp<RecordTrack>& track);
    void        removeTrack_l(const sp<RecordTrack>& track);

    void        dumpInternals(int fd, const Vector<String16>& args);
    void        dumpTracks(int fd, const Vector<String16>& args);

    // Thread virtuals
    virtual bool        threadLoop();
    virtual void        preExit();

    // RefBase
    virtual void        onFirstRef();

    virtual status_t    initCheck() const { return (mInput == NULL) ? NO_INIT : NO_ERROR; }
    virtual sp<MemoryDealer>    readOnlyHeap() const { return mReadOnlyHeap; }
    virtual sp<IMemory> pipeMemory() const { return mPipeMemory; }

            sp<AudioFlinger::RecordThread::RecordTrack>  createRecordTrack_l(
                    const sp<AudioFlinger::Client>& client,
                    uint32_t sampleRate,
                    audio_format_t format,
                    audio_channel_mask_t channelMask,
                    size_t *pFrameCount,
                    audio_session_t sessionId,
                    size_t *notificationFrames,
                    uid_t uid,
                    audio_input_flags_t *flags,
                    pid_t tid,
                    status_t *status /*non-NULL*/,
                    audio_port_handle_t portId);

            status_t    start(RecordTrack* recordTrack,
                              AudioSystem::sync_event_t event,
                              audio_session_t triggerSession);

            // ask the thread to stop the specified track, and
            // return true if the caller should then do it's part of the stopping process
            bool        stop(RecordTrack* recordTrack);

            void        dump(int fd, const Vector<String16>& args);
            AudioStreamIn* clearInput();
            virtual sp<StreamHalInterface> stream() const;


    virtual bool        checkForNewParameter_l(const String8& keyValuePair,status_t& status);
    virtual void        cacheParameters_l() {}
    virtual String8     getParameters(const String8& keys);
    virtual void        ioConfigChanged(audio_io_config_event event, pid_t pid = 0);
    virtual status_t    createAudioPatch_l(const struct audio_patch *patch,audio_patch_handle_t *handle);
    virtual status_t    releaseAudioPatch_l(const audio_patch_handle_t handle);
            void        addPatchRecord(const sp<PatchRecord>& record);
            void        deletePatchRecord(const sp<PatchRecord>& record);
            void        readInputParameters_l();
    virtual uint32_t    getInputFramesLost();

    virtual status_t addEffectChain_l(const sp<EffectChain>& chain);
    virtual size_t removeEffectChain_l(const sp<EffectChain>& chain);
    virtual uint32_t hasAudioSession_l(audio_session_t sessionId) const;

            // Return the set of unique session IDs across all tracks.
            // The keys are the session IDs, and the associated values are meaningless.
            // FIXME replace by Set [and implement Bag/Multiset for other uses].
            KeyedVector<audio_session_t, bool> sessionIds() const;

    virtual status_t setSyncEvent(const sp<SyncEvent>& event);
    virtual bool     isValidSyncEvent(const sp<SyncEvent>& event) const;
    static void syncStartEventCallback(const wp<SyncEvent>& event);
    virtual size_t      frameCount() const { return mFrameCount; }
            bool        hasFastCapture() const { return mFastCapture != 0; }
    virtual void        getAudioPortConfig(struct audio_port_config *config);
    virtual status_t    checkEffectCompatibility_l(const effect_descriptor_t *desc, audio_session_t sessionId);
    virtual void        acquireWakeLock_l() {
                            ThreadBase::acquireWakeLock_l();
                            mActiveTracks.updatePowerState(this, true /* force */); }
    virtual bool        isOutput() const override { return false; }
            void        checkBtNrec();

private:
            // Enter standby if not already in standby, and set mStandby flag
            void    standbyIfNotAlreadyInStandby();
            // Call the HAL standby method unconditionally, and don't change mStandby flag
            void    inputStandBy();
            void    checkBtNrec_l();
            AudioStreamIn                       *mInput;
            SortedVector < sp<RecordTrack> >    mTracks;
            // mActiveTracks has dual roles:  it indicates the current active track(s), and
            // is used together with mStartStopCond to indicate start()/stop() progress
            ActiveTracks<RecordTrack>           mActiveTracks;
            Condition                           mStartStopCond;
            // resampler converts input at HAL Hz to output at AudioRecord client Hz
            void                               *mRsmpInBuffer;  // size = mRsmpInFramesOA
            size_t                              mRsmpInFrames;  // size of resampler input in frames
            size_t                              mRsmpInFramesP2;// size rounded up to a power-of-2
            size_t                              mRsmpInFramesOA;// mRsmpInFramesP2 + over-allocation
            // rolling index that is never cleared
            int32_t                             mRsmpInRear;    // last filled frame + 1

            // For dumpsys
            const sp<NBAIO_Sink>                mTeeSink;
            const sp<MemoryDealer>              mReadOnlyHeap;
            // one-time initialization, no locks required
            sp<FastCapture>                     mFastCapture;   // non-0 if there is also
                                                                // a fast capture
            // FIXME audio watchdog thread
            // contents are not guaranteed to be consistent, no locks required
            FastCaptureDumpState                mFastCaptureDumpState;
#ifdef STATE_QUEUE_DUMP
            // FIXME StateQueue observer and mutator dump fields
#endif
            // FIXME audio watchdog dump
            // accessible only within the threadLoop(), no locks required
            //          mFastCapture->sq()      // for mutating and pushing state
            int32_t     mFastCaptureFutex;      // for cold idle
            // The HAL input source is treated as non-blocking,
            // but current implementation is blocking
            sp<NBAIO_Source>                    mInputSource;
            // The source for the normal capture thread to read from: mInputSource or mPipeSource
            sp<NBAIO_Source>                    mNormalSource;
            // If a fast capture is present, the non-blocking pipe sink written to by fast capture,
            // otherwise clear
            sp<NBAIO_Sink>                      mPipeSink;
            // If a fast capture is present, the non-blocking pipe source read by normal thread,
            // otherwise clear
            sp<NBAIO_Source>                    mPipeSource;
            // Depth of pipe from fast capture to normal thread and fast clients, always power of 2
            size_t                              mPipeFramesP2;
            // If a fast capture is present, the Pipe as IMemory, otherwise clear
            sp<IMemory>                         mPipeMemory;
            static const size_t                 kFastCaptureLogSize = 4 * 1024;
            sp<NBLog::Writer>                   mFastCaptureNBLogWriter;
            bool                                mFastTrackAvail;    // true if fast track available
            // common state to all record threads
            std::atomic_bool                    mBtNrecSuspended;
};
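
mRsmpInRear and the per-track mRsmpInFront are rolling 32-bit frame counters that are never reset; the number of frames a RecordTrack still has to consume is simply rear - front, and the signed subtraction stays correct across wraparound. A standalone sketch of that arithmetic and of the overrun handling ResamplerBufferProvider::sync() performs (illustrative only; it assumes a power-of-two buffer like mRsmpInFramesP2):

#include <cstdint>
#include <cstdio>

int main() {
    const size_t bufferFrames = 4096;      // stand-in for mRsmpInFramesP2 (power of two)
    int32_t rear  = 0x7FFFFFF0;            // frames written by the record thread so far
    int32_t front = 0x7FFFFF00;            // frames already consumed by this track

    int32_t filled = rear - front;         // frames waiting to be read by the track
    if (filled < 0 || (size_t)filled > bufferFrames) {
        // The thread advanced more than one buffer ahead of the track: overrun.
        // Resynchronize the track to the newest data (what sync() reports).
        front = rear - (int32_t)bufferFrames;
        filled = (int32_t)bufferFrames;
    }
    std::printf("frames available: %d\n", filled);

    // The actual read offset inside the buffer is the counter masked to the
    // power-of-two size.
    std::printf("read index in buffer: %d\n", front & (int32_t)(bufferFrames - 1));
    return 0;
}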

7.1.13 Class MmapThread (inherits ThreadBase): control thread for MMAP streams
class MmapThread : public ThreadBase
{
 public:
#include "MmapTracks.h"
    MmapThread(const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
                      AudioHwDevice *hwDev, sp<StreamHalInterface> stream,
                      audio_devices_t outDevice, audio_devices_t inDevice, bool systemReady);
    virtual     ~MmapThread();
    virtual     void        configure(const audio_attributes_t *attr,
                                      audio_stream_type_t streamType,
                                      audio_session_t sessionId,
                                      const sp<MmapStreamCallback>& callback,
                                      audio_port_handle_t deviceId,
                                      audio_port_handle_t portId);
                void        disconnect();

    // MmapStreamInterface
    status_t createMmapBuffer(int32_t minSizeFrames, struct audio_mmap_buffer_info *info);
    status_t getMmapPosition(struct audio_mmap_position *position);
    status_t start(const AudioClient& client, audio_port_handle_t *handle);
    status_t stop(audio_port_handle_t handle);
    status_t standby();

    // RefBase
    virtual     void        onFirstRef();
    // Thread virtual
    virtual     bool        threadLoop();
    virtual     void        threadLoop_exit();
    virtual     void        threadLoop_standby();
    virtual     bool        shouldStandby_l() { return false; }

    virtual     status_t    initCheck() const { return (mHalStream == 0) ? NO_INIT : NO_ERROR; }
    virtual     size_t      frameCount() const { return mFrameCount; }
    virtual     bool        checkForNewParameter_l(const String8& keyValuePair, status_t& status);
    virtual     String8     getParameters(const String8& keys);
    virtual     void        ioConfigChanged(audio_io_config_event event, pid_t pid = 0);
                void        readHalParameters_l();
    virtual     void        cacheParameters_l() {}
    virtual     status_t    createAudioPatch_l(const struct audio_patch *patch, audio_patch_handle_t *handle);
    virtual     status_t    releaseAudioPatch_l(const audio_patch_handle_t handle);
    virtual     void        getAudioPortConfig(struct audio_port_config *config);

    virtual     sp<StreamHalInterface> stream() const { return mHalStream; }
    virtual     status_t    addEffectChain_l(const sp<EffectChain>& chain);
    virtual     size_t      removeEffectChain_l(const sp<EffectChain>& chain);
    virtual     status_t    checkEffectCompatibility_l(const effect_descriptor_t *desc, audio_session_t sessionId);
    
    virtual     uint32_t    hasAudioSession_l(audio_session_t sessionId) const;
    virtual     status_t    setSyncEvent(const sp<SyncEvent>& event);
    virtual     bool        isValidSyncEvent(const sp<SyncEvent>& event) const;

    virtual     void        checkSilentMode_l() {}
    virtual     void        processVolume_l() {}
                void        checkInvalidTracks_l();

    virtual     audio_stream_type_t streamType() { return AUDIO_STREAM_DEFAULT; }

    virtual     void        invalidateTracks(audio_stream_type_t streamType __unused) {}

                void        dump(int fd, const Vector<String16>& args);
    virtual     void        dumpInternals(int fd, const Vector<String16>& args);
                void        dumpTracks(int fd, const Vector<String16>& args);

 protected:

                audio_attributes_t      mAttr;
                audio_session_t         mSessionId;
                audio_port_handle_t     mDeviceId;
                audio_port_handle_t     mPortId;

                wp<MmapStreamCallback>  mCallback;
                sp<StreamHalInterface>  mHalStream;
                sp<DeviceHalInterface>  mHalDevice;
                AudioHwDevice* const    mAudioHwDev;
                ActiveTracks<MmapTrack> mActiveTracks;
};
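
The MmapStreamInterface-style methods above define the lifecycle of an MMAP stream: configure the attributes, map a shared buffer with createMmapBuffer(), start()/stop() individual clients, and poll positions with getMmapPosition(); audio then flows through the shared buffer without per-buffer binder calls. The standalone model below only mirrors that call order with simplified stand-in types; it is not the real AOSP interface.

#include <cstdint>
#include <cstdio>

struct MmapBufferInfo { int32_t bufferSizeFrames = 0; };   // stand-in types
struct MmapPosition   { int64_t timeNanos = 0; int32_t positionFrames = 0; };

struct MmapStreamModel {
    bool configured = false;
    bool started = false;

    void configure() { configured = true; }                  // ~ MmapThread::configure()
    int  createMmapBuffer(int32_t minSizeFrames, MmapBufferInfo* info) {
        if (!configured) return -1;
        info->bufferSizeFrames = minSizeFrames;              // the HAL maps real memory here
        return 0;
    }
    int  start(int32_t* handle) { *handle = 1; started = true; return 0; }
    int  stop(int32_t handle)   { (void)handle; started = false; return 0; }
    int  getMmapPosition(MmapPosition* pos) { pos->positionFrames = 0; return 0; }
    void standby() { started = false; }                      // ~ MmapThread::standby()
};

int main() {
    MmapStreamModel stream;
    stream.configure();
    MmapBufferInfo info;
    stream.createMmapBuffer(256, &info);    // clients read/write this buffer directly
    int32_t handle;
    stream.start(&handle);
    MmapPosition pos;
    stream.getMmapPosition(&pos);           // clients track progress via shared counters
    stream.stop(handle);
    stream.standby();
    std::printf("mapped buffer frames: %d\n", info.bufferSizeFrames);
    return 0;
}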

7.1.14 Class MmapPlaybackThread (inherits MmapThread and VolumeInterface)
class MmapPlaybackThread : public MmapThread, public VolumeInterface
{

public:
    MmapPlaybackThread(const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
                      AudioHwDevice *hwDev, AudioStreamOut *output,
                      audio_devices_t outDevice, audio_devices_t inDevice, bool systemReady);
    virtual     ~MmapPlaybackThread() {}

    virtual     void        configure(const audio_attributes_t *attr,
                                      audio_stream_type_t streamType,
                                      audio_session_t sessionId,
                                      const sp<MmapStreamCallback>& callback,
                                      audio_port_handle_t deviceId,
                                      audio_port_handle_t portId);

                AudioStreamOut* clearOutput();

                // VolumeInterface
    virtual     void        setMasterVolume(float value);
    virtual     void        setMasterMute(bool muted);
    virtual     void        setStreamVolume(audio_stream_type_t stream, float value);
    virtual     void        setStreamMute(audio_stream_type_t stream, bool muted);
    virtual     float       streamVolume(audio_stream_type_t stream) const;

                void        setMasterMute_l(bool muted) { mMasterMute = muted; }

    virtual     void        invalidateTracks(audio_stream_type_t streamType);

    virtual     audio_stream_type_t streamType() { return mStreamType; }
    virtual     void        checkSilentMode_l();
    virtual     void        processVolume_l();

    virtual     void        dumpInternals(int fd, const Vector<String16>& args);

    virtual     bool        isOutput() const override { return true; }

protected:

                audio_stream_type_t         mStreamType;
                float                       mMasterVolume;
                float                       mStreamVolume;
                bool                        mMasterMute;
                bool                        mStreamMute;
                float                       mHalVolFloat;
                AudioStreamOut*             mOutput;
};

7.1.15 Class MmapCaptureThread definition (inherits from MmapThread)
class MmapCaptureThread : public MmapThread
{
public:
    MmapCaptureThread(const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id, AudioHwDevice *hwDev,
    			 AudioStreamIn *input, audio_devices_t outDevice, audio_devices_t inDevice, bool systemReady);
    virtual     ~MmapCaptureThread() {}
                AudioStreamIn* clearInput();
    virtual     bool           isOutput() const override { return false; }
protected:
                AudioStreamIn*  mInput;
};

7.1.16 Summary of ThreadBase, its inner classes and subclasses
ThreadBase {

	ConfigEventData
		------>
		+	IoConfigEventData
		+	PrioConfigEventData
		+	SetParameterConfigEventData
		+	CreateAudioPatchConfigEventData
		+	ReleaseAudioPatchConfigEventData
		<------

	ConfigEvent
		------>
		+	IoConfigEvent
		+	PrioConfigEvent
		+	SetParameterConfigEvent
		+	CreateAudioPatchConfigEvent
		+	ReleaseAudioPatchConfigEvent
		<------
			
	ActiveTracks
	
	ThreadBase subclasses (PlaybackThread and MmapPlaybackThread also implement VolumeInterface)
		------>
		+	PlaybackThread
		+		------>
		+		+	MixerThread
		+		+	------> DuplicatingThread
		+		+			
		+		+	DirectOutputThread
		+		+	------> OffloadThread
		+		<------
		+	AsyncCallbackThread
		+	RecordThread
		+	MmapThread
		+		------>
		+		+	MmapPlaybackThread
		+		+	MmapCaptureThread
		+		<------
		<------
}

7.2 Threads.cpp implementation analysis

7.2.1 ThreadBase constructor implementation
@ /frameworks/av/services/audioflinger/Threads.cpp

// The constructor mainly initializes the member variables
AudioFlinger::ThreadBase::ThreadBase(const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
        audio_devices_t outDevice, audio_devices_t inDevice, type_t type, bool systemReady)
    :   Thread(false /*canCallJava*/),
        mType(type),
        mAudioFlinger(audioFlinger),
        // mSampleRate, mFrameCount, mChannelMask, mChannelCount, mFrameSize, mFormat, mBufferSize
        // are set by PlaybackThread::readOutputParameters_l() or
        // RecordThread::readInputParameters_l()
        //FIXME: mStandby should be true here. Is this some kind of hack?
        mStandby(false), mOutDevice(outDevice), mInDevice(inDevice),
        mPrevOutDevice(AUDIO_DEVICE_NONE), mPrevInDevice(AUDIO_DEVICE_NONE),
        mAudioSource(AUDIO_SOURCE_DEFAULT), mId(id),
        // mName will be set by concrete (non-virtual) subclass
        mDeathRecipient(new PMDeathRecipient(this)),
        mSystemReady(systemReady),
        mSignalPending(false)
{
    memset(&mPatch, 0, sizeof(struct audio_patch));
}



7.2.2 Setting parameters: ThreadBase::setParameters( )

// Setting parameters is really just posting a config event to the thread
status_t AudioFlinger::ThreadBase::setParameters(const String8& keyValuePairs)
{
    ALOGV("ThreadBase::setParameters() %s", keyValuePairs.string());
    return sendSetParameterConfigEvent_l(keyValuePairs);
    ------>
        param.remove(String8(AudioParameter::keyMonoOutput));
        configEvent = new SetParameterConfigEvent(param.toString());
        sendConfigEvent_l(configEvent);
    <------
}

// sendConfigEvent_l() must be called with ThreadBase::mLock held
// Can temporarily release the lock if waiting for a reply from processConfigEvents_l().
status_t AudioFlinger::ThreadBase::sendConfigEvent_l(sp<ConfigEvent>& event)
{
    status_t status = NO_ERROR;

    if (event->mRequiresSystemReady && !mSystemReady) {
        event->mWaitStatus = false;
        mPendingConfigEvents.add(event);
        return status;
    }
    mConfigEvents.add(event);
    ALOGV("sendConfigEvent_l() num events %zu event %d", mConfigEvents.size(), event->mType);
    mWaitWorkCV.signal();
    return status;
}

sendIoConfigEvent(event, pid);
------> sendIoConfigEvent_l(event, pid);
		------>
			sp<ConfigEvent> configEvent = (ConfigEvent *)new IoConfigEvent(event, pid); 
			sendConfigEvent_l(configEvent);
		<------
<------

sendPrioConfigEvent(pid, tid, prio, forApp)
------>	sendPrioConfigEvent_l(pid, tid, prio, forApp);
		------>
			sp<ConfigEvent> configEvent = (ConfigEvent *)new PrioConfigEvent(pid, tid, prio, forApp);
   		 	sendConfigEvent_l(configEvent);
		<------
<------
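All of these sendXxxConfigEvent helpers follow the same hand-off pattern: the caller wraps its request in a ConfigEvent, appends it to mConfigEvents, and signals mWaitWorkCV; the thread's own loop later drains the queue in processConfigEvents_l() (next section). The following is a minimal, self-contained model of that producer/consumer hand-off, written with plain std::thread / std::condition_variable rather than the real AudioFlinger classes, purely to make the mechanism explicit:

#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <string>
#include <thread>

// Simplified stand-ins for ConfigEvent / mConfigEvents / mWaitWorkCV / mLock.
struct ConfigEvent { int type; std::string payload; };

std::mutex gLock;                       // plays the role of ThreadBase::mLock
std::condition_variable gWaitWorkCV;    // plays the role of mWaitWorkCV
std::deque<ConfigEvent> gConfigEvents;  // plays the role of mConfigEvents
bool gExit = false;

// Producer side: roughly what sendConfigEvent_l() boils down to.
void sendConfigEvent(ConfigEvent event) {
    std::lock_guard<std::mutex> l(gLock);
    gConfigEvents.push_back(std::move(event));
    gWaitWorkCV.notify_one();           // wake the worker, like mWaitWorkCV.signal()
}

// Consumer side: roughly what the thread loop + processConfigEvents_l() boil down to.
void threadLoop() {
    std::unique_lock<std::mutex> l(gLock);
    for (;;) {
        while (!gConfigEvents.empty()) {
            ConfigEvent event = gConfigEvents.front();
            gConfigEvents.pop_front();
            std::printf("processing event type %d (%s)\n",
                        event.type, event.payload.c_str());
        }
        if (gExit) break;
        gWaitWorkCV.wait(l);            // sleep until the next event is posted
    }
}

int main() {
    std::thread worker(threadLoop);
    sendConfigEvent({/*type=*/2, "routing=2"});   // e.g. a SET_PARAMETER-style event
    {
        std::lock_guard<std::mutex> l(gLock);
        gExit = true;                   // the real thread loop runs for the thread's lifetime
    }
    gWaitWorkCV.notify_one();
    worker.join();
    return 0;
}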
 

7.2.3 Event handling: ThreadBase::processConfigEvents_l( )

In processConfigEvents_l(), the following event types are handled:

CFG_EVENT_IO,						// calls ioConfigChanged()
CFG_EVENT_PRIO,						// calls requestPriority() to request thread priority
CFG_EVENT_SET_PARAMETER,			// calls checkForNewParameter_l()
CFG_EVENT_CREATE_AUDIO_PATCH,		// calls createAudioPatch_l() to create an audio patch
CFG_EVENT_RELEASE_AUDIO_PATCH,		// calls releaseAudioPatch_l() to release an audio patch
// post condition: mConfigEvents.isEmpty()
void AudioFlinger::ThreadBase::processConfigEvents_l()
{
    bool configChanged = false;
    while (!mConfigEvents.isEmpty()) {
        ALOGV("processConfigEvents_l() remaining events %zu", mConfigEvents.size());
        sp<ConfigEvent> event = mConfigEvents[0];
        mConfigEvents.removeAt(0);
        switch (event->mType) {
        case CFG_EVENT_PRIO: {
            PrioConfigEventData *data = (PrioConfigEventData *)event->mData.get();
            // FIXME Need to understand why this has to be done asynchronously
            int err = requestPriority(data->mPid, data->mTid, data->mPrio, data->mForApp, true /*asynchronous*/);
        } break;
        case CFG_EVENT_IO: {
            IoConfigEventData *data = (IoConfigEventData *)event->mData.get();
            ioConfigChanged(data->mEvent, data->mPid);
        } break;
        case CFG_EVENT_SET_PARAMETER: {
            SetParameterConfigEventData *data = (SetParameterConfigEventData *)event->mData.get();
            if (checkForNewParameter_l(data->mKeyValuePairs, event->mStatus)) {
                configChanged = true;
                mLocalLog.log("CFG_EVENT_SET_PARAMETER: (%s) configuration changed", 
                			data->mKeyValuePairs.string());
            }
        } break;
        case CFG_EVENT_CREATE_AUDIO_PATCH: {
            const audio_devices_t oldDevice = getDevice();
            CreateAudioPatchConfigEventData *data = (CreateAudioPatchConfigEventData *)event->mData.get();
            event->mStatus = createAudioPatch_l(&data->mPatch, &data->mHandle);
            const audio_devices_t newDevice = getDevice();
            mLocalLog.log("CFG_EVENT_CREATE_AUDIO_PATCH: old device %#x (%s) new device %#x (%s)",
                    	(unsigned)oldDevice, devicesToString(oldDevice).c_str(),
                    	(unsigned)newDevice, devicesToString(newDevice).c_str());
        } break;
        case CFG_EVENT_RELEASE_AUDIO_PATCH: {
            const audio_devices_t oldDevice = getDevice();
            ReleaseAudioPatchConfigEventData *data = (ReleaseAudioPatchConfigEventData *)event->mData.get();
            event->mStatus = releaseAudioPatch_l(data->mHandle);
            const audio_devices_t newDevice = getDevice();
            mLocalLog.log("CFG_EVENT_RELEASE_AUDIO_PATCH: old device %#x (%s) new device %#x (%s)",
                    	(unsigned)oldDevice, devicesToString(oldDevice).c_str(),
                   		(unsigned)newDevice, devicesToString(newDevice).c_str());
        } break;
        default:
            ALOG_ASSERT(false, "processConfigEvents_l() unknown event type %d", event->mType);
            break;
        }
        {
            Mutex::Autolock _l(event->mLock);
            if (event->mWaitStatus) {
                event->mWaitStatus = false;
                event->mCond.signal();
            }
        }
        ALOGV_IF(mConfigEvents.isEmpty(), "processConfigEvents_l() DONE thread %p", this);
    }

    if (configChanged) {
        cacheParameters_l();
    }
}

7.2.4 Creating an effect: ThreadBase::createEffect_l( )
  1. Check that the thread (and its stream) is initialized
  2. Check, based on the thread type, whether the requested effect is compatible
  3. Look up the effect chain for the requested audio session
  4. If no chain exists, create one and configure its strategy
  5. If the chain already exists, look up the effect by its descriptor
  6. If the effect is not registered yet, allocate an effect ID, register it, create the effect module, and configure its input/output devices and mode
  7. Create an EffectHandle and connect it to the effect module
  8. Add the handle to the effect module and report the enabled state
// ThreadBase::createEffect_l() must be called with AudioFlinger::mLock held
sp<AudioFlinger::EffectHandle> AudioFlinger::ThreadBase::createEffect_l(
        const sp<AudioFlinger::Client>& client,
        const sp<IEffectClient>& effectClient,
        int32_t priority,
        audio_session_t sessionId,
        effect_descriptor_t *desc,
        int *enabled,
        status_t *status,
        bool pinned)
{

    lStatus = initCheck();	// 1. check that the thread (and its stream) is initialized
    ALOGV("createEffect_l() thread %p effect %s on session %d", this, desc->name, sessionId);

    { // scope for mLock
		// 2. check whether the requested effect is compatible with this thread type
        lStatus = checkEffectCompatibility_l(desc, sessionId);
		// 3. look up the effect chain for the requested session
        // check for existing effect chain with the requested audio session
        chain = getEffectChain_l(sessionId);
        if (chain == 0) {
            // create a new chain for this session
            ALOGV("createEffect_l() new effect chain for session %d", sessionId);
            // 4. no chain yet: create one and configure its strategy
            chain = new EffectChain(this, sessionId);
            addEffectChain_l(chain);
            chain->setStrategy(getStrategyForSession_l(sessionId));
            chainCreated = true;
        } else {
        	// 5. chain already exists: just look up the effect by its descriptor
            effect = chain->getEffectFromDesc_l(desc);
        }

        ALOGV("createEffect_l() got effect %p on chain %p", effect.get(), chain.get());
		// 6. effect not registered yet: allocate an effect ID, register it, and create the effect module
        if (effect == 0) {
            effectId = mAudioFlinger->nextUniqueId(AUDIO_UNIQUE_ID_USE_EFFECT);
            // Check CPU and memory usage
            lStatus = AudioSystem::registerEffect(desc, mId, chain->strategy(), sessionId, effectId);
            effectRegistered = true;
            // create a new effect module if none present in the chain
            lStatus = chain->createEffect_l(effect, this, desc, effectId, sessionId, pinned);
        
            effectCreated = true;
			// configure the effect's input/output devices, mode and audio source
            effect->setDevice(mOutDevice);
            effect->setDevice(mInDevice);
            effect->setMode(mAudioFlinger->getMode());
            effect->setAudioSource(mAudioSource);
        }
        // 7. create an EffectHandle and connect it to the effect module
        handle = new EffectHandle(effect, client, effectClient, priority);
        lStatus = handle->initCheck();
        // 8. add the handle to the effect module
        if (lStatus == OK) {
            lStatus = effect->addHandle(handle.get());
        }
        // 9. report the effect's enabled state back to the caller
        if (enabled != NULL) {
            *enabled = (int)effect->isEnabled();
        }
    }

    *status = lStatus;
    return handle;
}


7.2.5 ActiveTracks operations
  • Add a track
    mActiveTracks.add(track);

  • Remove a track
    mActiveTracks.remove(track);

  • Clear all tracks
    mActiveTracks.clear();
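In the real class, ActiveTracks layers extra bookkeeping on top of these three operations; the model below keeps only the container behavior. A minimal sketch, with simplified stand-in types rather than the real AudioFlinger classes, of how a playback loop typically uses it:

#include <algorithm>
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

// Stand-in for a Track; sp<Track> is modeled with std::shared_ptr.
struct Track {
    std::string name;
    bool finished = false;
};

// Minimal model of the three ActiveTracks operations used above.
class ActiveTracksModel {
public:
    void add(const std::shared_ptr<Track>& track)    { mTracks.push_back(track); }
    void remove(const std::shared_ptr<Track>& track) {
        mTracks.erase(std::remove(mTracks.begin(), mTracks.end(), track), mTracks.end());
    }
    void clear()                                      { mTracks.clear(); }

    // Iteration support so a mixer-style loop can walk the active tracks.
    auto begin() { return mTracks.begin(); }
    auto end()   { return mTracks.end(); }

private:
    std::vector<std::shared_ptr<Track>> mTracks;
};

int main() {
    ActiveTracksModel activeTracks;
    auto music = std::make_shared<Track>();
    music->name = "music";
    auto chime = std::make_shared<Track>();
    chime->name = "chime";
    chime->finished = true;

    activeTracks.add(music);
    activeTracks.add(chime);

    // A playback loop typically mixes every active track, then drops the
    // finished ones, which is what remove() is used for.
    std::vector<std::shared_ptr<Track>> done;
    for (const auto& t : activeTracks) {
        std::printf("mixing track %s\n", t->name.c_str());
        if (t->finished) done.push_back(t);
    }
    for (const auto& t : done) activeTracks.remove(t);

    activeTracks.clear();   // e.g. when the thread is being torn down
    return 0;
}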


7.2.6 PlaybackThread class method implementations
7.2.6.1 PlaybackThread initialization
// ----------------------------------------------------------------------------
//      Playback
// ----------------------------------------------------------------------------

AudioFlinger::PlaybackThread::PlaybackThread(const sp<AudioFlinger>& audioFlinger,
                                             AudioStreamOut* output,
                                             audio_io_handle_t id,
                                             audio_devices_t device,
                                             type_t type,
                                             bool systemReady)
    :   ThreadBase(audioFlinger, id, device, AUDIO_DEVICE_NONE, type, systemReady),
        mNormalFrameCount(0), mSinkBuffer(NULL),
        mMixerBufferEnabled(AudioFlinger::kEnableExtendedPrecision),
        mMixerBuffer(NULL),
        mMixerBufferSize(0),
        mMixerBufferFormat(AUDIO_FORMAT_INVALID),
        mMixerBufferValid(false),
        mEffectBufferEnabled(AudioFlinger::kEnableExtendedPrecision),
        mEffectBuffer(NULL),
        mEffectBufferSize(0),
        mEffectBufferFormat(AUDIO_FORMAT_INVALID),
        mEffectBufferValid(false),
        mSuspended(0), mBytesWritten(0),
        mFramesWritten(0),
        mSuspendedFrames(0),
        mActiveTracks(&this->mLocalLog),
        // mStreamTypes[] initialized in constructor body
        mOutput(output),
        mLastWriteTime(-1), mNumWrites(0), mNumDelayedWrites(0), mInWrite(false),
        mMixerStatus(MIXER_IDLE),
        mMixerStatusIgnoringFastTracks(MIXER_IDLE),
        mStandbyDelayNs(AudioFlinger::mStandbyTimeInNsecs),
        mBytesRemaining(0),
        mCurrentWriteLength(0),
        mUseAsyncWrite(false),
        mWriteAckSequence(0),
        mDrainSequence(0),
        mScreenState(AudioFlinger::mScreenState),
        // index 0 is reserved for normal mixer's submix
        mFastTrackAvailMask(((1 << FastMixerState::sMaxFastTracks) - 1) & ~1),
        mHwSupportsPause(false), mHwPaused(false), mFlushPending(false),
        mLeftVolFloat(-1.0), mRightVolFloat(-1.0), mHwSupportsSuspend(false)
{
    snprintf(mThreadName, kThreadNameLength, "AudioOut_%X", id);
    mNBLogWriter = audioFlinger->newWriter_l(kLogSize, mThreadName);

    // Assumes constructor is called by AudioFlinger with it's mLock held, but
    // it would be safer to explicitly pass initial masterVolume/masterMute as
    // parameter.
    //
    // If the HAL we are using has support for master volume or master mute,
    // then do not attenuate or mute during mixing (just leave the volume at 1.0
    // and the mute set to false).
    mMasterVolume = audioFlinger->masterVolume_l();
    mMasterMute = audioFlinger->masterMute_l();
    if (mOutput && mOutput->audioHwDev) {
        if (mOutput->audioHwDev->canSetMasterVolume()) {
            mMasterVolume = 1.0;
        }

        if (mOutput->audioHwDev->canSetMasterMute()) {
            mMasterMute = false;
        }
    }

    readOutputParameters_l();

    // ++ operator does not compile
    for (audio_stream_type_t stream = AUDIO_STREAM_MIN; stream < AUDIO_STREAM_CNT;
            stream = (audio_stream_type_t) (stream + 1)) {
        mStreamTypes[stream].volume = mAudioFlinger->streamVolume_l(stream);
        mStreamTypes[stream].mute = mAudioFlinger->streamMute_l(stream);
    }
}

// Thread virtuals

void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mThreadName, ANDROID_PRIORITY_URGENT_AUDIO);
}

After PlaybackThread is instantiated and its members are initialized in the constructor, onFirstRef() is called, which runs the thread.
The thread is named AudioOut_%X:
snprintf(mThreadName, kThreadNameLength, "AudioOut_%X", id);
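This is the standard libutils Thread pattern that ThreadBase inherits: override threadLoop(), and let the first strong reference trigger onFirstRef(), which calls run() to actually spawn the worker thread. A minimal sketch of that pattern follows; the class name, log tag and thread name are made up for illustration, only the Thread API calls (run, threadLoop, exitPending, requestExit) are real:

#define LOG_TAG "DemoAudioThread"
#include <utils/Log.h>
#include <utils/threads.h>   // android::Thread, as used by ThreadBase

using namespace android;

// Illustrative only: mirrors the PlaybackThread start-up pattern, not the real class.
class DemoAudioThread : public Thread {
public:
    DemoAudioThread() : Thread(false /*canCallJava*/) {}

protected:
    // Called when the first sp<> to this object is created,
    // just like PlaybackThread::onFirstRef() above.
    void onFirstRef() override {
        run("DemoAudioOut", ANDROID_PRIORITY_URGENT_AUDIO);
    }

    // Returning true means "call me again"; returning false ends the loop.
    bool threadLoop() override {
        ALOGV("one pass of mixing/writing would happen here");
        return !exitPending();
    }
};

// Usage: creating the strong pointer is enough to start the thread.
//     sp<DemoAudioThread> t = new DemoAudioThread();
//     ...
//     t->requestExit();   // later, ask the loop to stop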

7.2.6.2 PlaybackThread::createTrack_l( )

To be updated…

@ \frameworks\av\services\audioflinger\Threads.cpp

// PlaybackThread::createTrack_l() must be called with AudioFlinger::mLock held
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(/* ... long parameter list abridged in this excerpt ... */)
{
    // special case for FAST flag considered OK if fast mixer is present
    if (hasFastMixer()) {
        outputFlags = (audio_output_flags_t)(outputFlags | AUDIO_OUTPUT_FLAG_FAST);
    }

    // Check if requested flags are compatible with output stream flags
    if ((*flags & outputFlags) != *flags) {
        ALOGW("createTrack_l(): mismatch between requested flags (%08x) and output flags (%08x)", *flags, outputFlags);
        *flags = (audio_output_flags_t)(*flags & outputFlags);
    }

    // client expresses a preference for FAST, but we get the final say
    if (*flags & AUDIO_OUTPUT_FLAG_FAST) {
      if (
            // PCM data
            audio_is_linear_pcm(format) &&
            // TODO: extract as a data library function that checks that a computationally
            // expensive downmixer is not required: isFastOutputChannelConversion()
            (channelMask == mChannelMask ||
                    mChannelMask != AUDIO_CHANNEL_OUT_STEREO ||
                    (channelMask == AUDIO_CHANNEL_OUT_MONO
                            /* && mChannelMask == AUDIO_CHANNEL_OUT_STEREO */)) &&
            // hardware sample rate
            (sampleRate == mSampleRate) &&
            // normal mixer has an associated fast mixer
            hasFastMixer() &&
            // there are sufficient fast track slots available
            (mFastTrackAvailMask != 0)
            // FIXME test that MixerThread for this fast track has a capable output HAL
            // FIXME add a permission test also?
        ) {
        // static tracks can have any nonzero framecount, streaming tracks check against minimum.
        if (sharedBuffer == 0) {
            // read the fast track multiplier property the first time it is needed
            int ok = pthread_once(&sFastTrackMultiplierOnce, sFastTrackMultiplierInit);
            if (ok != 0) {
                ALOGE("%s pthread_once failed: %d", __func__, ok);
            }
            frameCount = max(frameCount, mFrameCount * sFastTrackMultiplier); // incl framecount 0
        }

        // check compatibility with audio effects.
        { // scope for mLock
            Mutex::Autolock _l(mLock);
            for (audio_session_t session : {
                    AUDIO_SESSION_OUTPUT_STAGE,
                    AUDIO_SESSION_OUTPUT_MIX,
                    sessionId,
                }) {
                sp<EffectChain> chain = getEffectChain_l(session);
                if (chain.get() != nullptr) {
                    audio_output_flags_t old = *flags;
                    chain->checkOutputFlagCompatibility(flags);
                    if (old != *flags) {
                        ALOGV("AUDIO_OUTPUT_FLAGS denied by effect, session=%d old=%#x new=%#x",
                                (int)session, (int)old, (int)*flags);
                    }
                }
            }
        }
        ALOGV_IF((*flags & AUDIO_OUTPUT_FLAG_FAST) != 0,
                 "AUDIO_OUTPUT_FLAG_FAST accepted: frameCount=%zu mFrameCount=%zu",
                 frameCount, mFrameCount);
      } else {
        ALOGV("AUDIO_OUTPUT_FLAG_FAST denied: sharedBuffer=%p frameCount=%zu "
                "mFrameCount=%zu format=%#x mFormat=%#x isLinear=%d channelMask=%#x "
                "sampleRate=%u mSampleRate=%u "
                "hasFastMixer=%d tid=%d fastTrackAvailMask=%#x",
                sharedBuffer.get(), frameCount, mFrameCount, format, mFormat,
                audio_is_linear_pcm(format),
                channelMask, sampleRate, mSampleRate, hasFastMixer(), tid, mFastTrackAvailMask);
        *flags = (audio_output_flags_t)(*flags & ~AUDIO_OUTPUT_FLAG_FAST);
      }
    }
    // For normal PCM streaming tracks, update minimum frame count.
    // For compatibility with AudioTrack calculation, buffer depth is forced
    // to be at least 2 x the normal mixer frame count and cover audio hardware latency.
    // This is probably too conservative, but legacy application code may depend on it.
    // If you change this calculation, also review the start threshold which is related.
    if (!(*flags & AUDIO_OUTPUT_FLAG_FAST)
            && audio_has_proportional_frames(format) && sharedBuffer == 0) {
        // this must match AudioTrack.cpp calculateMinFrameCount().
        // TODO: Move to a common library
        uint32_t latencyMs = 0;
        lStatus = mOutput->stream->getLatency(&latencyMs);
        if (lStatus != OK) {
            ALOGE("Error when retrieving output stream latency: %d", lStatus);
            goto Exit;
        }
        uint32_t minBufCount = latencyMs / ((1000 * mNormalFrameCount) / mSampleRate);
        if (minBufCount < 2) {
            minBufCount = 2;
        }
        // For normal mixing tracks, if speed is > 1.0f (normal), AudioTrack
        // or the client should compute and pass in a larger buffer request.
        size_t minFrameCount =
                minBufCount * sourceFramesNeededWithTimestretch(
                        sampleRate, mNormalFrameCount,
                        mSampleRate, AUDIO_TIMESTRETCH_SPEED_NORMAL /*speed*/);
        if (frameCount < minFrameCount) { // including frameCount == 0
            frameCount = minFrameCount;
        }
    }
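    // Worked example (illustrative numbers, not taken from the source): with a
    // normal mixer period of mNormalFrameCount = 960 frames at mSampleRate =
    // 48000 Hz, one period lasts (1000 * 960) / 48000 = 20 ms. For a HAL latency
    // of latencyMs = 50, minBufCount = 50 / 20 = 2 (already at the minimum of 2),
    // so a client running at the same 48 kHz gets a minFrameCount of roughly
    // 2 * 960 ~= 1920 frames (sourceFramesNeededWithTimestretch adds a few frames
    // of rounding margin); a smaller requested frameCount is raised to that value.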
    *pFrameCount = frameCount;

    switch (mType) {

    case DIRECT:
        if (audio_is_linear_pcm(format)) { // TODO maybe use audio_has_proportional_frames()?
            if (sampleRate != mSampleRate || format != mFormat || channelMask != mChannelMask) {
                ALOGE("createTrack_l() Bad parameter: sampleRate %u format %#x, channelMask 0x%08x "
                        "for output %p with format %#x",
                        sampleRate, format, channelMask, mOutput, mFormat);
                lStatus = BAD_VALUE;
                goto Exit;
            }
        }
        break;

    case OFFLOAD:
        if (sampleRate != mSampleRate || format != mFormat || channelMask != mChannelMask) {
            ALOGE("createTrack_l() Bad parameter: sampleRate %d format %#x, channelMask 0x%08x \""
                    "for output %p with format %#x",
                    sampleRate, format, channelMask, mOutput, mFormat);
            lStatus = BAD_VALUE;
            goto Exit;
        }
        break;

    default:
        if (!audio_is_linear_pcm(format)) {
                ALOGE("createTrack_l() Bad parameter: format %#x \""
                        "for output %p with format %#x",
                        format, mOutput, mFormat);
                lStatus = BAD_VALUE;
                goto Exit;
        }
        if (sampleRate > mSampleRate * AUDIO_RESAMPLER_DOWN_RATIO_MAX) {
            ALOGE("Sample rate out of range: %u mSampleRate %u", sampleRate, mSampleRate);
            lStatus = BAD_VALUE;
            goto Exit;
        }
        break;

    }

    lStatus = initCheck();
    if (lStatus != NO_ERROR) {
        ALOGE("createTrack_l() audio driver not initialized");
        goto Exit;
    }

    { // scope for mLock
        Mutex::Autolock _l(mLock);

        // all tracks in same audio session must share the same routing strategy otherwise
        // conflicts will happen when tracks are moved from one output to another by audio policy
        // manager
        uint32_t strategy = AudioSystem::getStrategyForStream(streamType);
        for (size_t i = 0; i < mTracks.size(); ++i) {
            sp<Track> t = mTracks[i];
            if (t != 0 && t->isExternalTrack()) {
                uint32_t actual = AudioSystem::getStrategyForStream(t->streamType());
                if (sessionId == t->sessionId() && strategy != actual) {
                    ALOGE("createTrack_l() mismatched strategy; expected %u but found %u",
                            strategy, actual);
                    lStatus = BAD_VALUE;
                    goto Exit;
                }
            }
        }

        track = new Track(this, client, streamType, sampleRate, format,
                          channelMask, frameCount,
                          nullptr /* buffer */, (size_t)0 /* bufferSize */, sharedBuffer,
                          sessionId, uid, *flags, TrackBase::TYPE_DEFAULT, portId);

        lStatus = track != 0 ? track->initCheck() : (status_t) NO_MEMORY;
        if (lStatus != NO_ERROR) {
            ALOGE("createTrack_l() initCheck failed %d; no control block?", lStatus);
            // track must be cleared from the caller as the caller has the AF lock
            goto Exit;
        }
        mTracks.add(track);

        sp<EffectChain> chain = getEffectChain_l(sessionId);
        if (chain != 0) {
            ALOGV("createTrack_l() setting main buffer %p", chain->inBuffer());
            track->setMainBuffer(chain->inBuffer());
            chain->setStrategy(AudioSystem::getStrategyForStream(track->streamType()));
            chain->incTrackCnt();
        }

        if ((*flags & AUDIO_OUTPUT_FLAG_FAST) && (tid != -1)) {
            pid_t callingPid = IPCThreadState::self()->getCallingPid();
            // we don't have CAP_SYS_NICE, nor do we want to have it as it's too powerful,
            // so ask activity manager to do this on our behalf
            sendPrioConfigEvent_l(callingPid, tid, kPriorityAudioApp, true /*forApp*/);
        }
    }

    lStatus = NO_ERROR;

    // ... (Exit label, error cleanup and the return of the new track are omitted in this excerpt)
}

The thread type is stored in the private member variable mType:

@ /frameworks/av/services/audioflinger/Threads.cpp

// map a thread type to its name
const char *AudioFlinger::ThreadBase::threadTypeToString(AudioFlinger::ThreadBase::type_t type)
{
    switch (type) {
    case MIXER: 		return "MIXER";				--->	MixerThread
    case DIRECT: 		return "DIRECT";			--->	DirectOutputThread
    case DUPLICATING: 	return "DUPLICATING";		--->	DuplicatingThread
    case RECORD: 		return "RECORD";			--->	RecordThread
    case OFFLOAD: 		return "OFFLOAD";			--->	OffloadThread
    case MMAP: 			return "MMAP";				--->	MmapThread
    default: 			return "unknown";
    }
}
@ /frameworks/av/services/audioflinger/Threads.cpp
// map an audio source type to its name
const char *sourceToString(audio_source_t source)
{
    switch (source) {
    case AUDIO_SOURCE_DEFAULT:              return "default";
    case AUDIO_SOURCE_MIC:                  return "mic";
    case AUDIO_SOURCE_VOICE_UPLINK:         return "voice uplink";
    case AUDIO_SOURCE_VOICE_DOWNLINK:       return "voice downlink";
    case AUDIO_SOURCE_VOICE_CALL:           return "voice call";
    case AUDIO_SOURCE_CAMCORDER:            return "camcorder";
    case AUDIO_SOURCE_VOICE_RECOGNITION:    return "voice recognition";
    case AUDIO_SOURCE_VOICE_COMMUNICATION:  return "voice communication";
    case AUDIO_SOURCE_REMOTE_SUBMIX:        return "remote submix";
    case AUDIO_SOURCE_UNPROCESSED:          return "unprocessed";
    case AUDIO_SOURCE_FM_TUNER:             return "FM tuner";
    case AUDIO_SOURCE_HOTWORD:              return "hotword";
    default:                                return "unknown";
    }
}

8. Audio EffectModule analysis

Audio EffectModule is mainly a wrapper around the effect library.
It provides the process() and command() implementations to the different thread clients.

8.1 EffectModule class definition

Let's first look at the class definition:

@ \frameworks\av\services\audioflinger\Effects.h

class EffectModule : public RefBase {
public:
	// constructor
    EffectModule(ThreadBase *thread, const wp<AudioFlinger::EffectChain>& chain,
                    effect_descriptor_t *desc, int id, audio_session_t sessionId, bool pinned);
    virtual ~EffectModule();		// destructor
    
    enum effect_state {				// effect state machine states
        IDLE,		// 0
        RESTART,	// 1
        STARTING,	// 2
        ACTIVE,		// 3
        STOPPING,	// 4
        STOPPED,	// 5
        DESTROYED	// 6
    };
	
    int	id() const { return mId; }	// return this instance's unique ID
    void process();					// audio processing entry point
    bool updateState();				// update the effect state machine
    
    // effect command handling (forwards commands to the effect HAL)
    status_t command(uint32_t cmdCode,uint32_t cmdSize,void *pCmdData, uint32_t *replySize,void *pReplyData);

    void reset_l();			// reset the effect: sends command( EFFECT_CMD_RESET )
    status_t configure();	// configure the effect's audio format: sends command( EFFECT_CMD_SET_CONFIG )
    status_t init();		// initialize the effect: sends command( EFFECT_CMD_INIT )
    effect_state state() const { return mState; }	// return the current state
    uint32_t status() { return mStatus; }
    audio_session_t sessionId() const { return mSessionId; }
    status_t setEnabled(bool enabled);	// enable/disable the EffectModule
    status_t setEnabled_l(bool enabled);// enable/disable the EffectModule (lock held)
    bool isEnabled() const;				// whether the EffectModule is currently enabled
    bool isProcessEnabled() const;

    void setInBuffer(const sp<EffectBufferHalInterface>& buffer); // set the input buffer
    int16_t *inBuffer() const {			// return the input buffer
        return mInBuffer != 0 ? reinterpret_cast<int16_t*>(mInBuffer->ptr()) : NULL;
    }
    void        setOutBuffer(const sp<EffectBufferHalInterface>& buffer); // set the output buffer
    int16_t     *outBuffer() const {	// return the output buffer
        return mOutBuffer != 0 ? reinterpret_cast<int16_t*>(mOutBuffer->ptr()) : NULL;
    }
    void        setChain(const wp<EffectChain>& chain) { mChain = chain; }		// set the parent effect chain
    void        setThread(const wp<ThreadBase>& thread) { mThread = thread; }	// set the parent thread
    const wp<ThreadBase>& thread() { return mThread; }							// return the parent thread

    status_t addHandle(EffectHandle *handle);		// insert the handle into the mHandles list
    ssize_t disconnectHandle(EffectHandle *handle, bool unpinIfLast); // disconnect a handle; unpin if it is the last one
    ssize_t removeHandle(EffectHandle *handle);		// remove the handle
    ssize_t removeHandle_l(EffectHandle *handle);

    const effect_descriptor_t& desc() const { return mDescriptor; }
    wp<EffectChain>&     chain() { return mChain; }

    status_t         setDevice(audio_devices_t device);
    status_t         setVolume(uint32_t *left, uint32_t *right, bool controller);
    status_t         setMode(audio_mode_t mode);
    status_t         setAudioSource(audio_source_t source);
    status_t         start();
    status_t         stop();
    void             setSuspended(bool suspended);
    bool             suspended() const;

    EffectHandle*    controlHandle_l();

    bool             isPinned() const { return mPinned; }
    void             unPin() { mPinned = false; }
    bool             purgeHandles();
    void             lock() { mLock.lock(); }
    void             unlock() { mLock.unlock(); }
    bool             isOffloadable() const
                        { return (mDescriptor.flags & EFFECT_FLAG_OFFLOAD_SUPPORTED) != 0; }
    bool             isImplementationSoftware() const
                        { return (mDescriptor.flags & EFFECT_FLAG_HW_ACC_MASK) == 0; }
    bool             isProcessImplemented() const
                        { return (mDescriptor.flags & EFFECT_FLAG_NO_PROCESS) == 0; }
    status_t         setOffloaded(bool offloaded, audio_io_handle_t io);
    bool             isOffloaded() const;
    void             addEffectToHal_l();
    void             release_l();

    void             dump(int fd, const Vector<String16>& args);

private:
    friend class AudioFlinger;      // for mHandles
    bool                mPinned;

    // Maximum time allocated to effect engines to complete the turn off sequence
    static const uint32_t MAX_DISABLE_TIME_MS = 10000;

    DISALLOW_COPY_AND_ASSIGN(EffectModule);

    status_t start_l();
    status_t stop_l();
    status_t remove_effect_from_hal_l();

    mutable Mutex               mLock;      // mutex for process, commands and handles list protection
    wp<ThreadBase>      mThread;    // parent thread
    wp<EffectChain>     mChain;     // parent effect chain
    const int           mId;        // this instance unique ID
    const audio_session_t mSessionId; // audio session ID
    const effect_descriptor_t mDescriptor;// effect descriptor received from effect engine
    effect_config_t     mConfig;    // input and output audio configuration
    sp<EffectHalInterface> mEffectInterface; // Effect module HAL
    sp<EffectBufferHalInterface> mInBuffer;  // Buffers for interacting with HAL
    sp<EffectBufferHalInterface> mOutBuffer;
    status_t            mStatus;    // initialization status
    effect_state        mState;     // current activation state
    Vector<EffectHandle *> mHandles;    // list of client handles
                // First handle in mHandles has highest priority and controls the effect module
    uint32_t mMaxDisableWaitCnt;    // maximum grace period before forcing an effect off after
                                    // sending disable command.
    uint32_t mDisableWaitCnt;       // current process() calls count during disable period.
    bool     mSuspended;            // effect is suspended: temporarily disabled by framework
    bool     mOffloaded;            // effect is currently offloaded to the audio DSP
    wp<AudioFlinger>    mAudioFlinger;
};
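Most of the client-facing control above ultimately funnels through command(), whose codes come from the effect HAL header hardware/audio_effect.h. A rough sketch of how a caller (normally an EffectHandle) might drive an effect through that method, assuming it is compiled inside the android namespace like Effects.cpp and that 'effect' is a valid sp<EffectModule>; the parameter id and value below are placeholders, not real effect parameters:

#include <hardware/audio_effect.h>   // EFFECT_CMD_*, effect_param_t

status_t enableAndTune(const sp<EffectModule>& effect) {
    // 1. Enable processing: no payload, a 32-bit status comes back in the reply.
    uint32_t reply = 0;
    uint32_t replySize = sizeof(reply);
    status_t status = effect->command(EFFECT_CMD_ENABLE,
                                      0, nullptr, &replySize, &reply);
    if (status != NO_ERROR || reply != 0) {
        return status != NO_ERROR ? status : static_cast<status_t>(reply);
    }

    // 2. Set a parameter: the payload is an effect_param_t header followed by
    //    the parameter id and its value (both effect-specific; placeholders here).
    int32_t buffer[5] = {};                      // header (3 words) + id + value
    effect_param_t* p = reinterpret_cast<effect_param_t*>(buffer);
    p->psize = sizeof(int32_t);                  // size of the parameter id
    p->vsize = sizeof(int32_t);                  // size of the value
    int32_t* data = reinterpret_cast<int32_t*>(p->data);
    data[0] = 0;                                 // hypothetical parameter id
    data[1] = 1000;                              // hypothetical value
    replySize = sizeof(reply);
    return effect->command(EFFECT_CMD_SET_PARAM,
                           sizeof(buffer), buffer, &replySize, &reply);
}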