Analysis of Android O's audio ducking during mixing

Preface

This started from a scenario where navigation voice prompts are mixed with a media player: whenever a navigation prompt is playing, the media player's volume is lowered by default. My initial line of investigation was the audio-focus duck mechanism in AudioService, but it turns out that the wrapped setWillPauseWhenDucked is just a flag, and apps do not necessarily call it. AudioService is the implementation behind the AudioManager audio interfaces; besides handling the common interface calls, it also contains the focus-handling mechanism and monitors audio playback state.

Analysis

Tracing the code flow of AudioService, a PlaybackActivityMonitor is initialized in its constructor; as the name suggests, it is a monitor of playback activity:

public AudioService(Context context) {
    ...
        // initialize the PlaybackActivityMonitor object
        mPlaybackMonitor =
            new PlaybackActivityMonitor(context, MAX_STREAM_VOLUME[AudioSystem.STREAM_ALARM]);
 
        mMediaFocusControl = new MediaFocusControl(mContext, mPlaybackMonitor);
    ...
}

PlaybackActivityMonitor implements two media-player-related interfaces, PlayerDeathMonitor and PlayerFocusEnforcer; for now only PlayerFocusEnforcer needs to be mentioned.

First, analyze the concrete flow of the interface method duckPlayers:

@Override
public boolean duckPlayers(FocusRequester winner, FocusRequester loser) {
    if (DEBUG) {
        Log.v(TAG, String.format("duckPlayers: uids winner=%d loser=%d",
                winner.getClientUid(), loser.getClientUid()));
    }
    synchronized (mPlayerLock) {
        if (mPlayers.isEmpty()) {
            return true;
        }
        // check if this UID needs to be ducked (return false if not), and gather list of
        // eligible players to duck
        final Iterator<AudioPlaybackConfiguration> apcIterator = mPlayers.values().iterator();
        final ArrayList<AudioPlaybackConfiguration> apcsToDuck =
                new ArrayList<AudioPlaybackConfiguration>();
        while (apcIterator.hasNext()) {
            final AudioPlaybackConfiguration apc = apcIterator.next();
            if (!winner.hasSameUid(apc.getClientUid())
                    && loser.hasSameUid(apc.getClientUid())
                    && apc.getPlayerState() == AudioPlaybackConfiguration.PLAYER_STATE_STARTED)
            {
                if (apc.getAudioAttributes().getContentType() ==
                        AudioAttributes.CONTENT_TYPE_SPEECH) {
                    // the player is speaking, ducking will make the speech unintelligible
                    // so let the app handle it instead
                    Log.v(TAG, "not ducking player " + apc.getPlayerInterfaceId()
                            + " uid:" + apc.getClientUid() + " pid:" + apc.getClientPid()
                            + " - SPEECH");
                    return false;
                } else if (ArrayUtils.contains(UNDUCKABLE_PLAYER_TYPES, apc.getPlayerType())) { // player types explicitly excluded from ducking are skipped
                    Log.v(TAG, "not ducking player " + apc.getPlayerInterfaceId()
                            + " uid:" + apc.getClientUid() + " pid:" + apc.getClientPid()
                            + " due to type:"
                            + AudioPlaybackConfiguration.toLogFriendlyPlayerType(
                                    apc.getPlayerType()));
                    return false;
                }
                apcsToDuck.add(apc); // eligible players are added to the duck list
            }
        }
        // add the players eligible for ducking to the list, and duck them
        // (if apcsToDuck is empty, this will at least mark this uid as ducked, so when
        //  players of the same uid start, they will be ducked by DuckingManager.checkDuck())
        mDuckingManager.duckUid(loser.getClientUid(), apcsToDuck); // the players of the uid that
                // lost audio focus are finally added to the ducking list; note that when
                // apcsToDuck is empty, players of the same uid that start later are still
                // ducked via DuckingManager.checkDuck()
    }
    return true;
}
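The eligibility check above can be rehearsed outside the framework. Below is a minimal, self-contained Java sketch of the same filtering logic; `PlayerInfo` and the constants are hypothetical stand-ins for AudioPlaybackConfiguration and its fields, not the real framework classes:

```java
import java.util.ArrayList;
import java.util.List;

public class DuckFilterSketch {
    // Hypothetical stand-ins for AudioPlaybackConfiguration constants
    static final int PLAYER_STATE_STARTED = 2;
    static final int CONTENT_TYPE_SPEECH = 1;
    static final int PLAYER_TYPE_AAUDIO = 13; // example "unduckable" type
    static final int[] UNDUCKABLE_PLAYER_TYPES = { PLAYER_TYPE_AAUDIO };

    static class PlayerInfo {
        final int uid, state, contentType, playerType;
        PlayerInfo(int uid, int state, int contentType, int playerType) {
            this.uid = uid; this.state = state;
            this.contentType = contentType; this.playerType = playerType;
        }
    }

    /**
     * Mirrors the loop in duckPlayers(): collects the loser's started players,
     * or returns null when the loser must handle volume itself (speech content
     * or an unduckable player type), corresponding to "return false" above.
     */
    static List<PlayerInfo> selectPlayersToDuck(int winnerUid, int loserUid,
                                                List<PlayerInfo> players) {
        List<PlayerInfo> toDuck = new ArrayList<>();
        for (PlayerInfo p : players) {
            if (p.uid != winnerUid && p.uid == loserUid
                    && p.state == PLAYER_STATE_STARTED) {
                if (p.contentType == CONTENT_TYPE_SPEECH) {
                    return null; // ducking speech would make it unintelligible
                }
                for (int t : UNDUCKABLE_PLAYER_TYPES) {
                    if (t == p.playerType) return null; // type excluded from ducking
                }
                toDuck.add(p);
            }
        }
        return toDuck;
    }

    public static void main(String[] args) {
        List<PlayerInfo> players = new ArrayList<>();
        players.add(new PlayerInfo(1000, PLAYER_STATE_STARTED, 0, 1)); // loser, duckable
        players.add(new PlayerInfo(2000, PLAYER_STATE_STARTED, 0, 1)); // winner
        System.out.println(selectPlayersToDuck(2000, 1000, players).size()); // prints 1
    }
}
```

Returning null models the early `return false` paths: when the loser is speaking or uses an unduckable player type, the system leaves volume handling to the app instead of ducking on its behalf.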

The static inner class DuckingManager handles the logic for players that need ducking:

synchronized void duckUid(int uid, ArrayList<AudioPlaybackConfiguration> apcsToDuck) {
    if (DEBUG) {  Log.v(TAG, "DuckingManager: duckUid() uid:"+ uid); }
    if (!mDuckers.containsKey(uid)) {
        mDuckers.put(uid, new DuckedApp(uid));
    }
    final DuckedApp da = mDuckers.get(uid);
    for (AudioPlaybackConfiguration apc : apcsToDuck) {
        da.addDuck(apc, false /*skipRamp*/); // players of the matching uid are added to this DuckedApp's list
    }
}

Continue tracing into the inner class DuckedApp:

            // pre-conditions:
            //  * apc != null
            //  * apc.getPlayerState() == AudioPlaybackConfiguration.PLAYER_STATE_STARTED
            // i.e. at least one player exists, and it must be in the STARTED (playing) state
            void addDuck(@NonNull AudioPlaybackConfiguration apc, boolean skipRamp) {
                final int piid = new Integer(apc.getPlayerInterfaceId());
                if (mDuckedPlayers.contains(piid)) {
                    if (DEBUG) { Log.v(TAG, "player piid:" + piid + " already ducked"); }
                    return;
                }
                try {
                    sEventLogger.log((new DuckEvent(apc, skipRamp)).printLog(TAG));
                    apc.getPlayerProxy().applyVolumeShaper( // the core of this article: applying the ducking volume curve
                            DUCK_VSHAPE,
                            skipRamp ? PLAY_SKIP_RAMP : PLAY_CREATE_IF_NEEDED);
                    mDuckedPlayers.add(piid);
                } catch (Exception e) {
                    Log.e(TAG, "Error ducking player piid:" + piid + " uid:" + mUid, e);
                }
            }
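Taken together, duckUid() and addDuck() amount to per-UID bookkeeping that makes ducking idempotent: each player interface id (piid) is ducked at most once. A minimal sketch of that bookkeeping, where DuckedApp is a simplified stand-in and the applyVolumeShaper call is replaced by a comment:

```java
import java.util.ArrayList;
import java.util.HashMap;

public class DuckTrackerSketch {
    // Simplified stand-in for the framework's DuckedApp: remembers which
    // piids of one uid have already been ducked.
    static class DuckedApp {
        final ArrayList<Integer> duckedPiids = new ArrayList<>();

        /** Returns true if the duck shaper would be applied, false if the
         *  player was already ducked (mirrors the early return in addDuck). */
        boolean addDuck(int piid) {
            if (duckedPiids.contains(piid)) {
                return false; // already ducked, skip
            }
            // real code: apc.getPlayerProxy().applyVolumeShaper(DUCK_VSHAPE, ...)
            duckedPiids.add(piid);
            return true;
        }
    }

    final HashMap<Integer, DuckedApp> duckers = new HashMap<>();

    /** Mirrors duckUid(): ensure a DuckedApp exists for the uid, then duck each player. */
    int duckUid(int uid, int[] piids) {
        DuckedApp da = duckers.computeIfAbsent(uid, u -> new DuckedApp());
        int newlyDucked = 0;
        for (int piid : piids) {
            if (da.addDuck(piid)) newlyDucked++;
        }
        return newlyDucked;
    }

    public static void main(String[] args) {
        DuckTrackerSketch m = new DuckTrackerSketch();
        System.out.println(m.duckUid(1000, new int[] { 21, 22 })); // prints 2
        System.out.println(m.duckUid(1000, new int[] { 21 }));     // prints 0, already ducked
    }
}
```

Because the DuckedApp entry survives even when the initial list is empty, players of the same uid that start afterwards can still be caught, which is exactly what the checkDuck() note in duckPlayers() describes.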

 

Initialization of the ducking volume curve DUCK_VSHAPE:

private static final VolumeShaper.Configuration DUCK_VSHAPE =
        new VolumeShaper.Configuration.Builder()
            .setId(VOLUME_SHAPER_SYSTEM_DUCK_ID)
            .setCurve(new float[] { 0.f, 1.f } /* times */,
                new float[] { 1.f, 0.2f  } /* volumes */) // the ducked player's volume is lowered to 0.2f
            .setOptionFlags(VolumeShaper.Configuration.OPTION_FLAG_CLOCK_TIME)
            .setDuration(MediaFocusControl.getFocusRampTimeMs(
                AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK,
                new AudioAttributes.Builder().setUsage(AudioAttributes.USAGE_NOTIFICATION)
                        .build()))
            .build();
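The curve maps normalized time to a volume scale: 1.0 at the start, ramping down to 0.2 over the configured duration (which comes from MediaFocusControl.getFocusRampTimeMs()). A sketch of evaluating such a linear ramp; volumeAt is an illustrative helper and not part of the real VolumeShaper, which supports several interpolator types beyond linear:

```java
public class DuckCurveSketch {
    // The duck curve from DUCK_VSHAPE: normalized times and volume scales
    static final float[] TIMES = { 0.f, 1.f };
    static final float[] VOLUMES = { 1.f, 0.2f };

    /** Linearly interpolates the volume scale after elapsedMs of a ramp. */
    static float volumeAt(long elapsedMs, long durationMs) {
        // normalize and clamp elapsed time into [0, 1]
        float t = Math.min(1.f, Math.max(0.f, (float) elapsedMs / durationMs));
        // piecewise-linear interpolation over (TIMES, VOLUMES); with two
        // control points this reduces to a single linear segment
        return VOLUMES[0]
                + (VOLUMES[1] - VOLUMES[0]) * (t - TIMES[0]) / (TIMES[1] - TIMES[0]);
    }

    public static void main(String[] args) {
        // assuming a 500 ms ramp for illustration
        System.out.println(volumeAt(0, 500));   // full volume at the start of the ramp
        System.out.println(volumeAt(500, 500)); // ramped down to ~0.2
    }
}
```

OPTION_FLAG_CLOCK_TIME in the configuration means the ramp runs against wall-clock time rather than media time, so the duck completes in the same real duration regardless of playback rate.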

Postscript

I have been trying to connect this with the volume-curve handling at the JNI and HAL layers. The JNI layer can be traced directly, but I have not yet found where the audio-routing state is obtained, so the tracking continues.


Origin: blog.csdn.net/jeephao/article/details/108612076