HMM - computing the probability of the observation sequence

 Given the model \lambda = \left ( \pi ,A,B \right ) and an observation sequence O = \left \{o_{1},o_{2} ,...,o_{T} \right \}, calculate the probability P(O |\lambda) that the sequence occurs.

1. Brute-force solution

Given the model \lambda = \left ( \pi ,A,B \right ), we need the probability that the observation sequence O = \left \{o_{1},o_{2} ,...,o_{T} \right \} occurs. Each observation is generated by a hidden state, so if we enumerate all possible hidden state sequences and combine each with the emission probabilities (B), we obtain the probability of the observation sequence:

P\left ( O|\lambda \right )=\sum_{I}^{}P\left ( O,I|\lambda \right )     

P\left ( O,I|\lambda \right )=P\left ( I|\lambda \right )P\left ( O|I,\lambda \right )

The current state depends only on the previous state (the Markov assumption).

        P\left ( I|\lambda \right ): under the given model, the probability of a hidden state sequence, obtained from the initial distribution (π) and the transition probabilities (A):

I=\left \{ i_{1}, i_{2},..., i_{T} \right \}  

P\left ( I|\lambda \right )=\pi _{i_{1}}a _{i_{1}i_{2}}a _{i_{2}i_{3}}...a _{i_{T-1}i_{T}}

Each observation depends only on the hidden state that generated it.

        P\left ( O|I,\lambda \right ): for a fixed hidden state sequence and model, using the emission probabilities (B):

P\left ( O|I,\lambda \right )=b_{i_{1}}\left ( o_{1} \right )b_{i_{2}}\left ( o_{2} \right )...b_{i_{T}}\left ( o_{T} \right )
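For a single fixed hidden state sequence, the two factors above can be computed directly. A minimal sketch (the model parameters and the two sequences are made-up toy values, not from the original text):

```python
# Toy model (hypothetical values): 2 hidden states, 2 observation symbols
pi = [0.6, 0.4]                   # initial distribution pi_i
A = [[0.7, 0.3],
     [0.4, 0.6]]                  # transition probabilities a_ij
B = [[0.9, 0.1],
     [0.2, 0.8]]                  # emission probabilities b_i(o)

I = [0, 1, 1]                     # one fixed hidden state sequence
O = [0, 1, 0]                     # observation sequence

# P(I|lambda) = pi_{i_1} * a_{i_1 i_2} * ... * a_{i_{T-1} i_T}
p_I = pi[I[0]]
for t in range(1, len(I)):
    p_I *= A[I[t - 1]][I[t]]

# P(O|I,lambda) = b_{i_1}(o_1) * b_{i_2}(o_2) * ... * b_{i_T}(o_T)
p_O_given_I = 1.0
for t in range(len(O)):
    p_O_given_I *= B[I[t]][O[t]]

print(p_I * p_O_given_I)          # P(O, I|lambda) for this one sequence
```

Multiplying the two factors gives the joint probability of this particular hidden sequence together with the observations; summing such products over all hidden sequences gives P(O|\lambda).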

        Putting these together:

 P\left ( O|\lambda \right )=\sum_{I}^{}P\left ( I|\lambda \right )P\left ( O|I,\lambda \right )=\sum_{i_{1},i_{2},...,i_{T}}^{}\pi _{i_{1}}b_{i_{1}}\left ( o_{1} \right )a _{i_{1}i_{2}}b_{i_{2}}\left ( o_{2} \right )a _{i_{2}i_{3}}...a _{i_{T-1}i_{T}}b_{i_{T}}\left ( o_{T} \right )

        Complexity analysis: with N states and an observation sequence of length T, there are N^{T} possible hidden state sequences, and computing the probability of each takes O(T) multiplications, so the total cost is O\left ( TN^{T} \right ).
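The enumeration above can be sketched in Python. This is a minimal sketch with a hypothetical toy model (the values of pi, A, B and the observation sequence O are made up for illustration):

```python
import itertools

# Toy model (hypothetical values): N = 2 hidden states, 2 observation symbols
pi = [0.6, 0.4]                    # initial distribution pi_i
A = [[0.7, 0.3],
     [0.4, 0.6]]                   # transition probabilities a_ij
B = [[0.9, 0.1],
     [0.2, 0.8]]                   # emission probabilities b_i(o)
O = [0, 1, 0]                      # observation sequence, T = 3

def brute_force(pi, A, B, O):
    N, T = len(pi), len(O)
    total = 0.0
    # Enumerate all N**T hidden state sequences I = (i_1, ..., i_T)
    for I in itertools.product(range(N), repeat=T):
        p = pi[I[0]] * B[I[0]][O[0]]                  # pi_{i_1} * b_{i_1}(o_1)
        for t in range(1, T):
            p *= A[I[t - 1]][I[t]] * B[I[t]][O[t]]    # a_{i_{t-1} i_t} * b_{i_t}(o_t)
        total += p
    return total

print(brute_force(pi, A, B, O))    # P(O|lambda) for the toy model
```

The loop visits N^T sequences and does O(T) work per sequence, matching the O(TN^T) bound, which is why this approach is only feasible for very small N and T.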

2. Forward algorithm

        The probability that the partial observation sequence up to time t is Y=\left \{y_{1},y_{2} ,...,y_{t} \right \} and the hidden state at time t is i is called the forward probability:

        \alpha _{i}\left ( t \right )=p\left ( y_{1},y_{2},...,y_{t},q_{t}=i|\lambda \right )

         At time t = T, \alpha _{i}\left ( T \right )=p\left ( y_{1},y_{2},...,y_{T},q_{T}=i|\lambda \right ); this is the probability that, at the last moment, the observation sequence is \left ( y_{1},y_{2},...,y_{T} \right ) and the final hidden state is i.

        Assuming that the number of states is N, then:

P\left ( Y|\lambda \right )=\sum_{i=1}^{N} \alpha _{i}\left ( T \right )

        Since the current state depends only on the previous state, \alpha _{i}\left ( t \right ) can be obtained recursively from the forward probabilities at the previous moment, \alpha _{1}\left ( t-1 \right ),...,\alpha _{N}\left ( t-1 \right ). This is a dynamic programming problem.

        At the first moment:

\alpha _{i}\left ( 1 \right )=P\left ( y_{1},q_{1}=i|\lambda \right )=\pi _{i}b_{iy_{1}}

 This denotes the forward probability that the observation at the first moment is y_{1} and the first hidden state is q_{1}=i.

        Suppose that at time t the forward probability of state j is \alpha _{j}\left ( t \right ); then at time t+1 the forward probability of state i is:

        \alpha _{i}\left ( t+1 \right )=\left ( \sum_{j=1}^{N}\alpha _{j}\left ( t \right )a_{ji} \right )b_{iy_{t+1}}

        Starting from the forward probabilities at the first moment and applying the recursion from time t to time t+1, we obtain \alpha _{i}\left ( T \right ), and thus:

        P\left ( Y|\lambda \right )=\sum_{i=1}^{N} \alpha _{i}\left ( T \right )

        Complexity analysis: there are N states at each moment, each state's forward probability is computed from the N states at the previous moment, and there are T moments in total, so the total cost is O\left ( TN^{2} \right ).
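The initialization, recursion, and termination steps above can be sketched as follows. As before, this is a minimal sketch with a hypothetical toy model (pi, A, B, and O are made-up values):

```python
# Toy model (hypothetical values): N = 2 hidden states, 2 observation symbols
pi = [0.6, 0.4]                    # initial distribution pi_i
A = [[0.7, 0.3],
     [0.4, 0.6]]                   # transition probabilities a_ij
B = [[0.9, 0.1],
     [0.2, 0.8]]                   # emission probabilities b_i(y)
O = [0, 1, 0]                      # observation sequence, T = 3

def forward(pi, A, B, O):
    N, T = len(pi), len(O)
    # Initialization: alpha_i(1) = pi_i * b_{i y_1}
    alpha = [pi[i] * B[i][O[0]] for i in range(N)]
    # Recursion: alpha_i(t+1) = (sum_j alpha_j(t) * a_ji) * b_{i y_{t+1}}
    for t in range(1, T):
        alpha = [sum(alpha[j] * A[j][i] for j in range(N)) * B[i][O[t]]
                 for i in range(N)]
    # Termination: P(Y|lambda) = sum_i alpha_i(T)
    return sum(alpha)

print(forward(pi, A, B, O))        # same P(O|lambda), now in O(T * N^2) time
```

For the same toy model, this returns exactly the value that brute-force enumeration would give, but the inner loop does only N^2 work per time step instead of visiting all N^T sequences.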

        


Origin blog.csdn.net/weixin_43284996/article/details/127322556