Policy Gradient Methods for Reinforcement Learning with Function Approximation

These notes build on a reading of the paper Policy Gradient Methods for Reinforcement Learning with Function Approximation.

First, we fix the optimization objective $\rho(\pi)$, where the policy $\pi$ is a parameterized function whose parameters $\theta$ are to be learned. The objective commonly takes one of two forms.

One is the long-run average reward per step:

\[\rho(\pi) = \lim_{n \rightarrow \infty} \frac{1}{n} E\left\{ r_1 + r_2 + \cdots + r_n \mid \pi \right\} = \sum_s d^{\pi}(s) \sum_a \pi(s, a)\, \mathcal{R}_s^a\]
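As a quick illustration (not from the original post), the sketch below evaluates this average-reward objective for a tiny, hand-specified MDP: the stationary distribution $d^{\pi}$ is obtained as the left eigenvector of the policy-induced transition matrix, and the expected rewards $\mathcal{R}_s^a$ are assumed given. The arrays `P`, `R`, and `pi` are hypothetical.

```python
import numpy as np

# A hypothetical 2-state, 2-action MDP, used only to illustrate the formula.
# P[a, s, s'] : transition probabilities, R[s, a] : expected reward R_s^a,
# pi[s, a]    : a fixed stochastic policy pi(s, a).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])

# Policy-induced transition matrix: P_pi[s, s'] = sum_a pi(s, a) * P[a, s, s'].
P_pi = np.einsum('sa,ast->st', pi, P)

# Stationary distribution d_pi solves d_pi = d_pi @ P_pi
# (left eigenvector of P_pi for eigenvalue 1, normalized to sum to 1).
eigvals, eigvecs = np.linalg.eig(P_pi.T)
d_pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
d_pi /= d_pi.sum()

# rho(pi) = sum_s d_pi(s) * sum_a pi(s, a) * R_s^a
rho = np.sum(d_pi[:, None] * pi * R)
print(rho)
```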

The other is the discounted cumulative reward starting from a designated start state:

\[\rho(\pi) = E\left\{ \sum_{t=1}^{\infty} \gamma^{t-1} r_t \,\middle|\, s_0, \pi \right\}\]
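To make this second form concrete, here is a minimal sketch (assumed, not part of the original post) that computes the discounted return of one sampled episode, i.e. the random quantity inside the expectation; averaging it over many episodes run under $\pi$ gives a Monte Carlo estimate of $\rho(\pi)$.

```python
def discounted_return(rewards, gamma=0.99):
    """Return sum_{t=1..T} gamma^(t-1) * r_t for one sampled episode."""
    G = 0.0
    # Accumulate backwards: G_t = r_t + gamma * G_{t+1}.
    for r in reversed(rewards):
        G = r + gamma * G
    return G

# Hypothetical reward sequence from one episode under pi:
print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))  # 1 + 0.9*0 + 0.81*2 = 2.62
```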

To optimize the policy, Sutton et al. prove in Policy Gradient Methods for Reinforcement Learning with Function Approximation that, for either form of the objective (with $d^{\pi}(s)$ interpreted as the stationary state distribution in the first case and the discounted state-visitation distribution in the second),

\[\frac{\partial \rho}{\partial \theta} = \sum_s d^{\pi}(s) \sum_a \frac{\partial \pi(s, a)}{\partial \theta}\, Q^{\pi}(s, a)\]

That is, the gradient does not involve the derivative of the state distribution $d^{\pi}(s)$ with respect to $\theta$. Writing the objective as $J(\boldsymbol{\theta}) = \rho$, so that $\nabla J(\boldsymbol{\theta}) = \partial \rho / \partial \boldsymbol{\theta}$, we have:

\[\begin{aligned} \nabla J(\boldsymbol{\theta}) &= \mathbb{E}_{\pi}\left[ \sum_a \pi\left(a \mid S_t, \boldsymbol{\theta}\right) q_{\pi}\left(S_t, a\right) \frac{\nabla_{\boldsymbol{\theta}} \pi\left(a \mid S_t, \boldsymbol{\theta}\right)}{\pi\left(a \mid S_t, \boldsymbol{\theta}\right)} \right] \\ &= \mathbb{E}_{\pi}\left[ q_{\pi}\left(S_t, A_t\right) \frac{\nabla_{\boldsymbol{\theta}} \pi\left(A_t \mid S_t, \boldsymbol{\theta}\right)}{\pi\left(A_t \mid S_t, \boldsymbol{\theta}\right)} \right] \\ &= \mathbb{E}_{\pi}\left[ G_t \frac{\nabla_{\boldsymbol{\theta}} \pi\left(A_t \mid S_t, \boldsymbol{\theta}\right)}{\pi\left(A_t \mid S_t, \boldsymbol{\theta}\right)} \right] \end{aligned}\]

Here the first equality rewrites the sum over states as an expectation over $S_t \sim d^{\pi}$ and multiplies and divides by $\pi(a \mid S_t, \boldsymbol{\theta})$; the second equality replaces the sum over actions $a$ with a sample $A_t \sim \pi$; and the last equality holds because $\mathbb{E}_{\pi}\left[ G_t \mid S_t, A_t \right] = q_{\pi}\left(S_t, A_t\right)$.
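Note that the ratio $\frac{\nabla_{\boldsymbol{\theta}} \pi}{\pi}$ is simply $\nabla_{\boldsymbol{\theta}} \log \pi$, often called the score function. As a standard worked example (derived here, not taken from the original post), for the Gibbs (softmax) policy over linear features used in Sutton's paper,

\[\pi(a \mid s, \boldsymbol{\theta}) = \frac{e^{\boldsymbol{\theta}^{\top} \phi(s, a)}}{\sum_b e^{\boldsymbol{\theta}^{\top} \phi(s, b)}},\]

the score function has the closed form

\[\nabla_{\boldsymbol{\theta}} \log \pi(a \mid s, \boldsymbol{\theta}) = \phi(s, a) - \sum_b \pi(b \mid s, \boldsymbol{\theta})\, \phi(s, b).\]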

The policy-gradient (REINFORCE) update rule is therefore:


\[\boldsymbol{\theta}_{t+1} \doteq \boldsymbol{\theta}_t + \alpha\, G_t\, \frac{\nabla_{\boldsymbol{\theta}} \pi\left(A_t \mid S_t, \boldsymbol{\theta}_t\right)}{\pi\left(A_t \mid S_t, \boldsymbol{\theta}_t\right)}\]
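Putting the pieces together, below is a minimal REINFORCE-style sketch of this update for the softmax policy above, applying the rule exactly as written (one update per time step, processing the episode backwards so each $G_t$ is accumulated in one pass). The environment interface (`env.reset()`, `env.step(a)` returning `(next_state, reward, done)`), the feature function `phi(s)` returning a `(num_actions, d)` array, and the hyperparameters are hypothetical placeholders, not part of the original post or the paper.

```python
import numpy as np

def softmax_policy(theta, phi_s):
    """pi(.|s, theta) for a softmax policy; phi_s has shape (num_actions, d)."""
    prefs = phi_s @ theta
    prefs -= prefs.max()              # subtract max for numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def reinforce_episode(env, phi, theta, alpha=0.01, gamma=0.99):
    """Run one episode under pi(.|., theta), then apply
    theta <- theta + alpha * G_t * grad log pi(A_t | S_t, theta) at every step."""
    states, actions, rewards = [], [], []
    s, done = env.reset(), False
    while not done:
        probs = softmax_policy(theta, phi(s))
        a = np.random.choice(len(probs), p=probs)
        states.append(s)
        actions.append(a)
        s, r, done = env.step(a)      # hypothetical environment interface
        rewards.append(r)

    # Work backwards so that G_t = r_{t+1} + gamma * G_{t+1} is accumulated in one pass.
    G = 0.0
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G
        probs = softmax_policy(theta, phi(states[t]))
        # grad log pi(a|s, theta) = phi(s, a) - sum_b pi(b|s, theta) * phi(s, b)
        grad_log_pi = phi(states[t])[actions[t]] - probs @ phi(states[t])
        # The update rule above: theta_{t+1} = theta_t + alpha * G_t * grad log pi
        theta = theta + alpha * G * grad_log_pi
    return theta
```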


Reposted from www.cnblogs.com/statruidong/p/10755683.html