The policy gradient theorem, also known as the likelihood ratio policy gradient, transforms the policy gradient into a sample-based estimation problem. The derivation is as follows.
We want to optimize the overall utility of using a policy $\pi_\theta$ over a state-action sequence, a trajectory $\tau$, where $\tau = (s_0, a_0, s_1, a_1, \ldots, s_T, a_T)$. We can express the utility as:

$$U(\theta) = \mathbb{E}\!\left[\sum_{t=0}^{T} R(s_t, a_t) \;\middle|\; \pi_\theta\right]$$
Given an expectation under a distribution, we can turn it into a sum over all possible events weighted by their probabilities (the definition of expectation). In our case, we can rewrite this expectation in terms of the probability, $P(\tau; \theta)$, of a trajectory $\tau$ under policy $\pi_\theta$ and the corresponding trajectory reward, $R(\tau) = \sum_{t=0}^{T} R(s_t, a_t)$:

$$U(\theta) = \sum_{\tau} P(\tau; \theta)\, R(\tau)$$
The goal is to find the parameter $\theta$ that gives the maximum utility:

$$\max_{\theta} U(\theta) = \max_{\theta} \sum_{\tau} P(\tau; \theta)\, R(\tau)$$
Next, we will use gradient-based optimization to solve this problem. We take the gradient of $U(\theta)$ with respect to $\theta$:

$$\nabla_\theta U(\theta) = \nabla_\theta \sum_{\tau} P(\tau; \theta)\, R(\tau)$$
By the linearity of the gradient, the gradient of a sum is the sum of the gradients. Therefore, we have:

$$\nabla_\theta U(\theta) = \sum_{\tau} \nabla_\theta P(\tau; \theta)\, R(\tau)$$
Here, we want a probability-weighted sum so that we can sample trajectories instead of enumerating all of them. To get there, we multiply and divide by $P(\tau; \theta)$:

$$\nabla_\theta U(\theta) = \sum_{\tau} P(\tau; \theta)\, \frac{\nabla_\theta P(\tau; \theta)}{P(\tau; \theta)}\, R(\tau)$$
Notice that the ratio is the derivative of a log function:

$$\nabla_\theta \log P(\tau; \theta) = \frac{\nabla_\theta P(\tau; \theta)}{P(\tau; \theta)}$$
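As a quick numerical sanity check of this identity, here is a minimal sketch assuming a Bernoulli probability parameterized by a sigmoid (the setup and names are illustrative, not part of the derivation). Finite differences confirm that $\nabla_\theta \log p$ equals $\nabla_\theta p / p$:

```python
import numpy as np

# Hypothetical check of grad log p(theta) == grad p(theta) / p(theta)
# for p(theta) = sigmoid(theta), using central finite differences.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

theta, eps = 0.7, 1e-6
p = sigmoid(theta)

grad_p = (sigmoid(theta + eps) - sigmoid(theta - eps)) / (2 * eps)
grad_log_p = (np.log(sigmoid(theta + eps)) - np.log(sigmoid(theta - eps))) / (2 * eps)

print(grad_log_p, grad_p / p)  # the two values agree
```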
Our gradient becomes:

$$\nabla_\theta U(\theta) = \sum_{\tau} P(\tau; \theta)\, \nabla_\theta \log P(\tau; \theta)\, R(\tau)$$
Here, we can apply the definition of expectation again, now in reverse:

$$\nabla_\theta U(\theta) = \mathbb{E}_{\tau \sim P(\tau; \theta)}\!\left[\nabla_\theta \log P(\tau; \theta)\, R(\tau)\right]$$
which gives us the expected value of the function $\nabla_\theta \log P(\tau; \theta)\, R(\tau)$ under the distribution $P(\tau; \theta)$. This allows us to use a sample-based estimate of $\nabla_\theta U(\theta)$ instead of enumerating all possible trajectories. Using an empirical estimate of the expectation with $m$ sampled trajectories $\tau^{(1)}, \ldots, \tau^{(m)}$, we get:

$$\nabla_\theta U(\theta) \approx \frac{1}{m} \sum_{i=1}^{m} \nabla_\theta \log P\!\left(\tau^{(i)}; \theta\right) R\!\left(\tau^{(i)}\right)$$
In the estimate above, no gradient is taken of the reward function, which means the reward function can be discontinuous or even unknown.
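As a sanity check of the likelihood ratio estimator, here is a minimal NumPy sketch. The toy setup is hypothetical (three enumerable "trajectories" drawn from a softmax distribution with fixed rewards, and helper names like `probs` and `grad_log_prob` are illustrative): it compares the exact gradient $\sum_\tau \nabla_\theta P(\tau;\theta) R(\tau)$ against the sample average of $\nabla_\theta \log P(\tau;\theta) R(\tau)$. The rewards enter only as plain numbers and are never differentiated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: three possible "trajectories" with a softmax
# distribution over them parameterized by theta, and fixed rewards.
theta = np.array([0.2, -0.5, 1.0])
rewards = np.array([1.0, 0.0, 3.0])

def probs(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def grad_log_prob(theta, k):
    # For a softmax: grad_theta log p_k = one_hot(k) - p
    return np.eye(len(theta))[k] - probs(theta)

# Exact gradient: sum over all trajectories of grad P(tau) * R(tau)
p = probs(theta)
exact = sum(rewards[k] * p[k] * grad_log_prob(theta, k) for k in range(3))

# Likelihood ratio estimate: sample trajectories and average
# grad log P(tau) * R(tau); no gradient of the reward is needed.
m = 100_000
samples = rng.choice(3, size=m, p=p)
estimate = np.mean([rewards[k] * grad_log_prob(theta, k) for k in samples], axis=0)

print("exact   :", exact)
print("estimate:", estimate)  # close to exact for large m
```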
Next, we will further expand the trajectory probability, $P(\tau; \theta)$, into a temporal product of state and action terms. We write out the expression for $P(\tau; \theta)$ as the dynamics model, $P(s_{t+1} \mid s_t, a_t)$, which is the probability of the next state given the present state and action, times the policy, $\pi_\theta(a_t \mid s_t)$, which is the probability of an action given a state:

$$P(\tau; \theta) = \prod_{t=0}^{T} P(s_{t+1} \mid s_t, a_t)\, \pi_\theta(a_t \mid s_t)$$
We use some log rules to separate the product into a sum:

$$\log P(\tau; \theta) = \sum_{t=0}^{T} \log P(s_{t+1} \mid s_t, a_t) + \sum_{t=0}^{T} \log \pi_\theta(a_t \mid s_t)$$
and, taking the gradient, we get:

$$\nabla_\theta \log P(\tau; \theta) = \nabla_\theta \sum_{t=0}^{T} \log P(s_{t+1} \mid s_t, a_t) + \nabla_\theta \sum_{t=0}^{T} \log \pi_\theta(a_t \mid s_t)$$
Since the dynamics probability $P(s_{t+1} \mid s_t, a_t)$ is not a function of $\theta$, its gradient is 0. Thus we get:

$$\nabla_\theta \log P(\tau; \theta) = \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)$$
The dynamics model, $P(s_{t+1} \mid s_t, a_t)$, disappears, so the gradient can be estimated without knowing the environment dynamics. This gives us the final form of the policy gradient theorem:

$$\nabla_\theta U(\theta) \approx \frac{1}{m} \sum_{i=1}^{m} \left(\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta\!\left(a_t^{(i)} \mid s_t^{(i)}\right)\right) R\!\left(\tau^{(i)}\right)$$
This tells us that to improve the overall utility, we should move the policy parameters in the direction that makes high-reward trajectories more likely.
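To make the final formula concrete, here is a minimal REINFORCE-style sketch in NumPy. The corridor MDP, the tabular softmax policy, and helper names such as `run_episode` and `policy_gradient_estimate` are hypothetical illustrations, not part of the theorem; the sketch simply samples trajectories and averages $\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)$ before taking a gradient ascent step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP: a 4-state corridor. Action 1 moves right, action 0
# moves left; reaching the last state gives reward 1 and ends the episode.
N_STATES, N_ACTIONS, MAX_STEPS = 4, 2, 10

def step(s, a):
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    return s_next, reward, s_next == N_STATES - 1

def policy(theta, s):
    # Tabular softmax policy pi_theta(a | s)
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def run_episode(theta):
    s, traj, total_reward = 0, [], 0.0
    for _ in range(MAX_STEPS):
        a = rng.choice(N_ACTIONS, p=policy(theta, s))
        s_next, r, done = step(s, a)
        traj.append((s, a))
        total_reward += r
        s = s_next
        if done:
            break
    return traj, total_reward

def policy_gradient_estimate(theta, m=500):
    # (1/m) * sum_i [ sum_t grad log pi(a_t | s_t) ] * R(tau_i)
    grad = np.zeros_like(theta)
    for _ in range(m):
        traj, R = run_episode(theta)
        for s, a in traj:
            score = -policy(theta, s)
            score[a] += 1.0   # grad_theta[s] log pi(a | s) = one_hot(a) - pi(. | s)
            grad[s] += score * R
    return grad / m

theta = np.zeros((N_STATES, N_ACTIONS))
for _ in range(50):
    theta += 0.5 * policy_gradient_estimate(theta)  # gradient ascent on U(theta)

print(policy(theta, 0))  # probability of moving right approaches 1
```

Because the whole-trajectory reward $R(\tau)$ multiplies every score term, actions from high-reward episodes are pushed up and actions from zero-reward episodes are left alone, which is exactly the "make high reward more likely" behavior described above.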
Note
The policy gradient theorem says that to improve the overall utility, move the policy in the direction that makes high-reward trajectories more likely.