Policy and Value Iteration Algorithms

This section on value-based methods is split into two parts. I will first lay out three classic algorithms: policy iteration, value iteration, and fitted-Q iteration, and then shift to state-of-the-art deep Q-learning. I think the main goal is not only to understand each algorithm, but also to see how these value-based methods relate to each other. It's always important to know exactly what you are doing before throwing a deep neural net at it :)

Before, in policy gradient algorithms, we directly learned and updated a policy by evaluating it with a value or advantage function. Here in the world of value-based methods, we instead only learn a good value function over (state, action) pairs, so that we can induce a good policy by simply taking the $\text{argmax}$ action under that value function at each state we encounter.
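
As a tiny illustration of that last point, here is a minimal sketch of inducing a greedy policy from a tabular Q-function (the Q-table and its numbers below are made up purely for this example):

```python
import numpy as np

# Hypothetical Q-table: Q[s, a] = estimated return of taking action a in state s.
Q = np.array([[1.0, 2.5, 0.3],
              [0.0, 0.1, 4.0]])   # 2 states, 3 actions, made-up numbers

def greedy_policy(Q, s):
    """Induce a deterministic policy by taking the argmax action at state s."""
    return int(np.argmax(Q[s]))

print([greedy_policy(Q, s) for s in range(Q.shape[0])])   # -> [1, 2]
```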

Tabular Case: policy iteration vs. value iteration

If we put the policy iteration and value iteration algorithms side by side, the key difference is whether we explicitly update a policy after finishing the evaluation of all the experienced states under the current policy, or fold that improvement step directly into the value update.

Policy Iteration

For policy iteration algorithms, there are two alternating steps:

  1. Policy evaluation: evaluate $A^\pi(s, a)$ (or equivalently $V^\pi(s)$) under the current policy $\pi$.

  2. Policy improvement: set $\pi(s) \leftarrow \text{argmax}_a A^\pi(s, a)$, i.e. re-select the best action at every state.
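
Below is a minimal numpy sketch of these two alternating steps. The MDP here (the transition tensor `P` and reward table `R`) is made up purely for illustration, and solving a linear system is just one convenient way to do the evaluation step:

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

# Made-up MDP: P[s, a, s'] = transition probability, R[s, a] = reward.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(size=(n_states, n_actions))

pi = np.zeros(n_states, dtype=int)            # start from an arbitrary policy

for _ in range(100):
    # Step 1 (policy evaluation): solve V = R_pi + gamma * P_pi V exactly.
    P_pi = P[np.arange(n_states), pi]         # (n_states, n_states)
    R_pi = R[np.arange(n_states), pi]         # (n_states,)
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

    # Step 2 (policy improvement): greedy w.r.t. Q (same argmax as for A).
    Q = R + gamma * P @ V                     # (n_states, n_actions)
    new_pi = Q.argmax(axis=1)
    if np.array_equal(new_pi, pi):            # the policy stopped changing
        break
    pi = new_pi

print("policy:", pi)
```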

Value Iteration

However, since step two above uses a simple $\text{argmax}$ update rule, we can skip explicitly representing the policy in this action-reselecting step and instead work with a Q-function directly, hence the name value iteration:

  1. Set $Q(s, a) \leftarrow r(s, a) + \gamma \, \mathbb{E}_{s'}[V(s')]$.

  2. Set $V(s) \leftarrow \max_a Q(s, a)$.

Notice we can do this because the state value $V(s)$ doesn't depend on the action choice, so the $\text{argmax}$ over $A$ and over $Q$ is the same. Therefore, if we have $Q(s, a)$ values, we can use them to get a policy that chooses the best action at every state.
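
For comparison, here is the same kind of made-up tabular MDP solved with value iteration. Note that there is no explicit policy inside the loop; the greedy policy is only read off from $Q$ at the very end:

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[s, a, s']
R = rng.uniform(size=(n_states, n_actions))                        # R[s, a]

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V         # step 1: Q(s,a) <- r(s,a) + gamma * E[V(s')]
    V_new = Q.max(axis=1)         # step 2: V(s)   <- max_a Q(s,a)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

pi = Q.argmax(axis=1)             # induce the greedy policy from the final Q
print("V:", V, "policy:", pi)
```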

Aside: should I use a stochastic or a deterministic policy?

I think this is an interesting question that came up during lecture, and the short answer is: for fully-observed MDPs, there always exists an optimal deterministic policy (though there could also be a stochastic policy that is equally optimal). However, for partially-observed MDPs, such as games, a stochastic policy can outperform any deterministic one.

When the table is too big: fitted value iteration

A big assumption for the above policy and value iteration algorithms to work is that all the possible state-action pairs can be put in one reasonably-sized table like this:

and we can use Dynamic Programming to solve for an optimal policy. However, most problems we are trying to solve in RL (e.g. driving a car or playing a challenging game) have much larger state and action spaces. Simple DP won't be efficient enough in these cases, hence the use of fitted value iteration, whose key distinction from the tabular algorithms is that we can use a neural network to parametrize the value function $V$ and improve its estimates with gradient updates.
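
To make this concrete, here is a minimal sketch of a fitted value update on the same kind of made-up tabular MDP as before, with $V$ parametrized by a small PyTorch network over one-hot state features. Everything here (network size, learning rate, one gradient step per target recomputation) is an illustrative assumption, not the lecture's exact recipe:

```python
import numpy as np
import torch
import torch.nn as nn

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[s, a, s']
R = rng.uniform(size=(n_states, n_actions))                        # R[s, a]

# V_phi: a tiny network over one-hot state features, standing in for a real function approximator.
V_phi = nn.Sequential(nn.Linear(n_states, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(V_phi.parameters(), lr=1e-2)
states = torch.eye(n_states)                                        # one-hot states

for _ in range(500):
    # Target: y(s) = max_a [ r(s,a) + gamma * E_{s'}[V_phi(s')] ].
    # Note that this needs the model P: we enumerate every action and next state.
    with torch.no_grad():
        V_next = V_phi(states).squeeze(-1).numpy()
    y = torch.tensor((R + gamma * P @ V_next).max(axis=1), dtype=torch.float32)

    # Fit: regress V_phi(s) onto y(s) with a gradient step.
    loss = ((V_phi(states).squeeze(-1) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(V_phi(states).squeeze(-1).detach().numpy())
```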

Notice how, with the fitted value update above, we still need to iterate over all possible next states induced by all possible actions before we can estimate the current state's value. This means we need to visit states outside of the sampled trajectories we have, which can be expensive; plus, sometimes we might not know the transition dynamics at all. To avoid this, notice that a Q-function narrows the estimate down to a specific state-action pair, hence fitted Q-iteration:
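
Here is a matching sketch of fitted Q-iteration; the contrast to notice against the fitted V sketch above is that the targets are computed purely from sampled $(s, a, r, s')$ transitions, with no access to the transition dynamics. The transitions below are made up, and the network and loop sizes are arbitrary:

```python
import numpy as np
import torch
import torch.nn as nn

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

# A buffer of sampled transitions (s, a, r, s'), collected by any behavior policy.
# Made up here; in practice they come from interacting with the environment.
N = 256
s = rng.integers(n_states, size=N)
a = rng.integers(n_actions, size=N)
r = rng.uniform(size=N).astype(np.float32)
s_next = rng.integers(n_states, size=N)

# Q_phi maps a one-hot state to a vector of Q-values, one per action.
Q_phi = nn.Sequential(nn.Linear(n_states, 32), nn.ReLU(), nn.Linear(32, n_actions))
opt = torch.optim.Adam(Q_phi.parameters(), lr=1e-2)
one_hot = torch.eye(n_states)
a_idx = torch.tensor(a).unsqueeze(1)

for _ in range(100):                       # outer fitted-Q iterations
    # Targets: y_i = r_i + gamma * max_a' Q_phi(s'_i, a') -- no model needed.
    with torch.no_grad():
        y = torch.tensor(r) + gamma * Q_phi(one_hot[s_next]).max(dim=1).values

    for _ in range(25):                    # fit Q_phi(s_i, a_i) to the fixed targets
        q_sa = Q_phi(one_hot[s]).gather(1, a_idx).squeeze(1)
        loss = ((q_sa - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

print(Q_phi(one_hot).detach().numpy())     # Q-values for every (state, action) pair
```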

Unfortunately, fitted-Q loses its convergence guarantees once we move to large state/action spaces that require function approximation, but it's still important to understand how we got here so we can discuss why the deep-Q algorithm below works in practice.

Details on Fitted-Q

A few details to better grasp the fitted-Q algorithm:

  • It's off-policy: the sampled state-action transitions don't need to be generated under the current policy for the Q-function to estimate them, since the step-2 update only needs $(s, a, s', r)$ and the current Q-function itself.

  • In the online version above, we can use different strategies in step-1 to give the algorithm sufficient exploration of the often-huge state-action space. For now, two such strategies to know about are “epsilon-greedy” and “Boltzmann exploration”, sketched below.
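
As a rough sketch of those two strategies, assuming we can query a vector of Q-values for the current state (made up below as a small numpy array):

```python
import numpy as np

rng = np.random.default_rng(0)
q_values = np.array([1.0, 2.5, 0.3])        # made-up Q(s, .) for some state s

def epsilon_greedy(q, eps=0.1):
    """With probability eps pick a uniformly random action, otherwise the greedy one."""
    if rng.random() < eps:
        return int(rng.integers(len(q)))
    return int(np.argmax(q))

def boltzmann(q, temperature=1.0):
    """Sample an action with probability proportional to exp(Q / temperature)."""
    logits = q / temperature
    probs = np.exp(logits - logits.max())   # subtract the max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(q), p=probs))

print(epsilon_greedy(q_values), boltzmann(q_values))
```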

Theories on why fitted-Q doesn't work

Skipped for now, but in brief:

  • works: value function learning theory (tabular)

  • not working: non-tabular value function learning, due to the moving target and correlated samples

All included screenshots credit to Lecture 7 (slides)
