# Control as Inference


At a high level, this section introduces a new decision-making model that accounts for occasionally suboptimal actions, which is closer to real-world human behavior. Then, using this new model, we can derive optimal control and RL algorithms. As shown below, the new $$O$$ nodes are called optimality variables; each is a binary variable representing an "intention", i.e. whether the action at that timestep is optimal or not. Note that when we write $$O$$ in conditional probabilities below, such as $$p(\tau | O)$$, it means conditioning on all those optimality variables being *true*.

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIChcFxyun31NtJ12-%2F-MTID-7bJaEjgjeXt3oz%2Flec14_1.png?alt=media\&token=56a6dfa2-34eb-48f6-9436-0f878ce91888)

Additionally, given a state-action pair, we *define* the probability of optimality to be the exponential of the current reward (assuming non-positive rewards so that the exponential stays in $$[0,1]$$). Thus the probability of *any* trajectory under all-time optimality is proportional to a feasibility probability $$p(\tau)$$ multiplied by the product of exponentiated rewards along that trajectory. So *if* the agent is always operating under optimality, high-reward trajectories are more likely to be experienced.

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIDOfrQYOcZm2RHvqX%2Flec14_2.png?alt=media\&token=cb0a50aa-41ef-4bc2-b96f-797f26dfc14f)
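Written out in equations (following the slide above), the per-step definition and the resulting trajectory posterior are

$$p(O_t = 1 \mid s_t, a_t) = \exp\big(r(s_t, a_t)\big)$$

$$p(\tau \mid O_{1:T}) \propto p(\tau) \prod_{t} \exp\big(r(s_t, a_t)\big) = p(\tau)\,\exp\Big(\sum_{t} r(s_t, a_t)\Big)$$

where $$p(\tau)$$ is the probability that the trajectory is physically feasible under the dynamics.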

## Probabilistic Model for Behaviors

Key idea: the *probability* of a trajectory is proportional to the exponentiated sum of its rewards (times its feasibility under the dynamics).

We model the suboptimal decision-making process by adding a new *binary* node $$O$$ at each timestep to represent the agent's "intention": the hidden behavioral logic that largely follows expected rewards but also has some stochasticity.

Below, let's first look at three values we can compute/infer from this model and then use them to formulate inference as an RL problem.

#### Backward Messages

We can think of computing backward messages as *inferring* the probability of **onward optimality** starting from the current state and action in the trajectory, i.e. the probability that all onward optimality variables take value 1 (true). Expanding this definition, we can separate the conditional probability into a product of several probabilities, and notice how the onward optimality probability conditioned on the state alone can be written as an expectation over actions. The math can be a little tricky to wrap your head around, but this derivation lets us calculate backward messages recursively (a minimal tabular sketch follows the slide below):

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIDn7J27m4sIYNQUbb%2Flec14_3.png?alt=media\&token=3a428129-5ef4-4096-8c64-b13993eeddb4)
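As a minimal illustration (my own toy tabular sketch, not code from the lecture), here is the backward recursion on an assumed small MDP: `P` is a dynamics tensor of shape `[S, A, S]` and `R` a reward table of shape `[S, A]`, with rewards assumed non-positive so that $$\exp(r)$$ is a valid probability.

```python
import numpy as np

def backward_messages(P, R, T):
    """Tabular backward pass: beta_t(s, a) and beta_t(s).

    P: [S, A, S] transition probabilities p(s' | s, a)  (assumed toy dynamics)
    R: [S, A]    rewards r(s, a), assumed <= 0 so exp(R) lies in [0, 1]
    T: horizon (number of timesteps)
    """
    S, A, _ = P.shape
    beta_sa = np.zeros((T, S, A))   # beta_t(s_t, a_t)
    beta_s = np.zeros((T + 1, S))   # beta_t(s_t); beta at T is 1 by convention
    beta_s[T] = 1.0

    for t in reversed(range(T)):
        # beta_t(s, a) = p(O_t | s, a) * E_{s'}[ beta_{t+1}(s') ]
        beta_sa[t] = np.exp(R) * (P @ beta_s[t + 1])
        # beta_t(s) = E_{a ~ uniform}[ beta_t(s, a) ]  (uniform action prior, as assumed in these notes)
        beta_s[t] = beta_sa[t].mean(axis=1)
    return beta_sa, beta_s
```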

And if we take a closer look at the **log** of these quantities, there's an uncanny resemblance to the value iteration algorithms we've seen before. But here, the value function is a log of an expectation over all possible current actions. Taking the log of a sum (the integral here is roughly a sum) of exponentials lets the big Q values dominate, so unlike value iteration, $$V$$ only *approaches* the max Q value over actions at the current state.

Another nuance to notice is the new definition of the Q value: because we are conditioning on optimality, the transition dynamics make a difference. If the state transition is deterministic, there is only one possible next state given the current state-action pair, so the log-expectation term equals $$V$$ of the next state; but if the transition is stochastic, then since we are assuming optimality everywhere, the log-expectation term will be *biased* towards next states that give higher rewards (an overly optimistic estimate).

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIDs_2dEMn3IXGU8fF%2Flec14_4.png?alt=media\&token=98b2ed59-5059-4f7b-9da6-8e9e108dabb3)
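To see the soft-max effect of the log-sum-exp concretely, here's a tiny numeric check with made-up Q values (ignoring the constant from the uniform action prior, which only shifts $$V$$):

```python
import numpy as np
from scipy.special import logsumexp

Q = np.array([1.0, 2.0, 10.0])            # toy Q values for three actions at one state
print(logsumexp(Q), Q.max())              # ~10.0005 vs 10.0: the largest Q dominates

Q_close = np.array([9.0, 9.5, 10.0])
print(logsumexp(Q_close), Q_close.max())  # ~10.68 vs 10.0: comparable Qs all contribute
```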

I'm omitting some math here that basically shows the action prior distribution doesn't affect the formulation, because it can always be folded into the reward. So above we assumed a uniform action prior without loss of generality.

### Optimal Policy

Under this new model with optimality nodes added, the probability of an action under a policy is defined conditioned on both the state and $$O$$. For the optimal policy at each timestep, we assume all previous and onward optimality variables are true, and then calculate the probability of the current action. After a lot of Bayes' rule derivations, we can conveniently write this policy as a ratio of backward messages, or as the exponential of an "advantage" under the new $$V$$ and $$Q$$ definitions:

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIE-kDatCRTYi5jN-g%2Flec14_5.png?alt=media\&token=3e6a4fb8-f828-4988-b304-648bd8950010)
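In symbols (as derived on the slide), the optimal-policy result is

$$\pi(a_t \mid s_t) = \frac{\beta_t(s_t, a_t)}{\beta_t(s_t)} = \exp\big(Q_t(s_t, a_t) - V_t(s_t)\big) = \exp\big(A_t(s_t, a_t)\big)$$

where $$Q_t(s_t, a_t) = \log \beta_t(s_t, a_t)$$ and $$V_t(s_t) = \log \beta_t(s_t)$$ are the soft values defined above.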

#### Forward Message

Defined as the probability of a state given up-until-now optimality, the forward message can be expanded out, again using the chain rule of conditional probability, so that it can be calculated recursively from the initial state. The first set of long equations below essentially shows how we can use known quantities to calculate the forward message; and using both forward and backward messages, we can calculate the probability of a state (i.e. the state marginal) under *overall* optimality, which is proportional to the two multiplied together:

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIE3IH9B_HftAHFuRS%2Flec14_6.png?alt=media\&token=37170785-2675-48ae-85a8-ac332521df73)
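Concretely, the state marginal under all-time optimality ends up proportional to the product of the two messages:

$$p(s_t \mid O_{1:T}) \propto \beta_t(s_t)\,\alpha_t(s_t), \qquad \alpha_t(s_t) \equiv p(s_t \mid O_{1:t-1})$$

which is the usual forward-backward factorization, adapted to this model.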

## Probabilistic RL

To begin with, let's see why and how **variational inference**, as introduced in the previous lecture/note, can help us recover the optimal policy under the new model discussed above.

Recall how we've been setting optimality variables to all true and treating them as given evidence when we calculate posterior action or state probabilities. While this "evidence" allows us to calculate the best action under optimality, it also changes the state transition dynamics:

<div align="left"><img src="https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIE8OtlGZdaTUM3-6M%2Flec14_7.png?alt=media&#x26;token=e6c41fce-b0de-4936-859c-0cd77bf43e01" alt=""></div>

This makes sense from an inference perspective: given that you are under an optimal policy, high-reward next states are more likely to come up. But it is wrong for control: what we want is to select the best actions while *assuming* the state transition dynamics stay the same. Recalling the idea of variational inference, we can learn a model that *approximates* a posterior distribution.

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIEClsNSRcBS16uADu%2Flec14_8.png?alt=media\&token=02eba04a-a973-4f52-b3a7-f6a666b17381)

Notice how this q-distribution is supposed to do two things: it gives the posterior probability of any *trajectory* under optimality, and it yields the approximate *transition* dynamics when conditioned on a current state-action pair. To achieve both, we *choose* a form for it:

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIEGWaXPDPATuck41D%2Flec14_9.png?alt=media\&token=6454ecef-d767-4340-bd05-b4f524e27d1f)
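Written out, the chosen variational family keeps the true initial-state distribution and dynamics, and only learns the per-step action distributions:

$$q(\tau) = q(s_{1:T}, a_{1:T}) = p(s_1)\,\prod_{t} p(s_{t+1} \mid s_t, a_t)\, q(a_t \mid s_t)$$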

So now we can draw a new transition model as below. Notice how the q distribution *preserves* the transition dynamics of the original optimality model, but lets us omit the script-O nodes because the action distribution is already conditioned on optimality, i.e. it forms an (approximately) optimal policy.

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIEKApZjGDaF4--1ON%2Flec14_10.png?alt=media\&token=89f0d84e-85ef-4f58-87ba-f7b9be28448f)

Now we are ready to solve this reformulated problem by maximizing the variational lower bound on the probability of all-time optimality. With more derivations skipped, we can show that this lower bound closely resembles a reinforcement learning objective, only with an entropy term on q added; and the backward pass that recursively calculates the backward messages, as discussed above, can be seen as a *soft* value iteration that no longer has the optimism problem and takes a soft max (log-sum-exp) of Q values (a toy sketch follows the slides below):

<div align="left"><img src="https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIET4hE6GVt4wWMR4N%2Flec14_11.png?alt=media&#x26;token=7b26950b-bf0c-4740-bd4c-57a8336458e0" alt=""></div>

<div align="left"><img src="https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIEVJmFaKvXtP-4UMh%2Flec14_12.png?alt=media&#x26;token=627526e8-4715-4b72-815f-5831f5646aa1" alt=""></div>

Furthermore, this soft value iteration has variants that allow discounted expected values and an explicit temperature that pushes the soft max towards a hard max, to control the stochasticity as desired:

![](https://1896916525-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MSZWjXi87wJwJVBghav%2F-MTIDMZJ9D1ZecVXD6ua%2F-MTIEZKOHAHhD_izn1B-%2Flec14_13.png?alt=media\&token=00e77f23-c6c4-43fc-9a98-1e7a923689e0)
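If I'm reading the slide correctly, the discounted, temperature-weighted variant looks roughly like

$$Q(s_t, a_t) = r(s_t, a_t) + \gamma\, \mathbb{E}_{s_{t+1}}\big[V(s_{t+1})\big], \qquad V(s_t) = \alpha \log \sum_{a_t} \exp\big(Q(s_t, a_t)/\alpha\big)$$

so as the temperature $$\alpha \to 0$$ the soft max tends to a hard max and the recovered policy becomes (nearly) deterministic.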

I'll stop the notes here for now, but the rest of this lecture covers modified RL algorithms with **soft optimality** added to the original RL objectives, and stochastic models for learning control.

All included screenshots credit to Lecture 14.

