Why Biological Systems Suddenly Change State: An Intuitive Guide to Freidlin–Wentzell Theory

Stochasticity is ubiquitous in biology and neuroscience, manifesting in various forms: ion channel noise, synaptic variability, gene regulatory fluctuations, noisy population dynamics, and more. Many biological systems spend long periods in a stable “state” and only rarely transition to another state due to noise. For instance, a neuron typically remains inactive but may occasionally fire a spontaneous spike. Similarly, a gene can switch from the OFF state to the ON state due to rare bursts of transcription factors. Cells can also transition out of metabolic or epigenetic states, populations might shift between different ecological equilibria, and a viral infection can fluctuate between controlled and uncontrolled phases. Freidlin–Wentzell theory provides a mathematically rigorous framework for studying these phenomena when noise is small but nonzero. It tells you, first, how likely rare transitions are and, second, how fast they occur.

Sanov’s Theorem and Girsanov Transformations in Diffusion Processes

 

Within large deviation theory, it is valuable to explore how Sanov’s theorem and the Girsanov transformation interact in two fundamental diffusion models: Brownian motion and the Ornstein–Uhlenbeck process.

1. Introduction

Sanov’s theorem describes the exponential decay of probabilities associated with atypical empirical distributions of i.i.d. random variables. Girsanov’s theorem, on the other hand, provides a way to modify the drift of a stochastic process by changing the underlying probability measure. Together, they form a natural bridge between empirical deviations and pathwise deviations in diffusion processes.

Notation

For clarity, we summarize the variables and symbols used throughout this post:

  • \(X_t\): generic diffusion process.
  • \(B_t\): Brownian motion with volatility \(\sigma\).
  • \(\sigma\): diffusion coefficient (scalar or matrix).
  • \(W_t\): standard Brownian motion under the reference measure \(P\).
  • \(\tilde{W}_t\): Brownian motion under the tilted measure \(Q\) obtained via Girsanov.
  • \(\mu_t\): drift term of the diffusion process.
  • \(\theta_t\): adapted process used to define the Girsanov change of measure.
  • \(Z_t\): exponential martingale defining the Radon–Nikodym derivative \(dQ = Z_T\,dP\).
  • \(\hat{P}_n\): empirical measure of i.i.d. samples.
  • \(D_{\mathrm{KL}}(Q\|P)\): Kullback–Leibler divergence between distributions \(Q\) and \(P\).
  • Brownian bridge: process obtained by conditioning \(B_t\) to reach a fixed value at time \(T\).

Note on the induced norm

When computing KL divergences or action-like quantities, expressions of the form \(\|v\|_{\sigma^{-1}}^2\) appear naturally. This denotes the quadratic form

\[ \|v\|_{\sigma^{-1}}^2 = v^\top (\sigma \sigma^\top)^{-1} v, \]

which measures deviations in units of the noise covariance. It plays the same role as the inverse variance in Gaussian large deviations and is the natural metric induced by the diffusion coefficient.
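A small numerical sketch of this quadratic form (the matrix below is a hypothetical diffusion coefficient, chosen only for illustration):

```python
import numpy as np

# Induced norm ||v||_{sigma^{-1}}^2 = v^T (sigma sigma^T)^{-1} v,
# illustrated with a hypothetical 2x2 diffusion matrix.
sigma = np.array([[1.0, 0.0],
                  [0.5, 2.0]])
v = np.array([1.0, -1.0])

cov = sigma @ sigma.T                  # noise covariance sigma sigma^T
norm_sq = v @ np.linalg.solve(cov, v)  # v^T (sigma sigma^T)^{-1} v

# In the scalar case the norm reduces to v^2 / sigma^2, the familiar
# inverse-variance weighting of Gaussian large deviations:
scalar = np.array([3.0]) @ np.linalg.solve(np.array([[4.0]]), np.array([3.0]))
assert np.isclose(scalar, 9.0 / 4.0)
print(norm_sq)
```

Directions in which the noise is strong are penalized less, exactly as the inverse covariance does for a multivariate Gaussian.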

2. Sanov’s Theorem: A Brief Reminder

Let \(X_1,\dots,X_n\) be i.i.d. with law \(P\), and let \(\hat{P}_n\) be their empirical measure. Sanov’s theorem states that

\[ \mathbb{P}(\hat{P}_n \approx Q) \asymp e^{-n D_{\mathrm{KL}}(Q\|P)}, \]

where \(D_{\mathrm{KL}}(Q\|P)\) is the Kullback–Leibler divergence. This result quantifies the rarity of observing an empirical distribution \(Q\) far from the true distribution \(P\).
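The exponential decay can be checked numerically in the simplest setting, i.i.d. Bernoulli samples, where Sanov's theorem reduces to a statement about the empirical frequency. A minimal Monte Carlo sketch (all parameter values are illustrative, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo check of Sanov's rate for Bernoulli samples: the probability
# that the empirical frequency reaches an atypical level q decays like
# exp(-n * D_KL(q || p)).
p, q, n = 0.5, 0.7, 50
trials = 200_000

def kl_bernoulli(q, p):
    # D(Bernoulli(q) || Bernoulli(p)): Sanov's rate function for this event
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

samples = rng.random((trials, n)) < p   # i.i.d. Bernoulli(p) samples
freq = samples.mean(axis=1)             # empirical frequencies
prob = np.mean(freq >= q)               # P(empirical frequency >= q)

rate_mc = -np.log(prob) / n
rate_sanov = kl_bernoulli(q, p)
print(rate_mc, rate_sanov)
```

At moderate \(n\) the measured rate overshoots the Sanov rate because of the sub-exponential prefactor; the two agree to exponential order, and their ratio tends to 1 as \(n\) grows.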

3. Girsanov Transformation: Changing the Drift

Consider a diffusion process

\[ dX_t = \mu_t\,dt + \sigma_t\,dW_t, \]

where \(W_t\) is Brownian motion under a probability measure \(P\). If we introduce an adapted process \(\theta_t\), the exponential martingale

\[ Z_t = \exp\!\left( \int_0^t \theta_s\,dW_s - \frac{1}{2}\int_0^t \theta_s^2\,ds \right) \]

defines a new measure \(Q\) via \(dQ = Z_T\,dP\). Under \(Q\), the process has modified drift:

\[ dX_t = (\mu_t + \sigma_t \theta_t)\,dt + \sigma_t\,d\tilde{W}_t. \]

This mechanism is central for constructing tilted path distributions that realize rare events.
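To see the reweighting mechanics in action, here is a sketch of Girsanov-based importance sampling for a Gaussian tail event, with constant \(\theta\) and \(\sigma = 1\) (all constants are illustrative): simulate under the tilted measure \(Q\), then reweight each path by \(dP/dQ = Z_T^{-1}\) to recover a probability under \(P\).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Importance sampling for P(W_T > x): simulate Brownian motion with constant
# drift theta (the Q-dynamics) and reweight by the Radon-Nikodym derivative
# dP/dQ = exp(-theta * X_T + theta^2 * T / 2), valid for constant theta, sigma=1.
T, x = 1.0, 4.0
theta = x / T                      # constant tilt that pushes paths toward x
n_paths, n_steps = 20_000, 200
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X_T = theta * T + dW.sum(axis=1)   # terminal values under Q
weights = np.exp(-theta * X_T + 0.5 * theta**2 * T)
estimate = np.mean((X_T > x) * weights)

exact = 0.5 * (1.0 - erf(x / sqrt(2.0 * T)))  # Gaussian tail P(N(0,T) > x)
print(estimate, exact)
```

Under the tilt the rare event becomes typical, so a modest number of paths suffices for an event of probability about \(3\times 10^{-5}\) that naive simulation would barely ever hit.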

4. Application to Brownian Motion

Let \(B_t\) be Brownian motion with volatility \(\sigma\):

\[ dB_t = \sigma\,dW_t. \]

Suppose we want to estimate the probability of a rare event such as \(B_T > x\). A natural approach is to view \(B_T\) as a sum of i.i.d. increments and apply Sanov’s theorem, through the empirical mean of the increments,

\[ \frac{1}{T}\int_0^T dB_s = \frac{B_T}{T}. \]

The tilted distribution \(Q\) that enforces the mean increment \(x/T\) corresponds, under Girsanov, to Brownian motion with constant drift \(x/T\); conditioning on the endpoint turns this into a Brownian bridge from \(0\) to \(x\).

The KL divergence between the tilted measure (Brownian motion with drift \(x/T\)) and the original Brownian motion is

\[ D(Q\|P) = \frac{x^2}{2\sigma^2 T}. \]

Thus,

\[ \mathbb{P}(B_T > x) \approx e^{-x^2/(2\sigma^2 T)}, \]

which matches the exact Gaussian tail.
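A direct simulation confirms that the estimate captures the correct exponential order (a sketch with illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Check the large-deviation estimate P(B_T > x) ~ exp(-x^2 / (2 sigma^2 T))
# by direct simulation of Brownian endpoints B_T ~ N(0, sigma^2 T).
sigma, T, x = 1.0, 1.0, 3.0
n = 1_000_000

B_T = rng.normal(0.0, sigma * np.sqrt(T), size=n)
prob = np.mean(B_T > x)

log_rate_mc = -np.log(prob)
log_rate_ld = x**2 / (2 * sigma**2 * T)
print(log_rate_mc, log_rate_ld)
```

The two rates agree to exponential order; the remaining gap at fixed \(x\) comes from the polynomial prefactor of the Gaussian tail, which large deviation theory deliberately ignores.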

5. Application to the Ornstein–Uhlenbeck Process

The Ornstein–Uhlenbeck (OU) process satisfies

\[ dX_t = a(b - X_t)\,dt + \sigma\,dW_t. \]

Its mean-reverting structure makes it a natural candidate for studying rare deviations from equilibrium. By applying Girsanov, one can shift the drift to

\[ dX_t = \big(a(b - X_t) + c\big)\,dt + \sigma\,d\tilde{W}_t, \]

thereby constructing a tilted measure under which the process exhibits atypical long-term behavior.

Sanov-type results then quantify the cost of such deviations through the KL divergence between the path measures of the tilted and the original dynamics.
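A minimal Euler–Maruyama sketch of the original and tilted OU dynamics (all parameter values are illustrative); for a constant shift \(c\), Girsanov gives \(\theta_t = c/\sigma\), so the KL cost of the tilt over \([0,T]\) is \(\tfrac{1}{2}\int_0^T (c/\sigma)^2\,dt = c^2 T/(2\sigma^2)\):

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama simulation of the OU process and its tilted version with an
# extra constant drift c, driven by the same Brownian increments so that the
# effect of the tilt is isolated from the noise.
a, b, sigma, c = 1.0, 0.0, 0.5, 0.3
T, n_steps = 10.0, 10_000
dt = T / n_steps

x = x_tilt = 0.0
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
for i in range(n_steps):
    x += a * (b - x) * dt + sigma * dW[i]                  # original drift
    x_tilt += (a * (b - x_tilt) + c) * dt + sigma * dW[i]  # tilted drift

kl_cost = c**2 * T / (2 * sigma**2)  # Girsanov KL cost of the constant tilt
print(x, x_tilt, kl_cost)
```

With shared noise, the difference `x_tilt - x` relaxes deterministically to \(c/a\), the shift in the long-run mean induced by the tilt, while `kl_cost` is the price in relative entropy of maintaining that atypical behavior for time \(T\).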

6. Concluding Remarks

Sanov’s theorem and Girsanov transformations provide a unified framework for understanding rare events in diffusion processes. Sanov characterizes the rarity of empirical deviations, while Girsanov constructs the path measures that realize them. Together, they form the backbone of modern large-deviation analysis for stochastic differential equations.
