Markovian Lifting


Represent rough fractional dynamics as a finite Markov lift: mix exponential (OU) modes chosen by quadrature in the Laplace domain so a single correlated Gaussian Markov chain reproduces—approximately—the singular kernels used in rough volatility and \(H < \tfrac{1}{2}\) fractional models.

Simulating via Markovian Lifting

Discrete-time Recipe for Markovian Lifting

Intuition

Markovian lifting approximates fractional and rough-volatility dynamics by replacing a singular, non-Markov kernel with a Laplace mixture of exponentials. Informally, one writes a memory kernel \(G(\tau)\) as a Laplace transform of a measure on mean-reversion rates \(\lambda\): $$ G(\tau) \approx \int_0^\infty e^{-\lambda \tau}\,\nu(d\lambda), \qquad \tau \ge 0. $$ Each exponential corresponds to an Ornstein–Uhlenbeck (OU) factor. Stacking \(M\) OU states driven by a single Brownian motion \(W\) gives a Gaussian Markov process \(Z_t = (X^{(1)}_t,\ldots,X^{(M)}_t)^\top\), and the linear aggregate $$ Y_t = \sum_{k=1}^M w_k\, X^{(k)}_t $$ mimics rough fractional covariance when the pairs \((\lambda_k, w_k)\) are chosen by quadrature in the Laplace domain. This is analogous in spirit to the Volterra recipe, in which a causal kernel weights Gaussian noise; here, however, the kernel is represented by a finite exponential mixture, and the lifted state vector makes the dynamics Markovian for filtering and simulation.
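As a quick numerical sanity check of this Laplace-mixture idea, the sketch below approximates the fractional kernel \(G(\tau) = \tau^{H-1/2}/\Gamma(H+\tfrac{1}{2})\) as an illustrative target; its exact mixing density is \(\nu(d\lambda) = \lambda^{-(H+1/2)}\,d\lambda / \big(\Gamma(H+\tfrac{1}{2})\Gamma(\tfrac{1}{2}-H)\big)\), discretized here on log-spaced nodes (the node range and count are arbitrary choices):

```python
import numpy as np
from math import gamma

H = 0.1                                   # rough regime, H < 1/2
tau = np.linspace(0.05, 1.0, 50)          # avoid the tau = 0 singularity

# Target singular kernel: G(tau) = tau^{H-1/2} / Gamma(H+1/2)
G = tau ** (H - 0.5) / gamma(H + 0.5)

# Discretize the exact mixing density on log-spaced nodes with
# trapezoid-style spacings dlam as quadrature increments.
lam = np.logspace(-4, 4, 60)
dlam = np.gradient(lam)
w = lam ** (-(H + 0.5)) * dlam / (gamma(H + 0.5) * gamma(0.5 - H))

# Finite exponential mixture evaluated on the tau grid
G_approx = np.exp(-np.outer(tau, lam)) @ w
rel_err = np.max(np.abs(G_approx / G - 1.0))   # small relative error
```

Widening the node range or adding nodes tightens the match, mirroring the role of \(M\) in the lift.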

In continuous time, with rates \(\lambda_k > 0\) and weights \(w_k\), $$ dX^{(k)}_t = -\lambda_k X^{(k)}_t\,dt + w_k\,dW_t, \qquad X^{(k)}_0 = 0. $$ The observable \(Y_t\) is the rough-vol-style factor used in lifted stochastic-volatility models when coupled with price.
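Because the SDE is linear, each factor admits an exact Gaussian transition over any step \(\Delta t\), which is what the discretization exploits; a minimal single-factor sketch (parameter values are illustrative):

```python
import numpy as np

def ou_exact_step(x, lam, w, dt, rng):
    """One exact transition of dX = -lam X dt + w dW:
    X' = e^{-lam dt} X + eta, eta ~ N(0, w^2 (1 - e^{-2 lam dt}) / (2 lam))."""
    decay = np.exp(-lam * dt)
    std = w * np.sqrt((1.0 - np.exp(-2.0 * lam * dt)) / (2.0 * lam))
    return decay * x + std * rng.standard_normal(np.shape(x))

rng = np.random.default_rng(0)
x = np.zeros(10000)                  # many independent copies of one factor
for _ in range(500):
    x = ou_exact_step(x, lam=5.0, w=1.0, dt=0.01, rng=rng)
# after T = 5 >> 1/lam, Var(X) approaches the stationary value w^2/(2 lam) = 0.1
```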

Discretization

Fix a time grid \( \{t_i\}_{i=0}^n \) with uniform step \(\Delta t\). Over each interval, the exact OU transition driven by the same \(W\) implies correlated Gaussian innovations \(\eta^{(k)}_{i}\) for each factor. Conditionally on the past, one updates $$ X^{(k)}_{i+1} = e^{-\lambda_k \Delta t}\, X^{(k)}_i + \eta^{(k)}_{i+1}, $$ where the innovation vector \(\eta\) is Gaussian with mean zero and $$ \mathrm{Cov}\big(\eta^{(j)},\eta^{(k)}\big) = w_j w_k\,\frac{1 - e^{-(\lambda_j+\lambda_k)\Delta t}}{\lambda_j+\lambda_k}. $$ In practice one forms the \(M \times M\) covariance matrix of \(\eta\), computes its Cholesky factor \(L\), draws \(z \sim \mathcal{N}(0,I_M)\), sets \(\eta = L z\), then updates each \(X^{(k)}\) and sets \(Y_{i} = \sum_k w_k X^{(k)}_{i}\). For \(H \in (0,\tfrac{1}{2})\), nodes \(\lambda_k\) are typically log-spaced and weights scale like \(w_k \propto \lambda_k^{-(H+1/2)}\) (normalized) as a heuristic Laplace-domain quadrature; larger \(M\) tightens the match to a target fractional covariance, and in applications \(w_k\) may be calibrated to market smiles.
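The closed-form entries come from \(\mathrm{Cov}(\eta^{(j)},\eta^{(k)}) = w_j w_k \int_0^{\Delta t} e^{-(\lambda_j+\lambda_k)s}\,ds\); a small check that the formula matches brute-force quadrature and that the matrix admits a Cholesky factor (node and weight values here are arbitrary):

```python
import numpy as np

def innovation_cov(lam, w, dt):
    """Exact one-step innovation covariance for OU factors sharing one W."""
    s = lam[:, None] + lam[None, :]
    return np.outer(w, w) * (1.0 - np.exp(-s * dt)) / s

lam = np.array([1.0, 10.0, 100.0])
w = np.array([0.5, 0.3, 0.2])
dt = 0.01
S = innovation_cov(lam, w, dt)

# brute-force midpoint quadrature of w_0 w_1 * int_0^dt e^{-(lam_0+lam_1) s} ds
n = 100000
mid = (np.arange(n) + 0.5) * dt / n
num = w[0] * w[1] * np.sum(np.exp(-(lam[0] + lam[1]) * mid)) * dt / n

L = np.linalg.cholesky(S)   # S is symmetric positive definite
```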

General Recipe

  1. Choose Hurst parameter \(H \in (0,1)\) (rough regimes use \(H < \tfrac{1}{2}\)) and the number of OU factors \(M\).
  2. Define the time grid \( \{t_i\}_{i=0}^n \) with uniform spacing \(\Delta t\).
  3. Select mean-reversion rates \(\lambda_1,\ldots,\lambda_M\) (e.g. log-spaced on \([2/T, 400/T]\)) and weights \(w_k \propto \lambda_k^{-(H+1/2)}\) with \(\sum_k w_k = 1\).
  4. Build the innovation covariance \(\mathrm{Cov}(\eta^{(j)},\eta^{(k)})\) for step \(\Delta t\) and decompose it with Cholesky: \(\Sigma = L L^\top\).
  5. For each path and each time step: draw \(z \sim \mathcal{N}(0,I_M)\), \(\eta = L z\), update \(X^{(k)} \leftarrow e^{-\lambda_k \Delta t} X^{(k)} + \eta^{(k)}\), then \(Y \leftarrow \sum_k w_k X^{(k)}\).

Python Implementation

import numpy as np

def laplace_ou_weights(M, H, T=1.0):
    """Log-spaced rates lambda_k and weights w_k propto lambda_k^{-(H+1/2)}."""
    lam = np.logspace(np.log10(2.0 / T), np.log10(400.0 / T), M)
    nu = H + 0.5
    w = lam ** (-nu)
    w /= w.sum()
    return lam, w

def ou_innovation_cov(lam, w, dt):
    """Exact covariance of the shared-W innovations over one step of size dt."""
    M = len(lam)
    S = np.zeros((M, M))
    for j in range(M):
        for k in range(M):
            s = lam[j] + lam[k]
            if s > 1e-14:
                S[j, k] = w[j] * w[k] * (1.0 - np.exp(-s * dt)) / s
            else:
                S[j, k] = w[j] * w[k] * dt  # limit of the integral as s -> 0
    return S

def markov_lift_ou_paths(steps, paths, H, M, T=1.0, rng=None):
    """Simulate Y_t = sum_k w_k X_t^{(k)} on a uniform grid of `steps` points."""
    if rng is None:
        rng = np.random.default_rng()
    dt = T / (steps - 1)
    lam, w = laplace_ou_weights(M, H, T)
    # small jitter keeps the Cholesky factorization numerically stable
    L = np.linalg.cholesky(ou_innovation_cov(lam, w, dt) + 1e-12 * np.eye(M))
    a = np.exp(-lam * dt)  # per-factor decay over one step
    X = np.zeros(M)
    Y = np.zeros((paths, steps))
    for p in range(paths):
        X[:] = 0.0  # restart the lifted state for each path
        for n in range(steps - 1):
            eta = L @ rng.standard_normal(M)  # correlated innovations
            X = a * X + eta
            Y[p, n + 1] = np.dot(w, X)
    return Y, lam, w

# When comparing to fBM, scale each time column so Var(Y(t_i)) matches t_i^{2H}
# (the raw lift uses sum_k w_k = 1 and has much smaller variance than fBM).
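Putting the recipe together, here is a self-contained usage sketch that re-implements the update vectorized across paths (same nodes, weights, and innovation covariance as the functions above) and applies the per-column rescaling described in the comment; the seed and sizes are arbitrary:

```python
import numpy as np

H, M, T, steps, paths = 0.1, 12, 1.0, 101, 4000
dt = T / (steps - 1)
rng = np.random.default_rng(42)

# nodes and weights as in laplace_ou_weights
lam = np.logspace(np.log10(2.0 / T), np.log10(400.0 / T), M)
w = lam ** (-(H + 0.5))
w /= w.sum()

# innovation covariance and its Cholesky factor as in ou_innovation_cov
s = lam[:, None] + lam[None, :]
S = np.outer(w, w) * (1.0 - np.exp(-s * dt)) / s
Lc = np.linalg.cholesky(S + 1e-12 * np.eye(M))
a = np.exp(-lam * dt)

X = np.zeros((paths, M))            # all paths advanced at once
Y = np.zeros((paths, steps))
for n in range(steps - 1):
    X = X * a + rng.standard_normal((paths, M)) @ Lc.T
    Y[:, n + 1] = X @ w

# rescale each time column so Var matches the fBM target t_i^{2H}
t = np.linspace(0.0, T, steps)
emp = Y.var(axis=0)
scale = np.where(emp > 0, np.sqrt(t ** (2 * H) / np.maximum(emp, 1e-300)), 0.0)
Ys = Y * scale
```

Vectorizing across paths replaces the per-path Python loop with matrix operations, which is the practical way to run many Monte Carlo paths at once.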

Summary: This approach yields a Gaussian Markov chain in the lifted state \(Z_n\) whose linear functional \(Y_n\) approximates fractional / rough covariance when \((\lambda_k,w_k)\) are chosen in the Laplace domain, enabling efficient simulation and filtering for rough-volatility-style models without full-path fBM Cholesky at every step.

[Figure: simulation with the Markov lift, showing sample paths, empirical vs. theoretical covariance, and the covariance error.]

Relevant Articles

Recipes for simulating stochastic processes

This note discusses the necessary steps to simulate a stochastic process with a desired covariance structure. In this context, I outline the Karhunen–Loève theorem and provide intuition and general recipes to decompose a stochastic process by establishing the integral eigenvalue problem (continuous-time) or, equivalently, the covariance matrix diagonalization problem (discrete-time). I then apply these general recipes to Brownian motion, Brownian bridge, and fractional Brownian motion to numerically simulate paths.

View on SSRN
Suggested Citation:
Paolucci, Roman, Recipes for simulating stochastic processes (June 30, 2025). Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5332011