Let’s imagine you’re trying to guess how badly your investments might behave on a bad day. You gather data from past performance, hoping history will be a good (or bad) teacher. But instead of just looking back once, you decide to ask history over and over — like a clingy ex who keeps showing up. That, my friends, is the spirit of Bootstrap Historical Simulation.
🥾 What is Bootstrap Historical Simulation?
Bootstrap historical simulation is like replaying old football games with random commentary just to see if you still lose by the same margin. You take your original data, pick a sample with replacement (yes, just like second helpings at a buffet), calculate VaR for that sample, then return the data and repeat…again and again.
🔁 Key Term:
- With replacement: Like drawing a card, noting it down, and putting it back in the deck before the next draw. The same Ace might appear 5 times!
The average of all these simulated VaRs gives a smoother, more robust VaR estimate — the kind you’d trust with your Wi-Fi password.
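Here’s a minimal sketch of the idea in Python, assuming `numpy` and some synthetic daily returns; the function names (`historical_var`, `bootstrap_var`) and the parameter choices are illustrative, not a standard implementation.

```python
import numpy as np

def historical_var(returns, confidence=0.95):
    """VaR = the loss at the (1 - confidence) quantile of the return distribution."""
    return -np.percentile(returns, 100 * (1 - confidence))

def bootstrap_var(returns, confidence=0.95, n_resamples=1000, seed=0):
    """Draw many samples with replacement, compute VaR for each, and average."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_resamples):
        sample = rng.choice(returns, size=len(returns), replace=True)  # second helpings allowed
        estimates.append(historical_var(sample, confidence))
    return np.mean(estimates)

# Example on synthetic daily returns
returns = np.random.default_rng(42).normal(0.0005, 0.01, 500)
print(f"Plain HS 95% VaR:  {historical_var(returns):.4%}")
print(f"Bootstrap 95% VaR: {bootstrap_var(returns):.4%}")
```

Each resample produces a slightly different VaR; averaging them is what smooths out the estimate.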
🧠 But why stop at VaR?
We can use the same logic to estimate Expected Shortfall (ES). Here, we slice the tail of the distribution into $n$ pieces — think of it as cutting a burnt pizza to study which part is most edible. For each slice, we calculate mini-VaRs and average them.
But wait… what makes Expected Shortfall so special?
Because VaR tells you the loss you don’t expect to exceed at a given confidence level, while ES tells you the average loss if that threshold is breached. It’s like VaR says: “You might lose \$10,000.” ES says: “Oh, and if you do, you’ll probably lose closer to \$15,000 on average.”
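To make the tail-slicing idea concrete, here is a hedged sketch that reuses the bootstrap loop from above: it computes mini-VaRs at confidence levels spaced through the tail and averages them into an ES estimate. The slicing scheme and the name `bootstrap_es` are my own illustrative choices.

```python
import numpy as np

def bootstrap_es(returns, confidence=0.95, n_slices=10, n_resamples=1000, seed=0):
    """Average 'mini-VaRs' taken at confidence levels sliced through the tail."""
    rng = np.random.default_rng(seed)
    tail = 1 - confidence                                     # e.g. 0.05
    # levels spaced inside the tail: 95.25%, 95.75%, ..., 99.75% for 10 slices
    levels = confidence + tail * (np.arange(n_slices) + 0.5) / n_slices
    estimates = []
    for _ in range(n_resamples):
        sample = rng.choice(returns, size=len(returns), replace=True)
        mini_vars = [-np.percentile(sample, 100 * (1 - c)) for c in levels]
        estimates.append(np.mean(mini_vars))                  # ES for this resample
    return np.mean(estimates)

returns = np.random.default_rng(42).normal(0.0005, 0.01, 500)
print(f"Bootstrap 95% ES: {bootstrap_es(returns):.4%}")
```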
This leads us to the next topic.
📈 Why Not Just Use Regular Historical Simulation?
Nonparametric Estimation to the Rescue!
Let’s say historical simulation is like counting people in a queue. It works fine if you’re counting whole people. But what if someone asks, “How many people were halfway through the door?” 🤔 That’s where the nonparametric density estimation comes in.
Historical simulation limits you to $n$ discrete confidence levels if you have $n$ data points. What if you want a VaR at 95.5% instead of 95% or 96%?
🧊 Enter Smoothing:
Nonparametric estimation smooths out the histogram by connecting the midpoints of adjacent bars — turning your Lego blocks into a slippery slope. So now even weirdly specific confidence levels (like 97.3%) are possible!
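One simple way to reach those in-between confidence levels is to interpolate linearly between the sorted losses, placing each ordered observation at the midpoint of its histogram bar. This is a rough sketch of that idea; the plotting-position convention `(k - 0.5) / n` is an assumption, and fancier kernel smoothers exist.

```python
import numpy as np

def smoothed_var(returns, confidence):
    """VaR at an arbitrary confidence level via linear interpolation
    between the sorted losses (a very simple nonparametric smoother)."""
    losses = np.sort(-returns)                    # losses, smallest to largest
    n = len(losses)
    # place the k-th ordered loss at probability (k - 0.5) / n,
    # i.e. the midpoint of its histogram bar
    probs = (np.arange(1, n + 1) - 0.5) / n
    return np.interp(confidence, probs, losses)

returns = np.random.default_rng(42).normal(0.0005, 0.01, 500)
for c in (0.95, 0.955, 0.973):
    print(f"VaR at {c:.1%}: {smoothed_var(returns, c):.4%}")
```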
But this raises a natural question: What if the past data isn’t equally important?
🧓 Let’s Talk About Weighted Historical Simulations
Equal weighting is like saying your first relationship is as relevant as the one you’re in now. Uh…no.
1. 📅 Age-Weighted Simulation
We give more weight to recent data using a decay factor $\lambda$. Here’s the magic formula: $w(i) = \frac{\lambda^{i-1}(1 - \lambda)}{1 - \lambda^n}$
Where:
- $i$: age of observation (1 = yesterday, 2 = the day before, …)
- $\lambda$: decay factor between $0$ and $1$
As $\lambda$ approaches $1$, the decay disappears and every observation tends toward the same weight of $1/n$. That’s just the traditional equal-weighted method in disguise.
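A quick sketch of how these weights plug into a VaR calculation, assuming the most recent return sits at index 0 and using a simple weighted-quantile convention (both are my assumptions, not a prescribed recipe):

```python
import numpy as np

def age_weighted_var(returns, confidence=0.95, lam=0.98):
    """Age-weighted VaR: recent observations carry more weight via decay factor lambda."""
    n = len(returns)
    age = np.arange(1, n + 1)                                  # 1 = most recent observation
    weights = lam ** (age - 1) * (1 - lam) / (1 - lam ** n)    # w(i) from the formula above
    losses = -returns
    order = np.argsort(losses)[::-1]                           # largest loss first
    cum_w = np.cumsum(weights[order])
    # VaR = the loss where the cumulative weight first reaches (1 - confidence)
    idx = np.searchsorted(cum_w, 1 - confidence)
    return losses[order][idx]

# assumption: returns[0] is the most recent day's return
returns = np.random.default_rng(42).normal(0.0005, 0.01, 500)
print(f"Age-weighted 95% VaR (lambda = 0.98): {age_weighted_var(returns):.4%}")
```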
But time isn’t the only issue. What if volatility changes?
💥 Volatility-Weighted Historical Simulation
Imagine wearing the same winter coat in June just because you wore it last December. Doesn’t make sense, right?
This method, proposed by Hull & White, adjusts for current volatility conditions. $r^*_{T,i} = \left( \frac{\sigma_{T,i}}{\sigma_{t,i}} \right) r_{t,i}$
Where:
- $r_{t,i}$ = historical return on asset $i$ for day $t$
- $\sigma_{t,i}$ = volatility forecast for asset $i$ on day $t$
- $\sigma_{T,i}$ = current (most recent) volatility forecast for asset $i$
So if today’s volatility is higher, past returns are scaled up — and vice versa.
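Here is a hedged sketch of that rescaling, using an EWMA volatility path as a simple stand-in for a GARCH-style forecast; the decay of 0.94, the seeding of the recursion, and the function names are illustrative choices.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA volatility path, oldest observation first (a stand-in for GARCH forecasts)."""
    var = np.empty(len(returns))
    var[0] = returns.var()                         # seed the recursion with the sample variance
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)

def volatility_weighted_var(returns, confidence=0.95):
    """Rescale each past return to today's volatility, then take plain HS VaR."""
    sigma = ewma_volatility(returns)
    sigma_today = sigma[-1]                        # current volatility forecast
    adjusted = returns * sigma_today / sigma       # r* = (sigma_T / sigma_t) * r_t
    return -np.percentile(adjusted, 100 * (1 - confidence))

returns = np.random.default_rng(42).normal(0.0005, 0.01, 500)
print(f"Volatility-weighted 95% VaR: {volatility_weighted_var(returns):.4%}")
```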
Now you’re probably wondering: What about correlation between assets?
🔗 Correlation-Weighted Historical Simulation
Just like friends influence each other’s fashion choices, assets don’t move in isolation. This method adjusts returns based on the updated correlation matrix: $\Sigma = \begin{pmatrix} \sigma^2_{i,i} & \text{Cov}(X_i, X_j) \\ \text{Cov}(X_j, X_i) & \sigma^2_{j,j} \end{pmatrix}$
We tweak the variance-covariance matrix so that both variances and covariances reflect current market relationships. In practice, you transform the old returns using the Cholesky factors of the old and new matrices to get correlation-adjusted returns — like putting old music through auto-tune to match today’s vibe.
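A sketch of that Cholesky trick for a toy two-asset case: standardize the historical returns, map each day’s return vector through $A_{\text{new}} A_{\text{old}}^{-1}$ (the Cholesky factors of the new and old correlation matrices), and rescale. The target correlation of 0.8 and the helper name are made up for illustration.

```python
import numpy as np

def correlation_weighted_returns(R, corr_new):
    """Transform historical returns so their correlation matrix matches corr_new.

    R: (days x assets) matrix of historical returns.
    corr_new: target correlation matrix reflecting current conditions.
    """
    mean, std = R.mean(axis=0), R.std(axis=0, ddof=1)
    Z = (R - mean) / std                           # standardized returns
    A_old = np.linalg.cholesky(np.corrcoef(R, rowvar=False))
    A_new = np.linalg.cholesky(corr_new)
    Z_adj = Z @ np.linalg.inv(A_old).T @ A_new.T   # z* = A_new @ A_old^-1 @ z, per day
    return Z_adj * std + mean                      # back to the original scale

rng = np.random.default_rng(42)
R = rng.normal(0.0, 0.01, size=(500, 2))           # two roughly uncorrelated assets
corr_new = np.array([[1.0, 0.8], [0.8, 1.0]])      # suppose correlation has jumped to 0.8
R_adj = correlation_weighted_returns(R, corr_new)
print(np.corrcoef(R_adj, rowvar=False).round(2))   # off-diagonal ~ 0.8
```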
Hmm…but how do we combine the best of all these approaches?
🧪 Filtered Historical Simulation
This one’s the Avengers of simulation methods.
- Starts from plain historical simulation
- Adds volatility modeling via GARCH
- Uses bootstrapping for flexibility
Returns are first standardized, then simulated using bootstrapped samples. The method adapts to current market mood — if it’s anxious, the model reacts. If calm, it chills too.
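Putting the pieces together, here is a minimal filtered-HS sketch for a 1-day VaR. It uses an EWMA filter in place of a fitted GARCH model to keep the example self-contained; a real implementation would estimate GARCH parameters from the data.

```python
import numpy as np

def filtered_hs_var(returns, confidence=0.95, lam=0.94, n_resamples=1000, seed=0):
    """Filtered historical simulation, 1-day VaR.

    1. Filter:      estimate a volatility path (EWMA here instead of a fitted GARCH).
    2. Standardize: z_t = r_t / sigma_t, giving roughly i.i.d. residuals.
    3. Bootstrap:   resample the residuals with replacement.
    4. Rescale:     multiply by tomorrow's volatility forecast and read off the quantile.
    """
    rng = np.random.default_rng(seed)
    var = np.empty(len(returns))
    var[0] = returns.var()                                     # seed the EWMA recursion
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    sigma = np.sqrt(var)
    z = returns / sigma                                        # standardized residuals
    sigma_next = np.sqrt(lam * var[-1] + (1 - lam) * returns[-1] ** 2)  # 1-day-ahead forecast
    simulated = sigma_next * rng.choice(z, size=n_resamples, replace=True)
    return -np.percentile(simulated, 100 * (1 - confidence))

returns = np.random.default_rng(42).normal(0.0005, 0.01, 500)
print(f"Filtered HS 95% VaR: {filtered_hs_var(returns):.4%}")
```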
All this sounds good. But are there any trade-offs?
⚖️ Pros and Cons of Nonparametric Methods
✅ Advantages:
- No rigid assumptions $($bye-bye normality$)$
- Easy to compute — Excel friendly!
- Flexible — mix-and-match: age + volatility + correlation
- Great for modeling fat tails and weird distributions
❌ Disadvantages:
- Overly reliant on history: if your past was quiet, your estimates will be too.
- Doesn’t handle Black Swan events unless they actually happened
- May miss regime shifts — like moving from bull to bear markets
- Needs lots of data — hard for new instruments
🧠 Wrap-Up
Imagine risk models as chefs. Parametric models are like chefs who follow recipes exactly — precise but sometimes clueless about real flavors. Nonparametric models? They’re like street vendors who adjust based on the crowd — flexible, fast, but may not have Michelin stars.
And the bootstrap historical simulation is that persistent chef who keeps retrying the same ingredients till he gets a consistent taste!
📌 In the world of risk, sometimes asking history again and again gives better answers than trusting it just once.