The Frank copula is an Archimedean copula that models both positive and negative dependence ($\theta > 0$ for positive, $\theta < 0$ for negative).
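For reference, its standard form (quoted from the general copula literature, not from this library's documentation) is:

$$C_{\theta}(u, v) = -\frac{1}{\theta} \ln\left[1 + \frac{(e^{-\theta u} - 1)(e^{-\theta v} - 1)}{e^{-\theta} - 1}\right], \qquad \theta \neq 0,$$

with the independence copula $C(u, v) = uv$ recovered in the limit $\theta \to 0$.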
\hat{Q}^{*}_b = \hat{F}^{-1}(p \mid \hat{\theta}^{*}_b), \quad b = 1, 2, \ldots, B
**Step 4.** The empirical distribution of $\lbrace\hat{Q}^{*}_1, \hat{Q}^{*}_2, \ldots, \hat{Q}^{*}_B\rbrace$ approximates the sampling distribution of the quantile estimator $\hat{Q}$.
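The steps above can be sketched end to end. This is a minimal illustration assuming an exponential model for $F(\cdot \mid \theta)$ (a hypothetical choice for concreteness, not the library's implementation):

```python
import math
import random
import statistics

def exp_quantile(p, rate):
    # Inverse CDF of the exponential model: F^{-1}(p | rate) = -ln(1 - p) / rate
    return -math.log(1.0 - p) / rate

def parametric_bootstrap_quantiles(data, p, B=500, rng=None):
    """Parametric bootstrap replicates of a quantile estimator, assuming
    an exponential F(. | theta). Returns [Q*_1, ..., Q*_B]."""
    rng = rng or random.Random(42)
    n = len(data)
    rate_hat = 1.0 / statistics.fmean(data)  # MLE of the exponential rate
    replicates = []
    for _ in range(B):
        # Simulate a fresh sample of size n from F(. | theta_hat) ...
        sample = [rng.expovariate(rate_hat) for _ in range(n)]
        # ... re-estimate theta, then evaluate the quantile at the refit
        rate_b = 1.0 / statistics.fmean(sample)
        replicates.append(exp_quantile(p, rate_b))
    return replicates

rng = random.Random(1)
data = [rng.expovariate(0.5) for _ in range(200)]  # synthetic observed data
reps = parametric_bootstrap_quantiles(data, p=0.9)
```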
From this bootstrap distribution, we can compute several useful summaries. The **bootstrap standard error** is the sample standard deviation of the bootstrap replicates:
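For instance, with made-up replicate values:

```python
import statistics

# Hypothetical bootstrap replicates of a quantile estimator
boot_reps = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9, 4.2, 4.4]

# Bootstrap standard error = sample standard deviation of the replicates
se_boot = statistics.stdev(boot_reps)
```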
The Percentile method [[1]](#1) is the simplest bootstrap confidence interval.
where $\hat{Q}^{*}_{(p)}$ denotes the $p$-th percentile of the bootstrap distribution $\lbrace\hat{Q}^{*}_1, \ldots, \hat{Q}^{*}_B\rbrace$. For a 90% confidence interval ($\alpha = 0.1$), this takes the 5th and 95th percentiles of the bootstrap replicates.
The Percentile method is intuitive and easy to implement, but it does **not** correct for bias or skewness in the bootstrap distribution. It works well when the bootstrap distribution is approximately symmetric and the estimator is approximately unbiased. This is the default method used by the `Estimate()` method.
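A sketch of the Percentile method on simulated replicates (a generic illustration, not the internals of `Estimate()`):

```python
import random

def percentile_ci(replicates, alpha=0.10):
    """Percentile interval: the alpha/2 and 1 - alpha/2 empirical
    percentiles of the bootstrap replicates."""
    xs = sorted(replicates)
    B = len(xs)
    lo = xs[int((alpha / 2) * (B - 1))]
    hi = xs[int((1 - alpha / 2) * (B - 1))]
    return lo, hi

# Simulated bootstrap replicates of an estimator centered near 10
rng = random.Random(7)
reps = [rng.gauss(10.0, 1.0) for _ in range(2000)]
lo, hi = percentile_ci(reps, alpha=0.10)  # 5th and 95th percentiles
```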
where $\tilde{Q} = \hat{Q}^{1/3}$ is the transformed original estimate.
where $t^{*}_{(p)}$ is the $p$-th percentile of $\lbrace t^{*}_1, \ldots, t^{*}_B\rbrace$, and $\widetilde{SE}$ is the standard deviation of the transformed bootstrap replicates $\lbrace\tilde{Q}^{*}_1, \ldots, \tilde{Q}^{*}_B\rbrace$.
The Bootstrap-t method is the most computationally expensive method because it requires a **double bootstrap**: each of the $B$ outer replications requires an inner bootstrap (300 replications by default) to estimate the standard error. However, it can provide the most accurate coverage probabilities for location parameters and is second-order accurate.
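The double bootstrap can be sketched as follows, here for a sample mean to keep it short (the quantile case and the cube-root transformation work the same way; all names are illustrative, not the library's API):

```python
import random
import statistics

def bootstrap_t_interval(data, alpha=0.10, B=200, inner_B=50, rng=None):
    """Sketch of a bootstrap-t interval for the sample mean. Each outer
    replicate runs an inner bootstrap to studentize the statistic; this
    is the double bootstrap (B * inner_B resamples in total)."""
    rng = rng or random.Random(0)
    n = len(data)
    theta_hat = statistics.fmean(data)

    def resample(xs):
        return [xs[rng.randrange(n)] for _ in range(n)]

    def boot_se(xs):
        # Inner bootstrap: standard error of the mean of xs
        means = [statistics.fmean(resample(xs)) for _ in range(inner_B)]
        return statistics.stdev(means)

    se_hat = boot_se(data)
    t_stats = sorted(
        (statistics.fmean(b) - theta_hat) / boot_se(b)
        for b in (resample(data) for _ in range(B))
    )
    t_lo = t_stats[int((alpha / 2) * (B - 1))]
    t_hi = t_stats[int((1 - alpha / 2) * (B - 1))]
    # Note the reversed percentiles, characteristic of the bootstrap-t
    return theta_hat - t_hi * se_hat, theta_hat - t_lo * se_hat

rng = random.Random(3)
data = [rng.gauss(5.0, 2.0) for _ in range(100)]
lo, hi = bootstrap_t_interval(data)
```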
docs/sampling/convergence-diagnostics.md
### Mathematical Derivation
The ESS formula arises from analyzing the variance of the sample mean of a correlated sequence. For a stationary process $\lbrace\theta_1, \theta_2, \ldots, \theta_N\rbrace$ with marginal variance $\sigma^2$ and autocorrelation function $\rho_k = \text{Corr}(\theta_t, \theta_{t+k})$, the variance of the sample mean $\bar{\theta} = \frac{1}{N}\sum_{t=1}^{N}\theta_t$ is:
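Expanding that variance gives the standard result $$\operatorname{Var}(\bar{\theta}) = \frac{\sigma^2}{N}\left[1 + 2\sum_{k=1}^{N-1}\left(1 - \frac{k}{N}\right)\rho_k\right],$$ which for large $N$ motivates $\mathrm{ESS} \approx N / \left(1 + 2\sum_{k \ge 1} \rho_k\right)$. A minimal sketch of the estimate, truncating the sum at the first non-positive empirical autocorrelation (one common heuristic, not necessarily this library's truncation rule):

```python
import random

def autocorr(xs, k):
    """Empirical lag-k autocorrelation rho_k."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[t] - mean) * (xs[t + k] - mean) for t in range(n - k)) / n
    return cov / var

def effective_sample_size(xs):
    """ESS = N / (1 + 2 * sum of rho_k), truncating the sum at the first
    non-positive empirical autocorrelation."""
    n = len(xs)
    s = 0.0
    for k in range(1, n // 2):
        rho = autocorr(xs, k)
        if rho <= 0.0:
            break
        s += rho
    return n / (1.0 + 2.0 * s)

# AR(1) chain with phi = 0.8; theory gives ESS ~ N * (1 - phi) / (1 + phi)
rng = random.Random(11)
phi, n = 0.8, 5000
xs = [0.0]
for _ in range(n - 1):
    xs.append(phi * xs[-1] + rng.gauss(0.0, 1.0))
ess = effective_sample_size(xs)
```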
docs/sampling/mcmc.md

$$z^{*} = z_i + \gamma \left(z_{R_1} - z_{R_2}\right) + e$$

where:
- $\gamma = 2.38 / \sqrt{2d}$ is the default jump rate (`Jump` property), with $d$ the number of parameters
- $z_{R_1}$ and $z_{R_2}$ are two randomly selected states from the **population matrix** (a memory of past states from all chains)
- $e \sim \mathcal{N}(0, b^2)$ is a small noise perturbation with default $b = 10^{-3}$ (`Noise` property)
- $R_1$ and $R_2$ are drawn uniformly without replacement from $\lbrace 1, 2, \ldots, M\rbrace$, where $M$ is the current size of the population matrix
The proposal is accepted using the standard Metropolis ratio in log space:
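In standard Metropolis form the log acceptance probability is $\log \alpha = \min\lbrace 0,\; \log \pi(z^{*}) - \log \pi(z_i)\rbrace$. A generic sketch of one differential-evolution proposal-and-accept step (illustrative only, not the library's implementation):

```python
import math
import random

def de_step(z, population, log_pi, rng, noise=1e-3):
    """One differential-evolution Metropolis step for chain state z.
    population is the memory of past states; log_pi is the log target."""
    d = len(z)
    gamma = 2.38 / math.sqrt(2 * d)                 # default jump rate
    r1, r2 = rng.sample(range(len(population)), 2)  # without replacement
    z_star = [z[j] + gamma * (population[r1][j] - population[r2][j])
              + rng.gauss(0.0, noise) for j in range(d)]
    # Accept or reject with the Metropolis ratio in log space
    log_alpha = min(0.0, log_pi(z_star) - log_pi(z))
    return z_star if math.log(rng.random()) <= log_alpha else z

# Example: 2-dimensional standard-normal target
log_pi = lambda z: -0.5 * sum(v * v for v in z)
rng = random.Random(5)
population = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(20)]
z = [0.0, 0.0]
for _ in range(100):
    z = de_step(z, population, log_pi, rng)
```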
$$X_{ij} = \frac{\pi_j(i) + U_{ij}}{n}$$
where $\pi_j$ is a random permutation of $\lbrace 0, 1, \ldots, n-1\rbrace$ (independent for each dimension) and $U_{ij} \sim \text{Uniform}(0, 1)$. The library's `Median` variant replaces $U_{ij}$ with $0.5$, placing each point at the stratum center.
The random permutations are generated using the **Fisher-Yates shuffle**, and each dimension uses an independent Mersenne Twister seeded from a master RNG.
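Putting the pieces together, a minimal sketch (Python's `random.Random` is itself a Mersenne Twister; all names are illustrative):

```python
import random

def latin_hypercube(n, d, median=False, seed=1):
    """Latin hypercube sample of n points in d dimensions: each dimension
    gets an independent random permutation of the n strata, and each point
    sits at a uniform (or, for the Median variant, central) position
    within its stratum."""
    master = random.Random(seed)
    # One independent generator per dimension, seeded from a master RNG
    rngs = [random.Random(master.randrange(2**32)) for _ in range(d)]
    cols = []
    for j in range(d):
        perm = list(range(n))
        rngs[j].shuffle(perm)  # Fisher-Yates shuffle
        u = [0.5 if median else rngs[j].random() for _ in range(n)]
        # X_ij = (pi_j(i) + U_ij) / n
        cols.append([(perm[i] + u[i]) / n for i in range(n)])
    # Transpose so that point i carries one coordinate per dimension
    return [[cols[j][i] for j in range(d)] for i in range(n)]

pts = latin_hypercube(n=8, d=2, median=True)
```

Each dimension's coordinates land in distinct strata, which is the defining property of a Latin hypercube design.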