
Commit 4f91155

committed
Fixing broken math symbols, updating JOSS paper.
1 parent d8b5058 commit 4f91155

10 files changed

Lines changed: 205 additions & 363 deletions


CITATION.cff

Lines changed: 16 additions & 1 deletion
@@ -24,7 +24,22 @@ keywords:
 - C#
 authors:
 - family-names: "Smith"
-  given-names: "Haden"
+  given-names: "C. Haden"
   email: "cole.h.smith@usace.army.mil"
   affiliation: "U.S. Army Corps of Engineers, Risk Management Center"
   orcid: "https://orcid.org/0000-0001-7881-5814"
+- family-names: "Fields"
+  given-names: "Woodrow L."
+  affiliation: "U.S. Army Corps of Engineers, Risk Management Center"
+- family-names: "Gonzalez"
+  given-names: "Julian"
+  affiliation: "U.S. Army Corps of Engineers, Risk Management Center"
+- family-names: "Niblett"
+  given-names: "Sadie"
+  affiliation: "U.S. Army Corps of Engineers, Risk Management Center"
+- family-names: "Beam"
+  given-names: "Brennan"
+  affiliation: "U.S. Army Corps of Engineers, Risk Management Center"
+- family-names: "Skahill"
+  given-names: "Brian"
+  affiliation: "U.S. Army Corps of Engineers, Risk Management Center"

docs/distributions/copulas.md

Lines changed: 2 additions & 2 deletions
@@ -145,7 +145,7 @@ The Frank copula has **no tail dependence** and produces a symmetric dependence
 |----------|-------|
 | Generator | $\varphi(t) = -\ln\!\left(\frac{e^{-\theta t} - 1}{e^{-\theta} - 1}\right)$ |
 | CDF | $C(u,v) = -\frac{1}{\theta}\ln\!\left(1 + \frac{(e^{-\theta u}-1)(e^{-\theta v}-1)}{e^{-\theta}-1}\right)$ |
-| Parameter range | $\theta \in (-\infty, \infty) \setminus \{0\}$ |
+| Parameter range | $\theta \in (-\infty, \infty) \setminus \lbrace 0\rbrace$ |
 | Tail dependence | $\lambda_L = \lambda_U = 0$ |

 The Frank copula is the only Archimedean copula that allows both positive and negative dependence ($\theta > 0$ for positive, $\theta < 0$ for negative).
@@ -195,7 +195,7 @@ var amhCopula = new AMHCopula(0.5);
 | Student-t | Symmetric | $\rho \in [-1, 1]$, $\nu > 2$ | Heavy-tailed joint extremes |
 | Clayton | Lower tail | $\theta \in (0, \infty)$ | Joint low extremes (droughts) |
 | Gumbel | Upper tail | $\theta \in [1, \infty)$ | Joint high extremes (floods) |
-| Frank | None | $\theta \in \mathbb{R} \setminus \{0\}$ | Moderate symmetric dependence |
+| Frank | None | $\theta \in \mathbb{R} \setminus \lbrace 0\rbrace$ | Moderate symmetric dependence |
 | Joe | Upper tail | $\theta \in [1, \infty)$ | Strong upper tail dependence |
 | AMH | None | $\theta \in [-1, 1]$ | Weak dependence structures |
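The Frank copula CDF shown in this hunk is easy to sanity-check numerically. The library itself is C#; the sketch below is an independent Python illustration (the function name `frank_cdf` is mine, not the library's) that verifies the copula boundary conditions $C(u, 1) = u$ and $C(u, 0) = 0$.

```python
import math

def frank_cdf(u, v, theta):
    """Frank copula CDF: C(u,v) = -(1/theta) * ln(1 + (e^{-theta u}-1)(e^{-theta v}-1)/(e^{-theta}-1))."""
    if theta == 0:
        raise ValueError("theta = 0 is excluded from the parameter range")
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    den = math.exp(-theta) - 1.0
    return -math.log(1.0 + num / den) / theta

# Boundary conditions every copula must satisfy:
print(round(frank_cdf(0.3, 1.0, 5.0), 6))   # -> 0.3  (C(u, 1) = u)
print(abs(frank_cdf(0.3, 0.0, 5.0)))        # -> 0.0  (C(u, 0) = 0)
```

Negative `theta` (negative dependence) works in the same closed form, which is the property the surrounding text highlights.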

docs/distributions/multivariate.md

Lines changed: 1 addition & 1 deletion
@@ -653,7 +653,7 @@ A random vector $\mathbf{X} = (X_1, X_2, \ldots, X_k)$ follows a multinomial dis
 P(X_1 = x_1, \ldots, X_k = x_k) = \frac{n!}{\prod_{i=1}^{k} x_i!} \prod_{i=1}^{k} p_i^{x_i}
 ```

-with constraints $x_i \in \{0, 1, \ldots, n\}$, $\sum_{i=1}^{k} x_i = n$, $p_i \geq 0$, and $\sum_{i=1}^{k} p_i = 1$.
+with constraints $x_i \in \lbrace 0, 1, \ldots, n\rbrace$, $\sum_{i=1}^{k} x_i = n$, $p_i \geq 0$, and $\sum_{i=1}^{k} p_i = 1$.

 The multinomial coefficient $n! / \prod x_i!$ counts the number of ways to arrange $n$ trials into $k$ categories with the specified counts.
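The multinomial PMF in this hunk can be checked with a small hand computation. A minimal Python sketch (independent of the C# library; `multinomial_pmf` is my name, not its API): for $n = 4$, counts $(2, 1, 1)$, and $p = (0.5, 0.3, 0.2)$, the coefficient is $4!/(2!\,1!\,1!) = 12$ and the product of probabilities is $0.015$, giving $0.18$.

```python
import math

def multinomial_pmf(counts, probs):
    """P(X_1=x_1,...,X_k=x_k) = n!/prod(x_i!) * prod(p_i^x_i)."""
    n = sum(counts)
    coef = math.factorial(n)
    for x in counts:
        coef //= math.factorial(x)   # multinomial coefficient n!/prod(x_i!)
    p = 1.0
    for x, pi in zip(counts, probs):
        p *= pi ** x
    return coef * p

print(round(multinomial_pmf([2, 1, 1], [0.5, 0.3, 0.2]), 10))  # -> 0.18
```

With $k = 2$ the formula reduces to the binomial PMF, e.g. `multinomial_pmf([3, 1], [0.5, 0.5])` equals $\binom{4}{3} \cdot 0.5^4 = 0.25$.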

docs/distributions/uncertainty-analysis.md

Lines changed: 3 additions & 3 deletions
@@ -28,7 +28,7 @@ x^{*}_{b,1}, x^{*}_{b,2}, \ldots, x^{*}_{b,n} \sim \hat{F}(\hat{\theta}), \quad
 \hat{Q}^{*}_b = \hat{F}^{-1}(p \mid \hat{\theta}^{*}_b), \quad b = 1, 2, \ldots, B
 ```

-**Step 4.** The empirical distribution of $\{\hat{Q}^{*}_1, \hat{Q}^{*}_2, \ldots, \hat{Q}^{*}_B\}$ approximates the sampling distribution of the quantile estimator $\hat{Q}$.
+**Step 4.** The empirical distribution of $\lbrace\hat{Q}^{*}_1, \hat{Q}^{*}_2, \ldots, \hat{Q}^{*}_B\rbrace$ approximates the sampling distribution of the quantile estimator $\hat{Q}$.

 From this bootstrap distribution, we can compute several useful summaries. The **bootstrap standard error** is the sample standard deviation of the bootstrap replicates:

@@ -258,7 +258,7 @@ The Percentile method [[1]](#1) is the simplest bootstrap confidence interval. I
 CI_{1-\alpha} = \left[\hat{Q}^{*}_{(\alpha/2)},\;\hat{Q}^{*}_{(1-\alpha/2)}\right]
 ```

-where $\hat{Q}^{*}_{(p)}$ denotes the $p$-th percentile of the bootstrap distribution $\{\hat{Q}^{*}_1, \ldots, \hat{Q}^{*}_B\}$. For a 90% confidence interval ($\alpha = 0.1$), this takes the 5th and 95th percentiles of the bootstrap replicates.
+where $\hat{Q}^{*}_{(p)}$ denotes the $p$-th percentile of the bootstrap distribution $\lbrace\hat{Q}^{*}_1, \ldots, \hat{Q}^{*}_B\rbrace$. For a 90% confidence interval ($\alpha = 0.1$), this takes the 5th and 95th percentiles of the bootstrap replicates.

 The Percentile method is intuitive and easy to implement, but it does **not** correct for bias or skewness in the bootstrap distribution. It works well when the bootstrap distribution is approximately symmetric and the estimator is approximately unbiased. This is the default method used by the `Estimate()` method.

@@ -405,7 +405,7 @@ where $\tilde{Q} = \hat{Q}^{1/3}$ is the transformed original estimate. The conf
 CI_{1-\alpha} = \left[\left(\tilde{Q} + t^{*}_{(\alpha/2)}\cdot\widetilde{SE}\right)^3,\;\left(\tilde{Q} + t^{*}_{(1-\alpha/2)}\cdot\widetilde{SE}\right)^3\right]
 ```

-where $t^{*}_{(p)}$ is the $p$-th percentile of $\{t^{*}_1, \ldots, t^{*}_B\}$, and $\widetilde{SE}$ is the standard deviation of the transformed bootstrap replicates $\{\tilde{Q}^{*}_1, \ldots, \tilde{Q}^{*}_B\}$.
+where $t^{*}_{(p)}$ is the $p$-th percentile of $\lbrace t^{*}_1, \ldots, t^{*}_B\rbrace$, and $\widetilde{SE}$ is the standard deviation of the transformed bootstrap replicates $\lbrace\tilde{Q}^{*}_1, \ldots, \tilde{Q}^{*}_B\rbrace$.

 The Bootstrap-t method is the most computationally expensive method because it requires a **double bootstrap**: each of the $B$ outer replications requires an inner bootstrap (300 replications by default) to estimate the standard error. However, it can provide the most accurate coverage probabilities for location parameters and is second-order accurate.
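The parametric-bootstrap steps and the Percentile interval touched by these hunks can be sketched end to end. This is a Python illustration of the math only, not the library's C# `Estimate()` implementation; the names (`percentile_ci`, the normal model, $B = 500$) are my assumptions for the example.

```python
import random
import statistics

def percentile_ci(boot, alpha=0.1):
    """Percentile bootstrap CI: the alpha/2 and 1-alpha/2 empirical
    quantiles of the bootstrap replicates Q*_1, ..., Q*_B."""
    b = sorted(boot)
    lo = b[int((alpha / 2) * (len(b) - 1))]
    hi = b[int((1 - alpha / 2) * (len(b) - 1))]
    return lo, hi

rng = random.Random(1)
data = [rng.gauss(10.0, 2.0) for _ in range(200)]      # observed sample
mu_hat, sd_hat = statistics.mean(data), statistics.stdev(data)

# Steps 1-3: resample from the fitted N(mu_hat, sd_hat), refit,
# and evaluate the p = 0.9 quantile estimator on each replicate.
B, boot = 500, []
for _ in range(B):
    resample = [rng.gauss(mu_hat, sd_hat) for _ in range(len(data))]
    m, s = statistics.mean(resample), statistics.stdev(resample)
    boot.append(statistics.NormalDist(m, s).inv_cdf(0.9))

# Step 4: the replicates approximate the sampling distribution of Q-hat.
lo, hi = percentile_ci(boot, alpha=0.1)   # 5th and 95th percentiles
print(lo < statistics.NormalDist(mu_hat, sd_hat).inv_cdf(0.9) < hi)
```

The interval simply brackets the middle 90% of the bootstrap replicates, which is why the method inherits any bias or skewness present in that distribution.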

docs/sampling/convergence-diagnostics.md

Lines changed: 1 addition & 1 deletion
@@ -209,7 +209,7 @@ where $N$ is the number of samples, $\rho_k$ is the autocorrelation at lag $k$,

 ### Mathematical Derivation

-The ESS formula arises from analyzing the variance of the sample mean of a correlated sequence. For a stationary process $\{\theta_1, \theta_2, \ldots, \theta_N\}$ with marginal variance $\sigma^2$ and autocorrelation function $\rho_k = \text{Corr}(\theta_t, \theta_{t+k})$, the variance of the sample mean $\bar{\theta} = \frac{1}{N}\sum_{t=1}^{N}\theta_t$ is:
+The ESS formula arises from analyzing the variance of the sample mean of a correlated sequence. For a stationary process $\lbrace\theta_1, \theta_2, \ldots, \theta_N\rbrace$ with marginal variance $\sigma^2$ and autocorrelation function $\rho_k = \text{Corr}(\theta_t, \theta_{t+k})$, the variance of the sample mean $\bar{\theta} = \frac{1}{N}\sum_{t=1}^{N}\theta_t$ is:

 ```math
 \text{Var}(\bar{\theta}) = \frac{\sigma^2}{N}\left(1 + 2\sum_{k=1}^{N-1}\left(1 - \frac{k}{N}\right)\rho_k\right)
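The ESS formula this hunk derives, $N / (1 + 2\sum_k \rho_k)$, can be checked against a process with known autocorrelations. A minimal Python sketch (names mine, not the library's), assuming an AR(1) chain where $\rho_k = \phi^k$, so the correction factor converges to $(1 + \phi)/(1 - \phi)$:

```python
def ess(n, rho):
    """Effective sample size N / (1 + 2 * sum_k rho_k) for a
    truncated list of autocorrelations rho = [rho_1, rho_2, ...]."""
    return n / (1.0 + 2.0 * sum(rho))

phi, n = 0.5, 10_000
rho = [phi ** k for k in range(1, 200)]   # AR(1): rho_k = phi^k

# With phi = 0.5 the factor is (1 + 0.5)/(1 - 0.5) = 3,
# so only about a third of the draws are "effective".
print(round(ess(n, rho)))  # -> 3333
```

An independent chain (`rho` all zeros) recovers `ess == n`, matching the intuition that correlation only ever shrinks the effective sample size when the autocorrelations are positive.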

docs/sampling/mcmc.md

Lines changed: 1 addition & 1 deletion
@@ -320,7 +320,7 @@ where:
 - $\gamma = 2.38 / \sqrt{2d}$ is the default jump rate (`Jump` property), with $d$ the number of parameters
 - $z_{R_1}$ and $z_{R_2}$ are two randomly selected states from the **population matrix** (a memory of past states from all chains)
 - $e \sim \mathcal{N}(0, b^2)$ is a small noise perturbation with default $b = 10^{-3}$ (`Noise` property)
-- $R_1$ and $R_2$ are drawn uniformly without replacement from $\{1, 2, \ldots, M\}$, where $M$ is the current size of the population matrix
+- $R_1$ and $R_2$ are drawn uniformly without replacement from $\lbrace 1, 2, \ldots, M\rbrace$, where $M$ is the current size of the population matrix

 The proposal is accepted using the standard Metropolis ratio in log space:
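The differential-evolution proposal described in this hunk, $z^{*} = z + \gamma(z_{R_1} - z_{R_2}) + e$, can be sketched in a few lines. This is an illustrative Python translation of the formula, not the library's C# sampler; `de_proposal` and its arguments are my names.

```python
import math
import random

def de_proposal(current, population, rng, noise=1e-3):
    """Differential-evolution proposal z* = z + gamma*(z_R1 - z_R2) + e,
    with gamma = 2.38/sqrt(2d) and e ~ N(0, noise^2) per coordinate."""
    d = len(current)
    gamma = 2.38 / math.sqrt(2 * d)
    # R1, R2 drawn uniformly WITHOUT replacement from the population matrix
    r1, r2 = rng.sample(range(len(population)), 2)
    z1, z2 = population[r1], population[r2]
    return [z + gamma * (a - b) + rng.gauss(0.0, noise)
            for z, a, b in zip(current, z1, z2)]

rng = random.Random(42)
# A toy population matrix of M = 50 past states in d = 2 dimensions.
population = [[rng.gauss(0.0, 1.0) for _ in range(2)] for _ in range(50)]
proposal = de_proposal(population[0], population, rng)
print(len(proposal))  # -> 2
```

Because the jump is built from differences of past states, its scale automatically adapts to the posterior's covariance, which is the motivation for this proposal family.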

docs/sampling/random-generation.md

Lines changed: 1 addition & 1 deletion
@@ -222,7 +222,7 @@ Latin Hypercube Sampling divides each dimension's range $[0, 1)$ into $n$ equal
 x_{ij} = \frac{\pi_j(i) + U_{ij}}{n}, \quad i = 0, \ldots, n-1
 ```

-where $\pi_j$ is a random permutation of $\{0, 1, \ldots, n-1\}$ (independent for each dimension) and $U_{ij} \sim \text{Uniform}(0, 1)$. The library's `Median` variant replaces $U_{ij}$ with $0.5$, placing each point at the stratum center.
+where $\pi_j$ is a random permutation of $\lbrace 0, 1, \ldots, n-1\rbrace$ (independent for each dimension) and $U_{ij} \sim \text{Uniform}(0, 1)$. The library's `Median` variant replaces $U_{ij}$ with $0.5$, placing each point at the stratum center.

 The random permutations are generated using the **Fisher-Yates shuffle**, and each dimension uses an independent Mersenne Twister seeded from a master RNG.
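The LHS construction in this hunk is compact enough to demonstrate directly. A Python sketch (the library is C#; `latin_hypercube` and the `median` flag are my names for the formula and its `Median` variant), using `random.shuffle`, which implements Fisher-Yates:

```python
import random

def latin_hypercube(n, d, rng, median=False):
    """LHS design: x_ij = (pi_j(i) + U_ij)/n with an independent random
    permutation pi_j per dimension; median=True uses U_ij = 0.5."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)  # Fisher-Yates shuffle
        cols.append([(perm[i] + (0.5 if median else rng.random())) / n
                     for i in range(n)])
    # Transpose columns into n points in d dimensions.
    return [[cols[j][i] for j in range(d)] for i in range(n)]

rng = random.Random(7)
pts = latin_hypercube(10, 2, rng)

# Stratification property: each dimension has exactly one point
# in each of the n bins [k/n, (k+1)/n).
for j in range(2):
    print(sorted(int(p[j] * 10) for p in pts) == list(range(10)))  # -> True
```

This one-point-per-stratum guarantee in every marginal is exactly what distinguishes LHS from plain Monte Carlo sampling of the unit hypercube.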

paper.md

Lines changed: 0 additions & 193 deletions
This file was deleted.
