Appendix A: Sums of Variances and Covariances
Given the variance-covariance matrix of the parameters of a linear model (Equation 43), the variances and covariances of sums of those coefficients can be calculated [17, 19].
$$
(43)\quad \Sigma(\boldsymbol{\beta}) =
\begin{bmatrix}
\mathrm{Var}(\beta_0) & \mathrm{Cov}(\beta_0, \beta_1) & \mathrm{Cov}(\beta_0, \beta_2) & \cdots & \mathrm{Cov}(\beta_0, \sigma_\varepsilon) \\
\mathrm{Cov}(\beta_1, \beta_0) & \mathrm{Var}(\beta_1) & \mathrm{Cov}(\beta_1, \beta_2) & \cdots & \mathrm{Cov}(\beta_1, \sigma_\varepsilon) \\
\mathrm{Cov}(\beta_2, \beta_0) & \mathrm{Cov}(\beta_2, \beta_1) & \mathrm{Var}(\beta_2) & \cdots & \mathrm{Cov}(\beta_2, \sigma_\varepsilon) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\mathrm{Cov}(\sigma_\varepsilon, \beta_0) & \mathrm{Cov}(\sigma_\varepsilon, \beta_1) & \mathrm{Cov}(\sigma_\varepsilon, \beta_2) & \cdots & \mathrm{Var}(\sigma_\varepsilon)
\end{bmatrix}
$$
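For concreteness, the coefficient block of such a matrix can be computed for an ordinary least-squares fit as s²(XᵀX)⁻¹, where s² is the unbiased residual variance. The sketch below is illustrative and not from the paper (all names are invented); it covers the coefficients only, not σε:

```python
# Sketch (illustrative, not from the paper): variance-covariance matrix of
# OLS coefficient estimates, computed as s^2 * (X'X)^{-1} with NumPy only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # intercept + 2 predictors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)            # unbiased residual variance
vcov = s2 * np.linalg.inv(X.T @ X)      # diagonal: Var(beta_i); off-diagonal: Cov(beta_i, beta_j)

print(vcov.shape)  # (3, 3), symmetric
```

Statistical packages typically expose this same matrix directly from a fitted model object.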
For any coefficients β0 through β𝑘, the variance of their sum is the sum of their variances plus the sum of all covariance pairs:
$$
(44)\quad
\begin{aligned}
\mathrm{Var}(\beta_0 + \beta_1 + \cdots + \beta_k)
&= \mathrm{Var}(\beta_0) + \mathrm{Cov}(\beta_0, \beta_1) + \cdots + \mathrm{Cov}(\beta_0, \beta_k) \\
&\quad + \mathrm{Cov}(\beta_1, \beta_0) + \mathrm{Var}(\beta_1) + \cdots + \mathrm{Cov}(\beta_1, \beta_k) \\
&\quad + \cdots \\
&\quad + \mathrm{Cov}(\beta_k, \beta_0) + \mathrm{Cov}(\beta_k, \beta_1) + \cdots + \mathrm{Var}(\beta_k)
\end{aligned}
$$
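Equation 44 says the variance of a sum of coefficients equals the sum over every entry of their variance-covariance matrix. A quick numerical check (illustrative, not from the paper; the covariance matrix below is invented):

```python
# Sketch: Var(b0 + b1 + b2) equals the sum of ALL entries of the
# variance-covariance matrix, checked against the sample variance of the sum.
import numpy as np

rng = np.random.default_rng(1)
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
draws = rng.multivariate_normal(mean=np.zeros(3), cov=Sigma, size=200_000)

var_of_sum = draws.sum(axis=1).var(ddof=1)  # empirical Var(b0 + b1 + b2)
sum_of_cov = Sigma.sum()                    # variances + every covariance pair = 6.5

print(var_of_sum, sum_of_cov)  # both near 6.5
```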
The off-diagonal covariance terms of Σ(𝛃) can themselves be calculated from variances of sums. For example, consider the covariance between two variables X and Y with scalar k:

$$
(45)\quad
\begin{aligned}
\mathrm{Var}(X + kY) &= \mathrm{Var}(X) + k^2\,\mathrm{Var}(Y) + 2k\,\mathrm{Cov}(X, Y) \\
\mathrm{Cov}(X, kY) &= 0.5\left(\mathrm{Var}(X + kY) - \mathrm{Var}(X) - k^2\,\mathrm{Var}(Y)\right) \\
&= 0.5\left(\mathrm{Var}(X) + k^2\,\mathrm{Var}(Y) + 2k\,\mathrm{Cov}(X, Y) - \mathrm{Var}(X) - k^2\,\mathrm{Var}(Y)\right) \\
&= k\,\mathrm{Cov}(X, Y)
\end{aligned}
$$
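This polarization identity holds exactly for sample moments as well, as long as the same degrees-of-freedom correction is used throughout. A numerical check (illustrative, not from the paper):

```python
# Sketch: Cov(X, kY) = k*Cov(X, Y), recovered from variances alone via
# 0.5 * (Var(X + kY) - Var(X) - k^2 * Var(Y)).
import numpy as np

rng = np.random.default_rng(2)
n, k = 100_000, 3.0
X = rng.normal(size=n)
Y = 0.4 * X + rng.normal(size=n)   # correlated with X by construction

lhs = 0.5 * (np.var(X + k * Y, ddof=1) - np.var(X, ddof=1) - k**2 * np.var(Y, ddof=1))
rhs = k * np.cov(X, Y, ddof=1)[0, 1]
print(lhs, rhs)  # agree up to floating-point rounding
```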
If the coefficients are themselves sums, say X = X1 + X2 + X3 and Y = Y1 + Y2 + Y3, the derivation is longer:

$$
(46)\quad
\begin{aligned}
\mathrm{Var}(X_1 + X_2 + X_3 + Y_1 + Y_2 + Y_3)
&= \mathrm{Var}(X_1 + X_2 + X_3) + \mathrm{Var}(Y_1 + Y_2 + Y_3) \\
&\quad + 2\,\mathrm{Cov}(X_1 + X_2 + X_3,\ Y_1 + Y_2 + Y_3)
\end{aligned}
$$

$$
(47)\quad
\begin{aligned}
\mathrm{Cov}(X_1 + X_2 + X_3,\ Y_1 + Y_2 + Y_3)
&= 0.5\big(\mathrm{Var}(X_1 + X_2 + X_3 + Y_1 + Y_2 + Y_3) \\
&\qquad - \mathrm{Var}(X_1 + X_2 + X_3) - \mathrm{Var}(Y_1 + Y_2 + Y_3)\big) \\
&= 0.5\Big(\textstyle\sum_{i=1}^{3}\mathrm{Var}(X_i) + \sum_{j=1}^{3}\mathrm{Var}(Y_j)
  + 2\sum_{i<j}\mathrm{Cov}(X_i, X_j) \\
&\qquad + 2\textstyle\sum_{i<j}\mathrm{Cov}(Y_i, Y_j)
  + 2\sum_{i=1}^{3}\sum_{j=1}^{3}\mathrm{Cov}(X_i, Y_j) \\
&\qquad - \textstyle\sum_{i=1}^{3}\mathrm{Var}(X_i) - 2\sum_{i<j}\mathrm{Cov}(X_i, X_j) \\
&\qquad - \textstyle\sum_{j=1}^{3}\mathrm{Var}(Y_j) - 2\sum_{i<j}\mathrm{Cov}(Y_i, Y_j)\Big) \\
&= \sum_{i=1}^{3}\sum_{j=1}^{3}\mathrm{Cov}(X_i, Y_j)
\end{aligned}
$$

All variance terms and all within-group covariance terms cancel, leaving only the cross-covariances between the components of X and the components of Y.
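By bilinearity, the covariance of two sums is the sum of all pairwise cross-covariances, and this holds exactly for sample covariances too. A numerical check (illustrative, not from the paper; the covariance structure is randomly generated):

```python
# Sketch: Cov(sum_i X_i, sum_j Y_j) = sum_{i,j} Cov(X_i, Y_j), checked on
# six correlated series split into X1..X3 and Y1..Y3.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 6))
Sigma = A @ A.T                      # random positive-definite 6x6 covariance
draws = rng.multivariate_normal(np.zeros(6), Sigma, size=50_000)
X, Y = draws[:, :3], draws[:, 3:]    # columns X1..X3 and Y1..Y3

lhs = np.cov(X.sum(axis=1), Y.sum(axis=1), ddof=1)[0, 1]
C = np.cov(draws, rowvar=False, ddof=1)
rhs = C[:3, 3:].sum()                # sum of all 9 cross-covariances Cov(X_i, Y_j)
print(lhs, rhs)  # equal up to floating-point rounding
```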