The variance and covariance estimates for each param-
eter are obtained when fitting the model (see Equation 25).
However, Equation 27 is written with sums of variances and
covariances. Statistics theory [19] provides the equations (see
Appendix A: Sums of Variances and Covariances). All the
sums needed for Equation 27 are provided in Equation 28.
Also note that, since m is a scalar, Var(mβᵢ) = m²Var(βᵢ) and
Cov(βᵢ, mβⱼ) = m Cov(βᵢ, βⱼ).
(28) Var[β̂₀ + mβ̂₂] = Var[β̂₀] + m²Var[β̂₂] + 2m Cov[β̂₀, β̂₂]
Var[β̂₁ + mβ̂₃] = Var[β̂₁] + m²Var[β̂₃] + 2m Cov[β̂₁, β̂₃]
Var[σ̂ε] = Var[σ̂ε]
Cov[β̂₀ + mβ̂₂, β̂₁ + mβ̂₃] = Cov[β̂₀, β̂₁] + m Cov[β̂₀, β̂₃] + m Cov[β̂₂, β̂₁] + m²Cov[β̂₂, β̂₃]
Cov[β̂₀ + mβ̂₂, σ̂ε] = Cov[β̂₀, σ̂ε] + m Cov[β̂₂, σ̂ε]
Cov[β̂₁ + mβ̂₃, σ̂ε] = Cov[β̂₁, σ̂ε] + m Cov[β̂₃, σ̂ε]
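The sums in Equation 28 are quadratic forms in the fitted parameter covariance matrix, so they can be computed directly once the model has been fit. The following sketch is our own illustration, not a reference implementation; the name combined_moments, the 4×4 matrix cov_beta, and its (β̂₀, β̂₁, β̂₂, β̂₃) ordering are assumptions about how the fitting software reports its output.

import numpy as np

def combined_moments(cov_beta: np.ndarray, m: float) -> dict:
    """Equation 28 sums for the combined intercept b0 + m*b2 and the
    combined slope b1 + m*b3, given the 4x4 covariance matrix of
    (b0, b1, b2, b3) and the scalar m."""
    w_int = np.array([1.0, 0.0, m, 0.0])  # weights encoding b0 + m*b2
    w_slp = np.array([0.0, 1.0, 0.0, m])  # weights encoding b1 + m*b3
    return {
        # Var[b0] + m^2 Var[b2] + 2m Cov[b0, b2]
        "var_intercept": float(w_int @ cov_beta @ w_int),
        # Var[b1] + m^2 Var[b3] + 2m Cov[b1, b3]
        "var_slope": float(w_slp @ cov_beta @ w_slp),
        # Cov[b0, b1] + m Cov[b0, b3] + m Cov[b2, b1] + m^2 Cov[b2, b3]
        "cov_intercept_slope": float(w_int @ cov_beta @ w_slp),
    }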
Then, to calculate a90/95, one can follow the steps in
Equations 16–19, using the POD values from Equation 27.
3.2. Polynomial Alternatives to the Simple Linear
Model Setup
The conversion from a linear model to a POD curve is nontriv-
ial for any linear model that extends beyond the simple linear
model. This section describes how to perform POD estimation for a linear
model defined by an invertible function f of discontinuity size x,
using a change-of-variable methodology.
First, fit a linear model relating y to f(x), where f is a
differentiable and invertible function. For example, consider
a linear model where f is a second-order polynomial.
Then y = f(x) = α₀ + α₁x + α₂x², and the estimated model
would be ŷ = α̂₀ + α̂₁x + α̂₂x². The next step in POD estimation
requires writing ŷᵢ = α̂₀ + α̂₁xᵢ + α̂₂xᵢ² in terms of probability
(as in Equation 6). However, as shown in Equation 29,
the quadratic form does not allow the argument of Φ to be
separated into the form ([x − μ̂pod]/σ̂pod), so μ̂pod and σ̂²pod
cannot be estimated separately.
(29) POD(a) = Φ((ŷ − ydec)/σ̂ε) = Φ((α̂₁x + α̂₂x² − (ydec − α̂₀))/σ̂ε)
Define a new variable, z = f(x), to facilitate a change of
variables so that a separable version of Equation 29 is possible.
However, to use the variable z, we need to know its mean and
variance. The next section shows how to estimate them.
3.2.1. MOMENTS OF Y: EXPECTED VALUE AND VARIANCE
Consider the fitted model, f̂(x), which describes the mean
behavior of y with respect to x. Then, a good estimate of the
expected value of y comes from evaluating f̂(x). In practice,
one can define the regression y = β₀z + β₁z z, with z = f̂(x), and fit a linear
model. As shown in Equation 30, the form of the expected
value is identical to the simple linear case, except x is replaced
by z, so μ̂pod = (ydec − β̂₀z)/β̂₁z. Since E[y|z] = f̂(x), then
β̂₀z ≈ 0 and β̂₁z ≈ 1 (which are approximate only because of
potential rounding errors during estimation), so μ̂pod ≈ ydec.
(30) E[y] = E[β₀z + β₁z z] = β₀z + β₁z E[z]
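To illustrate the two-stage fit on synthetic data (our own construction; none of the values below come from the text), one can fit the quadratic, build z = f̂(x), regress y on z, and confirm that β̂₀z ≈ 0 and β̂₁z ≈ 1:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 60)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(0.0, 0.2, x.size)  # synthetic responses

# Stage 1: quadratic fit, y ~ a0 + a1*x + a2*x^2 (polyfit returns highest degree first)
a2, a1, a0 = np.polyfit(x, y, 2)
z = a0 + a1 * x + a2 * x**2  # z = f_hat(x), the fitted mean

# Stage 2: simple linear fit, y ~ b0z + b1z*z
b1z, b0z = np.polyfit(z, y, 1)
print(b0z, b1z)  # approximately 0 and 1, up to rounding

Because z is the vector of ordinary least squares fitted values, the second regression recovers an intercept of 0 and a slope of 1 exactly in theory; any departure is numerical, which matches the rounding caveat above.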
Next, the variance of y is needed. A naive approach would
use the residual variance σ̂²z provided during model fitting.
However, since the variable z depends on the predicted values
(i.e., the mean prediction) of f̂(x), this may underestimate the
variance of y. Equation 31 provides the variance of y in terms
of f(x):
(31) Var[y] = Var[β₀z + β₁z f(x)] = (β₁z)²Var[f(x)]
Next, the variance of f(x) is needed. According to the Delta
method, for a differentiable function f with partial derivatives fᵢ′,
where E[X] = μ, Var[f(X)] = ∑ᵢ fᵢ′(μ)²Var[xᵢ] +
2∑ᵢ<ⱼ fᵢ′(μ)fⱼ′(μ)Cov[xᵢ, xⱼ]. This requires taking the derivative of f(x)
with respect to each parameter α̂ᵢ in f(x) and, similarly, the
variance and covariance of each parameter.
Besides differentiability of the regression function (which
is assured for the models we propose), the Delta method also
assumes asymptotic normality for the statistic of interest. Here,
the statistic of interest is the predicted response; when normality
is not met, transformations found through methods such
as the Box-Cox procedure can help justify the use of the
Delta method for variance estimation.
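For instance, a Box-Cox transform of the response is available in standard statistical software. The snippet below is a hypothetical sketch using SciPy (the text does not prescribe a particular tool), with made-up, strictly positive response values as the method requires:

import numpy as np
from scipy import stats

y = np.array([0.8, 1.1, 1.9, 3.5, 6.2, 11.0])  # made-up positive responses
y_bc, lam = stats.boxcox(y)  # transformed response and the estimated lambda
print(lam)  # lambda near 0 indicates an approximately logarithmic transform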
For the example where ŷ = f̂(x) = α̂₀ + α̂₁x + α̂₂x²,
the necessary first derivatives are given in Equation 32. All
higher-order derivatives are zero in this case, so they do not
contribute to the sum. Note, however, that for some functions,
these higher-order derivatives may need to be included. We
assume that normality is either met directly in the model or
achieved through the use of a transform such as Box-Cox.
(32) ∂f(x)/∂α̂₀ = 1,  ∂f(x)/∂α̂₁ = x,  ∂f(x)/∂α̂₂ = x²
Thus, using the Delta method, the variance of f(x) is given
in Equation 33:
(33) Var[f(x)] = ∑ᵢ₌₀² fᵢ′(μ)²Var[α̂ᵢ] + 2∑ᵢ<ⱼ fᵢ′(μ)fⱼ′(μ)Cov[α̂ᵢ, α̂ⱼ]
= (∂f(x)/∂α̂₀)²Var[α̂₀] + (∂f(x)/∂α̂₁)²Var[α̂₁] + (∂f(x)/∂α̂₂)²Var[α̂₂]
+ 2(∂f(x)/∂α̂₀)(∂f(x)/∂α̂₁)Cov[α̂₁, α̂₀] + 2(∂f(x)/∂α̂₀)(∂f(x)/∂α̂₂)Cov[α̂₂, α̂₀]
+ 2(∂f(x)/∂α̂₁)(∂f(x)/∂α̂₂)Cov[α̂₂, α̂₁]
= Var[α̂₀] + x²Var[α̂₁] + x⁴Var[α̂₂] + 2x Cov[α̂₁, α̂₀] + 2x²Cov[α̂₂, α̂₀] + 2x³Cov[α̂₂, α̂₁]
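In practice, Equation 33 is simply the quadratic form g(x)ᵀΣg(x), where g(x) = (1, x, x²) collects the Equation 32 derivatives and Σ is the covariance matrix of the fitted coefficients. A minimal sketch follows (our own; the 3×3 matrix cov_alpha and its (α̂₀, α̂₁, α̂₂) ordering are assumptions about the fitting output):

import numpy as np

def var_f(x: float, cov_alpha: np.ndarray) -> float:
    g = np.array([1.0, x, x**2])  # gradient of f wrt (alpha0, alpha1, alpha2), Equation 32
    return float(g @ cov_alpha @ g)  # expands to the Equation 33 variance and covariance terms

Evaluating var_f over a grid of x values and scaling by (β₁z)² as in Equation 31 then gives the variance of y needed for the POD curve.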