nlmixr2 default controls for nlm
Usage
nlmControl(
typsize = NULL,
fscale = 1,
print.level = 0,
ndigit = NULL,
gradtol = 1e-06,
stepmax = NULL,
steptol = 1e-06,
iterlim = 10000,
check.analyticals = FALSE,
returnNlm = FALSE,
solveType = c("hessian", "grad", "fun"),
stickyRecalcN = 4,
maxOdeRecalc = 5,
odeRecalcFactor = 10^(0.5),
eventType = c("central", "forward"),
shiErr = (.Machine$double.eps)^(1/3),
shi21maxFD = 20L,
optimHessType = c("central", "forward"),
hessErr = (.Machine$double.eps)^(1/3),
shi21maxHess = 20L,
useColor = crayon::has_color(),
printNcol = floor((getOption("width") - 23)/12),
print = 1L,
normType = c("rescale2", "mean", "rescale", "std", "len", "constant"),
scaleType = c("nlmixr2", "norm", "mult", "multAdd"),
scaleCmax = 1e+05,
scaleCmin = 1e-05,
scaleC = NULL,
scaleTo = 1,
gradTo = 1,
rxControl = NULL,
optExpression = TRUE,
sumProd = FALSE,
literalFix = TRUE,
addProp = c("combined2", "combined1"),
calcTables = TRUE,
compress = TRUE,
covMethod = c("r", "nlm", ""),
adjObf = TRUE,
ci = 0.95,
sigdig = 4,
sigdigTable = NULL,
...
)
Arguments
- typsize
an estimate of the size of each parameter at the minimum.
- fscale
an estimate of the size of `f` at the minimum.
- print.level
this argument determines the level of printing which is done during the minimization process. The default value of 0 means that no printing occurs, a value of 1 means that initial and final details are printed and a value of 2 means that full tracing information is printed.
- ndigit
the number of significant digits in the function `f`.
- gradtol
a positive scalar giving the tolerance at which the scaled gradient is considered close enough to zero to terminate the algorithm. The scaled gradient is a measure of the relative change in `f` in each direction `p[i]` divided by the relative change in `p[i]`.
- stepmax
a positive scalar which gives the maximum allowable scaled step length. `stepmax` is used to prevent steps which would cause the optimization function to overflow, to prevent the algorithm from leaving the area of interest in parameter space, or to detect divergence in the algorithm. `stepmax` would be chosen small enough to prevent the first two of these occurrences, but should be larger than any anticipated reasonable step.
- steptol
A positive scalar providing the minimum allowable relative step length.
- iterlim
a positive integer specifying the maximum number of iterations to be performed before the program is terminated.
- check.analyticals
a logical scalar specifying whether the analytic gradients and Hessians, if they are supplied, should be checked against numerical derivatives at the initial parameter values. This can help detect incorrectly formulated gradients or Hessians.
- returnNlm
is a logical that allows a return of the `nlm` object
- solveType
tells if `nlm` will use nlmixr2's analytical gradients when available (finite differences will be used for event-related parameters like parameters controlling lag time, duration/rate of infusion, and modeled bioavailability). This can be:
- `"hessian"` which will use the analytical gradients to create a Hessian with finite differences.
- `"grad"` which will use the gradient and let `nlm` calculate the finite difference Hessian
- `"fun"` where nlm will calculate both the finite difference gradient and the finite difference Hessian
When using nlmixr2's finite differences, the "ideal" step size for either central or forward differences is optimized with the Shi (2021) method, which may give more accurate derivatives.
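The practical difference between forward and central differences (relevant to `solveType`, `eventType`, and `optimHessType` below) can be sketched generically. This plain-Python illustration uses conventional fixed step sizes; it is not nlmixr2's implementation, which tunes the step per parameter with the Shi (2021) method:

```python
import math

EPS = 2.0 ** -52  # double-precision machine epsilon (.Machine$double.eps in R)

def forward_diff(f, x, h):
    # forward difference: one extra function evaluation, truncation error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # central difference: two extra evaluations, truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

f, x = math.exp, 1.0
exact = math.exp(x)  # d/dx exp(x) = exp(x)

g_fwd = forward_diff(f, x, EPS ** 0.5)      # conventional h for forward
g_cen = central_diff(f, x, EPS ** (1 / 3))  # conventional h, cf. shiErr default

# the central estimate is typically several digits more accurate
err_fwd = abs(g_fwd - exact)
err_cen = abs(g_cen - exact)
```

Central differences cost twice as many function (here, ODE) evaluations per parameter, which is why "forward" is offered as the faster option.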
- stickyRecalcN
The number of bad ODE solves before reducing the atol/rtol for the rest of the problem.
- maxOdeRecalc
Maximum number of times to reduce the ODE tolerances and try to resolve the system if there was a bad ODE solve.
- odeRecalcFactor
The ODE recalculation factor when ODE solving goes bad; this is the factor by which the rtol/atol is reduced.
- eventType
Event gradient type for dosing events; Can be "central" or "forward"
- shiErr
This represents the epsilon when optimizing the ideal step size for numeric differentiation using the Shi2021 method
- shi21maxFD
The maximum number of steps for the optimization of the forward difference step size when using dosing events (lag time, modeled duration/rate and bioavailability)
- optimHessType
The Hessian type used when calculating the individual Hessian by numeric differences (in generalized log-likelihood estimation). The options are "central" and "forward". The central difference is what R's `optimHess()` uses and is the default for this method (though "forward" is faster and still reasonable for most cases). The Shi21 step-size method cannot be swapped for the Gill83 algorithm when using `optimHess()` in a generalized likelihood problem.
- hessErr
This represents the epsilon when optimizing the Hessian step size using the Shi2021 method.
- shi21maxHess
Maximum number of times to optimize the best step size for the hessian calculation
- useColor
Boolean indicating if focei can use ASCII color codes
- printNcol
Number of columns to print before wrapping parameter estimates/gradient
- print
Integer representing when the outer step is printed. When this is 0, the iterations are not printed; 1 prints every function evaluation (default) and 5 prints every 5 evaluations.
- normType
This is the type of parameter normalization/scaling used to get the scaled initial values for nlmixr2. These are used with the `scaleType` argument. With the exception of rescale2, these come from Feature Scaling; the rescale2 rescaling is the same type described in the OptdesX software manual. In general, all of the scaling formulas can be described by:
$$v_{scaled}$$ = ($$v_{unscaled}-C_{1}$$)/$$C_{2}$$
where the constants $$C_{1}$$ and $$C_{2}$$ depend on the normalization type:
rescale2
This scales all parameters from (-1 to 1). The relative differences between the parameters are preserved with this approach and the constants are:
$$C_{1}$$ = (max(all unscaled values) + min(all unscaled values))/2
$$C_{2}$$ = (max(all unscaled values) - min(all unscaled values))/2
rescale
or min-max normalization. This rescales all parameters from (0 to 1). As in rescale2, the relative differences are preserved. In this approach:
$$C_{1}$$ = min(all unscaled values)
$$C_{2}$$ = max(all unscaled values) - min(all unscaled values)
mean
or mean normalization. This rescales to center the parameters around the mean, with the parameters ranging from 0 to 1. In this approach:
$$C_{1}$$ = mean(all unscaled values)
$$C_{2}$$ = max(all unscaled values) - min(all unscaled values)
std
or standardization. This standardizes by the mean and standard deviation. In this approach:
$$C_{1}$$ = mean(all unscaled values)
$$C_{2}$$ = sd(all unscaled values)
len
or unit length scaling. This scales the parameters to unit length using the Euclidean length, that is:
$$C_{1}$$ = 0
$$C_{2}$$ = $$\sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$
constant
which does not perform data normalization. That is:
$$C_{1}$$ = 0
$$C_{2}$$ = 1
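Written out as code, the constants above look like this (a plain-Python sketch of the formulas, not nlmixr2 code; the sample standard deviation is assumed for "std" since that is what R's sd() computes):

```python
def norm_constants(values, norm_type):
    """Return (C1, C2) for v_scaled = (v_unscaled - C1) / C2."""
    n = len(values)
    lo, hi = min(values), max(values)
    mean = sum(values) / n
    if norm_type == "rescale2":   # scale into (-1, 1), relative spacing kept
        return (hi + lo) / 2, (hi - lo) / 2
    if norm_type == "rescale":    # min-max normalization into (0, 1)
        return lo, hi - lo
    if norm_type == "mean":       # mean normalization
        return mean, hi - lo
    if norm_type == "std":        # standardization (sample sd assumed)
        sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
        return mean, sd
    if norm_type == "len":        # unit (Euclidean) length scaling
        return 0.0, sum(v * v for v in values) ** 0.5
    if norm_type == "constant":   # no normalization
        return 0.0, 1.0
    raise ValueError(norm_type)

inits = [0.5, 0.5, 2.0]  # three illustrative initial estimates
c1, c2 = norm_constants(inits, "rescale2")
scaled = [(v - c1) / c2 for v in inits]  # -> [-1.0, -1.0, 1.0]
```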
- scaleType
The scaling scheme for nlmixr2. The supported types are:
nlmixr2
In this approach the scaling is performed by the following equation:
$$v_{scaled}$$ = ($$v_{current} - v_{init}$$)*scaleC[i] + scaleTo
The scaleTo parameter is specified by the `normType`, and the scales are specified by `scaleC`.
norm
This approach uses the simple scaling provided by the `normType` argument.
mult
This approach does not use the data normalization provided by `normType`, but rather uses multiplicative scaling to a constant provided by the `scaleTo` argument. In this case:
$$v_{scaled}$$ = $$v_{current}$$/$$v_{init}$$*scaleTo
multAdd
This approach changes the scaling based on the parameter being specified. If a parameter is defined in an exponential block (i.e. exp(theta)), then it is scaled linearly, that is:
$$v_{scaled}$$ = ($$v_{current}-v_{init}$$) + scaleTo
Otherwise the parameter is scaled multiplicatively:
$$v_{scaled}$$ = $$v_{current}$$/$$v_{init}$$*scaleTo
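A minimal sketch of these scaling equations (illustrative Python, not the internal implementation; the `norm` type is omitted because it simply applies the `normType` formula above):

```python
def scale_param(v_current, v_init, scale_type,
                scale_c=1.0, scale_to=1.0, exponential=False):
    # sketch of the scaleType equations described above
    if scale_type == "nlmixr2":
        return (v_current - v_init) * scale_c + scale_to
    if scale_type == "mult":
        return v_current / v_init * scale_to
    if scale_type == "multAdd":
        if exponential:  # exp(theta)-style parameters are shifted linearly
            return (v_current - v_init) + scale_to
        return v_current / v_init * scale_to  # others scale multiplicatively
    raise ValueError(scale_type)

# at the initial estimate every scheme starts the optimizer at scaleTo (= 1)
assert scale_param(2.0, 2.0, "nlmixr2", scale_c=0.5) == 1.0
assert scale_param(2.0, 2.0, "mult") == 1.0
assert scale_param(2.0, 2.0, "multAdd", exponential=True) == 1.0
```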
- scaleCmax
Maximum value of the scaleC to prevent overflow.
- scaleCmin
Minimum value of the scaleC to prevent underflow.
- scaleC
The scaling constant used with `scaleType=nlmixr2`. When not specified, it is based on the type of parameter that is estimated. The idea is to keep the derivatives similar on a log scale so that the gradient sizes are similar. Hence parameters like log(exp(theta)) would have a scaling factor of 1 and log(theta) would have a scaling factor of ini_value (to scale by 1/value; i.e. d/dt(log(ini_value)) = 1/ini_value, or scaleC=ini_value).
For parameters in an exponential (i.e. exp(theta)) or parameters specifying powers, boxCox or yeoJohnson transformations, this is 1.
For additive, proportional, and lognormal error structures, these are given by 0.5*abs(initial_estimate).
Factorials are scaled by abs(1/digamma(initial_estimate+1)).
Parameters on a log scale (i.e. log(theta)) are scaled by log(abs(initial_estimate))*abs(initial_estimate).
These parameter scaling coefficients are chosen to try to keep similar slopes among parameters; that is, they all follow the slopes approximately on a log scale.
While these are chosen in a logical manner, they may not always apply. You can specify each parameter's scaling factor with this argument if you wish.
- scaleTo
Scale the initial parameter estimate to this value. By default this is 1. When zero or below, no scaling is performed.
- gradTo
this is the value that the gradient is scaled to before optimizing. This only works with `scaleType="nlmixr2"`.
- rxControl
`rxode2` ODE solving options during fitting, created with `rxControl()`
- optExpression
Optimize the rxode2 expression to speed up calculation. By default this is turned on.
- sumProd
A boolean indicating if the model should change multiplication to high precision multiplication and sums to high precision sums using the PreciseSums package. By default this is FALSE.
- literalFix
A boolean; substitute fixed population values as literals and re-adjust the ui and parameter estimates after optimization. Default is `TRUE`.
- addProp
specifies the type of additive plus proportional errors, the one where standard deviations add (combined1) or the type where the variances add (combined2).
The combined1 error type can be described by the following equation:
$$y = f + (a + b\times f^c) \times \varepsilon$$
The combined2 error model can be described by the following equation:
$$y = f + \sqrt{a^2 + b^2\times f^{2\times c}} \times \varepsilon$$
Where:
- y represents the observed value
- f represents the predicted value
- a is the additive standard deviation
- b is the proportional/power standard deviation
- c is the power exponent (in the proportional case c=1)
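The two error models are easy to compare numerically. This plain-Python sketch just evaluates the equations above for illustrative values of a, b, and f:

```python
import math

def sd_combined1(f, a, b, c=1.0):
    # combined1: the standard deviations add
    return a + b * f ** c

def sd_combined2(f, a, b, c=1.0):
    # combined2: the variances add
    return math.sqrt(a ** 2 + b ** 2 * f ** (2 * c))

a, b, f = 0.5, 0.2, 10.0  # illustrative additive sd, proportional sd, prediction
s1 = sd_combined1(f, a, b)  # 0.5 + 0.2*10 = 2.5
s2 = sd_combined2(f, a, b)  # sqrt(0.25 + 4.0) ~ 2.06
# combined1 always gives an sd at least as large as combined2
```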
- calcTables
This boolean determines if the fit will calculate tables. By default this is TRUE.
- compress
Should the object have compressed items
- covMethod
allows selection of "r", which uses nlmixr2's `nlmixr2Hess()` for the Hessian calculation, or "nlm", which uses the Hessian from `stats::nlm(.., hessian=TRUE)`. When using nlmixr2's Hessian for optimization or nlmixr2's gradient for solving, this defaults to "nlm" since `stats::optimHess()` assumes an accurate gradient and is faster than `nlmixr2Hess()`.
- adjObf
is a boolean to indicate if the objective function should be adjusted to be closer to NONMEM's default objective function. By default this is TRUE.
- ci
Confidence level for some tables. By default this is 0.95 or 95% confidence.
- sigdig
Optimization significant digits. This controls:
- The tolerance of the inner and outer optimization, which is 10^-sigdig
- The tolerance of the ODE solvers, which is 0.5*10^(-sigdig-2); for the sensitivity equations and steady-state solutions the default is 0.5*10^(-sigdig-1.5) (the sensitivity changes are only applicable for liblsoda)
- The tolerance of the boundary check, which is 5*10^(-sigdig+1)
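For the default `sigdig = 4` these rules work out as follows (a plain-Python restatement of the arithmetic above):

```python
sigdig = 4  # the default

opt_tol   = 10 ** -sigdig                # inner/outer optimization: 1e-4
ode_tol   = 0.5 * 10 ** (-sigdig - 2)    # ODE atol/rtol: 5e-7
sens_tol  = 0.5 * 10 ** (-sigdig - 1.5)  # sensitivity/steady-state: ~1.58e-6
bound_tol = 5 * 10 ** (-sigdig + 1)      # boundary check: 5e-3
```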
- sigdigTable
Significant digits in the final output table. If not specified, then it matches the significant digits in the `sigdig` optimization algorithm. If `sigdig` is NULL, use 3.
- ...
additional arguments to be passed to `f`.
Examples
# \donttest{
# A logit regression example with emax model
dsn <- data.frame(i = 1:1000)
dsn$time <- exp(rnorm(1000))
dsn$DV <- rbinom(1000, 1, exp(-1 + dsn$time) / (1 + exp(-1 + dsn$time)))
mod <- function() {
  ini({
    E0 <- 0.5
    Em <- 0.5
    E50 <- 2
    g <- fix(2)
  })
  model({
    v <- E0 + Em * time^g / (E50^g + time^g)
    ll(bin) ~ DV * v - log(1 + exp(v))
  })
}
fit2 <- nlmixr(mod, dsn, est="nlm")
#> ℹ parameter labels from comments are typically ignored in non-interactive mode
#> ℹ Need to run with the source intact to parse comments
#> → loading into symengine environment...
#> → pruning branches (`if`/`else`) of population log-likelihood model...
#> ✔ done
#> → calculate jacobian
#> → calculate ∂(f)/∂(θ)
#> → finding duplicate expressions in nlm llik gradient...
#> → optimizing duplicate expressions in nlm llik gradient...
#> → finding duplicate expressions in nlm pred-only...
#> → optimizing duplicate expressions in nlm pred-only...
#> → calculating covariance
#> ✔ done
#> → loading into symengine environment...
#> → pruning branches (`if`/`else`) of full model...
#> ✔ done
#> → finding duplicate expressions in EBE model...
#> → optimizing duplicate expressions in EBE model...
#> → compiling EBE model...
#> ✔ done
#> → Calculating residuals/tables
#> ✔ done
#> → compress origData in nlmixr2 object, save 9112
#> → compress parHistData in nlmixr2 object, save 3328
print(fit2)
#> ── nlmixr² log-likelihood nlm ──
#>
#> OBJF AIC BIC Log-likelihood Condition#(Cov) Condition#(Cor)
#> lPop -715.5129 1128.364 1143.087 -561.1821 1017060 206884.9
#>
#> ── Time (sec $time): ──
#>
#> setup table compress other
#> elapsed 0.002701 0.034 0.009 2.139299
#>
#> ── ($parFixed or $parFixedDf): ──
#>
#> Est. SE %RSE Back-transformed(95%CI) BSV(SD) Shrink(SD)%
#> E0 -0.7135 8.163 1144 -0.7135 (-16.71, 15.29)
#> Em 5.649 118.3 2095 5.649 (-226.2, 237.5)
#> E50 2.669 59.97 2247 2.669 (-114.9, 120.2)
#> g 2 FIXED FIXED 2
#>
#> Covariance Type ($covMethod): r (nlm)
#> Censoring ($censInformation): No censoring
#> Minimization message ($message):
#> relative gradient is close to zero, current iterate is probably solution
#>
#> ── Fit Data (object is a modified tibble): ──
#> # A tibble: 1,000 × 5
#> ID TIME DV IPRED v
#> <fct> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0.0394 1 -1.11 -0.712
#> 2 1 0.0410 1 -1.11 -0.712
#> 3 1 0.0491 0 -0.399 -0.712
#> # ℹ 997 more rows
# you can also get the nlm output with fit2$nlm
fit2$nlm
#> $minimum
#> [1] 561.1821
#>
#> $estimate
#> E0 Em E50
#> -0.7135082 5.6486521 2.6692567
#>
#> $gradient
#> [1] 7.739590e-09 5.510995e-08 -9.434968e-07
#>
#> $hessian
#> E0 Em E50
#> E0 0.001691211 0.003155486 -0.008385262
#> Em 0.003155486 0.012661935 -0.028730803
#> E50 -0.008385262 -0.028730803 0.063541769
#>
#> $code
#> [1] 1
#>
#> $iterations
#> [1] 7
#>
#> $scaleC
#> [1] 0.002952846 0.042343090 0.037276791
#>
#> $estimate.scaled
#> E0 Em E50
#> -411.96225 120.59368 18.95371
#>
#> $covMethod
#> [1] "r (nlm)"
#>
#> $cov.scaled
#> E0 Em E50
#> E0 7642403 7721081 4446453
#> Em 7721081 7807404 4495283
#> E50 4446453 4495283 2588433
#>
#> $cov
#> E0 Em E50
#> E0 66.6364 965.3869 489.4327
#> Em 965.3869 13998.1861 7095.4197
#> E50 489.4327 7095.4197 3596.7806
#>
#> $r
#> E0 Em E50
#> E0 0.0008456053 0.001577743 -0.004192631
#> Em 0.0015777428 0.006330967 -0.014365402
#> E50 -0.0041926311 -0.014365402 0.031770884
#>
# The nlm control has been modified slightly to include
# extra components and name the parameters
# }