
nlmixr2 nlminb defaults

Usage

nlminbControl(
  eval.max = 200,
  iter.max = 150,
  trace = 0,
  abs.tol = 0,
  rel.tol = 1e-10,
  x.tol = 1.5e-08,
  xf.tol = 2.2e-14,
  step.min = 1,
  step.max = 1,
  sing.tol = rel.tol,
  scale = 1,
  scale.init = NULL,
  diff.g = NULL,
  rxControl = NULL,
  optExpression = TRUE,
  sumProd = FALSE,
  literalFix = TRUE,
  returnNlminb = FALSE,
  solveType = c("hessian", "grad", "fun"),
  stickyRecalcN = 4,
  maxOdeRecalc = 5,
  odeRecalcFactor = 10^(0.5),
  eventType = c("central", "forward"),
  shiErr = (.Machine$double.eps)^(1/3),
  shi21maxFD = 20L,
  optimHessType = c("central", "forward"),
  hessErr = (.Machine$double.eps)^(1/3),
  shi21maxHess = 20L,
  useColor = crayon::has_color(),
  printNcol = floor((getOption("width") - 23)/12),
  print = 1L,
  normType = c("rescale2", "mean", "rescale", "std", "len", "constant"),
  scaleType = c("nlmixr2", "norm", "mult", "multAdd"),
  scaleCmax = 1e+05,
  scaleCmin = 1e-05,
  scaleC = NULL,
  scaleTo = 1,
  gradTo = 1,
  addProp = c("combined2", "combined1"),
  calcTables = TRUE,
  compress = TRUE,
  covMethod = c("r", "nlminb", ""),
  adjObf = TRUE,
  ci = 0.95,
  sigdig = 4,
  sigdigTable = NULL,
  ...
)

Arguments

eval.max

Maximum number of evaluations of the objective function allowed. Defaults to 200.

iter.max

Maximum number of iterations allowed. Defaults to 150.

trace

The value of the objective function and the parameters are printed every `trace` iterations. When 0, no trace information is printed.

abs.tol

Absolute tolerance. Defaults to 0 so the absolute convergence test is not used. If the objective function is known to be non-negative, the previous default of `1e-20` would be more appropriate

rel.tol

Relative tolerance. Defaults to `1e-10`.

x.tol

X tolerance. Defaults to `1.5e-8`.

xf.tol

False convergence tolerance. Defaults to `2.2e-14`.

step.min

Minimum step size. Defaults to `1`.

step.max

Maximum step size. Defaults to `1`.

sing.tol

Singular convergence tolerance; defaults to `rel.tol`.

scale

See PORT documentation (or leave alone).

scale.init

Not fully documented; check the PORT documentation.

diff.g

An estimated bound on the relative error in the objective function value.

rxControl

`rxode2` ODE solving options during fitting, created with `rxControl()`

optExpression

Optimize the rxode2 expression to speed up calculation. By default this is turned on.

sumProd

A boolean indicating whether the model should change multiplications to high-precision multiplications and sums to high-precision sums using the PreciseSums package. By default this is `FALSE`.

literalFix

Boolean; substitute fixed population values as literals and re-adjust the ui and parameter estimates after optimization. Default is `TRUE`.

returnNlminb

Logical; when `TRUE`, return the `nlminb` result instead of the nlmixr2 fit object.

solveType

Controls whether `nlminb` uses nlmixr2's analytical gradients when available (finite differences are still used for event-related parameters like the parameters controlling lag time, duration/rate of infusion, and modeled bioavailability). This can be:

- `"hessian"`, which uses the analytical gradients to create a Hessian with finite differences.

- `"grad"`, which uses the analytical gradient and lets `nlminb` calculate the finite-difference Hessian.

- `"fun"`, where `nlminb` calculates both the finite-difference gradient and the finite-difference Hessian.

When using nlmixr2's finite differences, the "ideal" step size for either central or forward differences is optimized with the Shi2021 method, which may give more accurate derivatives.

stickyRecalcN

The number of bad ODE solves before reducing the atol/rtol for the rest of the problem.

maxOdeRecalc

Maximum number of times to reduce the ODE tolerances and try to resolve the system if there was a bad ODE solve.

odeRecalcFactor

The ODE recalculation factor when ODE solving goes bad; this is the factor by which rtol/atol are reduced.

eventType

Event gradient type for dosing events; can be `"central"` or `"forward"`.

shiErr

This represents the epsilon when optimizing the ideal step size for numeric differentiation using the Shi2021 method.

shi21maxFD

The maximum number of steps for the optimization of the forward difference step size when using dosing events (lag time, modeled duration/rate and bioavailability).

optimHessType

The Hessian type used when calculating the individual Hessian by numeric differences (in generalized log-likelihood estimation). The options are `"central"` and `"forward"`. Central differences are what R's `optimHess()` uses and are the default for this method (though `"forward"` is faster and still reasonable for most cases). The Shi21 step-size optimization cannot be changed for the Gill83 algorithm with `optimHess` in a generalized-likelihood problem.
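
As a loose illustration of the central vs. forward choice (this is not nlmixr2's implementation, which also optimizes the step size via the Shi2021 method; the function, point, and step size below are hypothetical), a base-R finite-difference Hessian might look like:

```r
# Toy objective with a known Hessian: rbind(c(2, 3), c(3, 4))
f <- function(x) x[1]^2 + 3 * x[1] * x[2] + 2 * x[2]^2

numHess <- function(f, x, h = 1e-4, type = c("central", "forward")) {
  type <- match.arg(type)
  n <- length(x)
  grad <- function(x) {  # gradient by the chosen difference scheme
    sapply(seq_len(n), function(i) {
      e <- replace(numeric(n), i, h)
      if (type == "central") (f(x + e) - f(x - e)) / (2 * h)
      else (f(x + e) - f(x)) / h
    })
  }
  H <- matrix(0, n, n)
  for (i in seq_len(n)) {
    e <- replace(numeric(n), i, h)
    H[i, ] <- (grad(x + e) - grad(x - e)) / (2 * h)
  }
  (H + t(H)) / 2  # symmetrize
}

numHess(f, c(1, 1), type = "central")  # exact Hessian: rbind(c(2, 3), c(3, 4))
```

Central differences cost roughly twice the function evaluations of forward differences per gradient, which is the speed/accuracy trade-off noted above.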

hessErr

This represents the epsilon when optimizing the Hessian step size using the Shi2021 method.

shi21maxHess

Maximum number of times to optimize the best step size for the hessian calculation

useColor

Boolean indicating if the optimization printout can use ANSI color codes.

printNcol

Number of columns to print before wrapping the parameter estimates/gradient.

print

Integer representing when the outer step is printed. When 0, iterations are not printed; 1 (the default) prints every function evaluation; 5 prints every 5th evaluation.

normType

This is the type of parameter normalization/scaling used to get the scaled initial values for nlmixr2. These are used with the scaleType argument.

With the exception of rescale2, these come from feature scaling; the rescale2 scaling is the type described in the OptdesX software manual.

In general, all of the scaling formulas can be described by:

$$v_{scaled}$$ = ($$v_{unscaled}-C_{1}$$)/$$C_{2}$$

where the constants $$C_{1}$$ and $$C_{2}$$ depend on the normalization type:

  • rescale2 This scales all parameters from -1 to 1. The relative differences between the parameters are preserved with this approach and the constants are:

    $$C_{1}$$ = (max(all unscaled values)+min(all unscaled values))/2

    $$C_{2}$$ = (max(all unscaled values) - min(all unscaled values))/2

  • rescale or min-max normalization. This rescales all parameters from 0 to 1. As with rescale2, the relative differences are preserved. In this approach:

    $$C_{1}$$ = min(all unscaled values)

    $$C_{2}$$ = max(all unscaled values) - min(all unscaled values)

  • mean or mean normalization. This centers the parameters around the mean; the scaled parameters range from -1 to 1. In this approach:

    $$C_{1}$$ = mean(all unscaled values)

    $$C_{2}$$ = max(all unscaled values) - min(all unscaled values)

  • std or standardization. This standardizes by the mean and standard deviation. In this approach:

    $$C_{1}$$ = mean(all unscaled values)

    $$C_{2}$$ = sd(all unscaled values)

  • len or unit length scaling. This scales the parameters to the unit length. For this approach we use the Euclidean length, that is:

    $$C_{1}$$ = 0

    $$C_{2}$$ = $$\sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$

  • constant which does not perform data normalization. That is

    $$C_{1}$$ = 0

    $$C_{2}$$ = 1
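
The constants above can be sketched in base R, applied as (v - C1)/C2; `theta` is a hypothetical vector of unscaled initial estimates:

```r
theta <- c(0.5, 2, 10)  # hypothetical unscaled initial estimates

# C1 and C2 for each normType, per the definitions above
normConsts <- function(v, type) {
  switch(type,
    rescale2 = c(C1 = (max(v) + min(v)) / 2, C2 = (max(v) - min(v)) / 2),
    rescale  = c(C1 = min(v),                C2 = max(v) - min(v)),
    mean     = c(C1 = mean(v),               C2 = max(v) - min(v)),
    std      = c(C1 = mean(v),               C2 = sd(v)),
    len      = c(C1 = 0,                     C2 = sqrt(sum(v^2))),
    constant = c(C1 = 0,                     C2 = 1))
}

scaleBy <- function(v, type) {
  k <- normConsts(v, type)
  unname((v - k["C1"]) / k["C2"])
}

scaleBy(theta, "rescale2")  # spans [-1, 1]
scaleBy(theta, "rescale")   # spans [0, 1]
```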

scaleType

The scaling scheme for nlmixr2. The supported types are:

  • nlmixr2 In this approach the scaling is performed by the following equation:

    $$v_{scaled}$$ = ($$v_{current} - v_{init}$$)*scaleC[i] + scaleTo

    The scaleTo parameter is specified by the normType, and the scales are specified by scaleC.

  • norm This approach uses the simple scaling provided by the normType argument.

  • mult This approach does not use the data normalization provided by normType, but rather uses multiplicative scaling to a constant provided by the scaleTo argument.

    In this case:

    $$v_{scaled}$$ = $$v_{current}$$/$$v_{init}$$*scaleTo

  • multAdd This approach changes the scaling based on the parameter being specified. If a parameter is defined in an exponential block (ie exp(theta)), then it is scaled linearly, that is:

    $$v_{scaled}$$ = ($$v_{current}-v_{init}$$) + scaleTo

    Otherwise the parameter is scaled multiplicatively.

    $$v_{scaled}$$ = $$v_{current}$$/$$v_{init}$$*scaleTo
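
The transforms above can be sketched for a single parameter; `v_init`, `v_cur`, `scaleC`, and `scaleTo` are hypothetical values:

```r
v_init <- 2; v_cur <- 2.5   # hypothetical initial and current estimates
scaleC <- 0.4; scaleTo <- 1 # hypothetical scaling constants

# "nlmixr2": shift by the initial estimate, scale by scaleC, center at scaleTo
nlmixr2Scale <- function(v) (v - v_init) * scaleC + scaleTo
# "mult": multiplicative scaling to the scaleTo constant
multScale <- function(v) v / v_init * scaleTo

nlmixr2Scale(v_cur)  # (2.5 - 2) * 0.4 + 1 = 1.2
multScale(v_cur)     # 2.5 / 2 * 1 = 1.25
```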

scaleCmax

Maximum value of the scaleC to prevent overflow.

scaleCmin

Minimum value of the scaleC to prevent underflow.

scaleC

The scaling constant used with scaleType=nlmixr2. When not specified, it is based on the type of parameter estimated. The idea is to keep the derivatives similar on a log scale so the gradients have similar sizes. Hence a parameter like log(exp(theta)) has a scaling factor of 1, and log(theta) has a scaling factor of ini_value (to scale by 1/value; since d(log(x))/dx = 1/x, the slope at x = ini_value is 1/ini_value, so scaleC=ini_value).

  • For parameters in an exponential (ie exp(theta)) or parameters specifying powers, boxCox or yeoJohnson transformations, this is 1.

  • For additive, proportional, lognormal error structures, these are given by 0.5*abs(initial_estimate)

  • Factorials are scaled by abs(1/digamma(initial_estimate+1))

  • parameters in a log scale (ie log(theta)) are transformed by log(abs(initial_estimate))*abs(initial_estimate)

These parameter scaling coefficients are chosen to try to keep similar slopes among parameters; that is, they all follow the slopes approximately on a log scale.

While these are chosen in a logical manner, they may not always apply. You can specify each parameter's scaling factor with this argument if you wish.

scaleTo

Scale the initial parameter estimate to this value. By default this is 1. When zero or below, no scaling is performed.

gradTo

The value the gradient is scaled to before optimizing. This only works with scaleType="nlmixr2".

addProp

Specifies the type of additive plus proportional errors: the type where standard deviations add (combined1) or the type where variances add (combined2).

The combined1 error type can be described by the following equation:

$$y = f + (a + b\times f^c) \times \varepsilon$$

The combined2 error model can be described by the following equation:

$$y = f + \sqrt{a^2 + b^2\times f^{2\times c}} \times \varepsilon$$

Where:

- y represents the observed value

- f represents the predicted value

- a is the additive standard deviation

- b is the proportional/power standard deviation

- c is the power exponent (in the proportional case c=1)
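
The two error combinations can be compared numerically; `a`, `b`, `cpow`, and the predictions `f` below are hypothetical values:

```r
a <- 0.1; b <- 0.2; cpow <- 1  # additive SD, proportional SD, power exponent
f <- c(1, 10, 100)             # hypothetical predicted values

sdCombined1 <- a + b * f^cpow                  # combined1: SDs add
sdCombined2 <- sqrt(a^2 + b^2 * f^(2 * cpow))  # combined2: variances add

sdCombined1            # 0.3  2.1 20.1
round(sdCombined2, 3)  # 0.224 2.002 20.000
```

At large predictions the two models converge (the proportional term dominates); they differ most where the additive and proportional terms are comparable.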

calcTables

Boolean to determine if the fit will calculate tables. By default this is `TRUE`.

compress

Should the fit object have compressed items?

covMethod

Method for calculating covariance. In this discussion, R is the Hessian matrix of the objective function, and S is the sum of the individual gradient cross-products (evaluated at the individual empirical Bayes estimates).

  • "r,s" Uses the sandwich matrix to calculate the covariance, that is: solve(R) %*% S %*% solve(R)

  • "r" Uses the Hessian matrix to calculate the covariance as 2 * solve(R)

  • "s" Uses the cross-product matrix to calculate the covariance as 4 * solve(S)

  • "" Does not calculate the covariance step.
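
The matrix formulas above can be sketched with hypothetical 2x2 Hessian (R) and cross-product (S) matrices, both symmetric positive definite:

```r
R <- matrix(c(4, 1, 1, 3), 2, 2)      # hypothetical Hessian
S <- matrix(c(2, 0.5, 0.5, 1), 2, 2)  # hypothetical gradient cross-product

covSandwich <- solve(R) %*% S %*% solve(R)  # "r,s" sandwich estimator
covR <- 2 * solve(R)                        # "r"
covS <- 4 * solve(S)                        # "s"

covR[1, 1]  # 2 * 3/11
```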

adjObf

A boolean indicating whether the objective function should be adjusted to be closer to NONMEM's default objective function. By default this is `TRUE`.

ci

Confidence level for some tables. By default this is 0.95 or 95% confidence.

sigdig

Optimization significant digits. This controls:

  • The tolerance of the inner and outer optimization is 10^-sigdig

  • The tolerance of the ODE solvers is 0.5*10^(-sigdig-2); For the sensitivity equations and steady-state solutions the default is 0.5*10^(-sigdig-1.5) (sensitivity changes only applicable for liblsoda)

  • The tolerance of the boundary check is 5 * 10 ^ (-sigdig + 1)
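
The implied tolerances can be computed directly from the rules above (shown for the default sigdig = 4; the vector names are illustrative):

```r
sigdig <- 4
tols <- c(
  optTol   = 10^-sigdig,                # inner/outer optimization tolerance
  odeTol   = 0.5 * 10^(-sigdig - 2),    # ODE solver tolerance
  sensTol  = 0.5 * 10^(-sigdig - 1.5),  # sensitivity/steady-state (liblsoda)
  boundTol = 5 * 10^(-sigdig + 1))      # boundary-check tolerance
print(tols)
```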

sigdigTable

Significant digits in the final output table. If not specified, then it matches the significant digits in the `sigdig` optimization algorithm. If `sigdig` is NULL, use 3.

...

Further arguments to be supplied to objective.

Author

Matthew L. Fidler

Examples

# \donttest{
# A logit regression example with emax model

dsn <- data.frame(i = 1:1000)
dsn$time <- exp(rnorm(1000))
dsn$DV <- rbinom(1000, 1, exp(-1 + dsn$time) / (1 + exp(-1 + dsn$time)))

mod <- function() {
 ini({
   E0 <- 0.5
   Em <- 0.5
   E50 <- 2
   g <- fix(2)
 })
 model({
   v <- E0+Em*time^g/(E50^g+time^g)
   ll(bin) ~ DV * v - log(1 + exp(v))
 })
}

fit2 <- nlmixr(mod, dsn, est="nlminb")
#>  
#>  
#>  
#>  
#> → loading into symengine environment...
#> → pruning branches (`if`/`else`) of population log-likelihood model...
#>  done
#> → calculate jacobian
#> → calculate ∂(f)/∂(θ)
#> → finding duplicate expressions in nlm llik gradient...
#> → optimizing duplicate expressions in nlm llik gradient...
#> → finding duplicate expressions in nlm pred-only...
#> → optimizing duplicate expressions in nlm pred-only...
#>  
#>  
#>  
#>  
#> → calculating covariance
#>  done
#> → loading into symengine environment...
#> → pruning branches (`if`/`else`) of full model...
#>  done
#> → finding duplicate expressions in EBE model...
#> → optimizing duplicate expressions in EBE model...
#> → compiling EBE model...
#>  
#>  
#>  done
#> → Calculating residuals/tables
#>  done
#> → compress origData in nlmixr2 object, save 9072
#> → compress parHistData in nlmixr2 object, save 2808

print(fit2)
#> ── nlmix log-likelihood nlminb ──
#> 
#>           OBJF      AIC      BIC Log-likelihood Condition#(Cov) Condition#(Cor)
#> lPop -687.2881 1156.589 1171.312      -575.2945        587.8147        74.15538
#> 
#> ── Time (sec $time): ──
#> 
#>            setup table compress    other
#> elapsed 0.001967 0.021    0.008 1.529033
#> 
#> ── ($parFixed or $parFixedDf): ──
#> 
#>        Est.     SE  %RSE  Back-transformed(95%CI) BSV(SD) Shrink(SD)%
#> E0  -0.5676 0.2307 40.65 -0.5676 (-1.02, -0.1154)                    
#> Em    5.732  2.927 51.07 5.732 (-0.005937, 11.47)                    
#> E50    3.14  1.492  47.5     3.14 (0.2169, 6.064)                    
#> g         2  FIXED FIXED                        2                    
#>  
#>   Covariance Type ($covMethod): r (nlminb)
#>   Censoring ($censInformation): No censoring
#>   Minimization message ($message):  
#>     relative convergence (4) 
#> 
#> ── Fit Data (object is a modified tibble): ──
#> # A tibble: 1,000 × 5
#>   ID      TIME    DV  IPRED      v
#>   <fct>  <dbl> <dbl>  <dbl>  <dbl>
#> 1 1     0.0165     0 -0.449 -0.567
#> 2 1     0.0372     1 -1.02  -0.567
#> 3 1     0.0882     1 -1.01  -0.563
#> # ℹ 997 more rows

# you can also get the nlminb output with fit2$nlminb

fit2$nlminb
#> $par
#>         E0         Em        E50 
#> -0.5676345  5.7316674  3.1404447 
#> 
#> $objective
#> [1] 575.2945
#> 
#> $convergence
#> [1] 0
#> 
#> $iterations
#> [1] 8
#> 
#> $evaluations
#> function gradient 
#>       16        9 
#> 
#> $message
#> [1] "relative convergence (4)"
#> 
#> $scaleC
#> [1] 0.002991343 0.037227160 0.034581662
#> 
#> $parHistData
#>    iter                type     objf            E0            Em           E50
#> 1     1              Scaled 667.4395 -1.000000e+00 -1.000000e+00  1.000000e+00
#> 2     1            Unscaled 667.4395  5.000000e-01  5.000000e-01  2.000000e+00
#> 3     1    Back-Transformed 667.4395  5.000000e-01  5.000000e-01  2.000000e+00
#> 4     2              Scaled 666.5136 -1.225326e+00 -1.115838e-01  1.020822e+00
#> 5     2            Unscaled 666.5136  4.993260e-01  5.330732e-01  2.000720e+00
#> 6     2    Back-Transformed 666.5136  4.993260e-01  5.330732e-01  2.000720e+00
#> 7     3              Scaled 663.9271 -2.093563e+00  2.514772e+00  1.177080e+00
#> 8     3            Unscaled 663.9271  4.967288e-01  6.308450e-01  2.006124e+00
#> 9     3    Back-Transformed 663.9271  4.967288e-01  6.308450e-01  2.006124e+00
#> 10    4              Scaled 655.8650 -9.000211e+00  1.126673e+01  3.342632e+00
#> 11    4            Unscaled 655.8650  4.760686e-01  9.566555e-01  2.081012e+00
#> 12    4    Back-Transformed 655.8650  4.760686e-01  9.566555e-01  2.081012e+00
#> 13    5              Scaled 642.6739 -3.478339e+01  2.137661e+01  1.019116e+01
#> 14    5            Unscaled 642.6739  3.989423e-01  1.333018e+00  2.317846e+00
#> 15    5    Back-Transformed 642.6739  3.989423e-01  1.333018e+00  2.317846e+00
#> 16    6              Scaled 619.4996 -1.030791e+02  3.486979e+01  1.767987e+01
#> 17    6            Unscaled 619.4996  1.946463e-01  1.835330e+00  2.576818e+00
#> 18    6    Back-Transformed 619.4996  1.946463e-01  1.835330e+00  2.576818e+00
#> 19    7              Scaled 589.6730 -2.727326e+02  6.085355e+01  1.808770e+01
#> 20    7            Unscaled 589.6730 -3.128454e-01  2.802632e+00  2.590921e+00
#> 21    7    Back-Transformed 589.6730 -3.128454e-01  2.802632e+00  2.590921e+00
#> 22    8              Scaled 756.6679 -4.903503e+02  8.896321e+01 -4.082895e+01
#> 23    8            Unscaled 756.6679 -9.638147e-01  3.849075e+00  5.534855e-01
#> 24    8    Back-Transformed 756.6679 -9.638147e-01  3.849075e+00  5.534855e-01
#> 25    9              Scaled 581.9043 -2.858492e+02  7.803395e+01  9.813798e+00
#> 26    9            Unscaled 581.9043 -3.520817e-01  3.442210e+00  2.304796e+00
#> 27    9    Back-Transformed 581.9043 -3.520817e-01  3.442210e+00  2.304796e+00
#> 28   10              Scaled 578.7259 -3.010772e+02  9.316947e+01  1.884865e+01
#> 29   10            Unscaled 578.7259 -3.976339e-01  4.005662e+00  2.617236e+00
#> 30   10    Back-Transformed 578.7259 -3.976339e-01  4.005662e+00  2.617236e+00
#> 31   11              Scaled 576.3628 -3.363878e+02  1.057256e+02  2.169725e+01
#> 32   11            Unscaled 576.3628 -5.032600e-01  4.473092e+00  2.715745e+00
#> 33   11    Back-Transformed 576.3628 -5.032600e-01  4.473092e+00  2.715745e+00
#> 34   12              Scaled 575.3869 -3.650403e+02  1.247436e+02  2.639989e+01
#> 35   12            Unscaled 575.3869 -5.889695e-01  5.181076e+00  2.878370e+00
#> 36   12    Back-Transformed 575.3869 -5.889695e-01  5.181076e+00  2.878370e+00
#> 37   13              Scaled 575.3004 -3.591877e+02  1.354750e+02  3.202478e+01
#> 38   13            Unscaled 575.3004 -5.714624e-01  5.580576e+00  3.072888e+00
#> 39   13    Back-Transformed 575.3004 -5.714624e-01  5.580576e+00  3.072888e+00
#> 40   14              Scaled 575.2945 -3.580384e+02  1.391476e+02  3.379290e+01
#> 41   14            Unscaled 575.2945 -5.680245e-01  5.717296e+00  3.134033e+00
#> 42   14    Back-Transformed 575.2945 -5.680245e-01  5.717296e+00  3.134033e+00
#> 43   15              Scaled 575.2945 -3.579110e+02  1.395271e+02  3.397491e+01
#> 44   15            Unscaled 575.2945 -5.676433e-01  5.731426e+00  3.140327e+00
#> 45   15    Back-Transformed 575.2945 -5.676433e-01  5.731426e+00  3.140327e+00
#> 46   16              Scaled 575.2945 -3.579081e+02  1.395336e+02  3.397831e+01
#> 47   16            Unscaled 575.2945 -5.676345e-01  5.731667e+00  3.140445e+00
#> 48   16    Back-Transformed 575.2945 -5.676345e-01  5.731667e+00  3.140445e+00
#> 49   17              Scaled 575.2945 -3.579081e+02  1.395336e+02  3.397831e+01
#> 50   17            Unscaled 575.2945 -5.676345e-01  5.731667e+00  3.140445e+00
#> 51   17    Back-Transformed 575.2945 -5.676345e-01  5.731667e+00  3.140445e+00
#> 52    1 Forward Sensitivity       NA  2.374484e-01 -1.001859e+00 -1.550203e-02
#> 53    7 Forward Sensitivity       NA -1.455773e-02 -6.061276e-01  4.122381e-01
#> 54    9 Forward Sensitivity       NA  8.977175e-02 -7.292494e-02 -1.668496e-01
#> 55   11 Forward Sensitivity       NA  2.009114e-02 -7.229962e-02  3.156044e-02
#> 56   12 Forward Sensitivity       NA -3.813890e-03 -2.128132e-02  1.770812e-02
#> 57   13 Forward Sensitivity       NA -4.754851e-04 -4.823774e-03  4.115251e-03
#> 58   14 Forward Sensitivity       NA -5.738836e-05 -4.434199e-04  4.145758e-04
#> 59   15 Forward Sensitivity       NA -5.381231e-07 -4.341575e-06  5.911731e-07
#> 60   16 Forward Sensitivity       NA  6.723736e-10  1.404030e-09 -7.745115e-08
#> 
#> $par.scaled
#>         E0         Em        E50 
#> -357.90805  139.53362   33.97831 
#> 
#> $hessian
#>               E0           Em          E50
#> E0   0.001787365  0.002540405 -0.006239682
#> Em   0.002540405  0.008714471 -0.017563512
#> E50 -0.006239682 -0.017563512  0.038745748
#> 
#> $covMethod
#> [1] "r (nlminb)"
#> 
#> $cov.scaled
#>           E0       Em      E50
#> E0  5949.638 2276.271 1989.977
#> Em  2276.271 6183.655 3169.636
#> E50 1989.977 3169.636 1860.508
#> 
#> $cov
#>             E0        Em       E50
#> E0  0.05323816 0.2534838 0.2058545
#> Em  0.25348381 8.5696890 4.0805163
#> E50 0.20585446 4.0805163 2.2249650
#> 
#> $r
#>                E0           Em          E50
#> E0   0.0008936825  0.001270203 -0.003119841
#> Em   0.0012702027  0.004357236 -0.008781756
#> E50 -0.0031198409 -0.008781756  0.019372874
#> 
# }