
Introduction

Initial estimates for a compartmental population PK model can be obtained using babelmixr2 with the "pknca" estimation method. In addition, the scaling factor for the central compartment can be generated automatically from the units of dosing, concentration measurement, desired volume of distribution, and time.

You do not need to perform the NCA by hand; the "pknca" estimation method performs it automatically using the PKNCA package.

The methods used for converting NCA calculations to parameter estimates are described in the help for nlmixr2Est.pknca().
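
As a simplified illustration of the general idea (not babelmixr2's exact rules, which are documented in that help page), NCA summary statistics map naturally onto log-scale initial estimates for a μ-referenced model:

# Simplified illustration only -- NOT babelmixr2's exact conversion
# formulas (see ?nlmixr2Est.pknca for those); all values are hypothetical.
clNca <- 2.8    # clearance from NCA (L/hr)
vcNca <- 32     # volume from dose-normalized Cmax (L)
tmaxNca <- 1.1  # median tmax (hr)

# When the model defines cl <- exp(tcl + eta.cl), the natural initial
# estimate of tcl is the log of the NCA clearance (and similarly for tv):
tclInit <- log(clNca)
tvInit <- log(vcNca)
# ka needs a heuristic; one assumption-laden choice is to treat tmax as
# roughly three absorption half-lives:
tkaInit <- log(log(2) / (tmaxNca / 3))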

Initial example

Initial model setup is the same as for any other nlmixr2 model. You must load the babelmixr2 library so that the nlmixr() function recognizes est = "pknca".

library(babelmixr2)
#> Loading required package: nlmixr2
#> Loading required package: nlmixr2data
one.compartment <- function() {
  ini({
    tka <- log(1.57); label("Ka (1/hr)")
    tcl <- log(2.72); label("Cl (L/hr)")
    tv <- log(31.5); label("V (L)")
    eta.ka ~ 0.6
    eta.cl ~ 0.3
    eta.v ~ 0.1
    add.sd <- 0.7; label("additive residual error (mg/L)")
  })
  # and a model block with the error specification and model specification
  model({
    ka <- exp(tka + eta.ka)
    cl <- exp(tcl + eta.cl)
    vc <- exp(tv + eta.v)
    d/dt(depot) <- -ka * depot
    d/dt(center) <- ka * depot - cl / vc * center
    cp <- center / vc
    cp ~ add(add.sd)
  })
}

To use PKNCA to get initial estimates, use est = "pknca" instead of one of the other nlmixr2 estimation methods.

For unit conversions, provide the units via the control = pkncaControl() argument. Unit conversions are only supported when the units can be converted automatically: a mass/volume concentration can be converted to any other mass/volume concentration, but mass cannot be converted to molar (or vice versa) because the mass-to-molar conversion factor depends on the molecular weight of the analyte.
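
To see why the mass/volume case is well defined, the units package (used here only to demonstrate the arithmetic; it is not part of babelmixr2's interface) can do the conversion directly:

# A mass/volume concentration converts cleanly to any other
# mass/volume concentration: 1 mg/L is exactly 1000 ng/mL.
units::set_units(units::set_units(1, mg/L), ng/mL)
# prints: 1000 [ng/mL]
# By contrast, converting mg/L to nmol/L errors, because it would
# require the molecular weight of the analyte.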

prepared <-
  nlmixr2(
    one.compartment,
    data = theo_sd,
    est = "pknca",
    control = pkncaControl(concu = "ng/mL", doseu = "mg", timeu = "hr", volumeu = "L")
  )
#>  parameter labels from comments are typically ignored in non-interactive mode
#>  Need to run with the source intact to parse comments
#> Loading required namespace: testthat
#>  change initial estimate (0.89314878960486) and upper/lower bound (-3.50655789731998 to 3.72508541597241) of `tka`
#> → significant model change detected
#> → removed from model: '$getSplitModel'
#>  change initial estimate (8.41044546236311) and upper/lower bound (5.51439905878865 to 10.899462850803) of `tcl`
#>  change initial estimate (10.5377244826318) and upper/lower bound (7.94567233496473 to 13.1050053785005) of `tv`

Now, you have the prepared model with updated initial estimates and the NCA results embedded. You can see the new model and the PKNCA estimates by looking at prepared$ui (the model as interpreted by rxode2) and prepared$nca (the PKNCAresults object).

Note that in the new model, the fixed-effect initial estimates are changed from their original values, while the residual error and between-subject variability are unchanged. The concentration prediction also gains a scaling factor (cp <- 1000 * center/vc below): a dose in mg distributed in a volume in L gives mg/L, and 1 mg/L equals 1000 ng/mL, the concentration unit of the data.

prepared$ui
#>  ── rxode2-based free-form 2-cmt ODE model ────────────────────────────────────── 
#>  ── Initalization: ──  
#> Fixed Effects ($theta): 
#>        tka        tcl         tv     add.sd 
#>  0.8931488  8.4104455 10.5377245  0.7000000 
#> 
#> Omega ($omega): 
#>        eta.ka eta.cl eta.v
#> eta.ka    0.6    0.0   0.0
#> eta.cl    0.0    0.3   0.0
#> eta.v     0.0    0.0   0.1
#> 
#> States ($state or $stateDf): 
#>   Compartment Number Compartment Name
#> 1                  1            depot
#> 2                  2           center
#>  ── μ-referencing ($muRefTable): ──  
#>   theta    eta level
#> 1   tka eta.ka    id
#> 2   tcl eta.cl    id
#> 3    tv  eta.v    id
#> 
#>  ── Model (Normalized Syntax): ── 
#> function() {
#>     ini({
#>         tka <- c(-3.50655789731998, 0.89314878960486, 3.72508541597241)
#>         label("Ka (1/hr)")
#>         tcl <- c(5.51439905878865, 8.41044546236311, 10.899462850803)
#>         label("Cl (L/hr)")
#>         tv <- c(7.94567233496473, 10.5377244826318, 13.1050053785005)
#>         label("V (L)")
#>         add.sd <- c(0, 0.7)
#>         label("additive residual error (mg/L)")
#>         eta.ka ~ 0.6
#>         eta.cl ~ 0.3
#>         eta.v ~ 0.1
#>     })
#>     model({
#>         ka <- exp(tka + eta.ka)
#>         cl <- exp(tcl + eta.cl)
#>         vc <- exp(tv + eta.v)
#>         d/dt(depot) <- -ka * depot
#>         d/dt(center) <- ka * depot - cl/vc * center
#>         cp <- 1000 * center/vc
#>         cp ~ add(add.sd)
#>     })
#> }

knitr::knit_print(
  summary(prepared$nca)
)
| Interval Start | Interval End | N | AUClast (hr*ng/mL) | Cmax (ng/mL) | Tmax (hr) | CL (based on AUClast) (mg/(hr*ng/mL)) | Vss (based on AUClast) (mg/(ng/mL)) | Half-life (hr) | AUCinf,obs (hr*ng/mL) | Cmax (dose-normalized) ((ng/mL)/mg) |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 24 | 12 | 74.6 [24.3] | . | . | 4.22 [23.0] | 25.0 [18.5] | . | . | . |
| 0 | Inf | 12 | . | 8.65 [17.0] | 1.14 [0.630, 3.55] | . | . | 8.18 [2.12] | 115 [28.4] | 0.0274 [18.1] |

Caption: AUClast, Cmax, CL (based on AUClast), Vss (based on AUClast), AUCinf,obs, Cmax (dose-normalized): geometric mean and geometric coefficient of variation; Tmax: median and range; Half-life: arithmetic mean and standard deviation; N: number of subjects
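
If you want the subject-level results rather than the summary, the PKNCAresults object can be coerced to a data frame with PKNCA's standard accessor:

# Long-format, subject-level NCA results: one row per
# subject/interval/parameter combination.
ncaLong <- as.data.frame(prepared$nca)
head(ncaLong)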

From the prepared model, you can run estimation as you would for any other nlmixr2 model object:

fit <- nlmixr(prepared, data = theo_sd, est = "focei", control = list(print = 0))
#> → loading into symengine environment...
#> → pruning branches (`if`/`else`) of full model...
#>  done
#> → calculate jacobian
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → calculate sensitivities
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → calculate ∂(f)/∂(η)
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → calculate ∂(R²)/∂(η)
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → finding duplicate expressions in inner model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → optimizing duplicate expressions in inner model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → finding duplicate expressions in EBE model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → optimizing duplicate expressions in EBE model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → compiling inner model...
#> using C compiler: ‘gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0’
#>  done
#> → finding duplicate expressions in FD model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → optimizing duplicate expressions in FD model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → compiling EBE model...
#> using C compiler: ‘gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0’
#>  done
#> → compiling events FD model...
#> using C compiler: ‘gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0’
#>  done
#> rxode2 3.0.2.9000 using 2 threads (see ?getRxThreads)
#>   no cache: create with `rxCreateCache()`
#> calculating covariance matrix
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00 
#> done
#> → Calculating residuals/tables
#>  done
#> → compress origData in nlmixr2 object, save 5952
#> → compress parHistData in nlmixr2 object, save 10632

fit
#> ── nlmix² FOCEi (outer: nlminb) ──
#> 
#>           OBJF      AIC      BIC Log-likelihood Condition#(Cov) Condition#(Cor)
#> FOCEi 116.9548 373.5546 393.7342      -179.7773         66.5244        12.97786
#> 
#> ── Time (sec fit$time): ──
#> 
#>            setup optimize covariance table compress    other
#> elapsed 0.034318 0.496944   0.496946 0.108    0.009 7.461792
#> 
#> ── Population Parameters (fit$parFixed or fit$parFixedDf): ──
#> 
#>                             Parameter  Est.     SE  %RSE
#> tka                         Ka (1/hr) 0.469  0.224  47.6
#> tcl                         Cl (L/hr)  7.92 0.0929  1.17
#> tv                              V (L)  10.4 0.0602 0.581
#> add.sd additive residual error (mg/L) 0.697             
#>              Back-transformed(95%CI) BSV(CV%) Shrink(SD)%
#> tka                 1.6 (1.03, 2.48)     68.0    -0.666% 
#> tcl     2.75e+03 (2.29e+03, 3.3e+03)     26.1      3.95% 
#> tv     3.19e+04 (2.83e+04, 3.59e+04)     15.4      14.5% 
#> add.sd                         0.697                     
#>  
#>   Covariance Type (fit$covMethod): r,s
#>   No correlations in between subject variability (BSV) matrix
#>   Full BSV covariance (fit$omega) or correlation (fit$omegaR; diagonals=SDs) 
#>   Distribution stats (mean/skewness/kurtosis/p-value) available in fit$shrink 
#>   Information about run found (fit$runInfo):
#>    • gradient problems with initial estimate and covariance; see $scaleInfo 
#>    • last objective function was not at minimum, possible problems in optimization 
#>    • ETAs were reset to zero during optimization; (Can control by foceiControl(resetEtaP=.)) 
#>    • initial ETAs were nudged; (can control by foceiControl(etaNudge=., etaNudge2=)) 
#>   Censoring (fit$censInformation): No censoring
#>   Minimization message (fit$message):  
#>     false convergence (8) 
#>   In an ODE system, false convergence may mean "useless" evaluations were performed.
#>   See https://tinyurl.com/yyrrwkce
#>   It could also mean the convergence is poor, check results before accepting fit
#>   You may also try a good derivative free optimization:
#>     nlmixr2(...,control=list(outerOpt="bobyqa"))
#> 
#> ── Fit Data (object fit is a modified tibble): ──
#> # A tibble: 132 × 22
#>   ID     TIME    DV  PRED    RES   WRES IPRED   IRES  IWRES CPRED   CRES  CWRES
#>   <fct> <dbl> <dbl> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl> <dbl>  <dbl>  <dbl>
#> 1 1      0     0.74  0     0.74   1.06   0     0.74   1.06   0     0.74   1.06 
#> 2 1      0.25  2.84  3.27 -0.432 -0.234  3.84 -1.00  -1.44   3.23 -0.389 -0.185
#> 3 1      0.57  6.57  5.84  0.730  0.297  6.78 -0.215 -0.308  5.78  0.786  0.287
#> # ℹ 129 more rows
#> # ℹ 10 more variables: eta.ka <dbl>, eta.cl <dbl>, eta.v <dbl>, depot <dbl>,
#> #   center <dbl>, ka <dbl>, cl <dbl>, vc <dbl>, tad <dbl>, dosenum <dbl>

Give PKNCA a different dataset or a completed NCA analysis

To get the initial estimates, babelmixr2 automatically converts your modeling dataset to the format needed by PKNCA, and the NCA is performed using all of the data.

In some cases (e.g., studies with sparse sampling), NCA may not be feasible on the full modeling dataset. In those cases, you can give PKNCA a different dataset than the one used for modeling. Usually, the simplest choice is the single-dose, dense-sampling, dose-ranging data (e.g., the single-ascending-dose portion of a first-in-human study).

To do this, give your data to PKNCA using the ncaData argument to pkncaControl() as follows:

# Choose a subset of the full dataset for NCA
dNCA <- theo_sd[theo_sd$ID <= 6, ]

preparedNcaData <-
  nlmixr2(
    one.compartment,
    data = theo_sd,
    est = "pknca",
    control = pkncaControl(concu = "ng/mL", doseu = "mg", timeu = "hr", volumeu = "L", ncaData = dNCA)
  )
#>  parameter labels from comments are typically ignored in non-interactive mode
#>  Need to run with the source intact to parse comments
#>  change initial estimate (0.929027077269762) and upper/lower bound (-3.50655789731998 to 3.32136703319919) of `tka`
#> → significant model change detected
#> → removed from model: '$getSplitModel'
#>  change initial estimate (8.3955404628088) and upper/lower bound (5.85241523541802 to 10.7637056987378) of `tcl`
#>  change initial estimate (10.5377244826318) and upper/lower bound (7.94370069836702 to 13.1024358787022) of `tv`

preparedNcaData$ui
#>  ── rxode2-based free-form 2-cmt ODE model ────────────────────────────────────── 
#>  ── Initalization: ──  
#> Fixed Effects ($theta): 
#>        tka        tcl         tv     add.sd 
#>  0.9290271  8.3955405 10.5377245  0.7000000 
#> 
#> Omega ($omega): 
#>        eta.ka eta.cl eta.v
#> eta.ka    0.6    0.0   0.0
#> eta.cl    0.0    0.3   0.0
#> eta.v     0.0    0.0   0.1
#> 
#> States ($state or $stateDf): 
#>   Compartment Number Compartment Name
#> 1                  1            depot
#> 2                  2           center
#>  ── μ-referencing ($muRefTable): ──  
#>   theta    eta level
#> 1   tka eta.ka    id
#> 2   tcl eta.cl    id
#> 3    tv  eta.v    id
#> 
#>  ── Model (Normalized Syntax): ── 
#> function() {
#>     ini({
#>         tka <- c(-3.50655789731998, 0.929027077269762, 3.32136703319919)
#>         label("Ka (1/hr)")
#>         tcl <- c(5.85241523541802, 8.3955404628088, 10.7637056987378)
#>         label("Cl (L/hr)")
#>         tv <- c(7.94370069836702, 10.5377244826318, 13.1024358787022)
#>         label("V (L)")
#>         add.sd <- c(0, 0.7)
#>         label("additive residual error (mg/L)")
#>         eta.ka ~ 0.6
#>         eta.cl ~ 0.3
#>         eta.v ~ 0.1
#>     })
#>     model({
#>         ka <- exp(tka + eta.ka)
#>         cl <- exp(tcl + eta.cl)
#>         vc <- exp(tv + eta.v)
#>         d/dt(depot) <- -ka * depot
#>         d/dt(center) <- ka * depot - cl/vc * center
#>         cp <- 1000 * center/vc
#>         cp ~ add(add.sd)
#>     })
#> }

The initial estimates are now based on NCA calculated from the dNCA dataset rather than the full theo_sd dataset.
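
To see the effect of the subset, you can compare the two sets of initial estimates side by side (a quick illustrative check using the $theta element shown in the model printouts above):

# Fixed-effect initial estimates from the full data vs. the NCA subset
cbind(full = prepared$ui$theta, subset = preparedNcaData$ui$theta)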

If you already have NCA results calculated by PKNCA with the required parameters ("tmax", "cmax.dn", and "cl.last"), you can provide those instead via the ncaResults argument to pkncaControl(), as sketched below.
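
For example, here is a minimal sketch of that workflow, assuming the standard PKNCA interface and the theo_sd column layout (the interval and row-filtering choices are illustrative, and the filtering assumes AMT is 0 on observation rows):

library(PKNCA)
dObs <- theo_sd[theo_sd$AMT == 0, ]   # observation rows
dDose <- theo_sd[theo_sd$AMT > 0, ]   # dosing rows
oConc <- PKNCAconc(dObs, DV ~ TIME | ID)
oDose <- PKNCAdose(dDose, AMT ~ TIME | ID)
# Request the parameters that est = "pknca" needs
intervals <- data.frame(start = 0, end = Inf, tmax = TRUE, cmax.dn = TRUE, cl.last = TRUE)
oData <- PKNCAdata(oConc, oDose, intervals = intervals)
myNCA <- pk.nca(oData)

preparedNcaResults <-
  nlmixr2(
    one.compartment,
    data = theo_sd,
    est = "pknca",
    control = pkncaControl(concu = "ng/mL", doseu = "mg", timeu = "hr", volumeu = "L", ncaResults = myNCA)
  )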

Model requirements

To update the initial estimates, the model must have parameters in the model() block with names that est = "pknca" expects. The expected names are:

  • ka (absorption rate constant)
  • vc (central volume of distribution)
  • cl (clearance)
  • vp (first peripheral volume)
  • vp2 (second peripheral volume)
  • q (first intercompartmental clearance)
  • q2 (second intercompartmental clearance)

Any of these parameter names found in the model() block will be automatically traced back to its initial estimate (in the ini() block), and that value will be updated. If the parameter is estimated on the log scale, the updated value will automatically be converted to the log scale.
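
For example, a two-compartment model following this naming convention might look like the sketch below (the initial values are placeholders for est = "pknca" to update):

two.compartment <- function() {
  ini({
    tka <- log(1); label("Ka (1/hr)")
    tcl <- log(3); label("Cl (L/hr)")
    tv <- log(30); label("Vc (L)")
    tq <- log(2); label("Q (L/hr)")
    tvp <- log(50); label("Vp (L)")
    add.sd <- 0.7
  })
  model({
    # Because each parameter is defined as exp(<theta>), est = "pknca"
    # places the updated initial estimates on the log scale.
    ka <- exp(tka)
    cl <- exp(tcl)
    vc <- exp(tv)
    q <- exp(tq)
    vp <- exp(tvp)
    d/dt(depot) <- -ka * depot
    d/dt(central) <- ka * depot - (cl + q) / vc * central + q / vp * periph
    d/dt(periph) <- q / vc * central - q / vp * periph
    cp <- central / vc
    cp ~ add(add.sd)
  })
}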