OptimizerBatchNLoptr class that implements non-linear optimization.
Calls nloptr::nloptr() from package nloptr.
Source
Johnson, S G (2020). “The NLopt nonlinear-optimization package.” https://github.com/stevengj/nlopt.
Parameters
algorithm character(1)
Algorithm to use. See nloptr::nloptr.print.options() for available algorithms.

x0 numeric()
Initial parameter values. Use the start_values parameter to create "random" or "center" start values.

start_values character(1)
Create "random" start values or values based on the "center" of the search space? In the latter case, the center of the parameters is taken before a trafo is applied. Custom start values can be passed via the x0 parameter.

approximate_eval_grad_f logical(1)
Should gradients be numerically approximated via finite differences (nloptr::nl.grad)? Only required for certain algorithms. Note that the function evaluations required for the numerical gradient approximation are logged as usual and are not treated differently from regular function evaluations by, e.g., Terminators.
For the meaning of other control parameters, see nloptr::nloptr() and nloptr::nloptr.print.options().
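As a minimal sketch of configuring these parameters (assuming the bbotk and nloptr packages are installed; the algorithm name is one of the nloptr identifiers listed by nloptr::nloptr.print.options()):

```r
library(bbotk)

# construct the optimizer with an algorithm and a start-value strategy
optimizer = opt("nloptr",
  algorithm = "NLOPT_LN_NELDERMEAD",
  start_values = "center"
)

# parameters can also be changed after construction via the param_set
optimizer$param_set$set_values(start_values = "random")
```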
Internal Termination Parameters
The algorithm can be terminated with all Terminators. Additionally, the following internal termination parameters can be used:

stopval numeric(1)
Stop value. Deactivate with -Inf. Default is -Inf.

maxtime integer(1)
Maximum run time. Deactivate with -1L. Default is -1L.

maxeval integer(1)
Maximum number of function evaluations. Deactivate with -1L. Default is -1L.

xtol_rel numeric(1)
Relative tolerance on parameter values. The original nloptr default is 10^-4, but it is overwritten with -1 (deactivated).

xtol_abs numeric(1)
Absolute tolerance on parameter values. Deactivate with -1. Default is -1.

ftol_rel numeric(1)
Relative tolerance on function values. Deactivate with -1. Default is -1.

ftol_abs numeric(1)
Absolute tolerance on function values. Deactivate with -1. Default is -1.
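For illustration, a sketch of enabling some of these internal criteria alongside a regular Terminator (assuming the bbotk package is installed; the chosen values are arbitrary):

```r
library(bbotk)

# let nloptr itself stop after 100 evaluations or once a target value is reached;
# the tolerance-based criteria keep their deactivated defaults (-1)
optimizer = opt("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  maxeval = 100L,
  stopval = 9.9
)
```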
Progress Bars
$optimize() supports progress bars via the package progressr
combined with a Terminator. Simply wrap the function call in
progressr::with_progress() to enable them. We recommend using the package
progress as the backend; enable it with progressr::handlers("progress").
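A minimal sketch of this setup, assuming the packages progressr and progress are installed and that optimizer and instance have been created as in the Examples section:

```r
library(progressr)

# use package progress as the reporting backend
handlers("progress")

# wrap the optimization call so the Terminator drives a progress bar
with_progress(
  optimizer$optimize(instance)
)
```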
Super classes
bbotk::Optimizer -> bbotk::OptimizerBatch -> OptimizerBatchNLoptr
Examples
# example only runs if nloptr is available
if (mlr3misc::require_namespaces("nloptr", quietly = TRUE)) {

  # define the objective function
  fun = function(xs) {
    list(y = -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10)
  }

  # set domain
  domain = ps(
    x1 = p_dbl(-10, 10),
    x2 = p_dbl(-5, 5)
  )

  # set codomain
  codomain = ps(
    y = p_dbl(tags = "maximize")
  )

  # create objective
  objective = ObjectiveRFun$new(
    fun = fun,
    domain = domain,
    codomain = codomain,
    properties = "deterministic"
  )

  # initialize instance
  instance = oi(
    objective = objective,
    terminator = trm("evals", n_evals = 20)
  )

  # load optimizer
  optimizer = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA")

  # trigger optimization
  optimizer$optimize(instance)

  # all evaluated configurations
  instance$archive

  # best performing configuration
  instance$result
}
#> x1 x2 x_domain y
#> <num> <num> <list> <num>
#> 1: 2 -3 <list[2]> 10