solve($x, $a, $y, $s, $vary, $f, $lambda, $termepsilon, $maxiter, $verbose)
Minimize E = sum {(y[k] - f(x[k],a)) / s[k]}^2. The individual errors are optionally scaled by s[k]. Note that LMfunc implements the value and gradient of f(x,a), NOT the value and gradient of E with respect to a!
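The iteration behind this routine can be sketched as follows. This is a minimal NumPy illustration of the Levenberg-Marquardt loop described above, not this library's implementation; the names lm_solve, f, and jac are hypothetical, and s is taken here as the sigma appearing in the formula for E.

```python
import numpy as np

def lm_solve(x, a, y, s, f, jac, lam=0.001, termepsilon=0.01, maxiter=25):
    """Sketch of the LM descent: f(x, a) -> model values at the domain
    points, jac(x, a) -> Jacobian of f w.r.t. a (one row per point).
    Returns (a, lam) so lam can be reused across calls."""
    def err(a):
        r = (y - f(x, a)) / s          # scaled residuals (y[k]-f(x[k],a))/s[k]
        return r, float(r @ r)         # residuals and E
    r, e = err(a)
    for _ in range(maxiter):
        J = jac(x, a) / s[:, None]     # scaled Jacobian
        g = J.T @ r                    # gradient direction
        H = J.T @ J                    # Gauss-Newton Hessian approximation
        # lambda blends steepest descent (lambda high) with a jump to the
        # bottom of the local quadratic (lambda zero)
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), g)
        r2, e2 = err(a + step)
        if e2 < e:                     # accept: move and relax lambda
            a, r, e, lam = a + step, r2, e2, lam * 0.1
        else:                          # reject: raise lambda, stay put
            lam *= 10.0
        if e < termepsilon:
            break
    return a, lam

# usage: fit a line y = a0 + a1*x from a zero start
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * x
a, lam = lm_solve(x, np.zeros(2), y, np.ones(4),
                  f=lambda x, a: a[0] + a[1] * x,
                  jac=lambda x, a: np.stack([np.ones_like(x), x], axis=1))
```

For this linear problem one accepted step lands essentially at the minimum; on nonlinear models the accept/reject logic adapts lambda until the quadratic approximation is trustworthy.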
Tags:
return:
the new lambda for future iterations. You can use this together with maxiter to interleave the LM descent with some other task, setting maxiter to something small.
Parameters:
$x - array of domain points; each may be multidimensional
$a - the parameters/state of the model
$y - corresponding array of values
$s - sigma^2 for point i
$vary - false to indicate the corresponding a[k] is to be held fixed
$f - the model; implements the value and gradient of f(x,a) (see the note above)
$lambda - blend between steepest descent (lambda high) and jump to bottom of quadratic (lambda zero). Start with 0.001.
$termepsilon - termination accuracy (0.01)
$maxiter - stop and return after this many iterations if not done
$verbose - verbosity of progress output
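The $vary mask deserves a small illustration: a parameter flagged false receives no update. One common way to realize that, sketched here in NumPy for a two-parameter line model (lm_step is a hypothetical helper, not this library's API), is to zero the Jacobian columns of the fixed parameters before forming the normal equations.

```python
import numpy as np

def lm_step(a, vary, x, y, lam):
    """One damped LM step for f(x, a) = a[0] + a[1]*x that honors a
    vary mask: entries set to False keep the corresponding a[k] fixed."""
    r = y - (a[0] + a[1] * x)
    J = np.stack([np.ones_like(x), x], axis=1)
    J[:, ~vary] = 0.0                  # fixed parameters get zero gradient
    H = J.T @ J + lam * np.eye(2)      # lam also keeps zeroed columns solvable
    return a + np.linalg.solve(H, J.T @ r)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x
a = np.array([1.0, 0.0])               # hold a[0] at its known value, fit a[1]
vary = np.array([False, True])
for _ in range(50):
    a = lm_step(a, vary, x, y, 0.001)
```

Because the zeroed column makes both the gradient entry and the off-diagonal Hessian terms vanish, the fixed parameter is reproduced exactly while the free one converges as usual.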