Commit Graph

11 Commits

Author SHA1 Message Date
Egon Elbre
52173e64f5 optimize: make tests parallel 2020-03-16 16:10:59 +02:00
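For context on how Go tests are made parallel: calling t.Parallel() at the start of a test lets the testing package run it concurrently with other parallel tests. A minimal sketch; the test name and body are hypothetical, not taken from the commit:

```go
package optimize_test

import "testing"

// TestQuadraticMinimum is a hypothetical test name. The t.Parallel() call
// signals that this test may run concurrently with other parallel tests.
func TestQuadraticMinimum(t *testing.T) {
	t.Parallel()
	// ... test body exercising the optimizer ...
}
```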
Dan Kortschak
56079b412a optimize: clean up lint 2019-11-07 08:53:13 +10:30
Brendan Tracey
b545e3e77e optimize: Refactor gradient convergence and remove DefaultSettings (#772)
* optimize: Refactor gradient convergence and remove DefaultSettings

The current API design makes it easy to forget to use DefaultSettings, and to get the wrong behavior as a result. This change makes the zero value of Settings do the right thing, so DefaultSettings can be removed.

The one setting DefaultSettings still controlled is the GradientTolerance. Gradient-based Local methods (BFGS, LBFGS, CG, etc.) typically _define_ convergence by the magnitude of the gradient, while Global methods (CMAES, GuessAndCheck) are defined by _not_ converging when the gradient is small. The problem is supporting these two opposite defaults without knowing the Method in advance.

The solution is to treat a very small gradient as a method-based convergence, in the same way that a small spread of data is a convergence for CMAES. From the perspective of Settings, the default is therefore never to converge based on the gradient, yet all of the Local methods still converge once a value close to the minimum is found. The default threshold is set so small that no user should want a smaller one; a user can still set a (more reasonable) convergence value through Settings. (A sketch of the resulting usage follows this entry.)

Fixes #677.
2018-12-23 08:17:27 -05:00
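A sketch of what the zero-value behavior means for callers, assuming the current gonum.org/v1/gonum/optimize API; the quadratic objective and the 1e-6 threshold are illustrative choices, not values from the commit:

```go
package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
)

func main() {
	p := optimize.Problem{
		// f(x) = x0^2 + x1^2, minimized at the origin.
		Func: func(x []float64) float64 { return x[0]*x[0] + x[1]*x[1] },
		Grad: func(grad, x []float64) {
			grad[0] = 2 * x[0]
			grad[1] = 2 * x[1]
		},
	}

	// With nil (equivalently, zero-value) Settings, a gradient-based method
	// such as BFGS still stops near the minimum through its own method-based
	// convergence; no DefaultSettings call is needed.
	result, err := optimize.Minimize(p, []float64{1, 2}, nil, &optimize.BFGS{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.X)

	// A user who wants an explicit (more reasonable) gradient tolerance can
	// still set one through Settings.
	s := &optimize.Settings{GradientThreshold: 1e-6}
	result, err = optimize.Minimize(p, []float64{1, 2}, s, &optimize.BFGS{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.X)
}
```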
Brendan Tracey
fd90faf24c Change Minimize to take in an initial X location 2018-07-26 06:45:43 -06:00
Brendan Tracey
6f6398b9ea Change Global to Minimize 2018-07-26 06:45:43 -06:00
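Taken together, these two commits shaped the entry point as it exists today: the caller passes the starting location directly to Minimize. A minimal sketch assuming the current API; the gradient-free problem is illustrative:

```go
package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
)

func main() {
	p := optimize.Problem{
		Func: func(x []float64) float64 { return x[0]*x[0] + x[1]*x[1] },
	}

	// The initial X location is now an explicit argument. A nil method asks
	// the package to pick a suitable one for the problem.
	result, err := optimize.Minimize(p, []float64{1, 2}, nil, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.X)
}
```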
Brendan Tracey
f402b0ae71 optimize: remove Local implementation and replace with a call to Global (#485)
* optimize: remove Local implementation and replace with a call to Global

This PR starts the process described in #482. It removes the existing Local implementation, replacing it with a function that wraps a Method so it acts as a GlobalMethod. This PR also adds a hack to fix an inconsistency in FunctionConverge behavior between Global and Local (and a TODO to remove the hack in the future). (A simplified sketch of the wrapping follows this entry.)
2018-05-09 11:02:19 -06:00
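The shape of that wrapping, in a deliberately simplified sketch; localMethod, globalMethod, and localWrapper are hypothetical stand-ins for the package's historical interfaces, which carried considerably more state:

```go
package sketch

// localMethod and globalMethod are hypothetical stand-ins; the real
// interfaces exchanged richer operation and location types.
type localMethod interface {
	// Iterate advances from x and reports whether the method has converged.
	Iterate(x []float64) (next []float64, done bool)
}

type globalMethod interface {
	RunFrom(x []float64) []float64
}

// localWrapper adapts a localMethod to the globalMethod interface,
// mirroring how Local became a thin call into Global.
type localWrapper struct{ m localMethod }

func (w localWrapper) RunFrom(x []float64) []float64 {
	for {
		next, done := w.m.Iterate(x)
		x = next
		if done {
			return x
		}
	}
}
```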
Brendan Tracey
996b88e8f8 optimize: completely overhaul Global (#352)
* optimize: completely overhaul Global

The previous implementation of Global was a minefield for anyone implementing a global optimization method: it was very difficult to implement methods correctly (both of the provided methods were incorrect), and the resulting code was very ugly. This commit switches to communicating over channels, which gives the concurrent code a clear ordering and also enables cleaner shutdown of methods.

In addition to the main fix of Global, this refactors the two Global methods to use the updated interface and makes some small improvements that were not possible before. There are also some small cleanups of Local so that the two calls match more closely.

If anyone has been curious about what is meant by 'Don't communicate by sharing memory; share memory by communicating', this is it, and why. (A minimal illustration follows at the end of this entry.)

* respond to PR comments

* make constants

* simplify termination logic

* optimize: simplify stats collection

* overhaul documentation and respond to PR comments

* implement PR requests

* clean up cmaes
2018-02-05 08:44:02 -07:00
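A minimal, self-contained illustration of the channel-based coordination the message above refers to; evalTask and evalResult are hypothetical types, not the package's real operation types:

```go
package main

import (
	"fmt"
	"sync"
)

// evalTask and evalResult are hypothetical; in the real package the
// coordinator hands out optimization operations and collects statistics.
type evalTask struct{ x float64 }
type evalResult struct{ x, fx float64 }

func main() {
	tasks := make(chan evalTask)
	results := make(chan evalResult)

	// Workers receive work over a channel instead of locking shared state,
	// so ordering and shutdown are explicit in the channel operations.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range tasks {
				results <- evalResult{x: t.x, fx: t.x * t.x}
			}
		}()
	}

	// Closing tasks tells every worker to stop; closing results after the
	// workers finish tells the collector the run is over.
	go func() {
		for i := 0; i < 10; i++ {
			tasks <- evalTask{x: float64(i)}
		}
		close(tasks)
		wg.Wait()
		close(results)
	}()

	for r := range results {
		fmt.Println(r.x, r.fx)
	}
}
```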
kortschak
805531d142 all: change capitalization of gonum in license header 2017-11-02 06:54:08 +10:30
Brendan Tracey
0d639745f1 all: update packages from mat64 to mat.
This mostly changes the package name and code, but also fixes a couple of name clashes with the new package names. (The rename as it appears at a call site is sketched below.)
2017-06-13 10:28:21 -06:00
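At a call site the rename looks like the following; the Dense construction is illustrative, and the paths shown are the pre-monorepo mat64 path and the package's current home:

```go
package main

import (
	"fmt"

	// Formerly: import "github.com/gonum/matrix/mat64"
	"gonum.org/v1/gonum/mat"
)

func main() {
	// mat64.NewDense became mat.NewDense; the call is otherwise unchanged.
	m := mat.NewDense(2, 2, []float64{1, 2, 3, 4})
	fmt.Printf("%v\n", mat.Formatted(m))
}
```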
Brendan Tracey
d33397aa65 all: change import paths 2017-05-23 00:03:03 -06:00
Brendan Tracey
37c29d47e7 optimize: imported optimize as a subtree 2017-05-23 00:02:57 -06:00