Module leaf::solvers

Provides the trainers for the Layers.

The optimal state of a neural network would be one where, for any given input, the network produces an output that perfectly matches the target function. In that state the loss function has its global minimum. The statement can also be reversed: if we manage to minimize the loss function of the network, the network maps the target function.
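Stated a bit more formally (this formalization is added for illustration; the symbols `f`, `w`, `ℓ`, and the training pairs `(x_i, y_i)` are not part of leaf's API): training searches for the weights that minimize the average loss over the training data,

```latex
w^{*} = \arg\min_{w} \; \frac{1}{N} \sum_{i=1}^{N} \ell\left( f(x_i; w),\, y_i \right)
```

where `f(x; w)` is the network's output for input `x` under weights `w`, and `ℓ` measures the distance between that output and the target.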

We can change the way a network behaves by adjusting its individual weights. To optimize the network, we therefore want to adjust the weights so that the loss function is minimized. To adjust a single weight correctly, we first need to know the effect that weight has on the loss function (= the gradient). The gradient can be computed with a method called backpropagation.
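As a toy illustration of that idea (a standalone sketch, not leaf's `Solver` API; the one-weight "network", the squared-error loss, and the learning rate are all made up for the example), here is a gradient-descent step on a single weight, with the gradient obtained via the chain rule exactly as backpropagation would compute it:

```rust
/// Toy setup: a "network" with one weight `w`, prediction `w * x`,
/// and squared-error loss `(w * x - y)^2`.
fn prediction(w: f32, x: f32) -> f32 {
    w * x
}

/// Gradient of the loss w.r.t. `w`, derived with the chain rule:
/// d/dw (w*x - y)^2 = 2 * (w*x - y) * x
fn gradient(w: f32, x: f32, y: f32) -> f32 {
    2.0 * (prediction(w, x) - y) * x
}

fn main() {
    let (x, y) = (3.0, 6.0); // single training sample; target function is y = 2x
    let mut w = 0.0; // initial weight
    let learning_rate = 0.05;

    for step in 0..20 {
        let grad = gradient(w, x, y);
        w -= learning_rate * grad; // step against the gradient to reduce the loss
        let loss = (prediction(w, x) - y).powi(2);
        println!("step {:2}: w = {:.4}, loss = {:.6}", step, w, loss);
    }
    // `w` converges towards 2.0, the weight that matches the target function.
}
```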

There are different methods by which a Solver can search for the minimum of the loss function. They mostly differ in two ways: how the gradient is computed (for example, over how many samples at a time) and how that gradient is then used to update the weights (the actual optimization rule, such as plain SGD or SGD with Momentum). A sketch of one such update rule follows.
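As a concrete example of an update rule, here is a minimal, self-contained sketch of SGD with momentum (plain Rust over `f32` slices; it does not use the types from the `sgd` module below, and the hyperparameter values are illustrative):

```rust
/// One SGD-with-momentum update over a flat slice of weights.
///
/// `velocity` carries an exponentially decaying average of past gradients,
/// which damps oscillations and speeds up progress along consistent directions.
fn momentum_step(
    weights: &mut [f32],
    velocity: &mut [f32],
    gradients: &[f32],
    learning_rate: f32,
    momentum: f32,
) {
    for ((w, v), g) in weights.iter_mut().zip(velocity.iter_mut()).zip(gradients) {
        *v = momentum * *v - learning_rate * g; // accumulate gradient history
        *w += *v; // apply the accumulated step to the weight
    }
}

fn main() {
    let mut weights = [0.0_f32; 3];
    let mut velocity = [0.0_f32; 3];
    // Gradients would normally come from backpropagation; fixed here for the demo.
    let gradients = [0.4, -0.2, 0.1];

    for _ in 0..5 {
        momentum_step(&mut weights, &mut velocity, &gradients, 0.1, 0.9);
    }
    println!("{:?}", weights);
}
```

With `momentum` set to `0.0` this degenerates to plain SGD; nonzero momentum lets successive gradients that point the same way build up speed.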

Reexports

pub use self::sgd::Momentum;

Modules

sgd

Provides ISolver implementations based on Stochastic Gradient Descent.