Trait leaf::layer::ILayer

pub trait ILayer<B: IBackend>: ComputeOutput<f32, B> + ComputeInputGradient<f32, B> + ComputeParametersGradient<f32, B> {
    fn init(&mut self, backend: Rc<B>) { ... }
    fn reshape(&mut self, backend: Rc<B>, input_data: &mut Vec<ArcLock<SharedTensor<f32>>>, input_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>, weights_data: &mut Vec<ArcLock<SharedTensor<f32>>>, weights_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>, output_data: &mut Vec<ArcLock<SharedTensor<f32>>>, output_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>) { ... }
    fn resize_shared_workspace(&mut self, backend: Rc<B>, workspace: Option<ArcLock<SharedTensor<u8>>>) -> Option<ArcLock<SharedTensor<u8>>> { ... }
    fn forward(&self, backend: &B, input_data: &[ArcLock<SharedTensor<f32>>], weights_data: &[ArcLock<SharedTensor<f32>>], output_data: &mut [ArcLock<SharedTensor<f32>>]) { ... }
    fn backward_input(&self, backend: &B, weights_data: &[ArcLock<SharedTensor<f32>>], output_data: &[ArcLock<SharedTensor<f32>>], output_gradients: &[ArcLock<SharedTensor<f32>>], input_data: &[ArcLock<SharedTensor<f32>>], input_gradients: &mut [ArcLock<SharedTensor<f32>>]) { ... }
    fn backward_parameters(&self, backend: &B, output_data: &[ArcLock<SharedTensor<f32>>], output_gradients: &[ArcLock<SharedTensor<f32>>], input_data: &[ArcLock<SharedTensor<f32>>], weights_gradients: &mut [ArcLock<SharedTensor<f32>>]) { ... }
    fn sync(&self, backend: &B, input_data: &mut [ArcLock<SharedTensor<f32>>], input_gradients: &mut [ArcLock<SharedTensor<f32>>], weights_data: &mut [ArcLock<SharedTensor<f32>>], weights_gradients: &mut [ArcLock<SharedTensor<f32>>], output_data: &mut Vec<ArcLock<SharedTensor<f32>>>, output_gradients: &mut Vec<ArcLock<SharedTensor<f32>>>) { ... }
    fn auto_output_blobs(&self) -> bool { ... }
    fn min_output_blobs(&self) -> usize { ... }
    fn exact_num_output_blobs(&self) -> Option<usize> { ... }
    fn auto_weight_blobs(&self) -> bool { ... }
    fn exact_num_input_blobs(&self) -> Option<usize> { ... }
    fn allow_force_backward(&self, input_id: usize) -> bool { ... }
    fn sync_native(&self) -> bool { ... }
    fn compute_in_place(&self) -> bool { ... }
    fn is_container(&self) -> bool { ... }
    fn loss_weight(&self, output_id: usize) -> Option<f32> { ... }
    fn inputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>> { ... }
    fn inputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>> { ... }
    fn outputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>> { ... }
    fn outputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>> { ... }
    fn learnable_weights(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>> { ... }
    fn learnable_weights_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>> { ... }
    fn learnable_weights_names(&self) -> Option<Vec<String>> { ... }
    fn learnable_weights_lr(&self) -> Option<Vec<Option<f32>>> { ... }
}

A layer in a neural network that can handle the forward and backward passes of a computation step.
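For orientation, a minimal custom layer that keeps all of the provided defaults and only overrides the blob-count hints could look like the following sketch. MyLayer is hypothetical, and its ComputeOutput, ComputeInputGradient and ComputeParametersGradient supertrait implementations are omitted:

    // Hypothetical stateless layer; the Compute* supertrait impls
    // are assumed to exist elsewhere.
    struct MyLayer;

    impl<B: IBackend> ILayer<B> for MyLayer {
        // Expect exactly one input and one output blob; every other
        // method falls back to the provided default.
        fn exact_num_input_blobs(&self) -> Option<usize> { Some(1) }
        fn exact_num_output_blobs(&self) -> Option<usize> { Some(1) }
    }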

Provided Methods

fn init(&mut self, backend: Rc<B>)

Initialize the layer for computation.

Allows for layer-specific one time setup, e.g. precomputing constant values.
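As a hedged sketch, a layer might use init to precompute a constant lookup table once; self.table is an assumed field on the layer, not part of the trait:

    // Hypothetical one-time setup: fill a lookup table before the
    // first forward pass. `self.table: Vec<f32>` is an assumed field.
    fn init(&mut self, backend: Rc<B>) {
        self.table = (0..256).map(|i| i as f32 / 255.0).collect();
    }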

fn reshape(&mut self, backend: Rc<B>, input_data: &mut Vec<ArcLock<SharedTensor<f32>>>, input_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>, weights_data: &mut Vec<ArcLock<SharedTensor<f32>>>, weights_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>, output_data: &mut Vec<ArcLock<SharedTensor<f32>>>, output_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>)

Adjusts the shapes of the output blobs to fit the shapes of the input blobs.

Should be called during layer initialization, after init.

Caution: input_data should only be reshaped, but not resized.
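A sketch of a pass-through reshape that mirrors the shape of the first input blob onto all output blobs. It assumes SharedTensor exposes desc() and a resize-like method, as the caution above implies; the exact names may differ:

    fn reshape(&mut self,
               backend: Rc<B>,
               input_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
               input_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>,
               weights_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
               weights_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>,
               output_data: &mut Vec<ArcLock<SharedTensor<f32>>>,
               output_gradient: &mut Vec<ArcLock<SharedTensor<f32>>>) {
        // Shape of the first input blob (read lock only; no resize).
        let input_shape = input_data[0].read().unwrap().desc().clone();
        // Resize every output blob and its gradient to that shape.
        for output in output_data.iter_mut() {
            output.write().unwrap().resize(&input_shape).unwrap();
        }
        for gradient in output_gradient.iter_mut() {
            gradient.write().unwrap().resize(&input_shape).unwrap();
        }
    }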

fn resize_shared_workspace(&mut self, backend: Rc<B>, workspace: Option<ArcLock<SharedTensor<u8>>>) -> Option<ArcLock<SharedTensor<u8>>>

Adjust the size of the shared workspace.

Used by layers that need a workspace. The layer should either:

  • leave the workspace as is if it is bigger than required by this layer
  • resize the workspace to the required size if it is smaller
  • create a workspace if workspace is None

The reference to the workspace should be saved in the layer. A sketch of this decision logic follows.
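In the sketch below, self.workspace, required_workspace_size and new_workspace are hypothetical, and the size check stands in for whatever query SharedTensor actually offers:

    fn resize_shared_workspace(&mut self,
                               backend: Rc<B>,
                               workspace: Option<ArcLock<SharedTensor<u8>>>)
                               -> Option<ArcLock<SharedTensor<u8>>> {
        let required = self.required_workspace_size(); // hypothetical helper
        let ws = match workspace {
            Some(ws) => {
                // Big enough: leave as is; too small: grow it.
                if ws.read().unwrap().desc().size() < required {
                    ws.write().unwrap().resize(&vec![required]).unwrap();
                }
                ws
            }
            // No workspace yet: create one of the required size
            // (hypothetical constructor).
            None => Arc::new(RwLock::new(self.new_workspace(required))),
        };
        // Keep a reference in the layer, as required above.
        self.workspace = Some(ws.clone());
        Some(ws)
    }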

fn forward(&self, backend: &B, input_data: &[ArcLock<SharedTensor<f32>>], weights_data: &[ArcLock<SharedTensor<f32>>], output_data: &mut [ArcLock<SharedTensor<f32>>])

Compute the feedforward layer output using the provided Backend.

Acquires read locks for the input tensors and write locks for the output tensors to ensure sequential computation, and then passes them on to the computation-specific implementation (compute_output).
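In plain Rust terms (ArcLock<T> being an alias for Arc<RwLock<T>>), the locking pattern amounts to roughly this illustrative fragment:

    // Read locks on inputs, write locks on outputs; the locked tensors
    // are then handed on to the ComputeOutput implementation.
    let input = input_data[0].read().unwrap();        // shared read access
    let mut output = output_data[0].write().unwrap(); // exclusive write access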

fn backward_input(&self, backend: &B, weights_data: &[ArcLock<SharedTensor<f32>>], output_data: &[ArcLock<SharedTensor<f32>>], output_gradients: &[ArcLock<SharedTensor<f32>>], input_data: &[ArcLock<SharedTensor<f32>>], input_gradients: &mut [ArcLock<SharedTensor<f32>>])

Compute the backpropagation input gradient using the provided backend.

Acquires write locks for the input gradient tensors to ensure sequential computation, and then calls compute_input_gradient.

fn backward_parameters(&self, backend: &B, output_data: &[ArcLock<SharedTensor<f32>>], output_gradients: &[ArcLock<SharedTensor<f32>>], input_data: &[ArcLock<SharedTensor<f32>>], weights_gradients: &mut [ArcLock<SharedTensor<f32>>])

Compute the backpropagation parameters gradient using the provided backend.

Acquires write locks for the weight gradient tensors to ensure sequential computation, and then calls compute_parameters_gradient.

fn sync(&self, backend: &B, input_data: &mut [ArcLock<SharedTensor<f32>>], input_gradients: &mut [ArcLock<SharedTensor<f32>>], weights_data: &mut [ArcLock<SharedTensor<f32>>], weights_gradients: &mut [ArcLock<SharedTensor<f32>>], output_data: &mut Vec<ArcLock<SharedTensor<f32>>>, output_gradients: &mut Vec<ArcLock<SharedTensor<f32>>>)

Synchronize the blobs before doing a forward or backward operation.

This is necessary because the forward_layer and backward_layer methods only immutably borrow the corresponding input blobs and weights, which they are not supposed to change. However, synchronizing all blobs to the same device may be necessary for some computations, which can only be done with a mutable borrow.

fn auto_output_blobs(&self) -> bool

Return whether "anonymous" output blobs are created automatically for the layer.

If this method returns true, Network::init will create enough "anonymous" output blobs to fulfill the requirement specified by exact_num_output_blobs or min_output_blobs.

fn min_output_blobs(&self) -> usize

Returns the minimum number of output blobs required by the layer, or 0 if no minimum number is required.

This method should be overridden to return a positive value if your layer expects some minimum number of output blobs.

fn exact_num_output_blobs(&self) -> Option<usize>

Returns the exact number of output blobs required by the layer, or None if no exact number is required.

This method should be overridden to return Some(n) if your layer expects an exact number of output blobs.

fn auto_weight_blobs(&self) -> bool

Return whether weight blobs are created automatically for the layer.

If this method returns true, Network::init will create a weight blob for every output blob.

fn exact_num_input_blobs(&self) -> Option<usize>

Returns the exact number of input blobs required by the layer, or None if no exact number is required.

This method should be overridden to return Some(n) if your layer expects an exact number of input blobs.

fn allow_force_backward(&self, input_id: usize) -> bool

Return whether to allow force_backward for a given input blob index.

If allow_force_backward(i) == false, we will ignore the force_backward setting and backpropagate to blob i only if it needs gradient information (as is done when force_backward == false).
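For illustration, a hypothetical layer whose first input carries raw data and never needs a gradient could ignore force_backward for it:

    // Assumption for illustration: input 0 never needs a gradient,
    // so force_backward is ignored for it.
    fn allow_force_backward(&self, input_id: usize) -> bool {
        input_id != 0
    }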

fn sync_native(&self) -> bool

Return whether a simple native backend should be used to sync instead of the default backend.

If false is returned, the default backend will be used; otherwise a new native backend will be created and provided as an argument to sync.

fn compute_in_place(&self) -> bool

Return whether the computations of a layer should be done in-place (the output will be written where the input was read from).

Doing computations in place reduces the memory required for layers.

If false is returned, the layer behaves as normal; otherwise, if a layer is provided an identical "input" and "output", it will only be supplied an output_data when doing a compute_output.

fn is_container(&self) -> bool

Return whether the layer is a container.

This turns off certain behaviour for containers which would otherwise lead to problems:

  • RwLocks will not be acquired for forward/backward, since that would lead to deadlocks.

fn loss_weight(&self, output_id: usize) -> Option<f32>

Return the associated loss weight for a given output blob index.

If loss_weight(i) == None, no loss will be calculated for output blob i.

This is usually overridden by loss layers.
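A loss layer with a single scored output might override it like this (the weight of 1.0 is illustrative):

    // Hypothetical loss layer: full loss weight on the first output
    // blob, no loss on any other.
    fn loss_weight(&self, output_id: usize) -> Option<f32> {
        if output_id == 0 { Some(1f32) } else { None }
    }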

fn inputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the input tensors of the layer.

This should only be overridden by container layers, where the tensors are not easily exposable.

fn inputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients of the input tensors of the layer.

This should only be overridden by container layers, where the tensors are not easily exposable.

fn outputs_data(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the output tensors of the layer.

This should only be overridden by container layers, where the tensors are not easily exposable.

fn outputs_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients of the output tensors of the layer.

This should only be overridden by container layers, where the tensors are not easily exposable.

fn learnable_weights(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the learnable weights inside the layer.

This should only be overridden by container layers, where the weights are not easily exposable.

fn learnable_weights_gradients(&self) -> Option<Vec<ArcLock<SharedTensor<f32>>>>

Return the gradients for the learnable weights inside the layer.

This should only be overridden by container layers, where the weights are not easily exposable.

fn learnable_weights_names(&self) -> Option<Vec<String>>

Return the names of the learnable weights inside the layer.

This should only be overridden by container layers, where the weights are not easily exposable.

fn learnable_weights_lr(&self) -> Option<Vec<Option<f32>>>

Return the learning rates for the learnable weights inside the layer.

This should only be overridden by container layers, where the weights are not easily exposable.

Trait Implementations

impl<B: IBackend> Debug for ILayer<B>

fn fmt(&self, f: &mut Formatter) -> Result

Implementors