Struct leaf::layer::Layer

pub struct Layer<B: IBackend> {
    pub name: String,
    pub config: Box<LayerConfig>,
    pub worker: Box<ILayer<B>>,
    pub weights_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub weights_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blobs_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub input_blob_names: Vec<String>,
    pub output_blobs_data: Vec<ArcLock<SharedTensor<f32>>>,
    pub output_blobs_gradient: Vec<ArcLock<SharedTensor<f32>>>,
    pub blob_names: HashMap<String, (ArcLock<SharedTensor<f32>>, ArcLock<SharedTensor<f32>>)>,
    // some fields omitted
}

The generic Layer

Fields

name

Identifies the Layer

The name is mainly used for logging purposes.

config

The configuration of the Layer

worker

The implementation of the Layer.

This is the part that does most of the work (forward/backward).

weights_data

The vector that stores shared references to the weights in the form of blobs.

weights_gradient

The vector that stores shared references to the gradients w.r.t. the weights in the form of blobs.

input_blobs_data

References to all the input blobs of the layer.

input_blobs_gradient

References to the gradients of all the input blobs of the layer.

input_blob_names

Names for all the input blobs of the layer.

output_blobs_data

References to all the output blobs of the layer.

output_blobs_gradient

References to the gradients of all the output blobs of the layer.

blob_names

All the blobs of the layer that can be addressed by name.

Does not contain anonymous blobs.
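
All of the tensor fields above are ArcLock<SharedTensor<f32>> handles. Below is a minimal sketch of inspecting them, assuming ArcLock<T> is Leaf's alias for Arc<RwLock<T>>, so a read lock has to be taken before a tensor can be accessed; the helper function is hypothetical.

use collenchyma::prelude::*;
use leaf::layer::Layer;

// Hypothetical helper: print the shape of every weight blob a layer holds.
fn print_weight_shapes<B: IBackend>(layer: &Layer<B>) {
    for weight in &layer.weights_data {
        // ArcLock is assumed to be Arc<RwLock<_>>, so take a read lock first.
        let tensor = weight.read().unwrap();
        println!("weight shape: {:?}", tensor.desc());
    }
}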

Methods

impl<B: IBackend> Layer<B>

fn connect(&mut self, registry: &mut HashMap<String, (ArcLock<SharedTensor<f32>>, ArcLock<SharedTensor<f32>>)>, weight_registry: &mut HashMap<String, (ArcLock<SharedTensor<f32>>, ArcLock<SharedTensor<f32>>, Option<f32>, Option<f32>)>)

Connect the layer to other layers and set up tensors for intermediate results and weights.

Connects to the outputs provided by other layers via the registry. Adds output blobs to the layer and then adds them to the registry, so that the next layers can connect to them as their inputs. Finally, it initializes the underlying layer implementation.

Called during initialization of container layers.

fn init_backprop(&mut self, blobs_under_loss: &mut HashSet<String>, blobs_skip_backp: &mut HashSet<String>)

Initializes the layer for backpropagation.

Goes through all the blobs of a layer to determine which blobs contribute to the loss of the next layer. Backward computation can be skipped for blobs that don't contribute to the loss. If all of the blobs skip backpropagation, a flag is set to skip backpropagation of the whole layer.

fn init_force_backward(&mut self)

Set backpropagation flags to force this layer to backpropagate.

Is executed during network initialization if NetworkConfig.force_backward is true. Forcing backpropagation is useful for debugging.

fn forward(&mut self, inputs: &[ArcLock<SharedTensor<f32>>]) -> Vec<ArcLock<SharedTensor<f32>>>

Uses the underlying layer implementation to compute a forward step.

See ILayer.forward
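
A hedged sketch of a complete forward pass; the network layout, the tensor shapes and the use of the native backend are illustrative choices, and the input is wrapped in Arc<RwLock<...>>, which ArcLock is assumed to alias.

use std::rc::Rc;
use std::sync::{Arc, RwLock};
use collenchyma::prelude::*;
use leaf::layer::*;
use leaf::layers::*;
use leaf::util;

let backend = Rc::new(util::native_backend());

// Illustrative single-layer network: 784 inputs -> 10 outputs.
let mut net_cfg = SequentialConfig::default();
net_cfg.add_input("data", &[10, 784]);
net_cfg.add_layer(LayerConfig::new("linear", LinearConfig { output_size: 10 }));
let mut layer = Layer::from_config(backend.clone(), &LayerConfig::new("network", net_cfg));

// Wrap an input tensor in the shared-lock form expected by `forward`;
// the returned vector holds the layer's output blobs.
let input = SharedTensor::<f32>::new(backend.device(), &(10, 784)).unwrap();
let input = Arc::new(RwLock::new(input));
let outputs = layer.forward(&[input]);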

fn backward(&mut self, output_gradients: &[ArcLock<SharedTensor<f32>>]) -> Vec<ArcLock<SharedTensor<f32>>>

Uses the underlying layer implementation to compute a backward step.

See ILayer.backward

fn backward_input(&mut self, output_gradients: &[ArcLock<SharedTensor<f32>>]) -> Vec<ArcLock<SharedTensor<f32>>>

Calculate the gradient w.r.t. input.

This method is mostly used when doing backpropagation.

fn backward_parameters(&mut self)

Calculate the gradient w.r.t. parameters.

"Parameters" here refers to weights and also possibly bias, depending on the layer.

This method is mostly used when doing backpropagation.
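
Taken together, the two methods make up one backward step. A hedged sketch, where the helper function and its gradient argument are hypothetical:

use std::sync::{Arc, RwLock};
use collenchyma::prelude::*;
use leaf::layer::Layer;

// Hypothetical helper running one full backward step.
// Arc<RwLock<SharedTensor<f32>>> is assumed to be what ArcLock<SharedTensor<f32>> aliases.
fn backward_step<B: IBackend>(layer: &mut Layer<B>,
                              output_gradients: &[Arc<RwLock<SharedTensor<f32>>>])
                              -> Vec<Arc<RwLock<SharedTensor<f32>>>> {
    // Gradients w.r.t. the inputs, to be handed to the preceding layer.
    let input_gradients = layer.backward_input(output_gradients);
    // Gradients w.r.t. this layer's own weights (and bias, if any).
    layer.backward_parameters();
    input_gradients
}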

fn synchronize(&self)

Synchronize the layer's backend.

fn update_weights<SolverB: IBackend + SolverOps<f32>>(&mut self, backend: &SolverB)

Updates the weights with the weight update computed by the Solver.

Updating the weights is the last step of computing a Solver minibatch. The update value is computed in previous steps according to the learning rate policy.

fn clear_weights_gradients(&mut self)

Clears the weight gradients and zero-initializes them.

The gradients for the weights accumulate over the backpropagation steps of a Solver minibatch and are cleared between each minibatch to start over with a clean slate.
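
Putting forward, backward, update_weights and clear_weights_gradients together, a heavily simplified and hypothetical sketch of one minibatch follows; a real Solver also scales the accumulated gradients according to its learning rate policy before applying them, and the sketch assumes the native backend provides the required solver operations.

use std::sync::{Arc, RwLock};
use collenchyma::prelude::*;
use leaf::layer::Layer;

// Hypothetical helper: run one minibatch of (input, output gradient) pairs
// through `layer`, then apply and reset the accumulated weight gradients.
fn run_minibatch(layer: &mut Layer<Backend<Native>>,
                 backend: &Backend<Native>,
                 batch: &[(Arc<RwLock<SharedTensor<f32>>>, Arc<RwLock<SharedTensor<f32>>>)]) {
    for (input, output_gradient) in batch {
        // Forward pass for this sample, then accumulate gradients via the backward pass.
        layer.forward(&[input.clone()]);
        layer.backward(&[output_gradient.clone()]);
    }
    // Apply the accumulated update, then zero the gradients for the next minibatch.
    layer.update_weights(backend);
    layer.clear_weights_gradients();
}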

fn save<P: AsRef<Path>>(&mut self, path: P) -> Result<()>

Serialize the Layer and its weights to a Cap'n Proto file at the specified path.

You can find the capnp schema here.

use std::rc::Rc;
use leaf::layer::*;
use leaf::layers::*;
use leaf::util;

let mut net_cfg = SequentialConfig::default();
// ... set up network ...
let cfg = LayerConfig::new("network", net_cfg);

let native_backend = Rc::new(util::native_backend());
let mut layer = Layer::from_config(native_backend, &cfg);
// ... do stuff with the layer ...
// ... and save it
layer.save("mynetwork").unwrap();

fn load<LB: IBackend + LayerOps<f32> + 'static, P: AsRef<Path>>(backend: Rc<LB>, path: P) -> Result<Layer<LB>>

Read a Cap'n Proto file at the specified path and deserialize the Layer inside it.

You can find the capnp schema here.

use std::rc::Rc;
use collenchyma::prelude::*;
use leaf::layer::*;
use leaf::util;

let native_backend = Rc::new(util::native_backend());
// Load layer from file "mynetwork"
let layer = Layer::<Backend<Native>>::load(native_backend, "mynetwork").unwrap();

fn set_weight_propagate_down(&mut self, weight_id: usize, value: bool)

Sets whether the layer should compute gradients w.r.t. a weight at a particular index given by weight_id.

See weight_propagate_down.
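
For illustration, a hedged sketch that freezes every weight blob of a layer so backpropagation no longer computes gradients for them; the helper function is hypothetical.

use collenchyma::prelude::*;
use leaf::layer::Layer;

// Hypothetical helper: disable gradient computation for all weight blobs of a layer.
fn freeze_weights<B: IBackend>(layer: &mut Layer<B>) {
    for weight_id in 0..layer.weights_data.len() {
        layer.set_weight_propagate_down(weight_id, false);
    }
}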

fn is_using_in_place(&self) -> bool

Returns true when the layer is using in-place computation.

For a layer to use in-place computation it needs to support it via compute_in_place and the names of the first input and output tensor have to match.

fn input_blob_names(&self) -> &[String]

Returns the names of all the input blobs.

fn loss(&self, weight_id: usize) -> Option<&f32>

Returns the loss weight associated with the weight blob with id weight_id.

fn learnable_weights_data(&self) -> Vec<ArcLock<SharedTensor<f32>>>

Returns all the learnable weights in the layer.

If the layer is a container layer it will return all the weights of the layers inside it.

fn learnable_weights_gradients(&self) -> Vec<ArcLock<SharedTensor<f32>>>

Returns the gradients for all the learnable weights in the layer.

If the layer is a container layer it will return all the gradients of the layers inside it.

fn learnable_weights_names(&self) -> Vec<String>

Returns the names of all the learnable weights in the layer.

If the layer is a container layer it will return the weight names of all the layers inside it.

fn learnable_weights_lr(&self) -> Vec<Option<f32>>

Returns the learning rate for all the learnable weights in the layer.

If the layer is a container layer it will return all learning rates of the layers inside it.
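
For illustration, a hedged sketch that pairs each learnable weight name with its configured learning rate; the helper function is hypothetical.

use collenchyma::prelude::*;
use leaf::layer::Layer;

// Hypothetical helper: list every learnable weight together with its learning rate,
// if one is configured.
fn describe_weights<B: IBackend>(layer: &Layer<B>) {
    let names = layer.learnable_weights_names();
    let rates = layer.learnable_weights_lr();
    for (name, lr) in names.iter().zip(rates.iter()) {
        match *lr {
            Some(lr) => println!("{}: learning rate {}", name, lr),
            None => println!("{}: no learning rate configured", name),
        }
    }
}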

impl<B: IBackend + LayerOps<f32> + 'static> Layer<B>

fn from_config(backend: Rc<B>, config: &LayerConfig) -> Layer<B>

Creates a new Layer from a LayerConfig.
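
A hedged sketch; the Linear layer configuration and the native backend are just one possible choice.

use std::rc::Rc;
use leaf::layer::*;
use leaf::layers::*;
use leaf::util;

// Build a standalone fully connected layer on the native backend.
let backend = Rc::new(util::native_backend());
let cfg = LayerConfig::new("linear", LinearConfig { output_size: 10 });
let mut linear = Layer::from_config(backend, &cfg);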

Trait Implementations

impl<B: IBackend> Send for Layer<B>

Derived Implementations

impl<B: Debug + IBackend> Debug for Layer<B>

fn fmt(&self, __arg_0: &mut Formatter) -> Result