Trait collenchyma_nn::ReluPointwise

pub trait ReluPointwise<F>: NN<F> {
    fn relu_pointwise(&self, x: &mut SharedTensor<F>) -> Result<(), Error>;
    fn relu_pointwise_plain(&self, x: &mut SharedTensor<F>) -> Result<(), Error>;
    fn relu_pointwise_grad(&self, x: &mut SharedTensor<F>, x_diff: &mut SharedTensor<F>) -> Result<(), Error>;
    fn relu_pointwise_grad_plain(&self, x: &SharedTensor<F>, x_diff: &mut SharedTensor<F>) -> Result<(), Error>;
}

Provides pointwise ReLU operations, which overwrite the input tensor with the result of the operation.
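
A minimal usage sketch of the memory-managed methods, written against nothing but the trait signature above. The helper name, the choice of f32, and the assumption that SharedTensor and Error from the usual collenchyma / collenchyma_nn imports are in scope are illustrative, not part of this crate:

fn relu_forward_backward<B: ReluPointwise<f32>>(
    backend: &B,
    x: &mut SharedTensor<f32>,
    x_diff: &mut SharedTensor<f32>,
) -> Result<(), Error> {
    // Forward pass: overwrites `x` with the ReLU of its contents; the plugin
    // handles memory allocation and synchronization on the backend's device.
    backend.relu_pointwise(x)?;
    // Backward pass: uses the activated values in `x` and overwrites `x_diff`
    // with the gradient.
    backend.relu_pointwise_grad(x, x_diff)?;
    Ok(())
}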

Required Methods

fn relu_pointwise(&self, x: &mut SharedTensor<F>) -> Result<(), Error>

Computes the rectified linear unit (ReLU) activation over the input Tensor x with complete memory management.

Saves the result back to x.

For a version without memory management, see relu_pointwise_plain.

fn relu_pointwise_plain(&self, x: &mut SharedTensor<F>) -> Result<(), Error>

Computes the ReLU over the input Tensor x without any memory management.

Saves the result back to x.

Attention:
For a correct result, you need to manage memory allocation and synchronization yourself.
For a memory-managed version, see relu_pointwise.
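
A sketch of the caller's side of this contract (illustrative; the exact allocation and synchronization calls depend on the collenchyma SharedTensor API and are therefore only described in the comments):

fn relu_in_place_plain<B: ReluPointwise<f32>>(
    backend: &B,
    x: &mut SharedTensor<f32>,
) -> Result<(), Error> {
    // Caller-managed precondition: `x` must already have memory allocated on
    // the backend's device and hold up-to-date data there.
    backend.relu_pointwise_plain(x)?;
    // `x` now holds the ReLU result on the backend's device; synchronizing it
    // back to another device (e.g. native host memory) is again up to the caller.
    Ok(())
}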

fn relu_pointwise_grad(&self, x: &mut SharedTensor<F>, x_diff: &mut SharedTensor<F>) -> Result<(), Error>

Computes the gradient of ReLU over the input Tensor x with complete memory management.

Saves the result back to x_diff.

For a version without memory management, see relu_pointwise_grad_plain.
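
As an illustration, assuming (as is typical for in-place activation backward passes) that x holds the forward-pass output and x_diff the incoming gradient: with x = [0.0, 2.0, 0.5] and x_diff = [0.3, 0.7, -1.0], the call overwrites x_diff with [0.0, 0.7, -1.0], passing the gradient through where x > 0 and zeroing it elsewhere.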

fn relu_pointwise_grad_plain(&self, x: &SharedTensor<F>, x_diff: &mut SharedTensor<F>) -> Result<(), Error>

Computes the gradient of ReLU over the input Tensor x without any memory management.

Saves the result back to x_diff.

Attention:
For a correct result, you need to manage memory allocation and synchronization yourself.
For a memory-managed version, see relu_pointwise_grad.

Implementors