Trait collenchyma_nn::LRN
pub trait LRN<F>: NN<F> {
    fn new_lrn_config(&self, n: u32, alpha: f64, beta: f64, k: f64) -> Result<Self::CLRN, Error>;
    fn lrn(&self, x: &mut SharedTensor<F>, result: &mut SharedTensor<F>, config: &Self::CLRN) -> Result<(), Error>;
    fn lrn_plain(&self, x: &SharedTensor<F>, result: &mut SharedTensor<F>, config: &Self::CLRN) -> Result<(), Error>;
    fn lrn_grad(&self, x: &mut SharedTensor<F>, x_diff: &mut SharedTensor<F>, result: &mut SharedTensor<F>, result_diff: &mut SharedTensor<F>, config: &Self::CLRN) -> Result<(), Error>;
    fn lrn_grad_plain(&self, x: &SharedTensor<F>, x_diff: &SharedTensor<F>, result: &SharedTensor<F>, result_diff: &mut SharedTensor<F>, config: &Self::CLRN) -> Result<(), Error>;
}
Provides the functionality for a Backend to support Local Response Normalization operations.
Required Methods
fn new_lrn_config(&self, n: u32, alpha: f64, beta: f64, k: f64) -> Result<Self::CLRN, Error>
Creates a new LRNConfig (Local Response Normalization configuration), which needs to be passed to further LRN operations.
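A minimal sketch of creating such a config, assuming a CUDA backend that implements this trait; the backend setup follows the collenchyma README, and the parameter values (n = 5, alpha = 1e-4, beta = 0.75, k = 2.0) are AlexNet-style defaults chosen purely for illustration:

extern crate collenchyma as co;
extern crate collenchyma_nn as nn;

use co::backend::Backend;
use co::frameworks::Cuda;
use nn::*;

fn main() {
    // Assumption: a CUDA-capable device is available and the CUDA (cuDNN)
    // backend of collenchyma-nn provides the LRN trait.
    let backend = Backend::<Cuda>::default().unwrap();
    // Illustrative LRN parameters: local size n, alpha, beta, k.
    let config = backend.new_lrn_config(5, 1e-4, 0.75, 2.0).unwrap();
    // `config` can now be passed to lrn, lrn_plain, lrn_grad and lrn_grad_plain.
}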
fn lrn(&self, x: &mut SharedTensor<F>, result: &mut SharedTensor<F>, config: &Self::CLRN) -> Result<(), Error>
Computes an LRN over the input Tensor x with complete memory management. Saves the result to result.

For a version without memory management see lrn_plain.
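A sketch of the managed forward pass, continuing from the config example above; the tensor shape is illustrative and the SharedTensor setup is an assumption based on the collenchyma README:

// Additionally assumes `use co::tensor::SharedTensor;` is in scope.
let mut x      = SharedTensor::<f32>::new(backend.device(), &vec![1, 1, 3, 3]).unwrap();
let mut result = SharedTensor::<f32>::new(backend.device(), &vec![1, 1, 3, 3]).unwrap();
// ... fill `x` with input data ...
// `lrn` takes care of allocation and synchronization itself.
backend.lrn(&mut x, &mut result, &config).unwrap();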
fn lrn_plain(&self, x: &SharedTensor<F>, result: &mut SharedTensor<F>, config: &Self::CLRN) -> Result<(), Error>
Computes the LRN over the input Tensor x without any memory management. Saves the result to result.

Attention: For a correct computation result, you need to manage the memory allocation and synchronization yourself.

For a memory managed version see lrn.
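As a rough illustration of what managing memory yourself can look like, assuming the backend, config and tensors from the sketches above; the sync call is the usual collenchyma pattern and is an assumption here, not part of this trait:

// With the *_plain variants the caller must ensure the input data already
// lives on the compute device before the call.
x.sync(backend.device()).unwrap();
backend.lrn_plain(&x, &mut result, &config).unwrap();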
fn lrn_grad(&self, x: &mut SharedTensor<F>, x_diff: &mut SharedTensor<F>, result: &mut SharedTensor<F>, result_diff: &mut SharedTensor<F>, config: &Self::CLRN) -> Result<(), Error>
Computes the gradient of an LRN over the input Tensor x with complete memory management. Saves the result to result_diff.

For a version without memory management see lrn_grad_plain.
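A sketch of the managed backward pass, again assuming the backend, config and tensors from the forward example; the extra tensors and their shapes are illustrative:

let mut x_diff      = SharedTensor::<f32>::new(backend.device(), &vec![1, 1, 3, 3]).unwrap();
let mut result_diff = SharedTensor::<f32>::new(backend.device(), &vec![1, 1, 3, 3]).unwrap();
// ... fill `x_diff` with the incoming gradient ...
// `lrn_grad` manages memory itself and saves the computed gradient to `result_diff`.
backend.lrn_grad(&mut x, &mut x_diff, &mut result, &mut result_diff, &config).unwrap();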
fn lrn_grad_plain(&self, x: &SharedTensor<F>, x_diff: &SharedTensor<F>, result: &SharedTensor<F>, result_diff: &mut SharedTensor<F>, config: &Self::CLRN) -> Result<(), Error>
Computes the gradient of an LRN over the input Tensor x without any memory management. Saves the result to result_diff.

Attention: For a correct computation result, you need to manage the memory allocation and synchronization yourself.

For a memory managed version see lrn_grad.