brevitas.function package¶
Submodules¶
brevitas.function.autograd_ops module¶
class brevitas.function.autograd_ops.binary_sign_ste_fn¶
Bases: torch.autograd.function.Function
Autograd function that implements binary_sign_ste with a straight-through estimator. See the documentation of binary_sign_ste() for further details.
static backward(ctx, grad_y)¶
static forward(ctx, x)¶
class brevitas.function.autograd_ops.ceil_ste_fn¶
Bases: torch.autograd.function.Function
Autograd function that implements ceil_ste with a straight-through estimator. See the documentation of ceil_ste() for further details.
static backward(ctx, grad_y)¶
static forward(ctx, x)¶
class brevitas.function.autograd_ops.floor_ste_fn¶
Bases: torch.autograd.function.Function
Autograd function that implements floor_ste with a straight-through estimator. See the documentation of floor_ste() for further details.
static backward(ctx, grad_y)¶
static forward(ctx, x)¶
class brevitas.function.autograd_ops.round_ste_fn¶
Bases: torch.autograd.function.Function
Autograd function that implements round_ste with a straight-through estimator. See the documentation of round_ste() for further details.
static backward(ctx, grad_y)¶
static forward(ctx, x)¶
class brevitas.function.autograd_ops.scalar_clamp_ste_fn¶
Bases: torch.autograd.function.Function
Autograd function that implements scalar_clamp with a straight-through estimator. See the documentation of scalar_clamp_ste() for further details.
static backward(ctx, grad_y)¶
static forward(ctx, x, min_val, max_val)¶
class brevitas.function.autograd_ops.tensor_clamp_ste_fn¶
Bases: torch.autograd.function.Function
Autograd function that implements tensor_clamp with a straight-through estimator. See the documentation of tensor_clamp_ste() for further details.
static backward(ctx, grad_y)¶
static forward(ctx, x, min_val, max_val)¶
class brevitas.function.autograd_ops.ternary_sign_ste_fn¶
Bases: torch.autograd.function.Function
Autograd function that implements ternary_sign_ste with a straight-through estimator. See the documentation of ternary_sign_ste() for further details.
static backward(ctx, grad_y)¶
static forward(ctx, x)¶
brevitas.function.ops module¶
brevitas.function.ops.tensor_clamp(x, min_val, max_val)¶
Parameters
x (Tensor) – Tensor on which to apply the clamp operation
min_val (Tensor) – Tensor containing the minimum values for the clamp operation. Must have the same shape as x
max_val (Tensor) – Tensor containing the maximum values for the clamp operation. Must have the same shape as x
Returns
Tensor in which every element of x is clamped between the corresponding minimum and maximum values.
Return type
Tensor
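An element-wise clamp with per-element bounds can be sketched with PyTorch primitives (an illustration of the semantics, not necessarily the library's implementation):

```python
import torch

def tensor_clamp(x, min_val, max_val):
    # Clamp each element of x between the corresponding elements of
    # min_val and max_val; all three tensors share the same shape.
    return torch.min(torch.max(x, min_val), max_val)

x = torch.tensor([-2.0, 0.5, 3.0])
lo = torch.tensor([-1.0, 0.0, -1.0])
hi = torch.tensor([1.0, 1.0, 2.0])
print(tensor_clamp(x, lo, hi))  # tensor([-1.0000, 0.5000, 2.0000])
```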
brevitas.function.ops.max_uint(narrow_range, bit_width)¶
Compute the maximum representable unsigned integer.
The maximum representable unsigned integer depends on the number of bits and on whether the narrow-range setting is used. If it is, the maximum representable value is decreased by one.
Parameters
narrow_range (Bool) – Flag that indicates whether to decrease the maximum representable value by one
bit_width (Tensor) – Number of bits available for the representation
Returns
Maximum unsigned integer representable according to the input parameters
Return type
Tensor
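The arithmetic can be sketched in plain Python (the library version operates on Tensors, so this illustrates the formula only):

```python
def max_uint(narrow_range, bit_width):
    # Largest unsigned integer on bit_width bits; the narrow-range
    # setting gives up one value at the top of the range.
    value = (1 << bit_width) - 1
    if narrow_range:
        value -= 1
    return value

print(max_uint(False, 8))  # 255
print(max_uint(True, 8))   # 254
```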
brevitas.function.ops.max_int(signed, bit_width)¶
Compute the maximum representable integer.
The maximum representable integer depends on the number of bits and on whether negative numbers are included in the representation. If they are, one bit is spent on the sign, reducing the maximum value accordingly.
Parameters
signed (Bool) – Flag that indicates whether negative numbers must be included
bit_width (Tensor) – Number of bits available for the representation
Returns
Maximum integer representable according to the input parameters
Return type
Tensor
brevitas.function.ops.min_int(signed, narrow_range, bit_width)¶
Compute the minimum representable integer.
The minimum representable integer depends on the number of bits, on whether negative numbers are included in the representation, and on whether the narrow-range setting is used. For unsigned (positive-only) representations, the minimum value is always zero. If the signed and narrow-range flags are both set, the representation is symmetric between positive and negative values: for example, a 3-bit signed narrow-range representation covers the range [-3, 3], whereas without narrow range it covers [-4, 3].
Parameters
signed (Bool) – Flag that indicates whether negative numbers must be included
narrow_range (Bool) – Flag that indicates whether the narrow-range setting is enabled
bit_width (Tensor) – Number of bits available for the representation
Returns
Minimum integer representable according to the input parameters
Return type
Tensor
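The integer ranges described above can be sketched in plain Python (the library versions operate on Tensors; this illustrates the arithmetic only):

```python
def max_int(signed, bit_width):
    # One bit goes to the sign when negatives are included.
    if signed:
        return (1 << (bit_width - 1)) - 1
    return (1 << bit_width) - 1

def min_int(signed, narrow_range, bit_width):
    if not signed:
        return 0
    # Narrow range trades the most negative value for symmetry,
    # e.g. [-3, 3] instead of [-4, 3] on 3 bits.
    if narrow_range:
        return -(1 << (bit_width - 1)) + 1
    return -(1 << (bit_width - 1))

print(max_int(True, 3))         # 3
print(min_int(True, True, 3))   # -3
print(min_int(True, False, 3))  # -4
```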
brevitas.function.ops_ste module¶
brevitas.function.ops_ste.round_ste(x)¶
Perform the round operation with Straight Through Estimation (STE) of the gradient.
This operation behaves like an identity on the backward pass. For PyTorch version >= 1.3.0, the STE operator is implemented in C++ using the torch::autograd::Function class and compiled; at execution time, PyTorch's Just-In-Time (JIT) compiler is used to speed up the computation. For PyTorch version < 1.3.0, the STE operator is implemented in Python using the torch.autograd.Function class, and the JIT cannot be used.
Parameters
x (Tensor) – Tensor on which to apply the round operation
Returns
Tensor after applying the round operation. When backpropagating through this value, a straight-through estimator is applied.
Return type
Tensor
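The STE pattern described above can be sketched with a Python-side torch.autograd.Function (a minimal illustration of the behaviour, not the compiled C++ implementation):

```python
import torch

class RoundSteFn(torch.autograd.Function):
    """Round on the forward pass, identity on the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_y):
        # Straight-through estimator: pass the gradient unchanged.
        return grad_y

x = torch.tensor([0.4, 1.6], requires_grad=True)
y = RoundSteFn.apply(x)
y.sum().backward()
print(y.detach())  # tensor([0., 2.])
print(x.grad)      # tensor([1., 1.])
```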
brevitas.function.ops_ste.ceil_ste(x)¶
Perform the ceil operation with Straight Through Estimation (STE) of the gradient.
This operation behaves like an identity on the backward pass. For PyTorch version >= 1.3.0, the STE operator is implemented in C++ using the torch::autograd::Function class and compiled; at execution time, PyTorch's Just-In-Time (JIT) compiler is used to speed up the computation. For PyTorch version < 1.3.0, the STE operator is implemented in Python using the torch.autograd.Function class, and the JIT cannot be used.
Parameters
x (Tensor) – Tensor on which to apply the ceil operation
Returns
Tensor after applying the ceil operation. When backpropagating through this value, a straight-through estimator is applied.
Return type
Tensor
brevitas.function.ops_ste.floor_ste(x)¶
Perform the floor operation with Straight Through Estimation (STE) of the gradient.
This operation behaves like an identity on the backward pass. For PyTorch version >= 1.3.0, the STE operator is implemented in C++ using the torch::autograd::Function class and compiled; at execution time, PyTorch's Just-In-Time (JIT) compiler is used to speed up the computation. For PyTorch version < 1.3.0, the STE operator is implemented in Python using the torch.autograd.Function class, and the JIT cannot be used.
Parameters
x (Tensor) – Tensor on which to apply the floor operation
Returns
Tensor after applying the floor operation. When backpropagating through this value, a straight-through estimator is applied.
Return type
Tensor
brevitas.function.ops_ste.tensor_clamp_ste(x, min_val, max_val)¶
Perform the tensor-clamp operation with Straight Through Estimation (STE) of the gradient.
This function accepts two Tensors as min_val and max_val. These Tensors must have the same shape as x, so that each element of x can be clamped according to the corresponding min_val and max_val. This operation behaves like an identity on the backward pass. For PyTorch version >= 1.3.0, the STE operator is implemented in C++ using the torch::autograd::Function class and compiled; at execution time, PyTorch's Just-In-Time (JIT) compiler is used to speed up the computation. For PyTorch version < 1.3.0, the STE operator is implemented in Python using the torch.autograd.Function class, and the JIT cannot be used.
Parameters
x (Tensor) – Tensor on which to apply the clamp operation
min_val (Tensor) – Tensor containing the minimum values for the clamp operation. Must have the same shape as x
max_val (Tensor) – Tensor containing the maximum values for the clamp operation. Must have the same shape as x
Returns
Tensor in which every element of x is clamped between the corresponding minimum and maximum values. When backpropagating through this value, a straight-through estimator is applied.
Return type
Tensor
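A minimal Python-side sketch of this behaviour (on recent PyTorch releases the compiled C++ version would be used instead):

```python
import torch

class TensorClampSteFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, min_val, max_val):
        # Element-wise clamp of x between min_val and max_val.
        return torch.min(torch.max(x, min_val), max_val)

    @staticmethod
    def backward(ctx, grad_y):
        # Identity gradient for x; min_val and max_val get no gradient.
        return grad_y, None, None

x = torch.tensor([-2.0, 0.5, 3.0], requires_grad=True)
lo = torch.full_like(x, -1.0)
hi = torch.full_like(x, 1.0)
y = TensorClampSteFn.apply(x, lo, hi)
y.sum().backward()
print(y.detach())  # tensor([-1.0000, 0.5000, 1.0000])
print(x.grad)      # tensor([1., 1., 1.])
```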
brevitas.function.ops_ste.scalar_clamp_ste(x, min_val, max_val)¶
Perform the clamp operation with Straight Through Estimation (STE) of the gradient.
This operation behaves like an identity on the backward pass. For PyTorch version >= 1.3.0, the STE operator is implemented in C++ using the torch::autograd::Function class and compiled; at execution time, PyTorch's Just-In-Time (JIT) compiler is used to speed up the computation. For PyTorch version < 1.3.0, the STE operator is implemented in Python using the torch.autograd.Function class, and the JIT cannot be used.
Parameters
x (Tensor) – Tensor on which to apply the clamp operation
min_val (Float) – Scalar containing the minimum value for the clamp operation
max_val (Float) – Scalar containing the maximum value for the clamp operation
Returns
Tensor in which every element of x is clamped between min_val and max_val. When backpropagating through this value, a straight-through estimator is applied.
Return type
Tensor
brevitas.function.ops_ste.binary_sign_ste(x)¶
Perform binarization with Straight Through Estimation (STE) of the gradient.
This operation binarizes the input Tensor: the output is 1 for each input value >= 0, and -1 otherwise. This operation behaves like an identity on the backward pass. For PyTorch version >= 1.3.0, the STE operator is implemented in C++ using the torch::autograd::Function class and compiled; at execution time, PyTorch's Just-In-Time (JIT) compiler is used to speed up the computation. For PyTorch version < 1.3.0, the STE operator is implemented in Python using the torch.autograd.Function class, and the JIT cannot be used.
Parameters
x (Tensor) – Tensor on which to apply the binarization operation
Returns
Tensor after applying binarization. When backpropagating through this value, a straight-through estimator is applied.
Return type
Tensor
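The behaviour can be sketched as a Python-side autograd function, assuming the usual sign-style binarization to ±1:

```python
import torch

class BinarySignSteFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # +1 where x >= 0, -1 elsewhere.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_y):
        # Straight-through estimator: identity on the backward pass.
        return grad_y

x = torch.tensor([-0.7, 0.0, 2.3], requires_grad=True)
y = BinarySignSteFn.apply(x)
y.sum().backward()
print(y.detach())  # tensor([-1., 1., 1.])
print(x.grad)      # tensor([1., 1., 1.])
```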
brevitas.function.ops_ste.ternary_sign_ste(x)¶
Perform the ternary sign operation with Straight Through Estimation (STE) of the gradient.
This operation behaves like PyTorch's sign function, mapping each input to -1, 0, or 1. This operation behaves like an identity on the backward pass. For PyTorch version >= 1.3.0, the STE operator is implemented in C++ using the torch::autograd::Function class and compiled; at execution time, PyTorch's Just-In-Time (JIT) compiler is used to speed up the computation. For PyTorch version < 1.3.0, the STE operator is implemented in Python using the torch.autograd.Function class, and the JIT cannot be used.
Parameters
x (Tensor) – Tensor on which to apply the ternary sign operation
Returns
Tensor after applying the ternary sign operation. When backpropagating through this value, a straight-through estimator is applied.
Return type
Tensor