Activation Functions

Activation functions introduce the required non-linearities into neural networks. The non-linear transformation is applied to the input signal and essentially decides whether, and to what extent, a neuron is activated.

Function Descriptions

class ztlearn.activations.ActivationFunction(name, activation_dict={})[source]

Bases: object

backward(input_signal)[source]
forward(input_signal)[source]
name
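
A minimal usage sketch of the wrapper class, based only on the signature documented above; the name string 'relu' is an assumption, since the exact set of accepted names is defined by the library:

import numpy as np
from ztlearn.activations import ActivationFunction

x = np.array([-2.0, -0.5, 0.0, 1.5])
act = ActivationFunction('relu')  # resolve an activation by name (assumed name string)
y = act.forward(x)                # forward pass: apply the activation
g = act.backward(x)               # backward pass: apply its derivative
print(act.name, y, g)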
class ztlearn.activations.ELU(activation_dict)[source]

Bases: object

Exponential Linear Units (ELUs)

ELUs are exponential functions whose negative values push mean unit activations closer to zero, similar to batch normalization but with lower computational complexity.

References

[1] Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
Parameters: alpha (float32) – controls the value to which an ELU saturates for negative net inputs
activation(input_signal)[source]

ELU activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the ELU function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

ELU derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the ELU derivative applied to the input
Return type: numpy.array
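
For reference, an illustrative NumPy sketch of the ELU formula and its derivative; this is not the library's own code, and alpha = 1.0 is used here only as an example value:

import numpy as np

def elu(x, alpha=1.0):
    # f(x) = x for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def elu_derivative(x, alpha=1.0):
    # f'(x) = 1 for x > 0, alpha * exp(x) otherwise
    return np.where(x > 0, 1.0, alpha * np.exp(x))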
class ztlearn.activations.ElliotSigmoid(activation_dict)[source]

Bases: object

Elliot Sigmoid Activation Function

Elliot Sigmoid squashes each element of the input from the range (-inf, inf) into the range (-1, 1) with an ‘S-shaped’ function. The function is fast to compute on simple hardware because it requires no exponential or trigonometric operations.

References

[1] A better Activation Function for Artificial Neural Networks
activation(input_signal)[source]

ElliotSigmoid activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the ElliotSigmoid function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

ElliotSigmoid derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the ElliotSigmoid derivative applied to the input
Return type: numpy.array
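
An illustrative NumPy sketch of one common form of the Elliot sigmoid, x / (1 + |x|), which matches the description above; the library's implementation may additionally include a slope parameter:

import numpy as np

def elliot_sigmoid(x):
    # squashes inputs into (-1, 1) without exponentials
    return x / (1 + np.abs(x))

def elliot_sigmoid_derivative(x):
    # d/dx [x / (1 + |x|)] = 1 / (1 + |x|)^2
    return 1 / (1 + np.abs(x)) ** 2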
class ztlearn.activations.LeakyReLU(activation_dict)[source]

Bases: object

LeakyReLU Activation Function

Leaky ReLUs allow a small, non-zero gradient to propagate through the network when the unit is not active, avoiding bottlenecks that can prevent learning.

References

[1] Rectifier Nonlinearities Improve Neural Network Acoustic Models
[2] Empirical Evaluation of Rectified Activations in Convolutional Network
Parameters: alpha (float32) – provides for a small non-zero gradient (e.g. 0.01) when the unit is not active.
activation(input_signal)[source]

LeakyReLU activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the LeakyReLU function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

LeakyReLU derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the LeakyReLU derivative applied to the input
Return type: numpy.array
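
An illustrative NumPy sketch of Leaky ReLU and its derivative, using 0.01 as the example slope mentioned above; not the library's own code:

import numpy as np

def leaky_relu(x, alpha=0.01):
    # identity for positive inputs, small slope alpha for negative inputs
    return np.where(x > 0, x, alpha * x)

def leaky_relu_derivative(x, alpha=0.01):
    # gradient is 1 for positive inputs and alpha otherwise
    return np.where(x > 0, 1.0, alpha)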
class ztlearn.activations.Linear(activation_dict)[source]

Bases: object

Linear Activation Function

The Linear activation applies the identity operation to the data, so the output is equal to the input. The function always returns the same value that was passed as its argument.

References

[1] Identity Function
activation(input_signal)[source]

Linear activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the Linear function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

Linear derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the Linear derivative applied to the input
Return type: numpy.array
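
An illustrative sketch of the identity activation and its derivative:

import numpy as np

def linear(x):
    # identity: output equals input
    return x

def linear_derivative(x):
    # derivative of the identity is 1 everywhere
    return np.ones_like(x)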
class ztlearn.activations.ReLU(activation_dict)[source]

Bases: object

Rectified Linear Units (ReLUs)

Rectifying neurons are an even better model of biological neurons, yielding equal or better performance than hyperbolic tangent networks despite the hard non-linearity and non-differentiability at zero. They create sparse representations with true zeros, which are remarkably well suited to naturally sparse data.

References

[1] Deep Sparse Rectifier Neural Networks
[2] Delving Deep into Rectifiers
activation(input_signal)[source]

ReLU activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the ReLU function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

ReLU derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the ReLU derivative applied to the input
Return type: numpy.array
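
An illustrative NumPy sketch of ReLU and its (sub)derivative, with the derivative at zero taken to be 0 by convention:

import numpy as np

def relu(x):
    # f(x) = max(0, x)
    return np.maximum(0, x)

def relu_derivative(x):
    # subgradient: 1 for x > 0, 0 otherwise
    return np.where(x > 0, 1.0, 0.0)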
class ztlearn.activations.SELU(activation_dict)[source]

Bases: object

Scaled Exponential Linear Units (SELUs)

SELUs are activations which induce self-normalizing properties and are used in Self-Normalizing Neural Networks (SNNs). SNNs enable high-level abstract representations that tend to automatically converge towards zero mean and unit variance.

References

[1] Self-Normalizing Neural Networks (SELUs)
Parameters:
  • ALPHA (float32) – 1.6732632423543772848170429916717
  • _LAMBDA (float32) – 1.0507009873554804934193349852946
ALPHA = 1.6732632423543772
activation(input_signal)[source]

SELU activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the SELU function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

SELU derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the SELU derivative applied to the input
Return type: numpy.array
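
An illustrative NumPy sketch of SELU and its derivative using the ALPHA and lambda constants listed above; not the library's own code:

import numpy as np

ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    # lambda * x for x > 0, lambda * alpha * (exp(x) - 1) otherwise
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

def selu_derivative(x):
    # lambda for x > 0, lambda * alpha * exp(x) otherwise
    return LAMBDA * np.where(x > 0, 1.0, ALPHA * np.exp(x))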
class ztlearn.activations.Sigmoid(activation_dict)[source]

Bases: object

Sigmoid Activation Function

The Sigmoid function is often used as the output activation for binary classification problems because it outputs values in the range (0, 1). Sigmoid functions are real-valued and differentiable, producing an ‘S-shaped’ curve whose first derivative is bell-shaped with a single maximum.

References

[1] The influence of the sigmoid function parameters on the speed of backpropagation learning
activation(input_signal)[source]

Sigmoid activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the Sigmoid function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

Sigmoid derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the Sigmoid derivative applied to the input
Return type: numpy.array
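
An illustrative NumPy sketch of the logistic sigmoid and its derivative:

import numpy as np

def sigmoid(x):
    # squashes inputs into (0, 1)
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1 - s)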
class ztlearn.activations.SoftPlus(activation_dict)[source]

Bases: object

SoftPlus Activation Function

The Softplus function is a smooth approximation to the rectified linear unit (ReLU). It is smooth and differentiable around zero, where the ReLU is not, and produces outputs in the range (0, +inf).

References

[1] Incorporating Second-Order Functional Knowledge for Better Option Pricing
activation(input_signal)[source]

SoftPlus activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the SoftPlus function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

SoftPlus derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the SoftPlus derivative applied to the input
Return type: numpy.array
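
An illustrative NumPy sketch of Softplus; note that its derivative is the logistic sigmoid. This is a sketch of the formula, not the library's implementation:

import numpy as np

def softplus(x):
    # f(x) = ln(1 + exp(x)); a more overflow-robust variant is np.logaddexp(0, x)
    return np.log(1 + np.exp(x))

def softplus_derivative(x):
    # f'(x) = 1 / (1 + exp(-x)), i.e. the sigmoid
    return 1 / (1 + np.exp(-x))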
class ztlearn.activations.Softmax(activation_dict)[source]

Bases: object

Softmax Activation Function

The Softmax activation function is a generalization of the logistic function that squashes the output of each unit to a real value in the range [0, 1] and normalizes the outputs so that they sum to 1.

References

[1] Softmax Regression
[2] Deep Learning using Linear Support Vector Machines
[3] Probabilistic Interpretation of Feedforward Network Outputs
activation(input_signal)[source]

Softmax activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the Softmax function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

Softmax derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the Softmax derivative applied to the input
Return type: numpy.array
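
An illustrative NumPy sketch of a numerically stable softmax over the last axis; not the library's own code:

import numpy as np

def softmax(x):
    # subtract the row-wise max for numerical stability
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    # normalize so the outputs along the last axis sum to 1
    return e / np.sum(e, axis=-1, keepdims=True)

print(softmax(np.array([1.0, 2.0, 3.0])))  # e.g. [0.09003057 0.24472847 0.66524096]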
class ztlearn.activations.TanH(activation_dict)[source]

Bases: object

Hyperbolic Tangent (TanH)

The hyperbolic tangent function is a rescaled version of the sigmoid function that produces outputs in the range (-1, +1). As an activation function it gives an output for every input value, making it a continuous function.

References

[1] Hyperbolic Functions
activation(input_signal)[source]

TanH activation applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the TanH function applied to the input
Return type: numpy.array
activation_name
derivative(input_signal)[source]

TanH derivative applied to input provided

Parameters: input_signal (numpy.array) – the input numpy array
Returns: the output of the TanH derivative applied to the input
Return type: numpy.array
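
An illustrative NumPy sketch of tanh and its derivative:

import numpy as np

def tanh(x):
    # squashes inputs into (-1, 1)
    return np.tanh(x)

def tanh_derivative(x):
    # d/dx tanh(x) = 1 - tanh(x)^2
    return 1 - np.tanh(x) ** 2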