Activation Functions
Activation functions introduce the required non-linearities into a neural network. The non-linear transformation is applied to the input signal and essentially decides whether a neuron should be activated or not.
Featured Activations

ELU (activation_dict)           | Exponential Linear Units (ELUs)
SELU (activation_dict)          | Scaled Exponential Linear Units (SELUs)
ReLU (activation_dict)          | Rectified Linear Units (ReLUs)
TanH (activation_dict)          | Tangent Hyperbolic (TanH)
Linear (activation_dict)        | Linear Activation Function
Sigmoid (activation_dict)       | Sigmoid Activation Function
Softmax (activation_dict)       | Softmax Activation Function
SoftPlus (activation_dict)      | SoftPlus Activation Function
LeakyReLU (activation_dict)     | LeakyReLU Activation Function
ElliotSigmoid (activation_dict) | Elliot Sigmoid Activation Function
Function Descriptions
class ztlearn.activations.ELU(activation_dict)
    Bases: object

    Exponential Linear Units (ELUs)

    ELUs are exponential functions with negative values that push mean unit activations closer to zero, much like batch normalization but with lower computational complexity.

    References
    - [1] Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
      [Djork-Arné Clevert et al., 2016] https://arxiv.org/abs/1511.07289
      [PDF] https://arxiv.org/pdf/1511.07289.pdf

    Parameters: alpha (float32) – controls the value to which an ELU saturates for negative net inputs

    activation(input_signal)
        ELU activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the ELU function applied to the input
        Return type: numpy.array

    activation_name
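As a point of reference, a minimal NumPy sketch of the ELU transform described above (illustrative only; the elu helper and its alpha default are stand-ins, not the ztlearn API):

    import numpy as np

    def elu(x, alpha=1.0):
        # identity for positive inputs; saturates towards -alpha for negative inputs
        return np.where(x > 0.0, x, alpha * (np.exp(x) - 1.0))

    print(elu(np.array([-2.0, -0.5, 0.0, 1.5])))  # approx. [-0.865 -0.393  0.     1.5  ]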
class ztlearn.activations.ElliotSigmoid(activation_dict)
    Bases: object

    Elliot Sigmoid Activation Function

    Elliot Sigmoid squashes each element of the input from the interval [-inf, inf] to the interval [-1, 1] with an 'S-shaped' function. The function is fast to compute on simple hardware as it requires no exponential or trigonometric functions.

    References
    - [1] A Better Activation Function for Artificial Neural Networks
      [David L. Elliott, et al., 1993] https://goo.gl/qqBdne
      [PDF] https://goo.gl/fPLPcr

    activation(input_signal)
        ElliotSigmoid activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the ElliotSigmoid function applied to the input
        Return type: numpy.array

    activation_name
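A minimal sketch of the commonly cited Elliott formulation x / (1 + |x|); the exact expression used by the library is an assumption here, and the elliot_sigmoid helper is illustrative only:

    import numpy as np

    def elliot_sigmoid(x):
        # squashes inputs into (-1, 1) using only an absolute value and a division
        return x / (1.0 + np.abs(x))

    print(elliot_sigmoid(np.array([-10.0, -1.0, 0.0, 1.0, 10.0])))  # approx. [-0.909 -0.5  0.  0.5  0.909]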
class ztlearn.activations.LeakyReLU(activation_dict)
    Bases: object

    LeakyReLU Activation Function

    Leaky ReLUs allow a small non-zero gradient to propagate through the network when the unit is not active, thereby avoiding bottlenecks that can prevent learning in the neural network.

    References
    - [1] Rectifier Nonlinearities Improve Neural Network Acoustic Models
      [Andrew L. Maas, et al., 2013] https://goo.gl/k9fhEZ
      [PDF] https://goo.gl/v48yXT
    - [2] Empirical Evaluation of Rectified Activations in Convolutional Network
      [Bing Xu, et al., 2015] https://arxiv.org/abs/1505.00853
      [PDF] https://arxiv.org/pdf/1505.00853.pdf

    Parameters: alpha (float32) – provides a small non-zero gradient (e.g. 0.01) when the unit is not active

    activation(input_signal)
        LeakyReLU activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the LeakyReLU function applied to the input
        Return type: numpy.array

    activation_name
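A minimal NumPy sketch of the LeakyReLU behaviour described above (the leaky_relu helper and its alpha default are illustrative, not the library's API):

    import numpy as np

    def leaky_relu(x, alpha=0.01):
        # positive inputs pass through unchanged; negative inputs are scaled by a small alpha
        return np.where(x > 0.0, x, alpha * x)

    print(leaky_relu(np.array([-3.0, -0.5, 0.0, 2.0])))  # [-0.03  -0.005  0.     2.   ]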
class ztlearn.activations.Linear(activation_dict)
    Bases: object

    Linear Activation Function

    Linear activation applies the identity operation to the data, so the output is proportional to the input. The function always returns the same value that was passed as its argument.

    References
    - [1] Identity Function
      [Wikipedia Article] https://en.wikipedia.org/wiki/Identity_function

    activation(input_signal)
        Linear activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the Linear function applied to the input
        Return type: numpy.array

    activation_name
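For completeness, a one-line sketch of the identity operation (illustrative only):

    import numpy as np

    def linear(x):
        # identity: the output is exactly the input
        return x

    print(linear(np.array([-1.5, 0.0, 2.0])))  # [-1.5  0.   2. ]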
class ztlearn.activations.ReLU(activation_dict)
    Bases: object

    Rectified Linear Units (ReLUs)

    Rectifying neurons are an even better model of biological neurons, yielding equal or better performance than hyperbolic tangent networks despite the hard non-linearity and non-differentiability at zero. They create sparse representations with true zeros, which seem remarkably well suited to naturally sparse data.

    References
    - [1] Deep Sparse Rectifier Neural Networks
      [Xavier Glorot, et al., 2011] http://proceedings.mlr.press/v15/glorot11a.html
      [PDF] http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf
    - [2] Delving Deep into Rectifiers
      [Kaiming He, et al., 2015] https://arxiv.org/abs/1502.01852
      [PDF] https://arxiv.org/pdf/1502.01852.pdf

    activation(input_signal)
        ReLU activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the ReLU function applied to the input
        Return type: numpy.array

    activation_name
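A minimal NumPy sketch of the rectifier described above (the relu helper is illustrative, not the ztlearn API):

    import numpy as np

    def relu(x):
        # zero out negative inputs, keep positive inputs unchanged
        return np.maximum(0.0, x)

    print(relu(np.array([-2.0, 0.0, 3.0])))  # [0. 0. 3.]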
class ztlearn.activations.SELU(activation_dict)
    Bases: object

    Scaled Exponential Linear Units (SELUs)

    SELUs are activations that induce self-normalizing properties and are used in Self-Normalizing Neural Networks (SNNs). SNNs enable high-level abstract representations that tend to converge automatically towards zero mean and unit variance.

    References
    - [1] Self-Normalizing Neural Networks (SELUs)
      [Klambauer, G., et al., 2017] https://arxiv.org/abs/1706.02515
      [PDF] https://arxiv.org/pdf/1706.02515.pdf

    Parameters:
    - ALPHA (float32) – 1.6732632423543772848170429916717
    - _LAMBDA (float32) – 1.0507009873554804934193349852946

    ALPHA = 1.6732632423543772

    activation(input_signal)
        SELU activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the SELU function applied to the input
        Return type: numpy.array

    activation_name
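A minimal NumPy sketch of the SELU transform using the ALPHA and _LAMBDA constants listed above (the selu helper itself is illustrative, not the library's implementation):

    import numpy as np

    ALPHA   = 1.6732632423543772848170429916717
    _LAMBDA = 1.0507009873554804934193349852946

    def selu(x):
        # scaled ELU: lambda * x for positive inputs, lambda * alpha * (exp(x) - 1) otherwise
        return _LAMBDA * np.where(x > 0.0, x, ALPHA * (np.exp(x) - 1.0))

    print(selu(np.array([-1.0, 0.0, 1.0])))  # approx. [-1.111  0.     1.051]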
class ztlearn.activations.Sigmoid(activation_dict)
    Bases: object

    Sigmoid Activation Function

    The Sigmoid function is often used as the output activation function for binary classification problems, as it outputs values in the range (0, 1). Sigmoid functions are real-valued and differentiable, producing an 'S-shaped' curve that features one local minimum and one local maximum.

    References
    - [1] The influence of the sigmoid function parameters on the speed of backpropagation learning
      [PDF] https://goo.gl/MavJjj

    activation(input_signal)
        Sigmoid activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the Sigmoid function applied to the input
        Return type: numpy.array

    activation_name
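A minimal NumPy sketch of the logistic sigmoid described above (illustrative only; not the library's implementation):

    import numpy as np

    def sigmoid(x):
        # squashes inputs into the open interval (0, 1)
        return 1.0 / (1.0 + np.exp(-x))

    print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # approx. [0.018 0.5   0.982]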
class ztlearn.activations.SoftPlus(activation_dict)
    Bases: object

    SoftPlus Activation Function

    A SoftPlus function is a smooth approximation to the rectified linear unit (ReLU). Unlike the ReLU, it is smooth and differentiable around zero, and it produces outputs in the range (0, +inf).

    References
    - [1] Incorporating Second-Order Functional Knowledge for Better Option Pricing
      [Charles Dugas, et al., 2001] https://goo.gl/z3jeYc
      [PDF] https://goo.gl/z3jeYc

    activation(input_signal)
        SoftPlus activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the SoftPlus function applied to the input
        Return type: numpy.array

    activation_name
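A minimal NumPy sketch of the SoftPlus transform described above (the softplus helper is illustrative only):

    import numpy as np

    def softplus(x):
        # smooth approximation of the rectifier: log(1 + exp(x)), always positive
        return np.log(1.0 + np.exp(x))

    print(softplus(np.array([-2.0, 0.0, 2.0])))  # approx. [0.127 0.693 2.127]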
class ztlearn.activations.Softmax(activation_dict)
    Bases: object

    Softmax Activation Function

    The Softmax Activation Function is a generalization of the logistic function that squashes the outputs of each unit to real values in the range [0, 1], but it also divides each output such that the total sum of the outputs is equal to 1.

    References
    - [1] Softmax Regression
      [UFLDL Tutorial] https://goo.gl/1qgqdg
    - [2] Deep Learning using Linear Support Vector Machines
      [Yichuan Tang, 2015] https://arxiv.org/abs/1306.0239
      [PDF] https://arxiv.org/pdf/1306.0239.pdf
    - [3] Probabilistic Interpretation of Feedforward Network Outputs
      [Mario Costa, 1989]
      [PDF] https://goo.gl/ZhBY4r

    activation(input_signal)
        Softmax activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the Softmax function applied to the input
        Return type: numpy.array

    activation_name
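A minimal NumPy sketch of the Softmax transform described above; subtracting the maximum before exponentiating is a common numerical-stability detail assumed here, not something stated by these docs:

    import numpy as np

    def softmax(x):
        # shift by the max for numerical stability, then normalise the exponentials to sum to 1
        exps = np.exp(x - np.max(x))
        return exps / np.sum(exps)

    print(softmax(np.array([1.0, 2.0, 3.0])))  # approx. [0.09  0.245 0.665]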
class ztlearn.activations.TanH(activation_dict)
    Bases: object

    Tangent Hyperbolic (TanH)

    The Tangent Hyperbolic function is a rescaled version of the sigmoid function that produces outputs in the range [-1, +1]. As an activation function it gives an output for every input value, making it a continuous function.

    References
    - [1] Hyperbolic Functions
      [Mathematics Education Centre] https://goo.gl/4Dkkrd
      [PDF] https://goo.gl/xPSnif

    activation(input_signal)
        TanH activation applied to the provided input.

        Parameters: input_signal (numpy.array) – the input numpy array
        Returns: the output of the TanH function applied to the input
        Return type: numpy.array

    activation_name
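A minimal sketch of the TanH activation (NumPy already provides the function directly; the wrapper is illustrative only):

    import numpy as np

    def tanh(x):
        # rescaled sigmoid with outputs in (-1, 1)
        return np.tanh(x)

    print(tanh(np.array([-2.0, 0.0, 2.0])))  # approx. [-0.964  0.     0.964]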