- Activation
- Derivative
- Loss
- LossGradient
- RegressionMetric
- ConfusionMatrix
- ConfusionMatrixMultilabel
- ClassificationMetric
- ClassificationScore
- PrecisionRecall
- ReceiverOperatingCharacteristic
LossGradient
Compute a vector or matrix of loss function gradients.
vector vector::LossGradient(
|
Parameters
vect_true/matrix_true
[in] Vector or matrix of true values.
loss
[in] Loss function from the ENUM_LOSS_FUNCTION enumeration.
axis
[in] ENUM_MATRIX_AXIS enumeration value (AXIS_HORZ — horizontal axis, AXIS_VERT — vertical axis).
...
[in] Additional parameter 'delta', which can only be used with the Huber loss function (LOSS_HUBER).
Return Value
Vector or matrix of loss function gradient values. The gradient is the partial derivative of the loss function with respect to x (the predicted value) at the given point.
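To make the definition concrete, here is a minimal NumPy sketch (an illustration of the math, not the MQL5 API) that computes the MSE loss gradient by its closed form and checks it against a numerical partial derivative at each point. The function names `mse_loss` and `mse_loss_gradient` are hypothetical helpers introduced for this example.

```python
import numpy as np

# Hypothetical analogue of LossGradient for LOSS_MSE.
# MSE loss: L(x) = mean((x - y_true)^2); its gradient with respect to
# the predicted vector x is dL/dx = 2 * (x - y_true) / n.
def mse_loss(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)

def mse_loss_gradient(y_pred, y_true):
    return 2.0 * (y_pred - y_true) / y_true.size

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.5, 1.8, 3.2, 4.4])

grad = mse_loss_gradient(y_pred, y_true)

# Check against a central-difference numerical partial derivative,
# perturbing one predicted value at a time.
eps = 1e-6
num_grad = np.array([
    (mse_loss(y_pred + eps * np.eye(4)[i], y_true)
     - mse_loss(y_pred - eps * np.eye(4)[i], y_true)) / (2 * eps)
    for i in range(4)
])
print(np.allclose(grad, num_grad, atol=1e-6))  # True
```

The agreement between the analytic and numerical values is exactly what "gradient of the loss at a given point" means here.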
Note
Gradients are used in neural networks to adjust the weights of the weight matrix during backpropagation, when training the model.
A neural network aims to find parameters that minimize the error on the training sample, and this error is measured by the loss function.
Different loss functions are used depending on the problem. For example, Mean Squared Error (MSE) is used for regression problems, while Binary Cross-Entropy (BCE) is used for binary classification.
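The two losses mentioned above have different gradients, which is why the `loss` parameter changes the result. A hedged NumPy sketch of both closed forms (an illustration, not the MQL5 implementation; `mse_gradient` and `bce_gradient` are names introduced here):

```python
import numpy as np

# d/dx of mean((x - y)^2) is 2*(x - y)/n
def mse_gradient(y_pred, y_true):
    return 2.0 * (y_pred - y_true) / y_true.size

# d/dp of -mean(y*log(p) + (1-y)*log(1-p)) is (p - y)/(p*(1-p)*n),
# where p are predicted probabilities in (0, 1)
def bce_gradient(y_pred, y_true):
    return (y_pred - y_true) / (y_pred * (1.0 - y_pred) * y_true.size)

# Regression targets -> MSE gradient
mse_g = mse_gradient(np.array([2.5, 0.0]), np.array([3.0, 0.0]))
# Binary labels with predicted probabilities -> BCE gradient
bce_g = bce_gradient(np.array([0.9, 0.2]), np.array([1.0, 0.0]))
print(mse_g, bce_g)
```

Note that the BCE gradient grows without bound as a predicted probability approaches 0 or 1 with the wrong label, which is what drives a classifier away from confident mistakes during backpropagation.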
Example of calculating loss function gradients
matrixf y_true={{ 1, 2, 3, 4 },
|
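As a rough NumPy analogue of the matrix example above (an illustration only; the exact axis semantics of the MQL5 method are an assumption here, with the divisor taken as the size along the chosen axis), the MSE gradient for a matrix of true values and slightly offset predictions can be sketched as:

```python
import numpy as np

# Matrix of true values, mirroring the matrixf in the example above.
y_true = np.array([[1., 2., 3., 4.],
                   [5., 6., 7., 8.]], dtype=np.float32)
y_pred = y_true + 0.1  # predictions slightly off the targets

# Assumed AXIS_HORZ behavior: losses averaged along each row, so every
# element's gradient is divided by the row length.
rows, cols = y_true.shape
grad_horz = 2.0 * (y_pred - y_true) / cols
print(grad_horz.round(3))
```

Each element of the result is 2 * 0.1 / 4 = 0.05, matching the per-element MSE gradient formula applied row-wise.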