Metrics module

- strlearn.metrics.binary_confusion_matrix – Calculates the binary confusion matrix.
- strlearn.metrics.specificity – Calculates the specificity.
- strlearn.metrics.recall – Calculates the recall.
- strlearn.metrics.precision – Calculates the precision.
- strlearn.metrics.fbeta_score – Calculates the F-beta score.
- strlearn.metrics.f1_score – Calculates the F1 score.
- strlearn.metrics.balanced_accuracy_score – Calculates the balanced accuracy score.
- strlearn.metrics.geometric_mean_score_1 – Calculates the geometric mean score.
- strlearn.metrics.geometric_mean_score_2 – Calculates the geometric mean score.
- strlearn.metrics.balanced_accuracy_score(y_true, y_pred)
Calculates the balanced accuracy score.
The balanced accuracy for multiclass problems is defined as the average of the recall obtained on each class. For binary problems it is defined as the average of recall and specificity (also called the true negative rate).
\[BAC = \frac{Recall + Specificity}{2}\]
- Parameters:
y_true (array-like, shape (n_samples)) – True labels.
y_pred (array-like, shape (n_samples)) – Predicted labels.
- Return type:
float
- Returns:
Balanced accuracy score.
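The formula above can be sketched in plain Python (an illustrative reference implementation built from the confusion-matrix counts, not the library's actual code; the counts below are a hypothetical imbalanced example):

```python
def balanced_accuracy(tn, fp, fn, tp):
    # Average of recall (true positive rate) and specificity (true negative rate).
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (recall + specificity) / 2

# Hypothetical imbalanced case: 100 negatives (90 classified correctly),
# 10 positives (5 classified correctly). Plain accuracy is (90 + 5) / 110 ~ 0.86,
# but balanced accuracy is only 0.7, exposing the weak minority-class recall.
print(balanced_accuracy(tn=90, fp=10, fn=5, tp=5))  # 0.7
```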
- strlearn.metrics.binary_confusion_matrix(y_true, y_pred)
Calculates the binary confusion matrix.
- Parameters:
y_true (array-like, shape (n_samples)) – True labels.
y_pred (array-like, shape (n_samples)) – Predicted labels.
- Return type:
tuple, (TN, FP, FN, TP)
- Returns:
Elements of binary confusion matrix.
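A minimal pure-Python sketch of this computation, assuming 0/1 labels (illustrative only, not the library's implementation):

```python
def binary_confusion_matrix(y_true, y_pred):
    # Tally each (true, predicted) outcome for binary 0/1 labels.
    tn = fp = fn = tp = 0
    for t, p in zip(y_true, y_pred):
        if t == 0 and p == 0:
            tn += 1
        elif t == 0 and p == 1:
            fp += 1
        elif t == 1 and p == 0:
            fn += 1
        else:
            tp += 1
    return tn, fp, fn, tp

print(binary_confusion_matrix([0, 0, 1, 1, 0, 1], [0, 1, 1, 0, 0, 1]))  # (2, 1, 1, 2)
```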
- strlearn.metrics.f1_score(y_true, y_pred)
Calculates the F1 score.
The F1 score can be interpreted as an F-beta score where the \(\beta\) parameter equals 1. It is the harmonic mean of precision and recall. The formula for the F1 score is
\[F_1 = 2 * \frac{Precision * Recall}{Precision + Recall}\]
- Parameters:
y_true (array-like, shape (n_samples)) – True labels.
y_pred (array-like, shape (n_samples)) – Predicted labels.
- Return type:
float
- Returns:
F1 score.
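The harmonic-mean behaviour can be illustrated with a short sketch (hypothetical values, not library code):

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# The harmonic mean punishes extreme values: a classifier with perfect recall
# but only 50% precision scores ~0.67, not the arithmetic mean of 0.75.
print(round(f1(0.5, 1.0), 4))  # 0.6667
```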
- strlearn.metrics.fbeta_score(y_true, y_pred, beta)
Calculates the F-beta score.
The F-beta score can be interpreted as a weighted harmonic mean of precision and recall, taking both metrics into account and punishing extreme values. The \(\beta\) parameter determines recall's weight: \(\beta < 1\) gives more weight to precision, while \(\beta > 1\) prefers recall. The formula for the F-beta score is
\[F_\beta = (1+\beta^2) * \frac{Precision * Recall}{(\beta^2 * Precision) + Recall}\]
- Parameters:
y_true (array-like, shape (n_samples)) – True labels.
y_pred (array-like, shape (n_samples)) – Predicted labels.
beta (float) – Weight of recall in the combined score.
- Return type:
float
- Returns:
F-beta score.
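The effect of the beta parameter can be demonstrated with a small sketch (illustrative values; not the library's implementation):

```python
def fbeta(precision, recall, beta):
    # Weighted harmonic mean; beta^2 scales recall's contribution.
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With precision high (0.9) and recall low (0.3):
# beta < 1 rewards the strong precision, beta > 1 penalizes the weak recall.
print(round(fbeta(0.9, 0.3, 0.5), 4))  # F0.5 ~ 0.6429
print(round(fbeta(0.9, 0.3, 1.0), 4))  # F1   = 0.45
print(round(fbeta(0.9, 0.3, 2.0), 4))  # F2   ~ 0.3462
```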
- strlearn.metrics.geometric_mean_score_1(y_true, y_pred)
Calculates the geometric mean score.
The geometric mean (G-mean) tries to maximize the accuracy on each of the classes while keeping these accuracies balanced. For N-class problems it is the N-th root of the product of the class-wise recalls. For binary classification, G-mean is defined as the square root of the product of recall and specificity.
\[Gmean1 = \sqrt{Recall * Specificity}\]
- Parameters:
y_true (array-like, shape (n_samples)) – True labels.
y_pred (array-like, shape (n_samples)) – Predicted labels.
- Return type:
float
- Returns:
Geometric mean score.
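A short sketch of the binary formula (hypothetical values, not library code), contrasting it with the arithmetic average used by balanced accuracy:

```python
import math

def gmean1(recall, specificity):
    # Geometric mean of recall and specificity.
    return math.sqrt(recall * specificity)

# The geometric mean drops faster than the arithmetic mean when the two
# class-wise accuracies diverge:
print(round(gmean1(0.5, 0.9), 4))  # 0.6708 (balanced accuracy would be 0.7)
```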
- strlearn.metrics.geometric_mean_score_2(y_true, y_pred)
Calculates the geometric mean score.
The alternative definition of the G-mean measure. For binary classification, G-mean is defined as the square root of the product of recall and precision.
\[Gmean2 = \sqrt{Recall * Precision}\]
- Parameters:
y_true (array-like, shape (n_samples)) – True labels.
y_pred (array-like, shape (n_samples)) – Predicted labels.
- Return type:
float
- Returns:
Geometric mean score.
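The alternative formula can likewise be sketched directly (illustrative values, not library code):

```python
import math

def gmean2(recall, precision):
    # Geometric mean of recall and precision (alternative G-mean definition).
    return math.sqrt(recall * precision)

print(round(gmean2(0.6, 0.8), 4))  # 0.6928
```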
- strlearn.metrics.precision(y_true, y_pred)
Calculates the precision.
Precision (also called positive predictive value) expresses the probability of correct detection of positive samples and is denoted as
\[Precision = \frac{tp}{tp + fp}\]
- Parameters:
y_true (array-like, shape (n_samples)) – True labels.
y_pred (array-like, shape (n_samples)) – Predicted labels.
- Return type:
float
- Returns:
Precision score.
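A one-line sketch from the confusion-matrix counts (hypothetical values, not the library's implementation):

```python
def precision(tp, fp):
    # Fraction of positive predictions that are actually positive.
    return tp / (tp + fp)

# 8 true positives among 10 positive predictions:
print(precision(tp=8, fp=2))  # 0.8
```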
- strlearn.metrics.recall(y_true, y_pred)
Calculates the recall.
Recall (also known as sensitivity or true positive rate) represents the classifier’s ability to find all the positive data samples in the dataset (e.g. the minority class instances) and is denoted as
\[Recall = \frac{tp}{tp + fn}\]
- Parameters:
y_true (array-like, shape (n_samples)) – True labels.
y_pred (array-like, shape (n_samples)) – Predicted labels.
- Return type:
float
- Returns:
Recall score.
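And the corresponding sketch for recall (hypothetical values, not the library's implementation):

```python
def recall(tp, fn):
    # Fraction of actual positives the classifier managed to find.
    return tp / (tp + fn)

# 8 positives found out of 16 actual positives:
print(recall(tp=8, fn=8))  # 0.5
```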