\section{Looking at different performance evaluation metrics}
\subsection{Reading a confusion matrix}
\subsection{Optimizing the precision and recall of a classification model}
Both the prediction error (ERR) and accuracy (ACC) provide general information about how many samples are misclassified. The error can be understood as the sum of all false predictions divided by the total number of predictions, and the accuracy is calculated as the sum of correct predictions divided by the total number of predictions:

\[
ERR = \frac{FP + FN}{FP + FN + TP + TN}
\]
The prediction accuracy can then be calculated directly from the error:
\[
ACC = \frac{TP + TN}{FP + FN + TP + TN} = 1 - ERR
\]
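As a quick sanity check, the two quantities can be computed directly from the four confusion-matrix counts. The following sketch uses hypothetical counts (TP, TN, FP, FN are made-up values for illustration, not from the text):

```python
# Hypothetical confusion-matrix counts (illustrative values only).
TP, TN, FP, FN = 40, 45, 5, 10

total = FP + FN + TP + TN
ERR = (FP + FN) / total   # fraction of wrong predictions
ACC = (TP + TN) / total   # fraction of correct predictions

# ACC is, by construction, the complement of ERR.
assert abs(ACC - (1 - ERR)) < 1e-12
```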
The \textit{true positive rate} (TPR) and \textit{false positive rate} (FPR) are performance metrics that are especially useful for imbalanced class problems:
\[
FPR = \frac{FP}{N} = \frac{FP}{FP + TN}
\]
\[
TPR = \frac{TP}{P} = \frac{TP}{FN+TP}
\]
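These two rates normalize by the number of actual negatives ($N = FP + TN$) and actual positives ($P = FN + TP$), respectively. A minimal sketch, again using hypothetical counts:

```python
# Hypothetical confusion-matrix counts (illustrative values only).
TP, TN, FP, FN = 40, 45, 5, 10

N = FP + TN   # total actual negatives
P = FN + TP   # total actual positives

FPR = FP / N  # fraction of negatives wrongly flagged as positive
TPR = TP / P  # fraction of positives correctly identified
```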
\textit{Precision} (PRE) and \textit{recall} (REC) are performance metrics that are related to those true positive and true negative rates, and in fact, recall is synonymous with the true positive rate:
\[
PRE = \frac{TP}{TP + FP}
\]
\[
REC = TPR = \frac{TP}{P} = \frac{TP}{FN + TP}
\]
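Note that precision divides by the number of \emph{predicted} positives ($TP + FP$), while recall divides by the number of \emph{actual} positives ($FN + TP$). A short sketch with hypothetical counts:

```python
# Hypothetical confusion-matrix counts (illustrative values only).
TP, FP, FN = 40, 5, 10

PRE = TP / (TP + FP)  # of the samples predicted positive, how many are correct
REC = TP / (FN + TP)  # of the actual positives, how many were found

# Recall equals the true positive rate computed from the same counts.
assert REC == TP / (FN + TP)
```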
In practice, a combination of precision and recall is often used, the so-called \textit{F1-score}:

\[
F1 = 2 \, \frac{PRE \times REC}{PRE + REC}
\]
\subsection{Plotting a receiver operating characteristic}
\subsection{The scoring metrics for multiclass classification}
The micro-average is calculated from the individual true positives, true negatives, false positives, and false negatives of the system. For example, the micro-average of the precision score in a $k$-class system can be calculated as follows:

\[
PRE_{micro} = \frac{TP_1 + \cdots + TP_k}{TP_1 + \cdots + TP_k + FP_1 + \cdots + FP_k}
\]
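In other words, the per-class counts are pooled before the precision ratio is taken, so classes with more samples contribute more. A minimal sketch for a hypothetical 3-class system (the per-class counts are made-up values for illustration):

```python
# Hypothetical per-class true-positive and false-positive counts
# for a 3-class system (illustrative values only).
tp = [30, 25, 20]
fp = [5, 10, 5]

# Micro-average: pool the counts across classes, then take the ratio.
pre_micro = sum(tp) / (sum(tp) + sum(fp))
```

By contrast, a macro-average would compute the precision of each class separately and then take the unweighted mean, treating all classes equally.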