1 year ago
#357247
Arzental
My data can be approximated by a normal mixture. How can I find the reasons for and explain this behaviour?
I use the DeLong method to compare two ROC AUCs; its result is a Z-score.
Both ROC AUCs are obtained from LDA (linear discriminant analysis) in the sklearn package: the first uses the eigen solver inside LDA and the second uses the svd solver.
The dotted line is my data. The red line is N(0, 1)
Note: there is a minor jump at the point Z = 0.
Z = 0 means that the classifiers performed equally.
Z > 0 (Z < 0) means that the first (second) classifier performed better.
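For reference, the Z-score I compute per iteration follows the paired DeLong (1988) test on placement values. This is a minimal NumPy sketch of that statistic (my own re-derivation for illustration, not a vetted library; the synthetic scorers in the usage example are assumptions):

```python
import numpy as np

def delong_z(y_true, s1, s2):
    """Z-score for AUC(s1) - AUC(s2) via the paired DeLong test.
    Sketch based on the placement-value formulation."""
    y_true = np.asarray(y_true)

    def components(s):
        pos, neg = s[y_true == 1], s[y_true == 0]
        # psi = 1 if a positive is scored above a negative, 0.5 on ties, else 0
        psi = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
        # V10 (per-positive), V01 (per-negative) placement values, and the AUC
        return psi.mean(axis=1), psi.mean(axis=0), psi.mean()

    v10_1, v01_1, auc1 = components(np.asarray(s1, float))
    v10_2, v01_2, auc2 = components(np.asarray(s2, float))
    m, n = v10_1.size, v01_1.size
    s10 = np.cov(v10_1, v10_2)  # 2x2 covariance of positive placement values
    s01 = np.cov(v01_1, v01_2)  # 2x2 covariance of negative placement values
    var = (s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m \
        + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n
    return (auc1 - auc2) / np.sqrt(var)

# Usage on synthetic scores: scorer 1 is a near-clean copy of the signal,
# scorer 2 a much noisier one, so Z should come out positive.
rng = np.random.default_rng(1)
y = np.r_[np.zeros(200), np.ones(200)]
signal = np.r_[rng.normal(0, 1, 200), rng.normal(1, 1, 200)]
z = delong_z(y, signal + rng.normal(0, 0.1, 400), signal + rng.normal(0, 2.0, 400))
```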
Corresponding histogram:
This plot shows the classification results of a single iteration (the difference between the two classifiers is not noticeable in this kind of plot, so I include only one). There are 4 times as many normal observations as anemia observations, and the variance of the anemia observations is 10 times that of the normal observations.
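One iteration of the setup described above can be sketched as follows. The class means, feature dimensionality, and sample sizes here are assumptions chosen only to match the stated ratios (4x as many normal observations, 10x the variance for anemia):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: 4x as many "normal" samples as "anemia" samples,
# anemia variance 10x the normal variance (means/dimensions are assumptions)
n_anemia, n_normal = 100, 400
X = np.vstack([rng.normal(0.0, 1.0, size=(n_normal, 2)),
               rng.normal(1.0, np.sqrt(10.0), size=(n_anemia, 2))])
y = np.r_[np.zeros(n_normal), np.ones(n_anemia)]

# Fit the same LDA model with the two solvers being compared
aucs = {}
for solver in ("eigen", "svd"):
    lda = LinearDiscriminantAnalysis(solver=solver).fit(X, y)
    aucs[solver] = roc_auc_score(y, lda.decision_function(X))
# Both solvers fit the same model, so the two AUCs (and hence the DeLong Z)
# should differ only through numerical round-off.
```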
The question is in the title: what facts and/or reasons could explain the behaviour of my Z-score data (a normal mixture with a separation at Z = 0)?
python
normal-distribution
auc
mixture-model
linear-discriminant
0 Answers