
Dex_Kivuli

XGBoost for precision

I'm using XGBoost for binary classification. The standard/default loss function (binary logistic) weights errors on the positive and negative classes equally when measuring performance.

All I care about is precision. I don't mind if the model makes only a very small number of positive predictions, as long as it maximises its strike rate of getting them right. So I'd like a loss function/evaluation metric combination that doesn't care about missed opportunities at all (i.e. false negatives or true negatives), and only seeks to maximise true positives (and minimise false positives).
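One way to at least monitor this during training is a custom evaluation metric. A minimal sketch (the function name `precision_eval` and the margin threshold of 0 are my own choices, not an established recipe) that computes precision from raw margin scores, in the `(preds, dtrain)` shape that `xgb.train` accepts via its `custom_metric`/`feval` argument:

```python
import numpy as np

def precision_eval(preds, dtrain):
    """Custom eval metric: precision of the positive class.

    `preds` are raw margin scores when a custom objective is used,
    so a margin of 0 corresponds to a predicted probability of 0.5.
    `dtrain` is an xgboost.DMatrix (only `get_label()` is used here).
    """
    y = dtrain.get_label()
    pred_pos = preds > 0.0  # assumed threshold: margin 0, i.e. p = 0.5
    tp = np.sum(pred_pos & (y == 1))
    fp = np.sum(pred_pos & (y == 0))
    # Guard against division by zero when nothing is predicted positive.
    precision = tp / max(tp + fp, 1)
    return "precision", precision
```

This only changes what is reported (and what early stopping can maximise), not what the booster optimises, so on its own it won't steer the model toward precision.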

I have a relatively balanced panel.

Is there a straightforward way to do this in XGBoost (either through existing hyperparameters, or through a custom loss function)? If there is a better loss/objective function (with its gradient and Hessian), is there a paper or reference for it?
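One standard-looking direction (a sketch, not a canonical recipe: the constant name `FP_WEIGHT` is my own, and the weight value is arbitrary) is a weighted binary cross-entropy that penalises errors on negative examples more heavily, so false positives cost more than false negatives. For loss `-[y*log(p) + w*(1-y)*log(1-p)]` with raw margin `z` and `p = sigmoid(z)`, the gradient is `-y*(1-p) + w*(1-y)*p` and the Hessian is `p*(1-p)*(y + w*(1-y))`, which reduce to the usual `p - y` and `p*(1-p)` when `w = 1`. In the `(preds, dtrain)` shape that `xgb.train` accepts via its `obj` argument:

```python
import numpy as np

FP_WEIGHT = 5.0  # assumed value; > 1 penalises false positives more


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def weighted_logistic(preds, dtrain):
    """Custom objective: gradient/Hessian of weighted cross-entropy.

    Loss: -[y*log(p) + w*(1-y)*log(1-p)] with w = FP_WEIGHT.
    `preds` are raw margin scores; `dtrain` is an xgboost.DMatrix.
    """
    y = dtrain.get_label()
    p = sigmoid(preds)
    grad = -y * (1.0 - p) + FP_WEIGHT * (1.0 - y) * p
    hess = p * (1.0 - p) * (y + FP_WEIGHT * (1.0 - y))
    return grad, hess
```

Note the related built-in knob works the other way around: `scale_pos_weight` up-weights the positive class (helping recall), so for precision you would effectively want the negative class up-weighted, as above. Precision itself is not differentiable, so it can't be an objective directly; a surrogate like this, tuned against a precision eval metric on a validation set, is the usual compromise.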

Tags: machine-learning, xgboost, loss-function, precision-recall, objective-function
