XGBoost error fluctuates and the model doesn't seem to converge
Recently I have been working on a prediction task using XGBoost. I first test-drove XGBoost on a portion of the dataset (4,000,000 rows, stored as .npy), and it worked well. Yet after I switched to the complete dataset (7,000,000 rows, stored in .svm/libsvm format), the error started oscillating between two levels on alternating rounds, as follows:
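For context, here is a minimal sketch of how the two inputs might be loaded; the file names, and the assumption that the label sits in the last .npy column, are mine, not from the post:

    import numpy as np
    import xgboost as xgb

    # Hypothetical file names; the post only mentions a .npy subset and a .svm full set.
    arr = np.load('subset.npy')                   # 4,000,000-row dense subset
    dtrain_small = xgb.DMatrix(arr[:, :-1],       # assumed: features in all but the last column
                               label=arr[:, -1])  # assumed: label in the last column

    # libsvm files carry labels inline, so DMatrix can read them directly
    # (recent XGBoost versions may require the 'full.svm?format=libsvm' URI form).
    dtrain_full = xgb.DMatrix('full.svm')         # 7,000,000-row full set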
[0]  train-error:12.822    val-error:12.4942
[1]  train-error:1.02848   val-error:1.02711
[2]  train-error:12.8268   val-error:12.4991
[3]  train-error:1.01773   val-error:1.01609
[4]  train-error:12.8218   val-error:12.4925
[5]  train-error:1.0205    val-error:1.01982
[6]  train-error:12.803    val-error:12.4753
[7]  train-error:1.0421    val-error:1.04024
[8]  train-error:12.7632   val-error:12.4369
[9]  train-error:1.08154   val-error:1.07835
[10] train-error:12.7387   val-error:12.4139
[11] train-error:1.11096   val-error:1.10667
[12] train-error:12.7433   val-error:12.4177
[13] train-error:1.10388   val-error:1.09992
[14] train-error:12.7509   val-error:12.4244
[15] train-error:1.09414   val-error:1.09195
[16] train-error:12.757    val-error:12.4301
[17] train-error:1.08932   val-error:1.08618
[18] train-error:12.7628   val-error:12.4366
[19] train-error:1.07646   val-error:1.07292
[20] train-error:12.7759   val-error:12.4507
I'm wondering: is this normal? If not, what might be the causes?
PS. It's a regression problem, and I use a custom objective (MAPE) and evaluation function:
import numpy as np

def mapeobj(preds, dtrain):
    # MAPE-style objective: per-sample gradient sign(pred - label) / label,
    # with 1 / label used as the hessian.
    gaps = dtrain.get_label()
    grad = np.sign(preds - gaps) / gaps
    hess = 1 / gaps
    grad[gaps == 0] = 0  # mask entries where the label is zero
    hess[gaps == 0] = 0  # (division by zero above produces inf before masking)
    return grad, hess

def evalmape(preds, dtrain):
    # MAPE evaluation metric: mean(|label - pred| / label),
    # with zero-label samples excluded.
    gaps = dtrain.get_label()
    err = np.abs(gaps - preds) / gaps
    err[gaps == 0] = 0
    err = np.mean(err)
    return 'error', err
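For reference, the gradient sign(pred - label) / label in mapeobj is the per-sample derivative of |label - pred| / label with respect to the prediction, which is why the two functions pair up. A sketch of how they would be wired into training follows; the synthetic data, parameter values, and round count are placeholders for illustration, not taken from the post (and on recent XGBoost versions the feval argument has been superseded by custom_metric):

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.random((100, 5))
    y = rng.random(100) + 0.1            # keep labels away from zero for MAPE
    dtrain = xgb.DMatrix(X[:80], label=y[:80])
    dval = xgb.DMatrix(X[80:], label=y[80:])

    bst = xgb.train(
        {'eta': 0.1, 'max_depth': 6, 'disable_default_eval_metric': 1},
        dtrain,
        num_boost_round=21,
        evals=[(dtrain, 'train'), (dval, 'val')],
        obj=mapeobj,       # custom objective supplying (grad, hess) per sample
        feval=evalmape,    # custom metric printed for each evals entry every round
    )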