Model Diagnostics - R-squared or MAPE?
We have settled on MAPE, or Mean Absolute Percent Error, as the measure of forecast performance for a planner or division at the end of the month. It is most commonly used as a cross-sectional measure across multiple items and products, rolling forecast performance up into a single metric.
However, what should we use to assess the quality of the statistical models themselves? Here I am concerned with a diagnostic measure: when you have a forecast model for a product or customer, what measure would you use to determine whether the model is a good fit?
We can use the Mean Absolute Deviation (MAD), which is closely related to MAPE. However, most software tools and applications propose R-squared, or its related measure root mean squared error (RMSE), as the measure of choice.
The big “subtle” confusion is the definition and interpretation of MAPE. In academic work, MAPE is understood as the simple average of the percent errors. But business planners are puzzled by averaging percentages without regard to scale: a 100% miss on a ten-unit item should not count the same as a 10% miss on a thousand-unit item. So most discussion and use of MAPE in industry is volume-weighted: total absolute error divided by total actuals, i.e. a weighted mean absolute percent error (often called WMAPE).
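To make the distinction concrete, here is a minimal sketch in plain Python with hypothetical numbers, contrasting the simple average of percent errors with the volume-weighted version:

```python
# Hypothetical actuals and forecasts for three items of very different volume.
actuals   = [1000.0, 500.0, 10.0]
forecasts = [ 900.0, 550.0, 20.0]

abs_errors = [abs(a - f) for a, f in zip(actuals, forecasts)]

# Academic MAPE: simple average of the item-level percent errors.
# The low-volume item (10 units, 100% error) dominates the average.
mape = sum(e / a for e, a in zip(abs_errors, actuals)) / len(actuals)

# Industry "MAPE" (weighted MAPE / WMAPE): total absolute error over
# total volume, so each item contributes in proportion to its actuals.
wmape = sum(abs_errors) / sum(actuals)

print(f"Simple MAPE:   {mape:.1%}")   # 40.0%
print(f"Weighted MAPE: {wmape:.1%}")  # 10.6%
```

The same errors produce a 40% simple MAPE but roughly an 11% weighted MAPE, which is why the two definitions cause so much confusion between academics and planners.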
So, more specifically, with reference to model fit in a time series context, what is your recommendation? To rephrase the question: suppose we have come up with a Holt-Winters model that seemingly represents the history best. To assess this model and compare it with others, what should we use?
1. MAPE: the average of the percent errors, Abs(A − F)/A?
2. Root Mean Squared Error (RMSE)?
It appears that RMSE would be the better metric, since squaring the errors punishes larger deviations more heavily. The only downside is that it is not relative: it is an absolute number in the units of the series, so it cannot be compared across products of different scale. The best bet is to compare the RMSEs of different candidate models on the same series and pick the one that is reasonably smaller without overfitting.
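As a sketch of that comparison, assuming the statsmodels library and a synthetic monthly demand series (both are illustrative assumptions, not part of the original discussion), one could fit two Holt-Winters variants and compare their in-sample RMSEs:

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly demand: trend plus a 12-month seasonal pattern plus noise.
rng = np.random.default_rng(42)
months = np.arange(48)
demand = 200 + 2 * months + 30 * np.sin(2 * np.pi * months / 12) \
         + rng.normal(0, 10, 48)

def rmse(actual, fitted):
    """Root mean squared error of the in-sample fit."""
    return np.sqrt(np.mean((actual - fitted) ** 2))

# Candidate 1: Holt-Winters with additive trend and additive seasonality.
hw_add = ExponentialSmoothing(demand, trend="add", seasonal="add",
                              seasonal_periods=12).fit()

# Candidate 2: same model with a damped trend.
hw_damped = ExponentialSmoothing(demand, trend="add", damped_trend=True,
                                 seasonal="add", seasonal_periods=12).fit()

print(f"RMSE (additive trend): {rmse(demand, hw_add.fittedvalues):.2f}")
print(f"RMSE (damped trend):   {rmse(demand, hw_damped.fittedvalues):.2f}")
# Prefer the model with the reasonably smaller RMSE, ideally confirmed on a
# holdout period so the choice is not just rewarding overfit to the history.
```

Note that because RMSE is in the units of the series, this comparison is only meaningful between models fit to the same history, which is exactly the model-diagnostic use case discussed above.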