The most commonly used Demand Metrics in the profession are:
Forecast Attainment- How much of the forecast was actually attained? In essence, a comparison of actual Sales to the Forecast set for the period.
Forecast Bias- The sum of signed forecast errors, divided by either actuals or forecast.
Mean Absolute Percent Error- The traditional MAPE used by academics to infer the quality of the model or Model Fit.
Weighted Absolute Percent Error- The Classic Weighted MAPE used to measure the SKU level forecast error in most supply chains - volume-weighted.
Mean Percent Error- An average of individual SKU-level MAPEs; not a very useful measure.
Root Mean Squared Error- The square root of the average of squared errors; a more rigorous measure since it weighs large errors heavily.
Rolling out-of-sample errors- The average error of the same forecast at different lags, calculated using a different hold-out sample in each run of the forecast.
Actuals or Forecast?
The Appropriate Baseline measure for MAPE
The denominator for MAPE has often been debated. Why do we recommend using the Actual Demand instead of the forecast as the denominator?
Traditionally, the forecast was the baseline measure, since senior management was interested in how actual sales compared to the forecast. However, as a performance measure, this can introduce subtle bias, especially when used to measure how a deviation compares to the expectation.
If our only goal is to beat the forecast, then the forecast is the natural baseline. For example, if Sales personnel are incentivized by how much they beat the forecast target, then of course we want to use the forecast as the denominator. But this does little good for a supply chain: beating the forecast by a whisker is good, but not by a mile.
So we want a measure that emphasizes the magnitude of the error rather than how it compares to a baseline. In a low-error business, the denominator is a moot point: since actuals will be close to the forecast, the bias introduced is of second-order importance.
However, if the error has some magnitude, there is the potential to game the measured forecast error through what is called denominator management. When the error is divided by the forecast, it can introduce a forecast bias, usually toward artificially over-forecasting. At the margin, over-forecasting inflates the denominator, which reduces the percentage error and increases the reported accuracy. So when in doubt, the forecast will be set high.
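The effect of denominator management is easy to show with toy numbers (the figures below are assumptions for illustration). Two forecasts miss the same actual by the same absolute amount, yet the over-forecast looks more accurate when the forecast is the denominator:

```python
# Toy illustration of denominator management: the same absolute miss
# looks smaller when divided by an inflated forecast.

actual = 100
over_forecast = 125    # misses by 25, forecast set high
under_forecast = 75    # misses by 25, forecast set low

# Error divided by the forecast: over-forecasting shrinks the percentage.
err_over_by_fcst = abs(over_forecast - actual) / over_forecast     # 25/125 = 20%
err_under_by_fcst = abs(under_forecast - actual) / under_forecast  # 25/75  = 33.3%

# Error divided by actuals: both misses score the same 25%.
err_over_by_act = abs(over_forecast - actual) / actual
err_under_by_act = abs(under_forecast - actual) / actual
```

With the forecast in the denominator, the planner who over-forecast reports a 20% error while the one who under-forecast reports 33%, despite identical misses; dividing by actuals scores both at 25%.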
You can test this by constructing a case study with your current clients: look at the clients that use the forecast in the denominator and note how often they over-forecast.
So it is better to divide by actuals, since the actual demand is under no one's control. This may lead to some under-forecasting bias, but it is not as severe.
Thus, traditionally we divide the error by actual demand to arrive at the classic MAPE.