Most planning tools are so complex that planners give up on using them. Most planning software vendors have also abandoned the decent, time-tested models that have served the planning community well over the past several decades.
Instead, they take a Big Brother approach: only one forecast model is available. The computational method is a black box, and no details are disclosed in the user documentation. Conveniently, they call it AI so that no details need to be disclosed to the user; previously, this used to be termed a proprietary algorithm.
For example, take demand forecasting software: some popular SCM tools on the market provide a predetermined Holt-Winters method with the parameters hard-coded into the engine. Beyond that, one such tool offers only a simple moving average model. If your data does not fit that single exponential smoothing model with prespecified parameters, all you have left is the SMA model, where the system thankfully allows you to pick the number of months used to compute the moving average.
I am puzzled why the methods are not an open book in the major SCM tools. The statistics and quantitative field as a whole has made great advances in the last few years: machine learning, computational advances in complex statistical methods, Bayesian techniques, and more. Python libraries make them easily available to anyone who wants to use them.
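To make the point concrete, here is a minimal, transparent sketch of the two families of methods discussed above: simple exponential smoothing (the building block of Holt-Winters, here with the smoothing parameter exposed rather than hard-coded) and a simple moving average. The function names, data, and parameter values are illustrative, not taken from any particular SCM tool.

```python
def ses(series, alpha):
    """Simple exponential smoothing: blend each new observation with
    the running level, weighted by alpha. The one-step-ahead forecast
    is the final level. Nothing here needs to be a black box."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def sma(series, window):
    """Simple moving average of the last `window` observations."""
    return sum(series[-window:]) / window

# Illustrative monthly demand history
demand = [100, 102, 101, 105, 107, 110]
print(ses(demand, alpha=0.3))   # alpha is visible and tunable
print(sma(demand, window=3))    # window is visible and tunable
```

In an open-book tool, alpha (and the trend and seasonal parameters of full Holt-Winters) would be documented and adjustable exactly like this, instead of being frozen inside the engine.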
Given all this, the only reason the big multi-million-dollar SCM tools on the market hide behind proprietary methods or the AI label is to justify the price tag while offering inferior computational methods. The results can be so suboptimal that they need constant planner intervention to correct them. Maybe planners like such tools because they feel needed and secure in their jobs.
There is a third angle to all this: tools that do have decent algorithms suffer from poor implementations by integrators who do not understand the difference between a mean and a median.
At Valtitude, we have been rescuing SAP IBP implementations over the last few years. IBP has continuously added models and functionality, though it jettisoned some good functionality that was found in APO. SAP IBP is one of the few exceptions in that it keeps adding open-book models and spells out clearly what they do (although some parameter settings need correction).
Regardless, consultants from the big integrators reach for their primary weapon, the "Best Fit" model, which runs a hodgepodge of methods and ends up picking garbage as the final result. At times planners could do better with a crystal ball, or by simply typing in their own forecast. The problem with best fit is that it may pick a different model every month, which introduces a lot of forecast volatility into the supply chain.
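The mechanics behind this volatility are easy to see in a toy version of a "best fit" engine. The sketch below (my own simplified construction, not any vendor's actual algorithm) scores each candidate model on the last few holdout points and publishes the winner's forecast; because the winner is re-chosen every month, a single new data point can flip the selection and make the forecast jump.

```python
def best_fit(history, candidates, holdout=3):
    """Toy 'best fit' selector: score each candidate model by mean
    absolute error on one-step-ahead forecasts over the last `holdout`
    points, then return (winner_name, winner's next forecast)."""
    errors = {}
    for name, forecast in candidates.items():
        err = 0.0
        for i in range(len(history) - holdout, len(history)):
            err += abs(forecast(history[:i]) - history[i])
        errors[name] = err / holdout
    winner = min(errors, key=errors.get)
    return winner, candidates[winner](history)

# Three illustrative candidate models
candidates = {
    "naive": lambda s: s[-1],                                  # last value
    "sma3":  lambda s: sum(s[-3:]) / 3,                        # 3-period average
    "drift": lambda s: s[-1] + (s[-1] - s[0]) / (len(s) - 1),  # trend extrapolation
}

history = [100, 96, 104, 99, 103, 98, 105, 101]
print(best_fit(history, candidates))

# Next month's actual arrives; re-running the selection can crown a
# different winner, so the published forecast jumps model to model.
history.append(110)
print(best_fit(history, candidates))
```

This churn is exactly why a stable, well-understood model chosen by the planner often beats an automated best-fit tournament.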
At the end of the day, a forecast is about an uncertain future. Tools that claim to give you a decent and most probable forecast will serve you better than tools or experts that claim to give you the most accurate forecast or the most accurate distribution of the future; the latter will cost you a great deal of money and time with very little tangible result.
There are a few good exceptions, and one of them is ForecastPro; we have used this software with many of our small clients with great results. Another exception is PlanVida, our own SCM and demand forecasting tool, which incorporates R and Python to develop advanced statistical models and machine-learning-based demand forecasts. These platforms do not make lofty claims and practice humility in what they promise the demand planner.
Finally, a demand plan is driven by human judgement. All of economics is about human behavior. Anyone who tells you demand is a natural phenomenon that is perfectly forecastable is conning you; you need to find out when the magician put the rabbit into the hat!
If you want us to review your planning software and incorporate better planning algorithms, give us a call. You can contact us at