Mike Gilliland: The BFD

Forecasting performance objectives are usually set in one of three ways:

  • Relative to “best-in-class” industry benchmarks.
  • Improvement over prior year performance.
  • Arbitrarily – based on what management wants or needs to happen.

All three are wrong.

  • There are many perils in relying on industry benchmarks to set your own organization’s performance objectives, the most important of which is relevance. The organization with best-in-class forecast accuracy probably achieves it because it has the easiest-to-forecast demand.

If your organization does not enjoy similarly favorable “forecastability” in its demand patterns, there is little hope of achieving best-in-class performance. Setting unreachable goals just demoralizes the forecasting staff and encourages them to cheat.

  • In general, improvement over prior performance is an appropriate objective. However, we must be wary of the context in which that improvement is measured.

If there are no substantive changes in the forecastability of demand patterns from year to year, then improvement in forecast accuracy (or at least, not doing worse!) is a reasonable objective.  However, what if forecastability changes?  This occurs when demand patterns change, either organically (without our intervention), or due to our own sales, marketing, and financial practices.

For example, switching a product from everyday low pricing (EDLP – where prices remain constant) to hi-lo pricing (where temporary price cuts create spikes in demand), would greatly increase demand volatility, and reduce forecastability.  If a product were under EDLP in 2011 and hi-lo pricing in 2012, we would actually expect reduced forecast accuracy in 2012.  Insisting on improved accuracy after such a change would be unreasonable.

  • Forecasting performance objectives must be based on what is reasonable to expect, given the nature of the demand patterns. Simply pulling a number out of the air, such as “MAPE for all products must be <20%” (MAPE = mean absolute percentage error), is inappropriate and irresponsible.

What if demand patterns are highly volatile and 20% MAPE is simply not achievable? Then you give the forecasters every reason to give up, or to hit the goal by gaming the metrics. Or perhaps demand patterns are very easy to forecast and the goal can be reached by just using a naïve model. How hard are your forecasters going to work to improve accuracy if they can beat the goal by doing nothing?
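
For concreteness, here is a minimal Python sketch of MAPE; the helper name and the toy numbers are mine, purely for illustration. It also shows why the same 20% target can be trivial for one demand pattern and hopeless for another:

    # Minimal illustration of MAPE (mean absolute percentage error).
    # Toy numbers only - not from any real product.
    def mape(actuals, forecasts):
        """Mean absolute percentage error, in percent."""
        return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

    stable_actuals   = [100, 102,  98, 101]     # smooth, easy-to-forecast demand
    volatile_actuals = [100, 250,  40, 180]     # spiky, hard-to-forecast demand
    flat_forecast    = [100, 100, 100, 100]     # the same simple forecast for both

    print(mape(stable_actuals, flat_forecast))    # ~1.2%  - beats a 20% target by doing nothing
    print(mape(volatile_actuals, flat_forecast))  # ~63.6% - the same 20% target is out of reach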

My Gift: Your 2012 Forecasting Performance Objective

As pathetic as this may sound, perhaps the only reasonable objective for 2012 forecasting performance is to beat a naïve model (or at least do no worse), and continuously improve the process.

Improvement can be demonstrated by reducing the error and bias in the forecast, increasing the Forecast Value Added, and becoming more efficient at executing the forecasting process (spending fewer resources).
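
Measured against a naïve model, a minimal sketch of the Forecast Value Added calculation might look like the following; MAPE is used as the error metric, the small helper is redefined so the snippet stands alone, and all numbers are made up. A positive result means the process is adding value:

    # Forecast Value Added (FVA): how much the forecasting process improves
    # on a naive model, expressed here in MAPE points. Toy data only.
    def mape(actuals, forecasts):
        return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

    actuals = [120, 135, 128, 140, 150, 145]
    naive   = [118, 120, 135, 128, 140, 150]   # random walk: last period's actual
    process = [122, 131, 130, 142, 147, 146]   # the organization's published forecast

    fva = mape(actuals, naive) - mape(actuals, process)
    print(f"FVA: {fva:.1f} MAPE points")        # positive: the process beats doing "nothing"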

If you can achieve 20% MAPE by using automated statistical forecasting software – or by using an elaborate collaborative and consensus process occupying all of your sales reps, planners, customers, suppliers, and executive staff for several hours every month – which is the better way to go?

Why Management May Hate This Gift

So for your 2012 performance objective, what MAPE (or whatever other particular metric you use) must you achieve?  Sorry, I cannot tell you.  (That is the part that management hates.) Your goal is to do no worse than a naïve model in 2012, and we won’t know how well a naïve model does until the end of 2012.

You must first choose an appropriate naïve model (e.g., random walk, seasonal random walk, etc.). Then you must track your organization’s forecasting performance each period and compare it to what your naïve model achieved (a rough sketch of this tracking follows the list below). Whichever does better in any particular week or month doesn’t matter – short-term results can be due to chance. But by the end of the year, you should be able to draw one of three conclusions:

  1. We forecast worse than a naïve model.
  2. We forecast about the same as a naïve model.
  3. We forecast better than a naïve model.
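
Here is a minimal sketch of that year-long tracking, assuming monthly data and a random-walk naïve model (this month’s forecast = last month’s actual; a seasonal random walk would use the same month of the prior year instead). All numbers are invented, and MAPE or any other agreed metric would work the same way as the mean absolute error used here:

    # Track the process forecast against a naive model all year, then compare
    # the accumulated error at year end. Toy data; any agreed metric works.
    actuals   = [100, 110, 105, 120, 130, 125, 140, 150, 135, 145, 155, 160]  # Jan..Dec
    process   = [ 98, 112, 108, 118, 128, 130, 138, 148, 140, 147, 152, 158]  # published forecasts
    prior_dec = 95                                                            # last December's actual

    naive = [prior_dec] + actuals[:-1]          # random walk: forecast = last month's actual

    err_process = [abs(a - f) for a, f in zip(actuals, process)]
    err_naive   = [abs(a - f) for a, f in zip(actuals, naive)]

    # Month-by-month winners are mostly noise; judge on the full year.
    print("Process MAE:", sum(err_process) / len(actuals))
    print("Naive MAE:  ", sum(err_naive) / len(actuals))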

Conclusion 2 means that your process is in a statistical dead heat with the naïve model – you cannot reject the null hypothesis that there is no difference between the two approaches.  If you are committing a lot of resources to forecasting, you may want to redirect those resources to more productive activities.
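
One rough way to check for that dead heat is a paired test on the per-period absolute errors. Below is a minimal sketch, assuming SciPy is available and using made-up error series; with only a year of possibly autocorrelated data the result is indicative at best, and a Diebold-Mariano test is the more standard tool for comparing forecasts:

    # Rough check for a "statistical dead heat": paired t-test on the two
    # models' absolute errors. Assumes scipy is installed; toy numbers.
    from scipy import stats

    err_process = [2, 2, 3, 2, 2, 5, 2, 2, 5, 2, 3, 2]           # process forecast errors
    err_naive   = [5, 10, 5, 15, 10, 5, 15, 10, 15, 10, 10, 5]   # naive model errors

    t_stat, p_value = stats.ttest_rel(err_process, err_naive)
    if p_value > 0.05:
        print("Cannot reject the null hypothesis - conclusion 2, a dead heat.")
    else:
        print("The difference is probably not chance - conclusion 1 or 3, by sign.")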

If you achieve 3, congratulations, your forecasting process is doing its job. Take pride – beating a naïve model is often far harder than it appears it should be.

If you achieve 1, then welcome to the unfortunate reality of business forecasting – where organizations are often best at making a tough problem worse.

Happy holidays…

Bio: Michael Gilliland is Product Marketing Manager at SAS, and author of The Business Forecasting Deal. Mike is a frequent contributor to the Journal of Business Forecasting, and (along with Ryan Rickard of Newell Rubbermaid) will be delivering a half-day workshop, “What Management Must Know About Forecasting,” at the IBF Supply Chain Forecasting Conference in Scottsdale, AZ, February 26-28, 2012. Conference attendees will receive a free signed copy of his book. You can follow Mike’s blog, The Business Forecasting Deal, at blogs.sas.com/content/forecasting. Mike will also be contributing monthly to IBF’s demand-planning.com blog; please submit your topic ideas to the IBF at info(at)ibf.org.

Hear Mike speak at IBF’s Supply Chain Forecasting & Planning Conference.