A person’s mettle is measured by what he or she undertakes and values during lean times. Historians may well say of the 1990s that the mistakes we made then were (1) taking a “U.S.-only” view of the world and (2) confusing prosperity with competence. Let’s dwell on that second confusion for a moment.
Good sales cover a world of evils. Not that retail sales have dropped to Depression-era levels, but capital and equity markets have suffered from equity shifting overseas to prop up the euro, discomfort over the state of Southwest Asia, and overblown Enron and WorldCom stories that fill the airwaves like a 1970s AM radio hit. One of the chief tools retailers reach for during down times is the productivity improvement program (also sold as a performance measurement or enhancement initiative).
Making a production
Retail performance measurement typically means a market niche populated by a handful of software developers and a small group of productivity-minded consultants who focus on an hourly worker’s effectiveness in terms of net units produced in a standard working day. Such programs often pay for themselves before their costs are even billed to the benefiting operation. A typical approach has an auditor clock the actual work performed by hourly employees. These measurements are then compiled into performance standards touted as “work measures,” “engineered standards,” or “time and motion standards.” Companies then hold employees responsible for attaining the performance levels those measures dictate.
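To make the arithmetic concrete, here is a minimal sketch of how stopwatch observations might be rolled up into a standard and used to score a day’s output. The function names and the 15% allowance are illustrative assumptions, not any vendor’s actual method.

```python
from statistics import mean

def engineered_standard(observed_seconds, allowance_pct=15.0):
    """Roll an auditor's stopwatch readings for one unit of work into a
    standard time per unit (allowance_pct is an assumed fatigue-and-delay
    cushion, a common industrial-engineering convention)."""
    return mean(observed_seconds) * (1 + allowance_pct / 100.0)

def percent_of_standard(units_produced, hours_worked, std_seconds_per_unit):
    """Score a working day: earned hours versus hours actually paid."""
    earned_hours = units_produced * std_seconds_per_unit / 3600.0
    return 100.0 * earned_hours / hours_worked

# Ten timed observations of a case-pick task, in seconds
std = engineered_standard([42, 45, 40, 44, 43, 41, 46, 42, 44, 43])
print(f"standard: {std:.1f} seconds per unit")                       # 49.5
print(f"day's work: {percent_of_standard(620, 8.0, std):.0f}% of standard")  # 106%
```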
But performance measurement is more than just holding accountable those who do not own a share of the store. As capital market disciplines tighten and accountability becomes as much a fixture on the landscape of American business as ISO 9000 certification was in the 1990s, we will see a growing set of standards accepted as appropriate performance measures. Not that the discipline will abandon strict productivity goals, but measurement will expand to include management return and fixed-expense goals, as well as cost performance within an operation’s different channels of distribution.
Some of the specific benchmarks that Wall Street will demand during the next 20 years include:
Return on equivalent asset value
Return on capital depreciation
Return on direct labor
The first of these, return on equivalent asset value, refers to the contribution margin attributable to work processed by a given asset, regardless of whether that asset is carried on the company’s books. It filters out the synthetic leases, rents, and third-party arrangements that companies use to flatter traditional return-on-asset figures.
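A back-of-the-envelope version of that measure might look like the following sketch. The asset records and the “equivalent value” convention are assumptions made for illustration.

```python
def return_on_equivalent_assets(contribution_margin, assets):
    """Contribution margin over the equivalent value of every productive
    asset, whether owned, synthetically leased, rented, or run by a 3PL."""
    equivalent_base = sum(a["equivalent_value"] for a in assets)
    return contribution_margin / equivalent_base

assets = [
    {"name": "owned DC",           "on_books": True,  "equivalent_value": 40_000_000},
    {"name": "synthetic-lease DC", "on_books": False, "equivalent_value": 35_000_000},
    {"name": "3PL-operated depot", "on_books": False, "equivalent_value": 12_000_000},
]

# Traditional ROA would divide by the $40M on-book asset alone;
# this measure divides by the full $87M of equivalent asset value.
print(f"{return_on_equivalent_assets(9_500_000, assets):.1%}")  # ~10.9%
```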
Return on fixed expense/fixed expense ratio
Return on management
Return on indirect labor
Delivery cost ratio (etc.)
Typically, fixed and ancillary expenses comprise 15% to 35% of an operation’s P&L expense. Managers handle these budgets with a broad stroke: take last year’s actual and scale the expense up by some fraction of the year-over-year growth in business volume. More sophisticated methods of estimating and validating these large expense categories will be a target over the next 20 years.
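For reference, the broad-stroke method amounts to a single line of arithmetic. The figures and the pass-through convention below are invented for illustration.

```python
def broad_stroke_budget(last_year_actual, volume_growth_pct, pass_through=1.0):
    """The common shortcut: inflate last year's actual fixed expense by
    some fraction (pass_through) of expected business-volume growth."""
    return last_year_actual * (1 + pass_through * volume_growth_pct / 100.0)

# $4.2M of fixed expense, 6% expected volume growth, half passed through
budget = broad_stroke_budget(4_200_000, 6.0, pass_through=0.5)
print(f"next year's fixed-expense budget: ${budget:,.0f}")  # $4,326,000
```

The point of the sketch is what it leaves out: nothing in that formula asks whether last year’s base was justified in the first place, which is exactly the gap more sophisticated validation methods would close.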
Operating margin per unit by channel
Cost per unit by channel
Cost per level of service delivered
It costs a great deal more to handle a stored chair than it does to handle a cross-docked case of apparel. The margin each of these categories achieves is different, and each is sensitive to handling service levels and delivery methods to a differing degree. Yet a typical distribution center simply lumps all costs into a single bucket and divides by total units handled to get one overall cost per unit. That blended measure is extremely deceptive and difficult to benchmark.
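The deception is easy to demonstrate: split the same cost pool by the activity each channel actually consumes and the blended number falls apart. The drivers below are made up for illustration.

```python
# Illustrative activity data: units handled per year and the labor
# minutes each channel consumes per unit (invented drivers, not benchmarks)
channels = {
    "stored furniture":     {"units": 20_000,  "minutes_per_unit": 12.0},
    "cross-docked apparel": {"units": 180_000, "minutes_per_unit": 1.5},
}
total_cost = 500_000.0  # one annual handling-cost pool, in dollars

# The blended measure: one bucket divided by all units handled
total_units = sum(c["units"] for c in channels.values())
print(f"blended: ${total_cost / total_units:.2f} per unit")  # $2.50

# Channel-level measure: allocate the pool by labor minutes consumed
total_minutes = sum(c["units"] * c["minutes_per_unit"] for c in channels.values())
for name, c in channels.items():
    share = c["units"] * c["minutes_per_unit"] / total_minutes
    print(f"{name}: ${share * total_cost / c['units']:.2f} per unit")
# stored furniture: $11.76 per unit; cross-docked apparel: $1.47 per unit
```

A $2.50 blended figure flatters the furniture channel by a factor of almost five and overstates the apparel channel’s cost by 70%, which is why the single-bucket number resists honest benchmarking.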
Collaborative benchmarking
Some smart company out there right now is developing a tool that lets a subscribing client log into its own performance management application and, through a browser interface, compare its indicators, in apples-to-apples terminology and measures, with those of a pool of anonymous participants. If such a client checked its pick-to-carton performance against that of similar pick-to-carton operators, it would get both a quick benchmark and some assurance that what is described as “pick-to-carton” means the same thing across the pool. Perhaps external agencies could pay a fee for anonymous access to such data. Perhaps the package already exists. Ah, but that is another story.
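The mechanics, at least, are not the hard part. Here is a minimal sketch of the server-side comparison, assuming a pool of anonymized submissions gathered under one agreed metric definition; every name and figure is hypothetical.

```python
from statistics import median

def percentile_rank(value, pool):
    """Where a client's metric falls within the anonymous peer pool (0-100)."""
    return 100.0 * sum(1 for v in pool if v < value) / len(pool)

# Hypothetical anonymized pool: pick-to-carton units per labor hour,
# all submitted under the same agreed definition of "pick-to-carton"
pool = [96, 104, 88, 117, 101, 93, 110, 99, 85, 106]
client_rate = 108

print(f"peer median: {median(pool)} units per hour")                          # 100.0
print(f"client sits at the {percentile_rank(client_rate, pool):.0f}th percentile")  # 80th
```

The hard part is everything around the code: agreed definitions, audited submissions, and enough participants to keep the pool both anonymous and meaningful.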