ARTICLE: From Gut Feel to Evidence: Making the Case for Technology Adoption Through Quality Economics

POSTED: 04 Dec, 2025

By Munia Ahamed, UTS PhD Researcher, Australian Cobotics Centre

Technology adoption decisions in manufacturing are often characterised by a tension between perceived opportunity and perceived risk. This is particularly true for small and medium enterprises considering investments in collaborative robotics and automated quality systems, where the upfront costs are concrete, but the returns can feel uncertain.

Research consistently identifies this uncertainty as a key barrier. A 2024 UK government review of advanced technology adoption found that financial barriers—particularly difficulties justifying investment decisions due to uncertain returns—ranked among the most frequently cited obstacles for manufacturers [1], [2], [3]. Similar patterns emerge in studies of Australian SMEs, where decision-makers report hesitancy in embracing new technologies despite recognising their potential benefits.

This article argues that one way to address this challenge is through a more rigorous application of quality cost economics—a well-established body of theory that provides frameworks for quantifying the costs of defects, rework, and quality failures. By grounding technology adoption decisions in these frameworks, manufacturers can move from intuition-based decision-making toward evidence-based investment analysis.

The Economics of Quality: Theoretical Foundations
The concept of quality costing has a long history in operations management. Joseph Juran introduced the notion of the “cost of poor quality” in his 1951 Quality Control Handbook, arguing that organisations inevitably pay for quality—either through prevention and detection, or through the consequences of failure. His work established that appraisal and failure costs are typically much higher than prevention costs, suggesting that investment in getting things right the first time yields significant returns.

Philip Crosby extended this thinking in the 1970s and 1980s with his influential argument that “quality is free”. Crosby’s position was not that quality improvement carries no cost, but rather that the price of nonconformance—scrap, rework, warranty claims, lost customers—far exceeds the price of conformance. His research suggested that well-run quality programs could yield gains of 20 to 25 percent of revenues, with the cost of nonconformance reducible by half within 12 to 18 months of systematic effort.

Armand Feigenbaum’s Prevention-Appraisal-Failure (PAF) model provides a useful taxonomy for categorising quality costs. Prevention costs include activities designed to avoid defects occurring in the first place—process design, training, quality planning. Appraisal costs cover inspection, testing, and measurement activities. Failure costs, both internal (scrap, rework) and external (warranty, returns, reputation damage), represent the consequences of quality problems that were not prevented or detected.

In practical terms, this means that spending more on preventing defects upfront usually reduces the overall cost of quality problems later. Failure costs tend to decrease faster than prevention costs increase, so the total quality cost goes down. This insight remains as relevant today as when it was first articulated, and it provides a theoretical basis for evaluating investments in quality-enhancing technologies.
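As a minimal sketch of this trade-off, the snippet below compares total cost of quality under the PAF categories before and after a hypothetical increase in prevention spending. All figures are invented for illustration and are not drawn from any real case.

```python
# Hypothetical monthly quality costs (AUD) for a small fabricator, before
# and after tripling prevention spend (e.g. process design, training).
# Every number here is an assumption, chosen only to show the arithmetic.

def total_cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Sum the four PAF cost categories into one cost-of-quality figure."""
    return prevention + appraisal + internal_failure + external_failure

before = total_cost_of_quality(
    prevention=2_000, appraisal=8_000,
    internal_failure=15_000, external_failure=10_000,
)
after = total_cost_of_quality(
    prevention=6_000, appraisal=6_000,               # prevention spend tripled
    internal_failure=6_000, external_failure=3_000,  # failure costs fall faster
)

print(f"before: ${before:,}  after: ${after:,}  saving: ${before - after:,}")
# → before: $35,000  after: $21,000  saving: $14,000
```

The point of the example is the shape of the result, not the numbers: failure costs fall by more than prevention costs rise, so the total declines.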

The Economics of Detection Timing
A related concept concerns the timing of defect detection. The 1:10:100 rule, attributed to George Labovitz and Yu Sang Chang (1992), captures the exponential escalation of costs as defects progress through the value chain. In its simplest form, the rule suggests that addressing a problem at its source costs one unit; finding and correcting it later in the process costs ten units; and dealing with it after it reaches the customer costs one hundred units.

While the specific ratios vary by context, the underlying principle is well-supported: defects that escape early detection accumulate additional processing costs, and defects that reach customers incur costs that extend beyond direct remediation to include relationship damage, complaint handling, and potential regulatory consequences.
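Taken at face value, the canonical ratios make the arithmetic easy to sketch. In the snippet below, the per-defect base cost and the split of where defects are caught are assumed figures chosen purely for illustration.

```python
# The 1:10:100 rule: the same defect costs more the later it is found.
# The stage multipliers are the canonical ratios; the $50 base cost and
# the detection split are illustrative assumptions only.

ESCALATION = {"at_source": 1, "in_process": 10, "at_customer": 100}
BASE_COST = 50  # assumed cost (AUD) of fixing one defect at its source

def defect_cost(n_defects, stage):
    """Cost of n defects all detected at the given stage."""
    return n_defects * BASE_COST * ESCALATION[stage]

# 100 defects per month, split by where they are currently caught
caught = {"at_source": 60, "in_process": 30, "at_customer": 10}
monthly = sum(defect_cost(n, stage) for stage, n in caught.items())
print(f"monthly defect cost: ${monthly:,}")
# → monthly defect cost: $68,000 (i.e. 60×$50 + 30×$500 + 10×$5,000)
```

Even with only 10% of defects escaping to customers, that tail dominates the total—which is why shifting detection earlier changes the cost profile so sharply.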

This principle has direct relevance to technology adoption decisions. Automated inspection systems and vision-guided collaborative robots do not merely accelerate quality checking—they fundamentally alter when inspection occurs. Real-time, in-process detection catches problems before they accumulate downstream costs, shifting the organisation’s quality cost profile in favourable directions.

Barriers to Evidence-Based Decision Making
If the theoretical case for quality investment is strong, why do manufacturers—particularly SMEs—struggle to act on it? The literature identifies several contributing factors.

First, quality costs are often poorly measured. While direct costs like scrap and rework may be tracked, hidden costs—inspection time, schedule disruption, expedited shipping to replace defective goods—frequently go unrecorded. Without accurate baseline data, it becomes difficult to project returns on quality-enhancing investments.

Second, uncertainty about technology performance creates decision paralysis. Studies of SME technology adoption consistently find that decision-makers hesitate when they cannot point to demonstrated results in comparable contexts. This creates a circular problem: evidence is needed to justify investment, but evidence comes from having invested.

Third, competing priorities and resource constraints mean that quality investments must compete with other demands on limited capital. In this environment, investments with uncertain or difficult-to-quantify returns tend to be deferred in favour of more immediately tangible needs.

ROI Calculators as Analytical Tools

One response to these challenges is the development of structured ROI calculators tailored to specific technology investments. When well-designed, such tools serve several functions beyond simply generating a payback estimate.

First, they impose discipline on baseline measurement. To complete the calculator, users must quantify current defect rates, rework costs, and inspection time—data that many organisations have not systematically collected. The process of gathering this information often yields insights independent of any technology decision.

Second, they make assumptions explicit. A good ROI model does not obscure uncertainty; it surfaces it. Users can see what improvement rates are assumed, what cost factors are included, and how sensitive the conclusions are to different inputs. This transparency supports more informed discussion among stakeholders.

Third, they provide a framework for comparing alternatives. By standardising how costs and benefits are categorised, calculators enable like-for-like comparison of different technology options or implementation approaches.

The value of such tools lies not in their precision—all projections involve uncertainty—but in their capacity to structure thinking and ground decisions in operational data rather than vendor claims or general optimism.
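To make these ideas concrete, here is a minimal sketch of what the core arithmetic of such a calculator might look like. The field names and all input figures below are hypothetical; this is not the interface of any real tool, only an illustration of how explicit assumptions feed a payback estimate.

```python
# A stripped-down ROI-calculator sketch. Every input is an explicit,
# user-supplied assumption; none of these figures come from a real system.

from dataclasses import dataclass

@dataclass
class QualityInvestment:
    capex: float                 # upfront cost of the technology
    annual_running_cost: float   # maintenance, licences, training
    current_failure_cost: float  # measured annual scrap/rework/warranty cost
    assumed_reduction: float     # fraction of failure cost eliminated (0..1)

    def annual_net_benefit(self):
        """Annual saving after running costs, under the stated assumptions."""
        return self.current_failure_cost * self.assumed_reduction - self.annual_running_cost

    def payback_years(self):
        """Simple (undiscounted) payback period; infinite if no net benefit."""
        benefit = self.annual_net_benefit()
        return self.capex / benefit if benefit > 0 else float("inf")

case = QualityInvestment(capex=120_000, annual_running_cost=10_000,
                         current_failure_cost=90_000, assumed_reduction=0.5)
print(f"net benefit: ${case.annual_net_benefit():,.0f}/yr, "
      f"payback: {case.payback_years():.1f} years")
# → net benefit: $35,000/yr, payback: 3.4 years
```

Notice that the model forces exactly the discipline the text describes: the current failure cost must be measured, and the assumed reduction is visible as a single, contestable number rather than buried in a vendor's headline figure.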

Practical Recommendations

For manufacturers seeking to apply quality cost economics and the 1:10:100 principle to their technology decisions, several practical steps can strengthen the quality of investment analysis.

Establish quality cost baselines. Before evaluating any technology investment, spend time measuring what is currently unmeasured: rework hours, scrap rates, inspection time, defect escape rates. Even approximate figures provide a foundation for analysis that intuition cannot.

Map defect origins and detection points. Understanding where in the process problems arise—and where they are currently caught—identifies the opportunities for earlier detection. The gap between origin and detection represents accumulated cost that prevention or earlier inspection could avoid.

Use sensitivity analysis. Rather than seeking a single ROI figure, explore how conclusions change under different assumptions. What defect reduction would be needed for the investment to break even? How does the payback period shift if improvement is 20% less than projected? This approach acknowledges uncertainty while still supporting decision-making.
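Both questions can be answered mechanically once the cost model is written down. The sketch below does so with invented figures; the capital cost, failure cost, and running cost are assumptions chosen only to demonstrate the method.

```python
# Sensitivity sketch for the two questions above: what failure-cost
# reduction breaks even, and how payback moves if improvement falls
# 20% short of projection. All input figures are illustrative.

def payback_years(capex, annual_saving):
    """Simple payback period; infinite if the saving never covers capex."""
    return capex / annual_saving if annual_saving > 0 else float("inf")

capex = 120_000          # assumed upfront investment (AUD)
failure_cost = 90_000    # measured annual scrap/rework/warranty cost
running_cost = 10_000    # assumed annual running cost of the new system

# Break-even reduction: annual saving must cover running costs plus,
# say, straight-line recovery of the capital over a 5-year horizon.
breakeven_reduction = (running_cost + capex / 5) / failure_cost

for reduction in (0.5, 0.5 * 0.8):  # as projected, then 20% worse
    saving = failure_cost * reduction - running_cost
    print(f"reduction {reduction:.0%}: payback {payback_years(capex, saving):.1f} yr")
print(f"break-even reduction over 5 years: {breakeven_reduction:.0%}")
# → reduction 50%: payback 3.4 yr
# → reduction 40%: payback 4.6 yr
# → break-even reduction over 5 years: 38%
```

Reading the results as a range ("somewhere between 3.4 and 4.6 years, needing at least a 38% reduction to justify itself over five years") is more honest, and often more persuasive, than a single point estimate.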

Consider pilot implementations. Where full-scale investment feels premature, smaller-scale trials with defined metrics can generate context-specific evidence. This reduces risk while building organisational capability and confidence.

The Path Forward
The theoretical foundations for quality cost analysis are well-established, with decades of research supporting the economic logic of prevention over detection and early detection over late. What is often lacking is the practical application of these frameworks to specific technology adoption decisions.

ROI calculators, when grounded in quality economics and used as analytical tools rather than sales devices, can help bridge this gap. They provide a structured means of translating established theory into operational decision-making, replacing intuition with evidence and making the case for investment in terms that resonate with resource-constrained decision-makers.

For Australian manufacturing to remain globally competitive, we need to accelerate thoughtful adoption of collaborative robotics and quality automation. Fact-based decision tools are one contribution toward that goal.

We welcome discussion on this topic. How has your organisation approached the challenge of justifying technology investments? What frameworks or tools have proven useful?

References
[1] Make UK and RSM UK, Investment Monitor 2024: Using Data to Drive Manufacturing Productivity. London, UK: Make UK, 2024.
[2] Make UK and BDO LLP, Manufacturing Outlook: 2024 Quarter 4. London, UK: Make UK, 2024.
[3] UK Government, Invest 2035: The UK’s Modern Industrial Strategy — Green Paper. London, UK: HM Government, Oct. 2024.


About the author

Start date: January 2023. Expected end date: July 2026.

Munia is a PhD researcher in the Quality Assurance and Compliance research program at the Australian Cobotics Centre (Program 4). Her research focuses on embedding human factors in the cobot era to support quality assurance and reliability.